LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
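The procedure reduces to a one-dimensional optimization that is easy to sketch. A minimal illustration under stated assumptions (two Gaussian classes with known parameters; a generic optimizer in place of the program's own search; all numbers invented):

```python
# Minimal sketch (not the LFSPMC program itself): find a projection vector b
# that minimizes the one-dimensional Bayes probability of misclassification
# of the transformed densities N(b'mu_i, b'Sigma_i b).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def misclassification_prob(b, means, covs, priors):
    b = b / np.linalg.norm(b)                      # only the direction matters
    grid = np.linspace(-10.0, 10.0, 2001)
    # Transformed 1-D densities, weighted by the a priori probabilities
    pdfs = [p * norm.pdf(grid, m @ b, np.sqrt(b @ C @ b))
            for m, C, p in zip(means, covs, priors)]
    # Bayes error = 1 - integral of the pointwise maximum of weighted densities
    return 1.0 - np.sum(np.max(pdfs, axis=0)) * (grid[1] - grid[0])

means = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 0.5])]
covs = [np.eye(3), np.diag([2.0, 0.5, 1.0])]
priors = [0.5, 0.5]

res = minimize(misclassification_prob, x0=np.ones(3),
               args=(means, covs, priors), method="Nelder-Mead")
print("best linear combination:", res.x / np.linalg.norm(res.x))
print("minimum misclassification probability:", res.fun)
```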
Interpretation of the results of statistical measurements. [Search for basic probability model]
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Reinforcement Probability Modulates Temporal Memory Selection and Integration Processes
Matell, Matthew S.; Kurti, Allison N.
2013-01-01
We have previously shown that rats trained in a mixed-interval peak procedure (tone = 4s, light = 12s) respond in a scalar manner at a time in between the trained peak times when presented with the stimulus compound (Swanton & Matell, 2011). In our previous work, the two component cues were reinforced with different probabilities (short = 20%, long = 80%) to equate response rates, and we found that the compound peak time was biased toward the cue with the higher reinforcement probability. Here, we examined the influence that different reinforcement probabilities have on the temporal location and shape of the compound response function. We found that the time of peak responding shifted as a function of the relative reinforcement probability of the component cues, becoming earlier as the relative likelihood of reinforcement associated with the short cue increased. However, as the relative probabilities of the component cues grew dissimilar, the compound peak became non-scalar, suggesting that the temporal control of behavior shifted from a process of integration to one of selection. As our previous work has utilized durations and reinforcement probabilities more discrepant than those used here, these data suggest that the processes underlying the integration/selection decision for time are based on cue value. PMID:23896560
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
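The workflow lends itself to a short sketch: fit candidate probability density functions to raw data by maximum likelihood and compare them quantitatively (AIC is used below as the selection criterion; the candidate set and the simulated depth data are assumptions, and the paper's own implementation is the R code in its appendix):

```python
# Hedged sketch of the HSC approach: fit candidate PDFs directly to raw
# habitat-use data and select among them by AIC. Data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
depths = rng.gamma(shape=3.0, scale=0.4, size=200)   # stand-in for field data

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "norm": stats.norm}

for name, dist in candidates.items():
    params = dist.fit(depths)                        # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(depths, *params))
    aic = 2 * len(params) - 2 * loglik               # lower AIC is better
    print(f"{name:8s} AIC = {aic:.1f}")
```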
ERIC Educational Resources Information Center
Dellario, Donald J.
1985-01-01
Conducted structured interviews of seven selected Mental Health (MH) and Vocational Rehabilitation (VR) dyads to assess interagency functioning, and compared results to selected interagency performance indicators. Results suggested that improved MH-VR linkages can increase the probability of successful rehabilitation outcomes for psychiatrically…
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
Selection for territory acquisition is modulated by social network structure in a wild songbird
Farine, D R; Sheldon, B C
2015-01-01
The social environment may be a key mediator of selection that operates on animals. In many cases, individuals may experience selection not only as a function of their phenotype, but also as a function of the interaction between their phenotype and the phenotypes of the conspecifics they associate with. For example, when animals settle after dispersal, individuals may benefit from arriving early, but, in many cases, these benefits will be affected by the arrival times of other individuals in their local environment. We integrated a recently described method for calculating assortativity on weighted networks, which is the correlation between an individual's phenotype and that of its associates, into an existing framework for measuring the magnitude of social selection operating on phenotypes. We applied this approach to large-scale data on social network structure and the timing of arrival into the breeding area over three years. We found that late-arriving individuals had a reduced probability of breeding. However, the probability of breeding was also influenced by individuals’ social networks. Associating with late-arriving conspecifics increased the probability of successfully acquiring a breeding territory. Hence, social selection could offset the effects of nonsocial selection. Given parallel theoretical developments of the importance of local network structure on population processes, and increasing data being collected on social networks in free-living populations, the integration of these concepts could yield significant insights into social evolution. PMID:25611344
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
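The practical upshot for use-availability data can be sketched in a few lines: fit a logistic discriminant to used versus available units and treat exp(beta'x) as a relative ranking only. The covariates, sample sizes, and simulated data below are invented:

```python
# Use-availability sketch: exp(beta'x) serves as an RSF for ranking habitats,
# not as a probability of use, since proportionality is not guaranteed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_used = rng.normal(loc=[1.0, 0.5], scale=1.0, size=(300, 2))    # used units
X_avail = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(1000, 2))  # available units
X = np.vstack([X_used, X_avail])
y = np.r_[np.ones(300), np.zeros(1000)]

fit = LogisticRegression().fit(X, y)
beta = fit.coef_.ravel()
rsf = np.exp(X_avail @ beta)          # exponential RSF: relative ranks only
print("top-ranked available units:", np.argsort(rsf)[::-1][:5])
```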
Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield
Robert B. Thomas
1986-01-01
Abstract - SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes, which cannot estimate variance. Sampling probabilities are based on a sediment rating function which promotes greater sampling intensity during periods of high...
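A toy version of variable-probability sampling in SALT's spirit, with an invented flow record and rating curve; the Hansen-Hurwitz form keeps the total-load estimate unbiased:

```python
# PPS-style sketch: sample hours with probability proportional to a rating
# prediction, then estimate the total load without bias.
import numpy as np

rng = np.random.default_rng(42)
flow = rng.lognormal(mean=2.0, sigma=0.8, size=1000)   # hourly discharge
true_load = 0.05 * flow**1.6                           # "true" sediment loads
rating = 0.04 * flow**1.5                              # rating-curve predictions

p = rating / rating.sum()                              # selection probabilities
n = 60
idx = rng.choice(len(flow), size=n, replace=True, p=p)

# Hansen-Hurwitz estimator: mean of sampled load / selection probability
estimate = np.mean(true_load[idx] / p[idx])
print(f"estimated total load: {estimate:.1f}  (true: {true_load.sum():.1f})")
```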
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment is then chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection
Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme
2016-01-01
The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780
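Stripped of the neural implementation, the computation the model attributes to the STN-GPe loop is the normalization step of Bayes' rule over the available actions. A bare numerical illustration with arbitrary likelihoods and priors:

```python
# Posterior action probabilities require the normalization term in Bayes'
# rule (the sum in the denominator below). Numbers are arbitrary.
import numpy as np

likelihood = np.array([0.9, 0.4, 0.1])    # P(evidence | action i appropriate)
prior = np.array([0.2, 0.5, 0.3])         # P(action i appropriate)

unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()   # normalization: the sum term
print(posterior)
```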
Smith, Bruce W; Mitchell, Derek G V; Hardin, Michael G; Jazbec, Sandra; Fridberg, Daniel; Blair, R James R; Ernst, Monique
2009-01-15
Economic decision-making involves the weighting of magnitude and probability of potential gains/losses. While previous work has examined the neural systems involved in decision-making, there is a need to understand how the parameters associated with decision-making (e.g., magnitude of expected reward, probability of expected reward and risk) modulate activation within these neural systems. In the current fMRI study, we modified the monetary wheel of fortune (WOF) task [Ernst, M., Nelson, E.E., McClure, E.B., Monk, C.S., Munson, S., Eshel, N., et al. (2004). Choice selection and reward anticipation: an fMRI study. Neuropsychologia 42(12), 1585-1597.] to examine in 25 healthy young adults the neural responses to selections of different reward magnitudes, probabilities, or risks. Selection of high, relative to low, reward magnitude increased activity in insula, amygdala, middle and posterior cingulate cortex, and basal ganglia. Selection of low-probability, as opposed to high-probability reward, increased activity in anterior cingulate cortex, as did selection of risky, relative to safe reward. In summary, decision-making that did not involve conflict, as in the magnitude contrast, recruited structures known to support the coding of reward values, and those that integrate motivational and perceptual information for behavioral responses. In contrast, decision-making under conflict, as in the probability and risk contrasts, engaged the dorsal anterior cingulate cortex whose role in conflict monitoring is well established. However, decision-making under conflict failed to activate the structures that track reward values per se. Thus, the presence of conflict in decision-making seemed to significantly alter the pattern of neural responses to simple rewards. In addition, this paradigm further clarifies the functional specialization of the cingulate cortex in processes of decision-making.
A critical analysis of high-redshift, massive, galaxy clusters. Part I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyle, Ben; Jimenez, Raul; Verde, Licia
2012-02-01
We critically investigate current statistical tests applied to high redshift clusters of galaxies in order to test the standard cosmological model and describe their range of validity. We carefully compare a sample of high-redshift, massive, galaxy clusters with realistic Poisson sample simulations of the theoretical mass function, which include the effect of Eddington bias. We compare the observations and simulations using the following statistical tests: the distributions of ensemble and individual existence probabilities (in the > M, > z sense), the redshift distributions, and the 2d Kolmogorov-Smirnov test. Using seemingly rare clusters from Hoyle et al. (2011) and Jee et al. (2011), and assuming the same survey geometry as in Jee et al. (2011, which is less conservative than Hoyle et al. 2011), we find that the (> M, > z) existence probabilities of all clusters are fully consistent with ΛCDM. However, assuming the same survey geometry, we use the 2d K-S test probability to show that the observed clusters are not consistent with being the least probable clusters from simulations at > 95% confidence, and are also not consistent with being a random selection of clusters, which may be caused by the non-trivial selection function and survey geometry. Tension can be removed if we examine only an X-ray selected subsample, with simulations performed assuming a modified survey geometry.
Karn, Robert C; Laukaitis, Christina M
2014-08-01
In the present article, we summarize two aspects of our work on mouse ABP (androgen-binding protein): (i) the sexual selection function producing incipient reinforcement on the European house mouse hybrid zone, and (ii) the mechanism behind the dramatic expansion of the Abp gene region in the mouse genome. Selection unifies these two components, although the ways in which selection has acted differ. At the functional level, strong positive selection has acted on key sites on the surface of one face of the ABP dimer, possibly to influence binding to a receptor. A different kind of selection has apparently driven the recent and rapid expansion of the gene region, probably by increasing the amount of Abp transcript, in one or both of two ways. We have shown previously that groups of Abp genes behave as LCRs (low-copy repeats), duplicating as relatively large blocks of genes by NAHR (non-allelic homologous recombination). The second type of selection involves the close link between the accumulation of L1 elements and the expansion of the Abp gene family by NAHR. It is probably predicated on an initial selection for increased transcription of existing Abp genes and/or an increase in Abp gene number providing more transcriptional sites. Either or both could increase initial transcript production, a quantitative change similar to increasing the volume of a radio transmission. In closing, we also provide a note on Abp gene nomenclature.
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms
NASA Astrophysics Data System (ADS)
Lässig, Jörg; Hoffmann, Karl Heinz
The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover to find a solution with high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending sequence by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population but probabilities equal to zero to individuals with lower fitness, assuming that the probability distribution to choose individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized also to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
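The derived strategy is simple to state in code: uniform selection probability over the k fittest individuals and zero elsewhere. The toy fitness function and the choice of k below are placeholders:

```python
# Rectangular (truncation) selection: pick one parent uniformly from the k
# fittest individuals, never from the rest.
import numpy as np

def rectangular_selection(population, fitness, k, rng):
    """Pick one parent uniformly from the k fittest individuals."""
    order = np.argsort(fitness)[::-1]        # sort descending by fitness
    winner = rng.choice(order[:k])           # uniform over the top-k group
    return population[winner]

rng = np.random.default_rng(0)
pop = rng.normal(size=(20, 5))               # 20 individuals, 5 genes each
fit = -np.sum(pop**2, axis=1)                # toy fitness: closer to 0 is better
parent = rectangular_selection(pop, fit, k=5, rng=rng)
print(parent)
```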
Advanced reliability methods for structural evaluation
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y.-T.
1985-01-01
Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
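A hedged sketch of the strategy: run the (expensive) response routine k times at perturbed inputs, fit an explicit approximating polynomial, and evaluate the small failure probability cheaply on the surrogate. Plain Monte Carlo stands in below for the FPI step, and the model and threshold are invented:

```python
# Response-surface sketch: k model runs -> polynomial fit -> cheap
# probability estimate on the surrogate.
import numpy as np

def expensive_model(x):                   # placeholder for a structural code
    return 3.0 + 0.8 * x[0] - 0.5 * x[1] + 0.1 * x[0] * x[1]

rng = np.random.default_rng(7)
X = rng.normal(size=(30, 2))              # k = 30 perturbed input sets
Y = np.array([expensive_model(x) for x in X])

# Design matrix for an approximating polynomial: 1, x1, x2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Probability that the response drops below a threshold, via the surrogate
U = rng.normal(size=(200_000, 2))
Yhat = coef[0] + coef[1] * U[:, 0] + coef[2] * U[:, 1] + coef[3] * U[:, 0] * U[:, 1]
print("P(Y < 0) ~", np.mean(Yhat < 0.0))
```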
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
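The Empirical Distribution Function estimate of POS amounts to counting Monte Carlo samples that land inside the acceptable criterion region. A toy two-criterion illustration with invented models and thresholds:

```python
# POS sketch: fraction of sampled designs meeting all criteria at once.
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
cost = rng.normal(100.0, 10.0, size=n)                       # criterion 1
range_nm = 3000.0 - 5.0 * (cost - 100.0) + rng.normal(0.0, 150.0, size=n)

# Acceptable region: cost <= 105 AND range >= 2900 (both thresholds invented)
pos = np.mean((cost <= 105.0) & (range_nm >= 2900.0))
print("Probability of Success ~", pos)
```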
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the Cross-Entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that implements a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates to during training, and the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates to after being trained to minimize the Cross-Entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection, maintaining the probability of false alarm below or equal to a predefined value. Some experiments about signal detection using neural networks are also presented to test the validity of the study.
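The detector structure can be sketched compactly: a machine trained with cross-entropy approximates the posterior P(target | x), and thresholding its output at the (1 - Pfa) quantile of target-absent outputs fixes the false-alarm probability. Logistic regression stands in for the neural network here, and all signal parameters are assumptions:

```python
# NP-style detector sketch: cross-entropy-trained discriminant + threshold
# chosen on noise-only data to fix the false-alarm rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, size=(5000, 4))          # H0: noise only
signal = rng.normal(0.7, 1.0, size=(5000, 4))         # H1: signal + noise
X = np.vstack([noise, signal])
y = np.r_[np.zeros(5000), np.ones(5000)]

machine = LogisticRegression().fit(X, y)              # cross-entropy training

pfa = 0.01
h0_scores = machine.predict_proba(noise)[:, 1]
threshold = np.quantile(h0_scores, 1.0 - pfa)         # fixes P(false alarm)

h1_scores = machine.predict_proba(signal)[:, 1]
print("P_fa ~", np.mean(h0_scores > threshold))
print("P_d  ~", np.mean(h1_scores > threshold))
```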
Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; 
Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-04-17
We report a measurement of the top-quark mass $M_t$ in the dilepton decay channel $t\bar{t} \to b\ell'^{+}\nu_{\ell'}\,\bar{b}\ell^{-}\bar{\nu}_{\ell}$. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 $\mathrm{fb}^{-1}$ of $p\bar{p}$ collisions collected with the CDF II detector, yielding a measurement of $M_t = 171.2 \pm 2.7(\mathrm{stat}) \pm 2.9(\mathrm{syst})~\mathrm{GeV}/c^2$.
Sheehan, David V; Mancini, Michele; Wang, Jianing; Berggren, Lovisa; Cao, Haijun; Dueñas, Héctor José; Yue, Li
2016-01-01
We compared functional impairment outcomes assessed with Sheehan Disability Scale (SDS) after treatment with duloxetine versus selective serotonin reuptake inhibitors (SSRIs) in patients with major depressive disorder. Data were pooled from four randomized studies comparing treatment with duloxetine and SSRIs (three double blind and one open label). Analysis of covariance, with last-observation-carried-forward approach for missing data, explored treatment differences between duloxetine and SSRIs on SDS changes during 8 to 12 weeks of acute treatment for the intent-to-treat population. Logistic regression analysis examined the predictive capacity of baseline patient characteristics for remission in functional impairment (SDS total score ≤ 6 and SDS item scores ≤ 2) at endpoint. Included were 2193 patients (duloxetine n = 1029; SSRIs n = 835; placebo n = 329). Treatment with duloxetine and SSRIs resulted in significantly (p < 0.01) greater improvements in the SDS total score versus treatment with placebo. Higher SDS (p < 0.0001) or 17-item Hamilton Depression Rating Scale baseline scores (p < 0.01) predicted lower probability of functional improvement after treatment with duloxetine or SSRIs. Female gender (p ≤ 0.05) predicted higher probability of functional improvement after treatment with duloxetine or SSRIs. Treatment with SSRIs and duloxetine improved functional impairment in patients with major depressive disorder. Higher SDS or 17-item Hamilton Depression Rating Scale baseline scores predicted less probability of SDS improvement; female gender predicted better improvement in functional impairment at endpoint. © 2015 The Authors. Human Psychopharmacology: Clinical and Experimental published by John Wiley & Sons, Ltd.
Reward and uncertainty in exploration programs
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1971-01-01
A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
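The simulation loop described above is easy to sketch: draw the exploration variables independently from postulated distributions, compute the economic outcome per trial, and accumulate the resulting histogram. All distributions and economic constants below are invented:

```python
# Monte Carlo sketch of exploration economics under independence assumptions.
import numpy as np

rng = np.random.default_rng(11)
trials = 20_000
wells, p_success = 10, 0.15
successes = rng.binomial(wells, p_success, size=trials)

# Lognormal reservoir sizes (million bbl) per discovery, summed per trial
discovered = np.array([rng.lognormal(mean=2.0, sigma=1.0, size=k).sum()
                       for k in successes])

price, unit_cost, well_cost = 10.0, 4.0, 5.0   # $/bbl, $/bbl, $MM per well
net_return = (price - unit_cost) * discovered - well_cost * wells

print("P(net return < 0) ~", np.mean(net_return < 0.0))
print("mean net return ($MM):", round(net_return.mean(), 1))
```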
An extended car-following model considering random safety distance with different probabilities
NASA Astrophysics Data System (ADS)
Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi
2018-02-01
Because of differences in vehicle type or driving skill, driving strategies are not exactly the same. The driving speeds of different vehicles may differ for the same headway. Since the optimal velocity function is determined by the safety distance in addition to the maximum velocity and headway, an extended car-following model accounting for random safety distances with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function. The cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
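A sketch of the model's key ingredient, assuming illustrative parameter values: an optimal velocity function whose safety distance is drawn per vehicle from several candidate values with given probabilities, so different vehicles respond differently to the same headway:

```python
# Optimal velocity function with randomly selected safety distances.
import numpy as np

rng = np.random.default_rng(5)
v_max = 2.0
safety_distances = np.array([3.0, 4.0, 5.0])     # candidate safety distances
probs = np.array([0.2, 0.5, 0.3])                # selection probabilities

def optimal_velocity(headway, h_safe, v_max=v_max):
    return (v_max / 2.0) * (np.tanh(headway - h_safe) + np.tanh(h_safe))

n_cars = 50
h_safe = rng.choice(safety_distances, size=n_cars, p=probs)
headway = np.full(n_cars, 4.0)                   # identical headways
print("desired speeds:", optimal_velocity(headway, h_safe)[:5])
```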
A comparison of two modeling approaches for evaluating wildlife--habitat relationships
Ryan A. Long; Jonathan D. Muir; Janet L. Rachlow; John G. Kie
2009-01-01
Studies of resource selection form the basis for much of our understanding of wildlife habitat requirements, and resource selection functions (RSFs), which predict relative probability of use, have been proposed as a unifying concept for analysis and interpretation of wildlife habitat data. Logistic regression that contrasts used and available or unused resource units...
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of the probability distributions of modal parameters. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of modes considered in our context. In this respect, a mode-selection preprocessing step is proposed; it allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared: the first is based on the context of the study, whereas the second is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.
MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, A
2016-06-15
Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: The target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97% which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.
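A one-dimensional toy version of the voxel-selection rule (invented geometry and a simplistic quality measure): displace the target under sampled setup errors, keep the best x% of scenarios, and take the union of the displaced target voxels:

```python
# Voxel-selection sketch: union of displaced targets over best scenarios.
import numpy as np

rng = np.random.default_rng(9)
target = np.arange(45, 56)                 # CTV voxel indices on a 1-D grid
shifts = rng.normal(0.0, 2.0, size=200).round().astype(int)   # setup errors

quality = -np.abs(shifts)                  # toy measure: small shift = better
best = np.argsort(quality)[::-1][:int(0.9 * len(shifts))]     # best 90%

voxels = np.unique(np.concatenate([target + s for s in shifts[best]]))
print("voxels entering the coverage objective:", voxels)
```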
Kim, Yusung; Tomé, Wolfgang A
2008-01-01
Voxel based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated, to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT did yield a more uniform isodose map to the entire PTV while selective boosting did result in a nonuniform isodose map. However, when employing voxel based iso-TCP maps selective boosting exhibited a more uniform tumor control probability map compared to what could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies are discussed. We believe as the need for functional image guided treatment planning grows, voxel based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach can support dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
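For orientation, here is the classical one-sided spectral-representation simulator that the paper builds on; the paper's contribution, constraining the random phases through random functions of two elementary variables, is not reproduced here, and the PSD is a toy choice:

```python
# Classical spectral representation of a stationary process:
# X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k), with uniform phases.
import numpy as np

rng = np.random.default_rng(8)

def spectral_sim(S, omegas, t, rng):
    d_omega = omegas[1] - omegas[0]
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(omegas))
    amps = np.sqrt(2.0 * S(omegas) * d_omega)
    return np.sum(amps[:, None] * np.cos(omegas[:, None] * t[None, :]
                                         + phases[:, None]), axis=0)

S = lambda w: 1.0 / (1.0 + w**2)              # toy power spectral density
omegas = np.linspace(0.01, 10.0, 500)
t = np.linspace(0.0, 60.0, 3000)
x = spectral_sim(S, omegas, t, rng)
# Var[X] should approach the integral of S over the frequency band
print("sample variance:", round(x.var(), 3),
      " target:", round(np.sum(S(omegas)) * (omegas[1] - omegas[0]), 3))
```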
Item Selection and Pre-equating with Empirical Item Characteristic Curves.
ERIC Educational Resources Information Center
Livingston, Samuel A.
An empirical item characteristic curve shows the probability of a correct response as a function of the student's total test score. These curves can be estimated from large-scale pretest data. They enable test developers to select items that discriminate well in the score region where decisions are made. A similar set of curves can be used to…
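An empirical ICC is just a binned conditional proportion; a sketch with simulated Rasch-like pretest data (all parameters invented):

```python
# Empirical item characteristic curve: P(correct on item 0 | total score bin).
import numpy as np

rng = np.random.default_rng(2)
n_students, n_items = 5000, 40
ability = rng.normal(size=n_students)
difficulty = rng.normal(size=n_items)
# Rasch-like response simulation
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = rng.random((n_students, n_items)) < p_correct

total = responses.sum(axis=1)
item = responses[:, 0]
for lo in range(0, 40, 5):                       # score bins of width 5
    mask = (total >= lo) & (total < lo + 5)
    if mask.any():
        print(f"score {lo:2d}-{lo+4:2d}: P(correct) = {item[mask].mean():.2f}")
```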
Quasar probabilities and redshifts from WISE mid-IR through GALEX UV photometry
NASA Astrophysics Data System (ADS)
DiPompeo, M. A.; Bovy, J.; Myers, A. D.; Lang, D.
2015-09-01
Extreme deconvolution (XD) of broad-band photometric data can both separate stars from quasars and generate probability density functions for quasar redshifts, while incorporating flux uncertainties and missing data. Mid-infrared photometric colours are now widely used to identify hot dust intrinsic to quasars, and the release of all-sky WISE data has led to a dramatic increase in the number of IR-selected quasars. Using forced photometry on public WISE data at the locations of Sloan Digital Sky Survey (SDSS) point sources, we incorporate this all-sky data into the training of the XDQSOz models originally developed to select quasars from optical photometry. The combination of WISE and SDSS information is far more powerful than SDSS alone, particularly at z > 2. The use of SDSS+WISE photometry is comparable to the use of SDSS+ultraviolet+near-IR data. We release a new public catalogue of 5,537,436 (total; 3,874,639 weighted by probability) potential quasars with probability P_QSO > 0.2. The catalogue includes redshift probabilities for all objects. We also release an updated version of the publicly available set of codes to calculate quasar and redshift probabilities for various combinations of data. Finally, we demonstrate that this method of selecting quasars using WISE data is both more complete and efficient than simple WISE colour-cuts, especially at high redshift. Our fits verify that above z ~ 3 WISE colours become bluer than the standard cuts applied to select quasars. Currently, the analysis is limited to quasars with optical counterparts, and thus cannot be used to find highly obscured quasars that WISE colour-cuts identify in significant numbers.
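The probabilistic classification step can be illustrated in miniature (a toy Bayes-rule stand-in, not the XDQSOz code): model each class's colour distribution, weight by number-count priors, and normalize. All densities and priors below are invented:

```python
# Toy star/quasar classification: P(QSO | colours) via Bayes' rule.
import numpy as np
from scipy.stats import multivariate_normal as mvn

f_qso = mvn(mean=[0.2, 0.6], cov=[[0.10, 0.02], [0.02, 0.15]])
f_star = mvn(mean=[1.0, 0.3], cov=[[0.05, 0.00], [0.00, 0.05]])
prior_qso = 0.05                                   # quasars are rare

def p_qso(colours):
    num = prior_qso * f_qso.pdf(colours)
    den = num + (1.0 - prior_qso) * f_star.pdf(colours)
    return num / den

print(p_qso([0.25, 0.55]))   # likely quasar
print(p_qso([1.00, 0.30]))   # likely star
```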
Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to obtain the different subregions' weights, which are then used to weight the subregions' matching scores and generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Total 30-year data of wave height, wind speed, and current velocity in the Bohai Sea are hindcast and sampled for case study. Four kinds of distributions, namely, Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide close design parameters to actual sea state for ocean platform design.
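A sketch of how a fitted Archimedean copula yields joint design return periods, with illustrative (not fitted) marginal probabilities and copula parameter, and assuming annual maxima so periods come out in years:

```python
# Bivariate joint return periods from a Gumbel-Hougaard copula.
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

u = 0.98          # marginal non-exceedance prob. of the design wave height
v = 0.98          # marginal non-exceedance prob. of the design wind speed
theta = 2.5       # dependence strength (fitted from data in practice)

C = gumbel_copula(u, v, theta)
t_or = 1.0 / (1.0 - C)                    # either variable exceeded
t_and = 1.0 / (1.0 - u - v + C)           # both variables exceeded
print(f"'OR' return period:  {t_or:.0f} yr")
print(f"'AND' return period: {t_and:.0f} yr")
```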
Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.
Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G
2017-05-01
We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally (with perceptual stimulus differences controlled) while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female-aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.
Roff, D A; Crnokrak, P; Fairbairn, D J
2003-07-01
Quantitative genetic theory assumes that trade-offs are best represented by bivariate normal distributions. This theory predicts that selection will shift the trade-off function itself and not just move the mean trait values along a fixed trade-off line, as is generally assumed in optimality models. As a consequence, quantitative genetic theory predicts that the trade-off function will vary among populations in which at least one of the component traits itself varies. This prediction is tested using the trade-off between call duration and flight capability, as indexed by the mass of the dorsolateral flight muscles, in the macropterous morph of the sand cricket. We use four different populations of crickets that vary in the proportion of macropterous males (Lab = 33%, Florida = 29%, Bermuda = 72%, South Carolina = 80%). We find, as predicted, that there is significant variation in the intercept of the trade-off function but not the slope, supporting the hypothesis that trade-off functions are better represented as bivariate normal distributions rather than single lines. We also test the prediction from a quantitative genetical model of the evolution of wing dimorphism that the mean call duration of macropterous males will increase with the percentage of macropterous males in the population. This prediction is also supported. Finally, we estimate the probability of a macropterous male attracting a female, P, as a function of the relative time spent calling (P = time spent calling by the macropterous male/(total time spent calling by both micropterous and macropterous males)). We find that in the Lab and Florida populations the probability of a female selecting the macropterous male is equal to P, indicating that preference is due simply to relative call duration. But in the Bermuda and South Carolina populations the probability of a female selecting a macropterous male is less than P, indicating a preference for the micropterous male even after differences in call duration are accounted for.
An investigation into the probabilistic combination of quasi-static and random accelerations
NASA Technical Reports Server (NTRS)
Schock, R. W.; Tuell, L. P.
1984-01-01
The development of design load factors for aerospace and aircraft components and experiment support structures, which are subject to simultaneous vehicle dynamic vibration (quasi-static) and acoustically generated random vibration, requires the selection of a combination methodology. Typically, the procedure is to define the quasi-static and the randomly generated responses separately, and arithmetically add or root-sum-square them to get combined accelerations. Since the combination of a probabilistic and a deterministic function yields a probabilistic function, a viable alternative approach is to determine the characteristics of the combined acceleration probability density function and select an appropriate percentile level for the combined acceleration. This paper develops this mechanism and provides graphical data for selecting combined accelerations at the most popular percentile levels.
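A minimal numerical illustration of the percentile approach (not the paper's derivation): an assumed deterministic quasi-static acceleration is added to a zero-mean Gaussian random response, the combined distribution's percentile is read off, and the traditional 3-sigma root-sum-square value is shown for comparison. All parameter values are illustrative.

```python
# Sketch: percentile of a combined quasi-static + Gaussian-random acceleration.
import numpy as np
from scipy.stats import norm

a_qs = 5.0    # quasi-static acceleration, g (treated as deterministic)
sigma = 2.0   # RMS of the random vibration response, g
p = 0.9987    # target one-sided percentile (~3-sigma)

# Deterministic + Gaussian gives a Gaussian shifted by the deterministic part.
exact = a_qs + norm.ppf(p) * sigma

# Monte Carlo check of the same percentile.
rng = np.random.default_rng(1)
combined = a_qs + rng.normal(0.0, sigma, size=1_000_000)
mc = np.quantile(combined, p)

# Traditional root-sum-square combination for comparison.
rss = np.sqrt(a_qs**2 + (3.0 * sigma) ** 2)
print(f"percentile method: {exact:.2f} g (MC {mc:.2f} g); RSS: {rss:.2f} g")
```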
Fixation probability of a nonmutator in a large population of asexual mutators.
Jain, Kavita; James, Ananthu
2017-11-21
In an adapted population of mutators in which most mutations are deleterious, a nonmutator that lowers the mutation rate is under indirect selection and can sweep to fixation. Using a multitype branching process, we calculate the fixation probability of a rare nonmutator in a large population of asexual mutators. We show that when beneficial mutations are absent, the fixation probability is a nonmonotonic function of the mutation rate of the mutator: it first increases sublinearly and then decreases exponentially. We also find that beneficial mutations can enhance the fixation probability of a nonmutator. Our analysis is relevant to an understanding of recent experiments in which a reduction in the mutation rates has been observed. Copyright © 2017 Elsevier Ltd. All rights reserved.
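For intuition, the fixed-point structure of such calculations can be sketched in a single-type simplification (the paper itself uses a multitype process, so this is only a schematic stand-in): with Poisson offspring of mean 1 + s, the survival probability of the rare lineage solves pi = 1 - G(1 - pi), where G is the offspring probability generating function.

```python
# Sketch: fixation (survival) probability of a rare beneficial lineage in a
# single-type branching process with Poisson offspring of mean w = 1 + s.
import math

def fixation_probability(s, tol=1e-12, max_iter=100_000):
    if s <= 0:
        return 0.0        # in this simple sketch only beneficial lineages fix
    w = 1.0 + s
    pi = s                # seed near Haldane's classic ~2s approximation
    for _ in range(max_iter):
        new = 1.0 - math.exp(-w * pi)   # pi = 1 - G(1 - pi) for a Poisson PGF
        if abs(new - pi) < tol:
            return new
        pi = new
    return pi

for s in (0.01, 0.05, 0.10):
    print(f"s = {s}: pi = {fixation_probability(s):.4f} (2s = {2*s:.2f})")
```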
1981-02-01
...monotonic increasing function of true ability or performance score. A cumulative probability function is then very convenient for describing one's ... possible outcomes such as test scores, grade-point averages or other common outcome variables. Utility is usually a monotonic increasing function of true ... Since r(θ) is negative for θ < μ and positive for θ > μ, U(θ) is risk-prone for low θ values and risk-averse for high θ values. This property is true for...
Nonparametric variational optimization of reaction coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk
State of the art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered as the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form and thus avoiding this source of error.
Weiner, Kevin S; Barnett, Michael A; Witthoft, Nathan; Golarai, Golijeh; Stigliani, Anthony; Kay, Kendrick N; Gomez, Jesse; Natu, Vaidehi S; Amunts, Katrin; Zilles, Karl; Grill-Spector, Kalanit
2018-04-15
The parahippocampal place area (PPA) is a widely studied high-level visual region in the human brain involved in place and scene processing. The goal of the present study was to identify the most probable location of place-selective voxels in medial ventral temporal cortex. To achieve this goal, we first used cortex-based alignment (CBA) to create a probabilistic place-selective region of interest (ROI) from one group of 12 participants. We then tested how well this ROI could predict place selectivity in each hemisphere within a new group of 12 participants. Our results reveal that a probabilistic ROI (pROI) generated from one group of 12 participants accurately predicts the location and functional selectivity in individual brains from a new group of 12 participants, despite between-subject variability in the exact location of place-selective voxels relative to the folding of parahippocampal cortex. Additionally, the prediction accuracy of our pROI is significantly higher than that achieved by volume-based Talairach alignment. Comparing the location of the pROI of the PPA relative to published data from over 500 participants, including data from the Human Connectome Project, shows a striking convergence of the predicted location of the PPA and the cortical location of voxels exhibiting the highest place selectivity across studies using various methods and stimuli. Specifically, the most predictive anatomical location of voxels exhibiting the highest place selectivity in medial ventral temporal cortex is the junction of the collateral and anterior lingual sulci. Methodologically, we make this pROI freely available (vpnl.stanford.edu/PlaceSelectivity), which provides a means to accurately identify a functional region from anatomical MRI data when fMRI data are not available (for example, in patient populations). Theoretically, we consider different anatomical and functional factors that may contribute to the consistent anatomical location of place selectivity relative to the folding of high-level visual cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
Wiehe, Kevin; Bradley, Todd; Meyerhoff, R Ryan; Hart, Connor; Williams, Wilton B; Easterhoff, David; Faison, William J; Kepler, Thomas B; Saunders, Kevin O; Alam, S Munir; Bonsignori, Mattia; Haynes, Barton F
2018-06-13
HIV-1 broadly neutralizing antibodies (bnAbs) require high levels of activation-induced cytidine deaminase (AID)-catalyzed somatic mutations for optimal neutralization potency. Probable mutations occur at sites of frequent AID activity, while improbable mutations occur where AID activity is infrequent. One bottleneck for induction of bnAbs is the evolution of viral envelopes (Envs) that can select bnAb B cell receptors (BCR) with improbable mutations. Here we define the probability of bnAb mutations and demonstrate the functional significance of key improbable mutations in three bnAb B cell lineages. We show that bnAbs are enriched for improbable mutations, which implies that their elicitation will be critical for successful vaccine induction of potent bnAb B cell lineages. We discuss a mutation-guided vaccine strategy for identification of Envs that can select B cells with BCRs that have key improbable mutations required for bnAb development. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Winter habitat selection of mule deer before and during development of a natural gas field
Sawyer, H.; Nielson, R.M.; Lindzey, F.; McDonald, L.L.
2006-01-01
Increased levels of natural gas exploration, development, and production across the Intermountain West have created a variety of concerns for mule deer (Odocoileus hemionus) populations, including direct habitat loss to road and well-pad construction and indirect habitat losses that may occur if deer use declines near roads or well pads. We examined winter habitat selection patterns of adult female mule deer before and during the first 3 years of development in a natural gas field in western Wyoming. We used global positioning system (GPS) locations collected from a sample of adult female mule deer to model relative frequency or probability of use as a function of habitat variables. Model coefficients and predictive maps suggested mule deer were less likely to occupy areas in close proximity to well pads than those farther away. Changes in habitat selection appeared to be immediate (i.e., year 1 of development), and no evidence of well-pad acclimation occurred through the course of the study; rather, mule deer selected areas farther from well pads as development progressed. Lower predicted probabilities of use within 2.7 to 3.7 km of well pads suggested indirect habitat losses may be substantially larger than direct habitat losses. Additionally, some areas classified as high probability of use by mule deer before gas field development changed to areas of low use following development, and others originally classified as low probability of use were used more frequently as the field developed. If areas with high probability of use before development were those preferred by the deer, observed shifts in their distribution as development progressed were toward less-preferred and presumably less-suitable habitats.
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
Chemical Selectivity and Sensitivity of a 16-Channel Electronic Nose for Trace Vapour Detection
Strle, Drago; Trifkovič, Mario; Van Miden, Marion; Kvasić, Ivan; Zupanič, Erik; Muševič, Igor
2017-01-01
Good chemical selectivity of sensors for detecting vapour traces of targeted molecules is vital to reliable detection systems for explosives and other harmful materials. We present the design, construction and measurements of the electronic response of a 16-channel electronic nose based on 16 differential microcapacitors, which were surface-functionalized by different silanes. The e-nose detects less than 1 molecule of TNT out of 10^12 N2 molecules in a carrier gas in 1 s. Differently silanized sensors give different responses to different molecules. Electronic responses are presented for TNT, RDX, DNT, H2S, HCN, FeS, NH3, propane, methanol, acetone, ethanol, methane, toluene and water. We consider the number density of these molecules and find that silane surfaces show extreme affinity for attracting molecules of TNT, DNT and RDX. The probability to bind these molecules and form a surface-adsorbate is typically 10^7 times larger than the probability to bind water molecules, for example. We present a matrix of responses of differently functionalized microcapacitors and we propose that the chemical selectivity of a multichannel e-nose could be enhanced by using artificial intelligence deep learning methods. PMID:29292764
Traffic routing for multicomputer networks with virtual cut-through capability
NASA Technical Reports Server (NTRS)
Kandlur, Dilip D.; Shin, Kang G.
1992-01-01
Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.
Hutchings, Jeffrey A
2009-08-01
I examined how the fitness (r) associated with early- and late-maturing genotypes varies with fishing mortality (F) and age-/size-specific probability of capture. Life-history data on Newfoundland's northern Atlantic cod (Gadus morhua) allowed for the estimation of r for individuals maturing at 4 and 7 years in the absence of fishing. Catch selectivity data associated with four types of fishing gear (trap, gillnet, handline, otter trawl) were then incorporated to examine how r varied with gear type and with F. The resulting fitness functions were then used to estimate the F above which selection would favour early (4 years) rather than delayed (7 years) maturity. This evolutionarily-sensitive threshold, F_evol, identifies a limit reference point somewhat similar to those used to define overfishing (e.g., F_msy, F_0.1). Over-exploitation of northern cod resulted in fishing mortalities considerably greater than those required to effect evolutionary change. Selection for early maturity is reduced by the dome-shaped selectivities characteristic of fixed gears such as handlines (the greater the leptokurtosis, the lower the probability of a selection response) and enhanced by the knife-edged selectivities of bottom trawls. Strategies to minimize genetic change are consistent with traditional management objectives (e.g., yield maximization, population increase). Compliance with harvest control rules guided by evolutionarily-sensitive limit reference points, which may be achieved by adherence to traditional reference points such as F_msy and F_0.1, should be sufficient to minimize the probability of fisheries-induced evolution for commercially exploited species.
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
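For readers unfamiliar with Elderton's machinery, the sketch below applies the method of moments to a synthetic sample (a scaled beta draw standing in for normalized rain rates) and evaluates Pearson's criterion kappa = b1(b2+3)^2 / (4(4*b2 - 3*b1)(2*b2 - 3*b1 - 6)), whose negative sign locates the Type 1 region. The data and parameters are illustrative assumptions.

```python
# Sketch: method-of-moments curve selection via Pearson's criterion.
import numpy as np

rng = np.random.default_rng(0)
x = 25.0 * rng.beta(2.0, 5.0, size=5000)   # stand-in for scaled rain rates

m = x.mean()
mu2, mu3, mu4 = (((x - m) ** k).mean() for k in (2, 3, 4))
b1 = mu3**2 / mu2**3    # squared skewness
b2 = mu4 / mu2**2       # kurtosis

kappa = b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))
print(f"beta1 = {b1:.3f}, beta2 = {b2:.3f}, kappa = {kappa:.3f}")
print("Pearson Type 1 region" if kappa < 0 else "outside the Type 1 region")
```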
Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Thomas A.
2015-01-01
Prospective pregnancy studies are a valuable source of longitudinal data on menstrual cycle length. However, care is needed when making inferences of such renewal processes. For example, accounting for the sampling plan is necessary for unbiased estimation of the menstrual cycle length distribution for the study population. If couples can enroll when they learn of the study as opposed to waiting for the start of a new menstrual cycle, then due to length-bias, the enrollment cycle will be stochastically larger than the general run of cycles, a typical property of prevalent cohort studies. Furthermore, the probability of enrollment can depend on the length of time since a woman’s last menstrual period (a backward recurrence time), resulting in selection effects. We focus on accounting for length-bias and selection effects in the likelihood for enrollment menstrual cycle length, using a recursive two-stage approach wherein we first estimate the probability of enrollment as a function of the backward recurrence time and then use it in a likelihood with sampling weights that account for length-bias and selection effects. To broaden the applicability of our methods, we augment our model to incorporate a couple-specific random effect and time-independent covariate. A simulation study quantifies performance for two scenarios of enrollment probability when proper account is taken of sampling plan features. In addition, we estimate the probability of enrollment and the distribution of menstrual cycle length for the study population of the Longitudinal Investigation of Fertility and the Environment Study. PMID:25027273
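A toy demonstration of the length-bias mechanism described above (not the paper's two-stage estimator): if enrollment lands in a cycle with probability proportional to that cycle's length, the sampled enrollment cycles are stochastically larger than the general run of cycles. The lognormal cycle-length model is an assumption.

```python
# Sketch: length-biased sampling makes enrollment cycles look longer.
import numpy as np

rng = np.random.default_rng(7)
cycles = rng.lognormal(mean=np.log(29.0), sigma=0.15, size=100_000)  # days

# Pick "enrollment" cycles with probability proportional to cycle length.
weights = cycles / cycles.sum()
enrollment = rng.choice(cycles, size=10_000, p=weights)

print(f"typical cycle mean:    {cycles.mean():.2f} days")
print(f"enrollment cycle mean: {enrollment.mean():.2f} days (length-biased)")
```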
Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A
2010-01-01
Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825
Saccade selection when reward probability is dynamically manipulated using Markov chains.
Nummela, Samuel U; Lovejoy, Lee P; Krauzlis, Richard J
2008-05-01
Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200-600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection.
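The task's structure is easy to mimic in simulation. In the sketch below the rewarded stimulus evolves according to an assumed four-state transition matrix, and a probability-matching chooser is compared with one that maximizes the conditional reward probability, mirroring the matching-versus-maximizing contrast in the abstract.

```python
# Sketch: reward assignment by a Markov chain; matching vs. maximizing.
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[0.1, 0.6, 0.2, 0.1],     # P(next rewarded stimulus | previous)
              [0.1, 0.1, 0.6, 0.2],
              [0.2, 0.1, 0.1, 0.6],
              [0.6, 0.2, 0.1, 0.1]])

def run(policy, n_trials=20_000):
    prev, hits = 0, 0
    for _ in range(n_trials):
        rewarded = rng.choice(4, p=T[prev])
        hits += int(policy(prev) == rewarded)
        prev = rewarded          # feedback reveals the rewarded stimulus
    return hits / n_trials

matching = lambda prev: rng.choice(4, p=T[prev])   # match the probabilities
maximizing = lambda prev: int(np.argmax(T[prev]))  # always choose the mode

print(f"matching:   {run(matching):.3f}")   # approx sum_j T[i, j]**2
print(f"maximizing: {run(maximizing):.3f}") # approx max_j T[i, j] = 0.6
```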
Abductor pollicis longus: a case of mistaken identity.
Elliott, B G
1992-08-01
Abductor pollicis longus, long regarded as a motor for the thumb, is anatomically and functionally a radial deviator of the wrist and should be so named. The name abductor carpi is proposed. If the other radial deviators of the wrist are acting, this tendon can be selectively utilized as a transfer without loss of function. Reflex spasm of this muscle probably plays an important role in the radial deviation deformity seen in the rheumatoid hand.
Skier triggering of backcountry avalanches with skilled route selection
NASA Astrophysics Data System (ADS)
Sinickas, Alexandra; Haegeli, Pascal; Jamieson, Bruce
2015-04-01
Jamieson (2009) provided numerical estimates for the baseline probabilities of triggering an avalanche by a backcountry skier making fresh tracks without skilled route selection as a function of the North American avalanche danger scale (i.e., hazard levels Low, Moderate, Considerable, High and Extreme). Using the results of an expert survey, he showed that triggering probabilities while skiing directly up, down or across a trigger zone without skilled route selection increase roughly by a factor of 10 with each step of the North American avalanche danger scale (i.e. hazard level). The objective of the present study is to examine the effect of skilled route selection on the relationship between triggering probability and hazard level. To assess this effect, we analysed avalanche hazard assessments as well as reports of skiing activity and triggering of avalanches from 11 Canadian helicopter and snowcat operations during two winters (2012-13 and 2013-14). These reports were submitted to the daily information exchange among Canadian avalanche safety operations, and reflect professional decision-making and route selection practices of guides leading groups of skiers. We selected all skier-controlled or accidentally triggered avalanches with a destructive size greater than size 1 according to the Canadian avalanche size classification, triggered by any member of a guided group (guide or guest). These operations forecast the avalanche hazard daily for each of three elevation bands: alpine, treeline and below treeline. In contrast to the 2009 study, an exposure was defined as a group skiing within any one of the three elevation bands, and consequently within a hazard rating, for the day (~4,300 ratings over two winters). For example, a group that skied below treeline (rated Moderate) and at treeline (rated Considerable) in one day would receive one count for exposure to Moderate hazard and one count for exposure to Considerable hazard. While the absolute values for triggering probability cannot be compared to the 2009 study because of different definitions of exposure, our preliminary results suggest that with skilled route selection the triggering probability is similar at all hazard levels, except for Extreme, for which there are few exposures. This means that the guiding teams of backcountry skiing operations effectively control the hazard of triggering avalanches through skilled route selection. Groups were exposed relatively evenly to Low hazard (1275 times or 29% of total exposure), Moderate hazard (1450 times or 33%) and Considerable hazard (1215 times or 28%). At higher levels, the exposure dropped to roughly 380 times (9% of total exposure) for High hazard, and only 13 times (0.3%) for Extreme hazard. We assess the sensitivity of the results to some of our key assumptions.
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method that selects an appropriate probability model for given data. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than many alternatives. In this study, we focus on PPCC tests for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte-Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte-Carlo Simulation ACKNOWLEDGEMENTS This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
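The PPCC statistic itself is straightforward: the correlation between the ordered sample and the candidate distribution's quantiles evaluated at plotting positions. The sketch below uses the Gringorten formula as a generic stand-in (the formulas compared in the study additionally involve the GEV shape parameter), with illustrative data.

```python
# Sketch: PPCC of a sample against a GEV with a hypothesized shape parameter.
import numpy as np
from scipy.stats import genextreme

def gev_ppcc(x, shape, a=0.44):
    x = np.sort(x)
    n = len(x)
    p = (np.arange(1, n + 1) - a) / (n + 1 - 2 * a)  # Gringorten positions
    q = genextreme.ppf(p, shape)                     # standard GEV quantiles
    return np.corrcoef(x, q)[0, 1]

rng = np.random.default_rng(3)
sample = genextreme.rvs(-0.1, loc=100, scale=25, size=50, random_state=rng)
print(f"PPCC at true shape:  {gev_ppcc(sample, -0.1):.4f}")   # near 1
print(f"PPCC at wrong shape: {gev_ppcc(sample, 0.4):.4f}")    # lower -> reject
```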
N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models
Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.
2018-01-01
Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographic variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
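At the core of an N-mixture model is a marginal likelihood that sums over the latent abundance at each site. A minimal binomial-Poisson sketch for one site with repeated counts, using toy data and an assumed truncation bound K, is shown below; in practice the Poisson mean is linked to habitat covariates and detection to surveyor effects, and the summed site log-likelihoods are maximized.

```python
# Sketch: binomial-Poisson N-mixture marginal likelihood for one site.
import numpy as np
from scipy.stats import binom, poisson
from scipy.special import logsumexp

def site_log_likelihood(y, lam, p, K=200):
    """Counts y ~ Binomial(N, p) with latent N ~ Poisson(lam), N summed out."""
    Ns = np.arange(max(y), K + 1)        # abundance must cover the max count
    log_prior = poisson.logpmf(Ns, lam)
    log_obs = sum(binom.logpmf(yt, Ns, p) for yt in y)
    return logsumexp(log_prior + log_obs)

y = [12, 9, 14]   # three repeat surveys of one habitat unit (toy data)
print(site_log_likelihood(y, lam=20.0, p=0.6))
```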
Probability of success for phase III after exploratory biomarker analysis in phase II.
Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver
2017-05-01
The probability of success, or average power, describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to get the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
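The probability-of-success calculation described here amounts to averaging phase III power over a distribution of the treatment effect centered on the phase II estimate. A minimal sketch for a one-sided z-test, with illustrative effect sizes and standard errors (and no subgroup selection step), follows:

```python
# Sketch: probability of success = power averaged over a phase II prior.
import numpy as np
from scipy.stats import norm

theta_hat, se2 = 0.25, 0.12   # phase II effect estimate and standard error
se3 = 0.07                    # phase III standard error (fixes its power)
z_alpha = norm.ppf(0.975)

def power(theta):
    return norm.sf(z_alpha - theta / se3)   # one-sided z-test power

rng = np.random.default_rng(0)
thetas = rng.normal(theta_hat, se2, size=1_000_000)
print(f"power at theta_hat:     {power(theta_hat):.3f}")
print(f"probability of success: {power(thetas).mean():.3f}")  # typically lower
```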
NASA Astrophysics Data System (ADS)
Bura, E.; Zhmurov, A.; Barsegov, V.
2009-01-01
Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes, allow quantification of the forced unfolding and unbinding transitions, and mapping of the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of protein unfolding and unbinding, requires the use of a bandwidth. The choice of this quantity is typically subjective, as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the "optimal bandwidth" for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal skewed, long-tailed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
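Two standard selectors in the spirit of those discussed can be sketched compactly: Silverman's rule of thumb and a leave-one-out likelihood cross-validation over a grid. The bimodal synthetic "force" sample and the grid bounds are assumptions.

```python
# Sketch: two bandwidth selectors for a Gaussian kernel density estimate.
import numpy as np

rng = np.random.default_rng(2)
forces = np.concatenate([rng.normal(120, 15, 400), rng.normal(180, 10, 100)])

def silverman(x):
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(x.std(ddof=1), iqr / 1.34) * len(x) ** (-0.2)

def loo_log_likelihood(x, h):
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2 * h * h)) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(K, 0.0)                       # leave one out
    return np.log(K.sum(axis=1) / (len(x) - 1)).sum()

grid = np.linspace(2.0, 20.0, 40)
h_cv = grid[np.argmax([loo_log_likelihood(forces, h) for h in grid])]
print(f"Silverman: {silverman(forces):.2f}, cross-validation: {h_cv:.2f}")
```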
More attention when speaking: does it help or does it hurt?
Nozari, Nazbanou; Thompson-Schill, Sharon L.
2013-01-01
Paying selective attention to a word in a multi-word utterance results in a decreased probability of error on that word (benefit), but an increased probability of error on the other words (cost). We ask whether excitation of the prefrontal cortex helps or hurts this cost. One hypothesis (the resource hypothesis) predicts a decrease in the cost due to the deployment of more attentional resources, while another (the focus hypothesis) predicts even greater costs due to further fine-tuning of selective attention. Our results are more consistent with the focus hypothesis: prefrontal stimulation caused a reliable increase in the benefit and a marginal increase in the cost of selective attention. To ensure that the effects are due to changes to the prefrontal cortex, we provide two checks: We show that the pattern of results is quite different if, instead, the primary motor cortex is stimulated. We also show that the stimulation-related benefits in the verbal task correlate with the stimulation-related benefits in an N-back task, which is known to tap into a prefrontal function. Our results shed light on how selective attention affects language production, and more generally, on how selective attention affects production of a sequence over time. PMID:24012690
Multistage classification of multispectral Earth observational data: The design approach
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Muasher, M. J.; Landgrebe, D. A.
1981-01-01
An algorithm is proposed which predicts the optimal features at every node in a binary tree procedure. The algorithm estimates the probability of error by approximating the area under the likelihood ratio function for two classes and taking into account the number of training samples used in estimating each of these two classes. Some results on feature selection techniques, particularly in the presence of a very limited set of training samples, are presented. Probabilities of error predicted by the proposed algorithm as a function of dimensionality are compared with experimental observations for aircraft and LANDSAT data. Results are obtained for both real and simulated data. Finally, two binary tree examples which use the algorithm are presented to illustrate the usefulness of the procedure.
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, resistance to trapping in false minima, and long-term optimization.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most physical properties of the earth follow a power-law (fractal) distribution. Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, which uses the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
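One common way to draw such power-law initial models is spectral synthesis: impose a 1/f^(beta/2) decay on the Fourier amplitudes with random phases, with beta tied to the Hurst coefficient, then rescale to a target mean and variance. The sketch below is generic rather than the authors' exact sampler; the relation beta = 2H + 1 (an fBm-like profile) and all parameter values are assumptions.

```python
# Sketch: fractal (power-law spectrum) initial model by spectral synthesis.
import numpy as np

def fractal_model(n, hurst, mean, std, rng):
    beta = 2.0 * hurst + 1.0                 # fBm-like spectral exponent
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)     # power-law amplitude decay
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    x = (x - x.mean()) / x.std()             # standardize, then rescale
    return mean + std * x

rng = np.random.default_rng(5)
impedance = fractal_model(512, hurst=0.7, mean=6.0e6, std=4.0e5, rng=rng)
print(impedance[:5])
```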
Marton, James; Ketsche, Patricia G; Snyder, Angela; Adams, E Kathleen; Zhou, Mei
2015-01-01
Objective To estimate the effect of premium increases on the probability that near-poor and moderate-income children disenroll from public coverage. Data Sources Enrollment, eligibility, and claims data for Georgia's PeachCare for Kids™ (CHIP) program for multiple years. Study Design We exploited policy-induced variation in premiums generated by cross-sectional differences and changes over time in enrollee age, family size, and income to estimate the duration of enrollment as a function of the effective (per child) premium. We classify children as being of low, medium, or high illness severity. Principal Findings A dollar increase in the per-child premium is associated with a slight increase in a typical child's monthly probability of exiting coverage from 7.70 to 7.83 percent. Children with low illness severity have a significantly higher monthly baseline probability of exiting than children with medium or high illness severity, but the enrollment response to premium increases is similar across all three groups. Conclusions Success in achieving coverage gains through public programs is tempered by persistent problems in maintaining enrollment, which is modestly affected by premium increases. Retention is subject to adverse selection problems, but premium increases do not appear to significantly magnify the selection problem in this case. PMID:25130764
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent thin slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
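As an illustration of the idea (using a skew-normal as one possible skew-symmetric choice, which need not match the paper's family), one can build a continuous interfacial profile whose penetration into one adjacent layer is set by the skewness parameter and then discretize it into thin constant-density slices:

```python
# Sketch: skewed interfacial density profile discretized into thin slices.
import numpy as np
from scipy.stats import skewnorm

z = np.linspace(-50.0, 50.0, 2001)    # depth grid, angstroms (assumed units)
rho_film, rho_sub = 2.1, 0.0          # layer densities (illustrative)

# Interface at z = 0, width 5 A; alpha = 4 pushes density asymmetrically
# into one adjacent layer only.
profile = rho_film + (rho_sub - rho_film) * skewnorm.cdf(z, a=4, loc=0, scale=5)

# Discretize into 1 A slices of constant density (input to Parratt recursion).
edges = np.arange(-50.0, 51.0, 1.0)
centers = 0.5 * (edges[:-1] + edges[1:])
slices = profile[np.searchsorted(z, centers)]
print(slices[45:55])
```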
Feedback-Driven Trial-by-Trial Learning in Autism Spectrum Disorders
Solomon, Marjorie; Frank, Michael J.; Ragland, J. Daniel; Smith, Anne C.; Niendam, Tara A.; Lesh, Tyler A.; Grayson, David S.; Beck, Jonathan S.; Matter, John C.; Carter, Cameron S.
2017-01-01
Objective Impairments in learning are central to autism spectrum disorders. The authors investigated the cognitive and neural basis of these deficits in young adults with autism spectrum disorders using a well-characterized probabilistic reinforcement learning paradigm. Method The probabilistic selection task was implemented among matched participants with autism spectrum disorders (N=22) and with typical development (N=25), aged 18–40 years, using rapid event-related functional MRI. Participants were trained to choose the correct stimulus in high-probability (AB), medium-probability (CD), and low-probability (EF) pairs, presented with valid feedback 80%, 70%, and 60% of the time, respectively. Whole-brain voxel-wise and parametric modulator analyses examined early and late learning during the stimulus and feedback epochs of the task. Results The groups exhibited comparable performance on medium- and low-probability pairs. Typically developing persons showed higher accuracy on the high-probability pair, better win-stay performance (selection of the previously rewarded stimulus on the next trial of that type), and more robust recruitment of the anterior and medial prefrontal cortex during the stimulus epoch, suggesting development of an intact reward-based working memory for recent stimulus values. Throughout the feedback epoch, individuals with autism spectrum disorders exhibited greater recruitment of the anterior cingulate and orbito-frontal cortices compared with individuals with typical development, indicating continuing trial-by-trial activity related to feedback processing. Conclusions Individuals with autism spectrum disorders exhibit learning deficits reflecting impaired ability to develop an effective reward-based working memory to guide stimulus selection. Instead, they continue to rely on trial-by-trial feedback processing to support learning dependent upon engagement of the anterior cingulate and orbito-frontal cortices. PMID:25158242
An epidemic model of rumor diffusion in online social networks
NASA Astrophysics Data System (ADS)
Cheng, Jun-Jun; Liu, Yun; Shen, Bo; Yuan, Wei-Guo
2013-01-01
So far, in some standard rumor spreading models, the transition probability from ignorants to spreaders is always treated as a constant. However, from a practical perspective, whether or not an individual is infected by a neighboring spreader depends greatly on the trustiness of the ties between them. To address this, we introduce a stochastic epidemic model of rumor diffusion, in which the infection probability is defined as a function of the strength of ties. Moreover, we investigate numerically the behavior of the model on a real scale-free social site with the exponent γ = 2.2. We verify that the strength of ties plays a critical role in the rumor diffusion process. Specifically, selecting weak ties preferentially cannot make the rumor spread faster and wider, but the efficiency of diffusion is greatly affected after removing them. Another significant finding is that the maximum number of spreaders max(S) is very sensitive to the immune probability μ and the decay probability v. We show that a smaller μ or v leads to a larger spreading of the rumor, and their relationship can be described by the function ln(max(S)) = Av + B, in which the intercept B and the slope A can be fitted perfectly as power-law functions of μ. Our findings may offer some useful insights, helping guide applications in practice and reduce the damage caused by rumors.
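The core mechanism, an ignorant-to-spreader transition whose probability depends on tie strength, is easy to prototype. In the sketch below the substrate network, the tie-strength distribution, and the infection form w^2 are illustrative assumptions rather than the paper's calibrated model.

```python
# Sketch: rumor spreading where infection probability grows with tie strength.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(2000, 3, seed=1)     # scale-free substrate
for u, v in G.edges:
    G[u][v]["w"] = rng.uniform(0.0, 1.0)          # tie strength

mu = 0.2                              # spreader -> stifler probability
state = {n: "I" for n in G}           # I = ignorant, S = spreader, R = stifler
state[0] = "S"

history = []
while any(s == "S" for s in state.values()):
    for n in [n for n, s in state.items() if s == "S"]:
        for nb in G.neighbors(n):
            if state[nb] == "I" and rng.random() < G[n][nb]["w"] ** 2:
                state[nb] = "S"       # infection probability = w^2 (assumed)
        if rng.random() < mu:
            state[n] = "R"
    history.append(sum(s == "S" for s in state.values()))

print(f"peak number of spreaders max(S): {max(history)}")
```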
THE EVOLUTION OF BET-HEDGING ADAPTATIONS TO RARE SCENARIOS
King, Oliver D.
2007-01-01
When faced with a variable environment, organisms may switch between different strategies according to some probabilistic rule. In an infinite population, evolution is expected to favor the rule that maximizes geometric mean fitness. If some environments are encountered only rarely, selection may not be strong enough for optimal switching probabilities to evolve. Here we calculate the evolution of switching probabilities in a finite population by analyzing fixation probabilities of alleles specifying switching rules. We calculate the conditions required for the evolution of phenotypic switching as a form of bet-hedging as a function of the population size N, the rate θ at which a rare environment is encountered, and the selective advantage s associated with switching in the rare environment. We consider a simplified model in which environmental switching and phenotypic switching are one-way processes, and mutation is symmetric and rare with respect to the timescale of fixation events. In this case, the approximate requirements for bet-hedging to be favored by a ratio of at least R are that sN > log(R) and θN > R. PMID:17915273
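In the infinite-population limit referenced above, the favored switching probability maximizes the geometric mean (log) fitness. A toy calculation under an assumed two-environment fitness scheme (all parameter values illustrative):

```python
# Sketch: geometric mean fitness as a function of the switching probability q.
import numpy as np

theta = 0.01        # rate of encountering the rare environment
cost = 0.05         # cost of the switched phenotype in the common environment
s, eps = 2.0, 0.1   # advantage of switching / residual fitness when rare hits

q = np.linspace(1e-6, 0.999, 100_000)
log_w = (1 - theta) * np.log(1 - cost * q) \
      + theta * np.log(eps * (1 - q) + (1 + s) * q)

print(f"optimal switching probability: {q[np.argmax(log_w)]:.4f}")
# In a finite population this optimum is reachable only when conditions of
# the kind derived in the paper (roughly sN > log(R) and theta*N > R) hold.
```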
An application of model-fitting procedures for marginal structural models.
Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B
2005-08-15
Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function--a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
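For readers new to IPTW, a generic point-treatment sketch (far simpler than the study's longitudinal MSM) shows the mechanics: fit a treatment model, form stabilized weights, and compare naive and weighted effect estimates on synthetic data in which confounding by severity biases the naive estimate, loosely mirroring the asthma example.

```python
# Sketch: IPTW with stabilized weights on synthetic confounded data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
severity = rng.normal(size=n)                            # confounder
a = rng.binomial(1, 1 / (1 + np.exp(-0.8 * severity)))   # sicker -> more meds
y = 0.3 * a - 1.0 * severity + rng.normal(size=n)        # true effect = +0.3

ps = LogisticRegression().fit(severity[:, None], a).predict_proba(
    severity[:, None])[:, 1]                             # propensity scores
sw = np.where(a == 1, a.mean() / ps, (1 - a.mean()) / (1 - ps))  # stabilized

naive = LinearRegression().fit(a[:, None], y).coef_[0]
iptw = LinearRegression().fit(a[:, None], y, sample_weight=sw).coef_[0]
print(f"naive: {naive:.2f}, IPTW: {iptw:.2f} (truth 0.30)")
```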
Six-dimensional quantum dynamics study for the dissociative adsorption of DCl on Au(111) surface
NASA Astrophysics Data System (ADS)
Liu, Tianhui; Fu, Bina; Zhang, Dong H.
2014-04-01
We carried out six-dimensional quantum dynamics calculations for the dissociative adsorption of deuterium chloride (DCl) on the Au(111) surface using the initial state-selected time-dependent wave packet approach. The four-dimensional dissociation probabilities are also obtained with the center of mass of DCl fixed at various sites. These calculations were all performed on an accurate potential energy surface recently constructed by neural network fitting to density functional theory energy points. The origin of the extremely small dissociation probability for DCl/HCl (v = 0, j = 0) fixed at the top site compared to other fixed sites is elucidated in this study. The influence of vibrational excitation and rotational orientation of DCl on the reactivity was investigated by calculating six-dimensional dissociation probabilities. The vibrational excitation of DCl enhances the reactivity substantially, and the helicopter orientation yields a higher dissociation probability than the cartwheel orientation. The site-averaged dissociation probability over 25 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability.
Psychophysics of the probability weighting function
NASA Astrophysics Data System (ADS)
Takahashi, Taiki
2011-03-01
A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics widely utilized probability weighting functions, psychophysical foundations of the probability weighting functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(-(-ln p)^α) (0 < α < 1), with w(0) = 0 and w(1) = 1.
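The reconstructed Prelec form is easy to check numerically; the sketch below evaluates w(p) for an illustrative alpha and exhibits the fixed point at p = 1/e.

```python
# Sketch: Prelec's probability weighting function w(p) = exp(-(-ln p)^alpha).
import numpy as np

def prelec_w(p, alpha=0.65):
    p = np.asarray(p, dtype=float)
    safe = np.maximum(p, 1e-300)            # guard the log at p = 0
    return np.where(p > 0, np.exp(-(-np.log(safe)) ** alpha), 0.0)

for p in (0.01, 0.1, 1 / np.e, 0.5, 0.9, 0.99):
    print(f"w({p:.3f}) = {prelec_w(p):.3f}")
# Small probabilities are overweighted, large ones underweighted, and
# w(1/e) = 1/e for every alpha.
```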
Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF.
Duan, Chong; Kallehauge, Jesper F; Pérez-Torres, Carlos J; Bretthorst, G Larry; Beeman, Scott C; Tanderup, Kari; Ackerman, Joseph J H; Garbow, Joel R
2018-02-01
This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. Bayesian probability theory-based parameter estimation and model selection were used to compare tracer kinetic modeling employing either the measured remote-AIF (R-AIF, i.e., the traditional approach) or an inferred cL-AIF against both in silico DCE-MRI data and clinical, cervical cancer DCE-MRI data. When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels of the 16 patients (35,602 voxels in total). Among those voxels, a tracer kinetic model that employed the voxel-specific cL-AIF was preferred (i.e., had a higher posterior probability) in 80 % of the voxels compared to the direct use of a single R-AIF. Maps of spatial variation in voxel-specific AIF bolus amplitude and arrival time for heterogeneous tissues, such as cervical cancer, are accessible with the cL-AIF approach. The cL-AIF method, which estimates unique local-AIF amplitude and arrival time for each voxel within the tissue of interest, provides better modeling of DCE-MRI data than the use of a single, measured R-AIF. The Bayesian-based data analysis described herein affords estimates of uncertainties for each model parameter, via posterior probability density functions, and voxel-wise comparison across methods/models, via model selection in data modeling.
Fisher, Charles K; Mehta, Pankaj
2015-06-01
Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach--the Bayesian Ising Approximation (BIA)--to rapidly calculate posterior probabilities for feature relevance in L2 penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30,000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, are freely available at http://physics.bu.edu/~pankajm/BIACode. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
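The flavor of the mean-field step can be conveyed schematically: each feature carries a spin whose local field reflects its relevance to the response, weak couplings encode feature correlations, and magnetizations are iterated to self-consistency. The field and coupling definitions below are illustrative stand-ins, not the paper's exact expressions.

```python
# Sketch: mean-field Ising magnetizations as feature-selection scores.
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)   # two strongly correlated features
y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)

h = (X.T @ y / n) ** 2            # relevance-driven local fields (schematic)
J = -0.1 * np.corrcoef(X.T)       # weak couplings penalize redundancy
np.fill_diagonal(J, 0.0)

m = np.zeros(p)
for _ in range(500):              # self-consistent mean-field iteration
    m = np.tanh(h + J @ m)

print(np.round((1 + m) / 2, 2))   # map spins to selection probabilities
```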
PresenceAbsence: An R package for presence absence analysis
Elizabeth A. Freeman; Gretchen Moisen
2008-01-01
The PresenceAbsence package for R provides a set of functions useful when evaluating the results of presence-absence analysis, for example, models of species distribution or the analysis of diagnostic tests. The package provides a toolkit for selecting the optimal threshold for translating a probability surface into presence-absence maps specifically tailored to their...
Sensitivity analysis of limit state functions for probability-based plastic design
NASA Technical Reports Server (NTRS)
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of plastic collapse failure, P_f, for a highly redundant structure with random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between the upper and lower bounds of P_f is now in its final stage of development. The sensitivity of the resulting bounds of P_f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A
2016-04-01
A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous compared to unambiguous words even when prior and contextual information biases one interpretation only. These results suggest a probability distribution, whereby all outcomes and their associated probabilities of occurrence, even if very low, are represented and maintained.
Space Shuttle RTOS Bayesian Network
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Beling, Peter A.
2001-01-01
With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and prior probabilities for the network are extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores. Using a prioritization of measures from the decision-maker, trade-offs between the scores are used to rank order the available set of RTOS candidates.
Aanen, Duur K.; Spelbrink, Johannes N.; Beekman, Madeleine
2014-01-01
The peculiar biology of mitochondrial DNA (mtDNA) potentially has detrimental consequences for organismal health and lifespan. Typically, eukaryotic cells contain multiple mitochondria, each with multiple mtDNA genomes. The high copy number of mtDNA implies that selection on mtDNA functionality is relaxed. Furthermore, because mtDNA replication is not strictly regulated, within-cell selection may favour mtDNA variants with a replication advantage, but a deleterious effect on cell fitness. The opportunities for selfish mtDNA mutations to spread are restricted by various organism-level adaptations, such as uniparental transmission, germline mtDNA bottlenecks, germline selection and, during somatic growth, regular alternation between fusion and fission of mitochondria. These mechanisms are all hypothesized to maintain functional mtDNA. However, the strength of selection for maintenance of functional mtDNA progressively declines with age, resulting in age-related diseases. Furthermore, organismal adaptations that most probably evolved to restrict the opportunities for selfish mtDNA create secondary problems. Owing to predominantly maternal mtDNA transmission, recombination among mtDNA from different individuals is highly restricted or absent, reducing the scope for repair. Moreover, maternal inheritance precludes selection against mtDNA variants with male-specific effects. We finish by discussing the consequences of life-history differences among taxa with respect to mtDNA evolution and make a case for the use of microorganisms to experimentally manipulate levels of selection. PMID:24864309
Salazar, Rosie D; Montgomery, Robert A; Thresher, Sarah E; Macdonald, David W
2016-01-01
The common toad (Bufo bufo) is of increasing conservation concern in the United Kingdom (UK) due to dramatic population declines occurring in the past century. Many of these population declines coincided with reductions in both terrestrial and aquatic habitat availability and quality and have been primarily attributed to the effect of agricultural land conversion (of natural and semi-natural habitats to arable and pasture fields) and pond drainage. However, there is little evidence available to link habitat availability with common toad population declines, especially when examined at a broad landscape scale. Assessing such patterns of population declines at the landscape scale, for instance, requires an understanding of how this species uses terrestrial habitat. We intensively studied the terrestrial resource selection of a large population of common toads in Oxfordshire, England, UK. Adult common toads were fitted with passive integrated transponder (PIT) tags to allow detection in the terrestrial environment using a portable PIT antenna once toads left the pond and before going into hibernation (April/May-October 2012 and 2013). We developed a population-level resource selection function (RSF) to assess the relative probability of toad occurrence in the terrestrial environment by collecting location data for 90 recaptured toads. The predicted relative probability of toad occurrence for this population was greatest in wooded habitat near to water bodies; relative probability of occurrence declined dramatically > 50 m from these habitats. Toads also tended to select habitat near to their breeding pond, and toad occurrence was negatively related to urban environments.
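Population-level RSFs of this used-versus-available form are commonly fit as logistic regressions, with the exponentiated linear predictor read as relative probability of use. A minimal sketch with synthetic data and hypothetical covariates (distance to water, wooded cover), not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Used locations (animal detections) vs. available points sampled from the landscape
n_used, n_avail = 300, 3000
dist_water = np.r_[rng.exponential(30, n_used), rng.uniform(0, 500, n_avail)]
wooded = np.r_[rng.binomial(1, 0.7, n_used), rng.binomial(1, 0.3, n_avail)]
X = np.column_stack([dist_water, wooded])
y = np.r_[np.ones(n_used), np.zeros(n_avail)]      # 1 = used, 0 = available

rsf = LogisticRegression(max_iter=1000).fit(X, y)
w = np.exp(X @ rsf.coef_.ravel())                  # proportional to relative probability of use
print(rsf.coef_)  # expect a negative sign for distance to water, positive for wooded cover
```

Dropping the intercept when computing exp(beta'x) is the usual convention for relative (rather than absolute) probability of use in use-availability designs.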
Exact solutions for the selection-mutation equilibrium in the Crow-Kimura evolutionary model.
Semenov, Yuri S; Novozhilov, Artem S
2015-08-01
We reformulate the eigenvalue problem for the selection-mutation equilibrium distribution in the case of a haploid asexually reproduced population in the form of an equation for an unknown probability generating function of this distribution. The special form of this equation in the infinite sequence limit allows us to obtain analytically the steady state distributions for a number of particular cases of the fitness landscape. The general approach is illustrated by examples; theoretical findings are compared with numerical calculations. Copyright © 2015. Published by Elsevier Inc.
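In finite sequence space the same equilibrium can be computed numerically as the leading eigenpair of the selection-mutation operator, a useful cross-check on generating-function results. A sketch over Hamming classes of a binary sequence with an illustrative single-peak fitness landscape (parameter values are arbitrary):

```python
import numpy as np

def crow_kimura_equilibrium(f, mu):
    """Selection-mutation equilibrium over Hamming classes 0..N:
    leading eigenvector of diag(f) + M for per-site mutation rate mu."""
    N = len(f) - 1
    M = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        M[i, i] = -mu * N                 # total rate of leaving class i
        if i > 0:
            M[i - 1, i] = mu * i          # back-mutation: class i -> i-1
        if i < N:
            M[i + 1, i] = mu * (N - i)    # forward mutation: class i -> i+1
    A = np.diag(f) + M
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    p = np.abs(vecs[:, k].real)
    return vals[k].real, p / p.sum()      # (mean fitness, equilibrium distribution)

N = 50
f = np.where(np.arange(N + 1) == 0, 2.0, 0.0)   # single-peak landscape: master class 0
mean_fitness, p = crow_kimura_equilibrium(f, mu=0.01)
print(mean_fitness, p[:4])
```

The leading eigenvalue is the equilibrium mean fitness; by Perron-Frobenius the associated eigenvector is positive and, normalized, gives the steady-state class distribution.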
NASA Astrophysics Data System (ADS)
Skilling, John
2005-11-01
This tutorial gives a basic overview of Bayesian methodology, from its axiomatic foundation through the conventional development of data analysis and model selection to its rôle in quantum mechanics, and ending with some comments on inference in general human affairs. The central theme is that probability calculus is the unique language within which we can develop models of our surroundings that have predictive capability. These models are patterns of belief; there is no need to claim external reality. 1. Logic and probability 2. Probability and inference 3. Probability and model selection 4. Prior probabilities 5. Probability and frequency 6. Probability and quantum mechanics 7. Probability and fundamentalism 8. Probability and deception 9. Prediction and truth
Weather conditions drive dynamic habitat selection in a generalist predator.
Sunde, Peter; Thorup, Kasper; Jacobsen, Lars B; Rahbek, Carsten
2014-01-01
Despite the dynamic nature of habitat selection, temporal variation arising from factors such as weather is rarely quantified in species-habitat relationships. We analysed habitat use and selection (use/availability) of foraging, radio-tagged little owls (Athene noctua), a nocturnal, year-round resident generalist predator, to see how this varied as a function of weather, season and availability. Use of the two most frequently used land cover types, gardens/buildings and cultivated fields, varied more than 3-fold as a simple function of season and weather through linear effects of wind and quadratic effects of temperature. Even when controlling for the temporal context, both land cover types were used more evenly than predicted from variation in availability (functional response in habitat selection). Use of two other land cover categories (pastures and moist areas) increased linearly with temperature and was proportional to their availability. The study shows that habitat selection by generalist foragers may be highly dependent on temporal variables such as weather, probably because such foragers switch between weather-dependent feeding opportunities offered by different land cover types. An opportunistic foraging strategy in a landscape with erratically appearing feeding opportunities in different land cover types may also explain decreasing selection of the two most frequently used land cover types with increasing availability.
Memory disorders in probable Alzheimer's disease: the role of hippocampal atrophy as shown with MRI.
Deweer, B; Lehéricy, S; Pillon, B; Baulac, M; Chiras, J; Marsault, C; Agid, Y; Dubois, B
1995-01-01
Magnetic resonance-based volumetric measures of the hippocampal formation (HF), amygdala (A), and caudate nucleus (CN), normalised for total intracranial volume (TIV), were analysed in relation to measures of cognitive deterioration and specific features of memory functions in 18 patients with probable Alzheimer's disease. Neuropsychological examination included the mini mental state examination (MMSE), the Mattis dementia rating scale (DRS), tests of executive functions, assessment of language abilities and praxis, the Wechsler memory scale (WMS), the California verbal learning test (CVLT) and the Grober and Buschke test. The volume of the hippocampal formation (HF/TIV) was correlated with specific memory variables: memory quotient and paired associates of the WMS; intrusions and discriminability at recognition for the Grober and Buschke test. By contrast, except for intrusions, no correlations were found between memory variables and the volume of the amygdala (A/TIV). No correlations were found between the volume of the caudate nuclei (CN/TIV) and any neuropsychological score. The volume of the hippocampal formation was therefore selectively related to quantitative and qualitative aspects of memory performance in patients with probable Alzheimer's disease. PMID:7745409
On optima: the case of myoglobin-facilitated oxygen diffusion.
Wittenberg, Jonathan B
2007-08-15
The process of myoglobin/leghemoglobin-facilitated oxygen diffusion is adapted to function in different environments in diverse organisms. We enquire how the functional parameters of the process are optimized in particular organisms. The ligand-binding properties of the proteins, myoglobin and plant symbiotic hemoglobins, we discover, suggest that they have been adapted under genetic selection pressure for optimal performance. Since carrier-mediated oxygen transport has probably evolved independently many times, adaptation of diverse proteins for a common functionality exemplifies the process of convergent evolution. The progenitor proteins may be built on the myoglobin scaffold or may be very different.
Effect of aqueous environment in chemical reactivity of monolignols. A New Fukui Function Study.
Martínez, Carmen; Sedano, Miriam; Mendoza, Jorge; Herrera, Rafael; Rutiaga, Jose G; Lopez, Pablo
2009-09-01
The free radical reactivity of monolignols can be explained in terms of the Fukui function and the local hard and soft acids and bases (HSAB) principle to determine the potential linkages among them for reactions involving free radicals. Our results in gas-phase and aqueous environments elucidate the most probable free radical resonance structures in monolignols. Their reactivity toward nucleophilic or electrophilic species was described by applying the Fukui function after a second analysis of the selected resonance structures. The methodology described herein can differentiate the inherent nature of one radical from another.
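In condensed form, the Fukui functions used for this kind of site-reactivity analysis reduce to finite differences of atomic electron populations p_k (a standard textbook formulation, given here for orientation rather than as the authors' exact working equations):

\[ f_k^{+} = p_k(N+1) - p_k(N), \qquad f_k^{-} = p_k(N) - p_k(N-1), \qquad f_k^{0} = \tfrac{1}{2}\big(f_k^{+} + f_k^{-}\big), \]

where N is the electron number of the neutral molecule. Sites with large f_k^+ are favored for nucleophilic attack, large f_k^- for electrophilic attack, and large f_k^0 for radical attack, the last being the relevant index for monolignol coupling.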
More attention when speaking: does it help or does it hurt?
Nozari, Nazbanou; Thompson-Schill, Sharon L
2013-11-01
Paying selective attention to a word in a multi-word utterance results in a decreased probability of error on that word (benefit), but an increased probability of error on the other words (cost). We ask whether excitation of the prefrontal cortex helps or hurts this cost. One hypothesis (the resource hypothesis) predicts a decrease in the cost due to the deployment of more attentional resources, while another (the focus hypothesis) predicts even greater costs due to further fine-tuning of selective attention. Our results are more consistent with the focus hypothesis: prefrontal stimulation caused a reliable increase in the benefit and a marginal increase in the cost of selective attention. To ensure that the effects are due to changes to the prefrontal cortex, we provide two checks: We show that the pattern of results is quite different if, instead, the primary motor cortex is stimulated. We also show that the stimulation-related benefits in the verbal task correlate with the stimulation-related benefits in an N-back task, which is known to tap into a prefrontal function. Our results shed light on how selective attention affects language production, and more generally, on how selective attention affects production of a sequence over time. Copyright © 2013 Elsevier Ltd. All rights reserved.
Quantum aspects of brain activity and the role of consciousness.
Beck, F; Eccles, J C
1992-01-01
The relationship of brain activity to conscious intentions is considered on the basis of the functional microstructure of the cerebral cortex. Each incoming nerve impulse causes the emission of transmitter molecules by the process of exocytosis. Since exocytosis is a quantal phenomenon of the presynaptic vesicular grid with a probability much less than 1, we present a quantum mechanical model for it based on a tunneling process of the trigger mechanism. Consciousness manifests itself in mental intentions. The consequent voluntary actions become effective by momentary increases of the probability of vesicular emission in the thousands of synapses on each pyramidal cell by quantal selection. PMID:1333607
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
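The PME estimate referred to here has the standard exponential-family form; under constraints on the first four moments of the state function g (a generic statement of the maximum-entropy solution, with notation assumed), the density is

\[ p(g) = \exp\Big(-\lambda_0 - \sum_{i=1}^{4} \lambda_i\, g^i\Big), \]

where the multipliers lambda_i are fixed by normalization and by matching the four given moments. This is the "fewest human bias factors" property: among all densities consistent with the moment information, this one maximizes entropy.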
Beyond statistical inference: A decision theory for science
KILLEEN, PETER R.
2008-01-01
Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute. PMID:17201351
The 5-10 keV AGN luminosity function at 0.01 < z < 4.0
NASA Astrophysics Data System (ADS)
Fotopoulou, S.; Buchner, J.; Georgantopoulos, I.; Hasinger, G.; Salvato, M.; Georgakakis, A.; Cappelluti, N.; Ranalli, P.; Hsu, L. T.; Brusa, M.; Comastri, A.; Miyaji, T.; Nandra, K.; Aird, J.; Paltani, S.
2016-03-01
The active galactic nuclei (AGN) X-ray luminosity function traces actively accreting supermassive black holes and is essential for the study of the properties of the AGN population, black hole evolution, and galaxy-black hole coevolution. Up to now, the AGN luminosity function has been estimated several times in soft (0.5-2 keV) and hard X-rays (2-10 keV). AGN selection in these energy ranges often suffers from identification and redshift incompleteness and, at the same time, photoelectric absorption can obscure a significant amount of the X-ray radiation. We estimate the evolution of the luminosity function in the 5-10 keV band, where we effectively avoid the absorbed part of the spectrum, rendering absorption corrections unnecessary up to N_H ~ 10^23 cm^-2. Our dataset is a compilation of wide and deep fields: MAXI, HBSS, XMM-COSMOS, Lockman Hole, XMM-CDFS, AEGIS-XD, Chandra-COSMOS, and Chandra-CDFS. This extensive sample of ~1110 AGN (0.01 < z < 4.0, 41 < log L_x < 46) is 98% redshift complete with 68% spectroscopic redshifts. For sources lacking a spectroscopic redshift we use the probability distribution function of a photometric redshift estimation specifically tuned for AGN, and a flat probability distribution function for sources with no redshift information. We use Bayesian analysis to select the best parametric model from simple pure luminosity and pure density evolution to more complicated luminosity and density evolution and luminosity-dependent density evolution (LDDE). We estimate the model parameters that best describe our dataset separately for each survey and for the combined sample. We show that, according to Bayesian model selection, the preferred model for our dataset is the LDDE. Our estimation of the AGN luminosity function does not require any assumption on the AGN absorption and is in good agreement with previous works in the 2-10 keV energy band based on X-ray hardness ratios to model the absorption in AGN up to redshift three. Our sample does not show evidence of a rapid decline of the AGN luminosity function up to redshift four.
A diffusion approach to approximating preservation probabilities for gene duplicates.
O'Hely, Martin
2006-08-01
Consider a haploid population and, within its genome, a gene whose presence is vital for the survival of any individual. Each copy of this gene is subject to mutations which destroy its function. Suppose one member of the population somehow acquires a duplicate copy of the gene, where the duplicate is fully linked to the original gene's locus. Preservation is said to occur if eventually the entire population consists of individuals descended from this one which initially carried the duplicate. The system is modelled by a finite state-space Markov process which in turn is approximated by a diffusion process, whence an explicit expression for the probability of preservation is derived. The event of preservation can be compared to the fixation of a selectively neutral gene variant initially present in a single individual, the probability of which is the reciprocal of the population size. For very weak mutation, this and the probability of preservation are equal, while as mutation becomes stronger, the preservation probability tends to double this reciprocal. This is in excellent agreement with simulation studies.
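In symbols, with N the haploid population size, the comparison made above reads

\[ \frac{1}{N} \;\le\; P_{\text{preservation}} \;\le\; \frac{2}{N}, \]

with the lower limit attained for very weak mutation (matching the neutral fixation probability of a single copy) and the upper limit approached as mutation pressure on the gene strengthens.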
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
Brandvain, Yaniv; Wade, Michael J
2009-08-01
The transfer of mitochondrial genes to the nucleus is a recurrent and consistent feature of eukaryotic genome evolution. Although many theories have been proposed to explain such transfers, little relevant data exist. The observation that clonal and self-fertilizing plants transfer more mitochondrial genes to their nuclei than do outcrossing plants contradicts predictions of major theories based on nuclear recombination and leaves a gap in our conceptual understanding how the observed pattern of gene transfer could arise. Here, with a series of deterministic and stochastic simulations, we show how epistatic selection and relative mutation rates of mitochondrial and nuclear genes influence mitochondrial-to-nuclear gene transfer. Specifically, we show that when there is a benefit to having a mitochondrial gene present in the nucleus, but absent in the mitochondria, self-fertilization dramatically increases both the rate and the probability of gene transfer. However, absent such a benefit, when mitochondrial mutation rates exceed those of the nucleus, self-fertilization decreases the rate and probability of transfer. This latter effect, however, is much weaker than the former. Our results are relevant to understanding the probabilities of fixation when loci in different genomes interact.
Resting bold fMRI differentiates dementia with Lewy bodies vs Alzheimer disease
Price, J.L.; Yan, Z.; Morris, J.C.; Sheline, Y.I.
2011-01-01
Objective: Clinicopathologic phenotypes of dementia with Lewy bodies (DLB) and Alzheimer disease (AD) often overlap, making discrimination difficult. We performed resting state blood oxygen level–dependent (BOLD) functional connectivity MRI (fcMRI) to determine whether there were differences between AD and DLB. Methods: Participants (n = 88) enrolled in a longitudinal study of memory and aging underwent 3-T fcMRI. Clinical diagnoses of probable DLB (n = 15) were made according to published criteria. Cognitively normal control participants (n = 38) were selected for the absence of cerebral amyloid burden as imaged with Pittsburgh compound B (PiB). Probable AD cases (n = 35) met published criteria and had appreciable amyloid deposits with PiB imaging. Functional images were collected using a gradient spin-echo sequence sensitive to BOLD contrast (T2* weighting). Correlation maps selected a seed region in the combined bilateral precuneus. Results: Participants with DLB had a functional connectivity pattern for the precuneus seed region that was distinct from AD; both the DLB and AD groups had functional connectivity patterns that differed from the cognitively normal group. In the DLB group, we found increased connectivity between the precuneus and regions in the dorsal attention network and the putamen. In contrast, we found decreased connectivity between the precuneus and other task-negative default regions and visual cortices. There was also a reversal of connectivity in the right hippocampus. Conclusions: Changes in functional connectivity in DLB indicate patterns of activation that are distinct from those seen in AD and may improve discrimination of DLB from AD and cognitively normal individuals. Since patterns of connectivity differ between AD and DLB groups, measurements of BOLD functional connectivity can shed further light on neuroanatomic connections that distinguish DLB from AD. PMID:21525427
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Neves, Fabiana; Abrantes, Joana; Esteves, Pedro J
2016-07-01
The interactions between chemokines and their receptors are crucial for differentiation and activation of inflammatory cells. CC chemokine ligand 11 (CCL11) binds to CCR3 and to CCR5 that in leporids underwent gene conversion with CCR2. Here, we genetically characterized CCL11 in lagomorphs (leporids and pikas). All lagomorphs have a potentially functional CCL11, and the Pygmy rabbit has a mutation in the stop codon that leads to a longer protein. Other mammals also have mutations at the stop codon that result in proteins with different lengths. By employing maximum likelihood methods, we observed that, in mammals, CCL11 exhibits both signatures of purifying and positive selection. Signatures of purifying selection were detected in sites important for receptor binding and activation. Of the three sites detected as under positive selection, two were located close to the stop codon. Our results suggest that CCL11 is functional in all lagomorphs, and that the signatures of purifying and positive selection in mammalian CCL11 probably reflect the protein's biological roles. © The Author(s) 2016.
Kryzwanski, David M.; Moellering, Douglas; Fetterman, Jessica L.; Dunham-Snary, Kimberly J.; Sammy, Melissa J.; Ballinger, Scott W.
2013-01-01
While there is general agreement that cardiovascular disease (CVD) development is influenced by a combination of genetic, environmental, and behavioral contributors, the actual mechanistic basis of why these factors initiate or promote CVD development in some individuals but not in others with identical risk profiles is not clearly understood. This review considers the potential role for mitochondrial genetics and function in determining CVD susceptibility from the standpoint that the original features that molded cellular function were based upon mitochondrial-nuclear relationships established millions of years ago and were likely refined during prehistoric environmental selection events that, today, are largely absent. Consequently, contemporary risk factors that influence our susceptibility to a variety of age-related diseases, including CVD, were probably not part of the dynamics that defined the processes of mitochondrial-nuclear interaction, and thus, cell function. In this regard, the selective conditions that contributed to cellular functionality and evolution should be given more consideration when interpreting and designing experimental data and strategies. Finally, future studies that probe beyond epidemiologic associations are required. These studies will serve as the initial steps for addressing the provocative concept that contemporary human disease susceptibility is the result of selection events for mitochondrial function that increased chances for prehistoric human survival and reproductive success. PMID:21647091
Definition and Measurement of Selection Bias: From Constant Ratio to Constant Difference
ERIC Educational Resources Information Center
Cahan, Sorel; Gamliel, Eyal
2006-01-01
Despite its intuitive appeal and popularity, Thorndike's constant ratio (CR) model for unbiased selection is inherently inconsistent in "n"-free selection. Satisfaction of the condition for unbiased selection, when formulated in terms of success/acceptance probabilities, usually precludes satisfaction by the converse probabilities of…
NASA Astrophysics Data System (ADS)
Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi
2012-04-01
Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake triggered landslide-susceptibility mapping for a section of the Jianjiang River watershed using a Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis. Group 3 with 5000 randomly selected points on the landslide polygons, and 5000 randomly selected points along stable slopes gave the best results with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples, and the negative training samples of 3147 randomly selected points in regions of stable slope (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake triggered landslide-susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
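As an illustration of the modeling setup, an RBF-kernel SVM with probabilistic outputs can be trained on labeled positive/negative samples and validated by area under the ROC curve; the sketch below uses synthetic stand-ins for the six controlling parameters, not the Wenchuan inventory:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Illustrative stand-ins for the six controlling parameters: elevation, slope angle,
# slope aspect, distance to seismogenic faults, distance to drainages, lithology code
n = 2000
X = rng.standard_normal((n, 6))
y = (X[:, 1] + 0.8 * X[:, 3] + 0.5 * rng.standard_normal(n) > 0).astype(int)  # 1 = landslide

svm = SVC(kernel="rbf", probability=True)     # radial basis kernel, the best performer in the study
svm.fit(X[:1500], y[:1500])                   # positive + negative training samples
p = svm.predict_proba(X[1500:])[:, 1]         # landslide-susceptibility probability
print("AUC:", roc_auc_score(y[1500:], p))     # area-under-curve validation, as in the paper
```

In the study's terms, the positive class corresponds to points in landslide polygons and the negative class to points on slopes that remained stable; the predicted probability surface is what gets thresholded into a susceptibility map.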
ERIC Educational Resources Information Center
Penrod, Becky; Gardella, Laura; Fernand, Jonathan
2012-01-01
Few studies have examined the effects of the high-probability instructional sequence in the treatment of food selectivity, and results of these studies have been mixed (e.g., Dawson et al., 2003; Patel et al., 2007). The present study extended previous research on the high-probability instructional sequence by combining this procedure with…
Community stability and selective extinction during the Permian-Triassic mass extinction
NASA Astrophysics Data System (ADS)
Roopnarine, Peter D.; Angielczyk, Kenneth D.
2015-10-01
The fossil record contains exemplars of extreme biodiversity crises. Here, we examined the stability of terrestrial paleocommunities from South Africa during Earth's most severe mass extinction, the Permian-Triassic. We show that stability depended critically on functional diversity and patterns of guild interaction, regardless of species richness. Paleocommunities exhibited less transient instability—relative to model communities with alternative community organization—and significantly greater probabilities of being locally stable during the mass extinction. Functional patterns that have evolved during an ecosystem's history support significantly more stable communities than hypothetical alternatives.
Boyd, Charlotte; Castillo, Ramiro; Hunt, George L; Punt, André E; VanBlaricom, Glenn R; Weimerskirch, Henri; Bertrand, Sophie
2015-11-01
Understanding the ecological processes that underpin species distribution patterns is a fundamental goal in spatial ecology. However, developing predictive models of habitat use is challenging for species that forage in marine environments, as both predators and prey are often highly mobile and difficult to monitor. Consequently, few studies have developed resource selection functions for marine predators based directly on the abundance and distribution of their prey. We analysed contemporaneous data on the diving locations of two seabird species, the shallow-diving Peruvian Booby (Sula variegata) and deeper diving Guanay Cormorant (Phalacrocorax bougainvilliorum), and the abundance and depth distribution of their main prey, Peruvian anchoveta (Engraulis ringens). Based on this unique data set, we developed resource selection functions to test the hypothesis that the probability of seabird diving behaviour at a given location is a function of the relative abundance of prey in the upper water column. For both species, we show that the probability of diving behaviour is mostly explained by the distribution of prey at shallow depths. While the probability of diving behaviour increases sharply with prey abundance at relatively low levels of abundance, support for including abundance in addition to the depth distribution of prey is weak, suggesting that prey abundance was not a major factor determining the location of diving behaviour during the study period. The study thus highlights the importance of the depth distribution of prey for two species of seabird with different diving capabilities. The results complement previous research that points towards the importance of oceanographic processes that enhance the accessibility of prey to seabirds. The implications are that locations where prey is predictably found at accessible depths may be more important for surface foragers, such as seabirds, than locations where prey is predictably abundant. Analysis of the relative importance of abundance and accessibility is essential for the design and evaluation of effective management responses to reduced prey availability for seabirds and other top predators in marine systems. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
Fuzzy membership functions for analysis of high-resolution CT images of diffuse pulmonary diseases.
Almeida, Eliana; Rangayyan, Rangaraj M; Azevedo-Marques, Paulo M
2015-08-01
We propose the use of fuzzy membership functions to analyze images of diffuse pulmonary diseases (DPDs) based on fractal and texture features. The features were extracted from preprocessed regions of interest (ROIs) selected from high-resolution computed tomography images. The ROIs represent five different patterns of DPDs and normal lung tissue. A Gaussian mixture model (GMM) was constructed for each feature, with six Gaussians modeling the six patterns. Feature selection was performed and the GMMs of the five significant features were used. From the GMMs, fuzzy membership functions were obtained by a probability-possibility transformation, and further statistical analysis was performed. An average classification accuracy of 63.5% was obtained for the six classes. For four of the six classes, the classification accuracy was above 65%, and the best classification accuracy was 75.5% for one class. The use of fuzzy membership functions to assist in pattern classification is an alternative to deterministic approaches for exploring strategies for medical diagnosis.
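One classical probability-possibility transformation (a Dubois-Prade-style construction; the paper may use a different variant) converts class probabilities p(x_i) obtained from the GMMs into a possibility distribution

\[ \pi(x_i) \;=\; \sum_{j=1}^{n} \min\{\,p(x_i),\, p(x_j)\,\}, \]

which is consistent in the sense that pi(x_i) >= p(x_i) for every class and max_i pi(x_i) = 1; the resulting pi values then serve as fuzzy membership degrees.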
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables which are similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P, and Burr. The best fits were selected by the Akaike information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fitting was obtained with the log-normal function.
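Fitting the candidate distributions by maximum likelihood and ranking them by AIC takes only a few lines; a sketch with synthetic diameters standing in for the field data (the Burr XII variant is assumed here for the Burr function):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
diam = stats.weibull_min.rvs(1.1, scale=12.0, size=400, random_state=rng)  # stand-in diameters (cm)

candidates = {
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "Weibull 2P": stats.weibull_min,
    "Burr (XII)": stats.burr12,
}
for name, dist in candidates.items():
    params = dist.fit(diam, floc=0)                  # location pinned at 0 (2P-style fits)
    loglik = dist.logpdf(diam, *params).sum()
    k = len(params) - 1                              # free parameters (location is fixed)
    print(name, "AIC =", 2 * k - 2 * loglik)         # smaller AIC = better fit
```

AIC penalizes the Burr function's extra shape parameter, so it is preferred only when its added flexibility genuinely improves the likelihood.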
Learning to select useful landmarks.
Greiner, R; Isukapalli, R
1996-01-01
To navigate effectively, an autonomous agent must be able to quickly and accurately determine its current location. Given an initial estimate of its position (perhaps based on dead-reckoning) and an image taken of a known environment, our agent first attempts to locate a set of landmarks (real-world objects at known locations), then uses their angular separation to obtain an improved estimate of its current position. Unfortunately, some landmarks may not be visible, or worse, may be confused with other landmarks, resulting both in time wasted searching for undetected landmarks and in further errors in the agent's estimate of its position. To address these problems, we propose a method that uses previous experiences to learn a selection function that, given the set of landmarks that might be visible, returns the subset that can be used to reliably provide an accurate registration of the agent's position. We use statistical techniques to prove that the learned selection function is, with high probability, effectively at a local optimum in the space of such functions. This paper also presents empirical evidence, using real-world data, that demonstrates the effectiveness of our approach.
Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm
NASA Astrophysics Data System (ADS)
Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun
2017-02-01
We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one.
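For orientation, in the standard Grover setting with a fraction λ of marked items the success probability after k iterations is

\[ P_k = \sin^2\big((2k+1)\,\theta\big), \qquad \theta = \arcsin\sqrt{\lambda}, \]

and in fixed-phase variants the π phase inversions of the oracle and diffusion operators are replaced by a tunable rotation phase α; the Q-learning scheme described here learns the policy α = π(λ) that balances iteration count against this success probability.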
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3-0.4 units owing to differences in the fault parameter used; the age, quality, and size of historical earthquake databases; and the fault type and region considered.
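A representative member of this family of regressions is the Wells and Coppersmith (1994) all-fault-type relation

\[ M = 5.08 + 1.16\,\log_{10}(\mathrm{SRL}), \]

with SRL in kilometers; coefficients of this kind vary with fault type, region, and the underlying earthquake database, which is one source of the 0.3-0.4 unit spread in M noted above.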
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi
Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which make it difficult to apply computerized sleep-stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The methodology comprises two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density functions of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is carried out based on conditional probability. The results showed close agreement with visual inspection by a clinician. The developed system can meet customized requirements in hospitals and institutions.
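Stripped to its core, the determination step is an application of Bayes' rule (a schematic reading of the abstract, not the authors' exact formulation):

\[ P(\text{stage} \mid \mathbf{x}) \;\propto\; P(\mathbf{x} \mid \text{stage})\,P(\text{stage}), \]

where the likelihoods P(x | stage) come from the parameter probability density functions learned from the clinician-scored records, and each epoch is assigned the stage with the highest conditional probability.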
Fixation probability in a two-locus intersexual selection model.
Durand, Guillermo; Lessard, Sabin
2016-06-01
We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.
A model for field toxicity tests
Kaiser, Mark S.; Finger, Susan E.
1996-01-01
Toxicity tests conducted under field conditions present an interesting challenge for statistical modelling. In contrast to laboratory tests, the concentrations of potential toxicants are not held constant over the test. In addition, the number and identity of toxicants that belong in a model as explanatory factors are not known and must be determined through a model selection process. We present one model to deal with these needs. This model takes the record of mortalities to form a multinomial distribution in which parameters are modelled as products of conditional daily survival probabilities. These conditional probabilities are in turn modelled as logistic functions of the explanatory factors. The model incorporates lagged values of the explanatory factors to deal with changes in the pattern of mortalities over time. The issue of model selection and assessment is approached through the use of generalized information criteria and power divergence goodness-of-fit tests. These model selection criteria are applied in a cross-validation scheme designed to assess the ability of a model to both fit data used in estimation and predict data deleted from the estimation data set. The example presented demonstrates the need for inclusion of lagged values of the explanatory factors and suggests that penalized likelihood criteria may not provide adequate protection against overparameterized models in model selection.
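Under the description above, the likelihood has a simple closed form; writing p_t for the conditional survival probability on day t (a schematic restatement with assumed notation):

\[ p_t = \frac{1}{1 + e^{-\beta' \mathbf{x}_t}}, \qquad \Pr(\text{death on day } t) = \Big(\prod_{s=1}^{t-1} p_s\Big)\,(1 - p_t), \]

with x_t including current and lagged toxicant concentrations; the counts of deaths per day, plus the survivors at test end, then follow a multinomial distribution with these cell probabilities.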
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to probabilistically predict the range of ionospheric parameters. This problem is solved in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order were calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At levels of sufficiently small probability, the distributions exhibit arbitrarily large deviations from the model of a normal process. Therefore, an attempt is made to describe the statistical samples {δfoF2} based on a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a non-holomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis makes it possible to conclude that a model based on a Poisson random process is applicable for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
Mate-Selection Systems and Criteria: Variation according to Family Structure.
ERIC Educational Resources Information Center
Lee, Gary R.; Stone, Lorene Hemphill
1980-01-01
Autonomous mate selection based on romantic attraction is more likely to be institutionalized in societies with nuclear family systems. Neolocal residence customs increase the probability that mate selection is autonomous but decrease the probability that it is based on romantic attraction. (Author)
Moreland, J; Thomson, M A
1994-06-01
The purpose of this study was to examine the efficacy of electromyographic biofeedback compared with conventional physical therapy for improving upper-extremity function in patients following a stroke. A literature search was done for the years 1976 to 1992. The selection criteria included single-blinded randomized control trials. Study quality was assessed for nine criteria. For functional (disability index or stage of recovery) and impairment outcomes, meta-analyses were performed on odds ratios for improvement versus no improvement. Mann-Whitney U-Test probability values were combined across studies. Six studies were selected, and outcome data were obtained for five studies. The common odds ratio was 2.2 for function and 1.1 for impairments in favor of biofeedback. The estimate of the number needed to treat to prevent a nonresponder was 11 for function and 22 for impairments. None of the meta-analyses were statistically significant. The results do not conclusively indicate superiority of either form of therapy. Although there is a chance of Type II error, the estimated size of the effect is small. Given this estimate of little or no difference, therapists need to consider cost, ease of application, and patient preference when selecting these therapies.
NASA Technical Reports Server (NTRS)
Elfer, N.; Meibaum, R.; Olsen, G.
1995-01-01
A unique collection of computer codes, Space Debris Surfaces (SD_SURF), has been developed to assist in the design and analysis of space debris protection systems. SD_SURF calculates and summarizes a vehicle's vulnerability to space debris as a function of impact velocity and obliquity. An SD_SURF analysis shows which velocities and obliquities are most likely to cause a penetration. This determination can help the analyst select a shield design that is best suited to the predominant penetration mechanism. The analysis also suggests the most suitable parameters for development or verification testing. The SD_SURF programs offer the option of either FORTRAN programs or Microsoft Excel spreadsheets and macros. The FORTRAN programs work with BUMPERII. The Excel spreadsheets and macros can be used independently or with selected output from the SD_SURF FORTRAN programs. Examples are presented of the interaction between space vehicle geometry, the space debris environment, and the penetration and critical-damage ballistic limit surfaces of the shield under consideration.
Sandhill crane roost selection, human disturbance, and forage resources
Pearse, Aaron T.; Krapu, Gary; Brandt, David
2017-01-01
Sites used for roosting represent a key habitat requirement for many species of birds because the availability and quality of roost sites can influence individual fitness. Birds select roost sites based on numerous factors, requirements, and motivations, and selection of roosts can be dynamic in time and space because of various ecological and environmental influences. For sandhill cranes (Antigone canadensis) at their main spring-staging area along the Platte River in south-central Nebraska, USA, past investigations of roosting cranes focused on physical channel characteristics related to perceived security as motivating roost distribution. We used 6,310 roost sites selected by 313 sandhill cranes over 5 spring migration seasons (2003-2007) to quantify resource selection functions of roost sites on the central Platte River using a discrete choice analysis. Sandhill cranes generally showed stronger selection for wider channels with shorter bank vegetation situated farther from potential human disturbance features such as roads, bridges, and dwellings. Furthermore, selection for roost sites with preferable physical characteristics (wide channels with short bank vegetation) was more resilient to nearby disturbance features than selection for narrower channels with taller bank vegetation. The amount of cornfield surrounding sandhill crane roost sites positively influenced the relative probability of use, but only for narrower channels (<100 m) and those with shorter bank vegetation. We confirmed key resource features that sandhill cranes selected at river channels along the Platte River, and after incorporating spatial variation due to human disturbance, our understanding of roost site selection was more robust, providing insights on how disturbance may interact with physical habitat features. Managers can use information on roost-site selection when developing plans to increase the probability of crane use at existing roost sites and to identify new areas for potential use if existing sites become limited.
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper discusses Weibull mixtures for characterizing the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
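A sketch of the selection step, fitting m-component Weibull mixtures by direct maximum likelihood and scoring them with AIC and BIC; the data and starting values are placeholders, and the paper's actual estimation procedure may differ:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def nll_mixture(params, x, m):
    """Negative log-likelihood of an m-component Weibull mixture."""
    w = np.abs(params[:m]); w = w / w.sum()   # mixture weights
    shapes = np.exp(params[m:2 * m])          # shape parameters c_i > 0
    scales = np.exp(params[2 * m:3 * m])      # scale parameters s_i > 0
    pdf = sum(w[i] * stats.weibull_min.pdf(x, shapes[i], scale=scales[i])
              for i in range(m))
    return -np.sum(np.log(pdf + 1e-300))

def fit_and_score(x, m):
    x0 = np.concatenate([np.ones(m), np.zeros(m),
                         np.log(np.full(m, x.mean()))])
    res = minimize(nll_mixture, x0, args=(x, m), method="Nelder-Mead",
                   options={"maxiter": 20000})
    k, n = 3 * m - 1, len(x)                  # free parameters, sample size
    return 2 * k + 2 * res.fun, k * np.log(n) + 2 * res.fun   # AIC, BIC

# Choose the number of components with the lower criterion value
x = stats.weibull_min.rvs(2.0, scale=1.0, size=500)  # placeholder data
for m in (1, 2, 3):
    print(m, fit_and_score(x, m))
```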
Optimal design of compact and connected nature reserves for multiple species.
Wang, Yicheng; Önal, Hayri
2016-04-01
When designing a conservation reserve system for multiple species, spatial attributes of the reserves must be taken into account at the species level. The existing optimal reserve design literature considers either one spatial attribute or, when multiple attributes are considered, restricts the analysis to one species. We built a linear integer programming model that incorporates compactness and connectivity of the landscape reserved for multiple species. The model identifies multiple reserves that each serve a subset of target species with a specified coverage probability threshold to ensure the species' long-term survival in the reserve, and each target species is covered (protected) with another probability threshold at the reserve-system level. We modeled compactness by minimizing the total distance between selected sites and central sites, and we modeled connectivity of a selected site to its designated central site by selecting at least one of its adjacent sites that is nearer to the central site. We considered both structural distances and functional distances that incorporate site quality. We tested the model using randomly generated data on 2 species, one a ground species requiring structural connectivity and the other an avian species requiring functional connectivity. We applied the model to 10 bird species listed as endangered by the state of Illinois (U.S.A.). Spatial coherence and selection cost of the reserves differed substantially depending on the weights assigned to these 2 criteria. The model can be used to design a reserve system for multiple species, especially species whose habitats are far apart, in which case multiple disjunct but compact and connected reserves are advantageous. The model can be modified to increase or decrease the distance between reserves so as to reduce or promote population connectivity. © 2015 Society for Conservation Biology.
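As a rough illustration of the compactness and connectivity devices described above (the notation here is our own, not the authors': x_ij = 1 if site i is assigned to the reserve centred at site j, d_ij a structural or functional distance, and N(i) the set of sites adjacent to i), one plausible algebraic sketch is:

```latex
\min \sum_{i}\sum_{j} d_{ij}\, x_{ij}
\qquad \text{subject to} \qquad
x_{ij} \;\le\; \sum_{\substack{k \in N(i) \\ d_{kj} < d_{ij}}} x_{kj}
\quad \forall\, i \neq j,
\qquad x_{ij} \in \{0,1\}.
```

The objective rewards compactness (small total distance to the central site), while the constraint forces every selected site to have at least one selected neighbour nearer to its centre, which chains back to the central site itself.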
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS), which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function induced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach for achieving such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
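The evaluation criterion reduces to estimating P(objective < target) under the noise distributions, which a simple Monte Carlo sketch can illustrate (the noise variables, their distributions, and the objective below are invented placeholders, not the HSCT model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical noise variables with assumed variability distributions
fuel_price = rng.normal(1.0, 0.15, n)          # relative to baseline
demand     = rng.triangular(0.7, 1.0, 1.2, n)  # relative market demand

# Placeholder economic objective: worsens with fuel price,
# improves with demand (illustrative only)
objective = 10.0 * fuel_price / demand

target = 10.5
p_success = np.mean(objective < target)        # P(objective < target)
print(f"P(objective < target) = {p_success:.3f}")
```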
Optimal Control via Self-Generated Stochasticity
NASA Technical Reports Server (NTRS)
Zak, Michail
2011-01-01
The problem of finding global maxima of functionals has been examined. The mathematical roots of the local-maxima difficulty are the same as those of the much simpler problem of finding the global maximum of a multi-dimensional function. The second problem is instability: even if an optimal trajectory is found, there is no guarantee that it is stable. As a result, a fundamentally new approach to optimal control is introduced, based upon two new ideas. The first idea is to represent the functional to be maximized as the limit of a probability density governed by an appropriately selected Liouville equation. The corresponding ordinary differential equations (ODEs) then become stochastic, and the sample solution with the largest value has the highest probability of appearing in an ODE simulation. The main advantages of the stochastic approach are that it is not sensitive to local maxima, the function to be maximized need only be integrable and not necessarily differentiable, and global equality and inequality constraints do not pose significant obstacles. The second idea is to remove possible instability of the optimal solution by equipping the control system with a self-stabilizing device. Applications of the proposed methodology include optimizing the performance of NASA spacecraft, as well as robot performance.
Probability of cell hits in selected organs and tissues by high-LET particles at the ISS orbit
NASA Technical Reports Server (NTRS)
Yasuda, H.; Komiyama, T.; Fujitaka, K.; Badhwar, G. D. (Principal Investigator)
2002-01-01
The fluences of high-LET particles (HLP) with LET(∞, H2O) greater than 15 keV µm^-1 in selected organs and tissues were measured with plastic nuclear track detectors using a life-size human phantom on the 9th Shuttle-Mir Mission (STS-91). The planar-track fluence of HLP during the 9.8-day mission ranged from 1.9 x 10^3 n cm^-2 (bladder) to 5.1 x 10^3 n cm^-2 (brain), a factor of 2.7. Based on these data, the probability of HLP hits to a mature cell of each organ or tissue was roughly estimated for a 90-day ISS mission. In the calculation, all cells were assumed to be spheres with a geometric cross-sectional area of 500 µm^2, and the cell-hit frequency from isotropic space radiation was assumed to follow a Poisson distribution. As a result, the probability of one or more hits to a single cell by HLP over 90 days ranged from 17% to 38%; that of two or more hits was estimated to be 1.3-8.2%. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
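Under the stated Poisson assumption, the hit probabilities follow from λ = fluence x cross-sectional area; a sketch with an assumed 90-day fluence (the published estimates involve detector and geometry factors not reproduced here):

```python
import numpy as np

# Illustrative values only; not a reproduction of the published numbers
fluence_90d = 4.7e4            # HLP tracks per cm^2 over 90 days (assumed)
cell_area   = 500e-8           # 500 um^2 expressed in cm^2
lam = fluence_90d * cell_area  # mean hits per cell (Poisson parameter)

p_ge1 = 1 - np.exp(-lam)                # P(N >= 1) = 1 - e^-lam
p_ge2 = 1 - np.exp(-lam) * (1 + lam)    # P(N >= 2) = 1 - e^-lam (1 + lam)
print(f"lambda = {lam:.3f}, P(>=1) = {p_ge1:.1%}, P(>=2) = {p_ge2:.2%}")
```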
VLA observations of a complete sample of extragalactic X-ray sources. II
NASA Technical Reports Server (NTRS)
Schild, R.; Zamorani, G.; Gioia, I. M.; Feigelson, E. D.; Maccacaro, T.
1983-01-01
A complete sample of 35 X-ray selected sources found with the Einstein Observatory has been observed with the Very Large Array at 6 cm to investigate the relationship between radio and X-ray emission in extragalactic objects. Detections include three active galactic nuclei (AGNs), two clusters or groups of galaxies, two individual galaxies, and two BL Lac objects. The frequency of radio emission in X-ray selected AGNs is compared with that of optically selected quasars using the integral radio-optical luminosity function. The result suggests that the probability for X-ray selected quasars to be radio sources is higher than for those optically selected. No obvious correlation is found in the sample between the richness or X-ray luminosity of the cluster and the presence of a galaxy with radio luminosity at 5 GHz larger than 10^30 ergs s^-1 Hz^-1.
From Evolutionary Allometry to Sexual Display: (A Reply to Holman and Bro-Jørgensen).
Raia, Pasquale; Passaro, Federico; Carotenuto, Francesco; Meiri, Shai; Piras, Paolo
2016-08-01
Conventional wisdom holds that the complex shapes of deer antlers are produced under the sole influence of sexual selection. We questioned this view by demonstrating that evolutionary trends toward increased body size passively yield more-complex ornaments, with similar allometric slopes, even in organisms where no effect of sexual selection is possible. Recent investigations suggest that sexual selection on the antlers of larger deer species is stronger than on those of smaller species; hence, the use of conspicuous antlers for display in large male deer is a secondary function driven by especially intense sexual selection on these large-bodied species. Since ancestral deer were small and had very simple antlers, such intense selection on antler shape was probably absent in early deer. Therefore, the evolution of complex ornaments is coupled with body size evolution, even in deer.
The statistical theory of the fracture of fragile bodies. Part 2: The integral equation method
NASA Technical Reports Server (NTRS)
Kittl, P.
1984-01-01
It is demonstrated how, with the aid of a bending test, the Weibull fracture risk function can be determined - without postulating its analytical form - by solving an integral equation. The respective solutions for beams of rectangular and circular cross section are given. In the first case the function is expressed as an algorithm and in the second in the form of a series. Taking into account that the cumulative fracture probability appearing in the solution of the integral equation must be continuous and monotonically increasing, any case of fabrication or selection of samples can be treated.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions were computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using values for the mean and standard deviation predicted by the method of moments, the distribution parameters were estimated. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
Shapley, Robert M.; Xing, Dajun
2012-01-01
Theoretical considerations have led to the concept that the cerebral cortex is operating in a balanced state in which synaptic excitation is approximately balanced by synaptic inhibition from the local cortical circuit. This paper is about the functional consequences of the balanced state in sensory cortex. One consequence is gain control: there is experimental evidence and theoretical support for the idea that local circuit inhibition acts as a local automatic gain control throughout the cortex. Second, inhibition increases cortical feature selectivity: many studies of different sensory cortical areas have reported that suppressive mechanisms contribute to feature selectivity. Synaptic inhibition from the local microcircuit should be untuned (or broadly tuned) for stimulus features because of the microarchitecture of the cortical microcircuit. Untuned inhibition probably is the source of Untuned Suppression that enhances feature selectivity. We studied inhibition’s function in our experiments, guided by a neuronal network model, on orientation selectivity in the primary visual cortex, V1, of the Macaque monkey. Our results revealed that Untuned Suppression, generated by local circuit inhibition, is crucial for the generation of highly orientation-selective cells in V1 cortex. PMID:23036513
NASA Astrophysics Data System (ADS)
Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki
2018-02-01
To estimate the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The resulting binary data on fracture or non-fracture of each cladding tube specimen were analyzed statistically. A method was proposed to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR), using Bayesian inference for generalized linear models: probit, logit, and log-probit. Model selection was then performed in terms of physical characteristics and two information criteria, the widely applicable information criterion (WAIC) and the widely applicable Bayesian information criterion (WBIC). The log-probit model proved the best of the three at estimating the fracture probability, in terms of prediction accuracy both for future data and with respect to the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability level, with 95% confidence, for the cladding tube specimens.
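A simplified frequentist sketch of a log-probit fit (the paper uses Bayesian inference; the data below are invented and `nll` is a hypothetical helper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical binary fracture data: ECR (%) and fracture indicator
ecr = np.array([10, 12, 14, 16, 18, 20, 22, 24, 26, 28], float)
frac = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

def nll(theta):
    a, b = theta
    p = norm.cdf(a + b * np.log(ecr))   # log-probit link
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(frac * np.log(p) + (1 - frac) * np.log(1 - p))

a, b = minimize(nll, x0=[-5.0, 2.0], method="Nelder-Mead").x

# ECR at which the fitted fracture probability reaches 5%
ecr_5pct = np.exp((norm.ppf(0.05) - a) / b)
print(f"5% fracture probability at ECR = {ecr_5pct:.1f}%")
```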
Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)
NASA Astrophysics Data System (ADS)
Peters, Christina; Malz, Alex; Hlozek, Renée
2018-01-01
The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry; the model includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves, and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset, a subset of high-probability photometrically classified supernovae, and reducing the photometric redshift probability to a single measurement and error bar.
[Inverse probability weighting (IPW) for evaluating and "correcting" selection bias].
Narduzzi, Silvia; Golini, Martina Nicole; Porta, Daniela; Stafoggia, Massimo; Forastiere, Francesco
2014-01-01
Inverse probability weighting (IPW) is a methodology developed to account for missingness and selection bias caused by non-random selection of observations, or by non-random lack of some information in a subgroup of the population. We provide an overview of the IPW methodology and an application in a cohort study of the association between exposure to traffic-related air pollution (nitrogen dioxide, NO₂) and children's IQ at 7 years of age. The methodology corrects the analysis by weighting the observations by the probability of being selected. IPW is based on the assumption that individual information that can predict the probability of inclusion (non-missingness) is available for the entire study population, so that, after taking it into account, we can make inferences about the entire target population from the non-missing observations alone. The procedure is as follows: first, we consider the entire study population and estimate the probability of non-missing information using a logistic regression model, where the response is non-missingness and the covariates are its possible predictors. The weight of each subject is the inverse of the predicted probability. The analysis is then performed only on the non-missing observations, using a weighted model. IPW is a technique that embeds the selection process in the analysis, but its effectiveness in correcting selection bias depends on the availability of enough information, for the entire population, to predict the non-missingness probability. In the example presented, the IPW application showed that the effect of exposure to NO₂ on children's verbal IQ is stronger than the effect estimated without accounting for the selection process.
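A compact sketch of the two-step procedure, assuming scikit-learn and statsmodels and hypothetical array names (Z, observed, x, y); the original analysis may have used different software:

```python
from sklearn.linear_model import LogisticRegression
import statsmodels.api as sm

# Z: predictors of non-missingness, available for everyone (2-D array)
# observed: 1 if the outcome was measured, else 0
# x: exposure (e.g., NO2); y: outcome (e.g., IQ), valid where observed == 1
def ipw_fit(Z, observed, x, y):
    # Step 1: model the probability of being observed for the full cohort
    ps = LogisticRegression().fit(Z, observed).predict_proba(Z)[:, 1]
    w = 1.0 / ps[observed == 1]          # inverse-probability weights
    # Step 2: weighted outcome analysis on the non-missing subset
    X = sm.add_constant(x[observed == 1])
    return sm.WLS(y[observed == 1], X, weights=w).fit()
```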
Computational analysis of sequence selection mechanisms.
Meyerguz, Leonid; Grasso, Catherine; Kleinberg, Jon; Elber, Ron
2004-04-01
Mechanisms leading to gene variations are responsible for the diversity of species and are important components of the theory of evolution. One constraint on gene evolution is that of protein foldability; the three-dimensional shapes of proteins must be thermodynamically stable. We explore the impact of this constraint and calculate properties of foldable sequences using 3660 structures from the Protein Data Bank. We seek a selection function that receives sequences as input, and outputs survival probability based on sequence fitness to structure. We compute the number of sequences that match a particular protein structure with energy lower than the native sequence, the density of the number of sequences, the entropy, and the "selection" temperature. The mechanism of structure selection for sequences longer than 200 amino acids is approximately universal. For shorter sequences, it is not. We speculate on concrete evolutionary mechanisms that show this behavior.
Statistical validation of normal tissue complication probability models.
Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis
2012-09-01
To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
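A sketch of the flavor of this validation, combining an L1-penalized (LASSO-type) logistic model, cross-validated AUC, and a permutation test; synthetic data and scikit-learn stand in for the clinical dataset and the authors' repeated double cross-validation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import shuffle

# Hypothetical predictors (dose-volume metrics) and outcome (xerostomia)
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=120) > 0).astype(int)

lasso = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc").mean()

# Permutation test: shuffling the labels breaks any real association,
# so the resulting AUCs form a null distribution for model performance
null_auc = [cross_val_score(lasso, X, shuffle(y, random_state=i),
                            cv=5, scoring="roc_auc").mean()
            for i in range(50)]
p_value = np.mean(np.array(null_auc) >= auc)
print(f"CV AUC = {auc:.3f}, permutation p = {p_value:.3f}")
```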
Maritime Anomaly Detection: Domain Introduction and Review of Selected Literature
2011-10-01
This “side” information can be used to justify the behaviour of an entity. This is one more reason why good situational awareness is needed to... the probability density function is updated using expectation-maximization and the Kullback-Leibler information metric. The last step tries to... on this assessment, recommendations about future directions are made. To prevent confusion, the use of the terms “data” and “information” must be
ERIC Educational Resources Information Center
Huynh, Huynh
By noting that a Rasch or two parameter logistic (2PL) item belongs to the exponential family of random variables and that the probability density function (pdf) of the correct response (X=1) and the incorrect response (X=0) are symmetric with respect to the vertical line at the item location, it is shown that the conjugate prior for ability is…
Fixation Probability in a Haploid-Diploid Population
Bessho, Kazuhiro; Otto, Sarah P.
2017-01-01
Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright–Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. PMID:27866168
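For the branching-process approximation in the simplest (purely haploid) case, the fixation probability is one minus the extinction probability, the fixed point of the offspring-distribution generating function; a sketch assuming Poisson(1 + s) offspring (the haploid-diploid case weights the two phases and is not reproduced here):

```python
import numpy as np

def fixation_prob(s, tol=1e-12):
    """Fixation probability of a beneficial mutant under a branching
    process with Poisson(1 + s) offspring (classic haploid result)."""
    q = 0.5                                # extinction probability guess
    while True:
        q_new = np.exp((1 + s) * (q - 1))  # pgf fixed point q = f(q)
        if abs(q_new - q) < tol:
            return 1 - q_new
        q = q_new

for s in (0.01, 0.05, 0.1):
    print(s, fixation_prob(s), 2 * s)      # compare with Haldane's ~2s
```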
NASA Astrophysics Data System (ADS)
Roy, M. L.; Roy, A. G.; Grant, J. W.
2013-12-01
For stream fish, flow properties have been shown to influence energy expenditure and habitat selection. Flow properties directly influence the velocity of drifting prey items, and therefore the probability that fish catch prey. They might also affect prey trajectories, which can become more unpredictable with increased turbulence. In this study, we combined field and experimental approaches to examine the foraging behaviour and position choice of juvenile Atlantic salmon in various flow conditions. We used an in situ portable flume, which consists of a transparent enclosure (observation section) equipped with hinged doors upstream that funnel water inside and modify flow properties. Portable flumes have been developed and used to simulate benthic invertebrate drift and sediment transport, but have not previously been used to examine fish behaviour. Specifically, we tested the predictions that 1) capture probability declines with turbulence, 2) the number of attacks and the proportion of time spent on the substrate decrease with turbulence, and 3) parr preferentially select focal positions with lower turbulence than random locations across the observation section. The portable flume allowed us to create four flow treatments along a gradient of mean downstream velocity and turbulence. Fish were fed brine shrimp and filmed through translucent panels with a submerged camera. Twenty-three juvenile salmon were captured and subjected to each flow treatment in 20-minute feeding trials. Our results showed high inter-individual variability in foraging success and time budget within each flow treatment, associated with levels of velocity and turbulence. However, the average prey capture probability for the two lower-velocity treatments was higher than for the two higher-velocity treatments. An inverse relationship between flow velocity and prey capture probability was observed, which might result from a reduction in prey detection distance. Fish preferentially selected focal positions in moderate-velocity, low-turbulence areas and avoided the most turbulent locations. Similarly, selection of downward average vertical velocity and avoidance of upward velocity might be associated with the ease of maintaining position. Considering their streamlined, hydrodynamically efficient shape, average vertical velocity might be an important feature driving microhabitat selection. Our results do not rule out an effect of turbulence on fish foraging but rather highlight the need to investigate this question across a wider range of hydraulic conditions, in order to eventually implement a turbulence-dependent prey capture function that might be useful in mechanistic foraging models.
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes with identical spectral densities but different probability density functions (PDFs) is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of the diffusion processes for the onset of chaos is derived using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results, demonstrating that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
Halo correlations in nonlinear cosmic density fields
NASA Astrophysics Data System (ADS)
Bernardeau, F.; Schaeffer, R.
1999-09-01
The question we address in this paper is the determination of the correlation properties of the dark matter halos appearing in cosmic density fields once they have undergone a strongly nonlinear evolution induced by gravitational dynamics. A series of previous works has indicated the kind of non-Gaussian features induced by nonlinear evolution in terms of the high-order correlation functions. Assuming such patterns for the matter field, i.e., that the high-order correlation functions behave as products of two-body correlation functions, we derive the correlation properties of the halos, which are assumed to represent the correlation properties of galaxies or clusters. The hierarchical pattern originally induced by gravity is shown to be conserved for the halos. The strength of their correlations at any order varies, however, and is found to depend only on their internal properties, namely on the parameter x ~ m/r^(3-γ), where m is the mass of the halo, r its size, and γ the power-law index of the two-body correlation function. This internal parameter is seen to be close to the depth of the internal potential well of virialized objects. We were able to derive the explicit form of the generating function of the moments of the halo count probability distribution function. In particular, we show explicitly that, generically, S_P(x) → P^(P-2) in the rare-halo limit. Various illustrations of our general results are presented. As a function of the properties of the underlying matter field, we construct the count probabilities for halos and in particular discuss the halo void probability. We evaluate the dependence of the halo mass function on the environment: within clusters, hierarchical clustering implies that higher masses are favored. These properties arise solely from what is a natural bias (i.e., one naturally induced by gravity) between the observed objects and the unseen matter field, and from how it manifests itself depending on the selection effects imposed.
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to interpret probability weighting functions from the point of view of the decision maker's waiting time. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected-value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, a probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to measure the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected-value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
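The expected-value model's weighting function can be evaluated directly; a sketch of w(p) = 1/(1 - k ln p) with an illustrative k (the fitted values per participant are not reproduced here):

```python
import numpy as np

def w_hyperbolic(p, k=3.0):
    """Probability weighting function from the hyperbolic
    time-discounting expected-value model: w(p) = 1 / (1 - k*ln p)."""
    p = np.asarray(p, dtype=float)
    return 1.0 / (1.0 - k * np.log(p))

p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(np.round(w_hyperbolic(p), 3))
# with k = 3: w(0.01) ~ 0.068 > 0.01 (small p overweighted),
#             w(0.9)  ~ 0.760 < 0.9  (large p underweighted)
```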
The nature of selection on the major histocompatibility complex.
Apanius, V; Penn, D; Slev, P R; Ruff, L R; Potts, W K
1997-01-01
Only natural selection can account for the extreme genetic diversity of genes of the major histocompatibility complex (MHC). Although the structure and function of classical MHC genes are well understood at the molecular and cellular levels, there is controversy about how MHC diversity is selectively maintained. Diversifying selection can be driven by pathogen interactions and by inbreeding avoidance mechanisms. Pathogen-driven selection can maintain MHC polymorphism through heterozygote advantage or through frequency-dependent selection due to pathogen evasion of MHC-dependent immune recognition. Empirical evidence demonstrates that specific MHC haplotypes are resistant to certain infectious agents while susceptible to others. These data are consistent with both the heterozygote-advantage and frequency-dependent models, and additional research is needed to discriminate between these mechanisms. Infectious agents can precipitate autoimmunity and can potentially contribute to MHC diversity through molecular mimicry and by favoring immunodominance. MHC-dependent abortion and mate choice, based on olfaction, can also maintain MHC diversity and probably function both to avoid genome-wide inbreeding and to produce MHC-heterozygous offspring with increased immune responsiveness. Although this diverse set of hypotheses is often treated as competing alternatives, we believe that they all fit into a coherent, internally consistent thesis. It is likely that, at least in some species, all of these mechanisms operate, leading to the extreme diversification found in MHC genes.
Habitat selection of Rocky Mountain elk in a nonforested environment
Sawyer, H.; Nielson, R.M.; Lindzey, F.G.; Keith, L.; Powell, J.H.; Abraham, A.A.
2007-01-01
Recent expansions by Rocky Mountain elk (Cervus elaphus) into nonforested habitats across the Intermountain West have required managers to reconsider the traditional paradigms of forage and cover as they relate to managing elk and their habitats. We examined seasonal habitat selection patterns of a hunted elk population in a nonforested high-desert region of southwestern Wyoming, USA. We used 35,246 global positioning system locations collected from 33 adult female elk to model probability of use as a function of 6 habitat variables: slope, aspect, elevation, habitat diversity, distance to shrub cover, and distance to road. We developed resource selection probability functions for individual elk, and then we averaged the coefficients to estimate population-level models for summer and winter periods. We used the population-level models to generate predictive maps by assigning pixels across the study area to 1 of 4 use categories (i.e., high, medium-high, medium-low, or low), based on quartiles of the predictions. Model coefficients and predictive maps indicated that elk selected for summer habitats characterized by higher elevations in areas of high vegetative diversity, close to shrub cover, northerly aspects, moderate slopes, and away from roads. Winter habitat selection patterns were similar, except elk shifted to areas with lower elevations and southerly aspects. We validated predictive maps by using 528 locations collected from an independent sample of radiomarked elk (n = 55) and calculating the proportion of locations that occurred in each of the 4 use categories. Together, the high- and medium-high use categories of the summer and winter predictive maps contained 92% and 74% of summer and winter elk locations, respectively. Our population-level models and associated predictive maps were successful in predicting winter and summer habitat use by elk in a nonforested environment. In the absence of forest cover, elk seemed to rely on a combination of shrubs, topography, and low human disturbance to meet their thermal and hiding cover requirements.
How Life History Can Sway the Fixation Probability of Mutants
Li, Xiang-Yi; Kurokawa, Shun; Giaimo, Stefano; Traulsen, Arne
2016-01-01
In this work, we study the effects of demographic structure on evolutionary dynamics when selection acts on reproduction, survival, or both. In contrast to the previously discovered pattern that the fixation probability of a neutral mutant decreases while the population becomes younger, we show that a mutant with a constant selective advantage may have a maximum or a minimum of the fixation probability in populations with an intermediate fraction of young individuals. This highlights the importance of life history and demographic structure in studying evolutionary dynamics. We also illustrate the fundamental differences between selection on reproduction and selection on survival when age structure is present. In addition, we evaluate the relative importance of size and structure of the population in determining the fixation probability of the mutant. Our work lays the foundation for also studying density- and frequency-dependent effects in populations when demographic structures cannot be neglected. PMID:27129737
qPR: An adaptive partial-report procedure based on Bayesian inference.
Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin
2016-08-01
Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
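A sketch of one adaptive step in the spirit of qPR: a grid posterior over the three decay parameters and selection of the cue delay with maximum expected information gain (parameter grids, ranges, and names are our assumptions, not the published implementation):

```python
import numpy as np

# Grid over the three parameters of an exponential decay of accuracy:
# p(correct | delay) = b + (a0 - b) * exp(-delay / tau)
a0s = np.linspace(0.5, 1.0, 11)   # initial accuracy
taus = np.linspace(0.1, 1.0, 11)  # decay time constant (s)
bs = np.linspace(0.2, 0.5, 7)     # asymptotic accuracy
A0, TAU, B = np.meshgrid(a0s, taus, bs, indexing="ij")
prior = np.ones(A0.shape); prior /= prior.sum()
delays = np.linspace(0.0, 1.0, 9)          # candidate cue delays (s)

def p_correct(delay):
    return B + (A0 - B) * np.exp(-delay / TAU)

def expected_info_gain(delay, prior):
    p = p_correct(delay)
    m1 = np.sum(prior * p)                  # predictive P(correct)
    post1 = prior * p / m1                  # posterior if correct
    post0 = prior * (1 - p) / (1 - m1)      # posterior if incorrect
    H = lambda q: -np.sum(q * np.log(q + 1e-300))
    return H(prior) - m1 * H(post1) - (1 - m1) * H(post0)

best = max(delays, key=lambda d: expected_info_gain(d, prior))
print(f"next cue delay to test: {best:.2f} s")
# After observing the response at `best`, the posterior becomes
# prior * p (if correct) or prior * (1 - p), renormalized.
```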
Su, Yu-Shiang; Chen, Jheng-Ting; Tang, Yong-Jheng; Yuan, Shu-Yun; McCarrey, Anna C; Goh, Joshua Oon Soo
2018-05-21
Appropriate neural representation of value and application of decision strategies are necessary to make optimal investment choices in real life. Normative human aging alters neural selectivity and control processing in brain regions implicated in value-based decision processing including striatal, medial temporal, and frontal areas. However, the specific neural mechanisms of how these age-related functional brain changes modulate value processing in older adults remain unclear. Here, young and older adults performed a lottery-choice functional magnetic resonance imaging experiment in which probabilities of winning different magnitudes of points constituted expected values of stakes. Increasing probability of winning modulated striatal responses in young adults, but modulated medial temporal and ventromedial prefrontal areas instead in older adults. Older adults additionally engaged higher responses in dorso-medio-lateral prefrontal cortices to more unfavorable stakes. Such extrastriatal involvement mediated age-related increase in risk-taking decisions. Furthermore, lower resting-state functional connectivity between lateral prefrontal and striatal areas also predicted lottery-choice task risk-taking that was mediated by higher functional connectivity between prefrontal and medial temporal areas during the task, with this mediation relationship being stronger in older than younger adults. Overall, we report evidence of a systemic neural mechanistic change in processing of probability in mixed-lottery values with age that increases risk-taking of unfavorable stakes in older adults. Moreover, individual differences in age-related effects on baseline frontostriatal communication may be a central determinant of such subsequent age differences in value-based decision neural processing and resulting behaviors. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.
1981-01-01
An active controls technology (ACT) system architecture was selected based on current-technology system elements, and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The selected system employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective of a probability of crucial-function failure below 1 x 10^-9 per 1-hr flight can be met with current-technology system components, if the software is assumed fault-free and coverage approaching 1.0 can be provided. The optimal-control-theory approach to ACT control law synthesis yielded control law performance comparable to the classical s-domain approach, but much more systematically and directly. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.
Hector, Andy; Philipson, Christopher; Saner, Philippe; Chamagne, Juliette; Dzulkifli, Dzaeman; O'Brien, Michael; Snaddon, Jake L.; Ulok, Philip; Weilenmann, Maja; Reynolds, Glen; Godfray, H. Charles J.
2011-01-01
Relatively little is known about the relationship between biodiversity and ecosystem functioning in forests, especially in the tropics. We describe the Sabah Biodiversity Experiment: a large-scale, long-term field study on the island of Borneo. The project aims at understanding the relationship between tree species diversity and the functioning of lowland dipterocarp rainforest during restoration following selective logging. The experiment is planned to run for several decades (from seed to adult tree), so here we focus on introducing the project and its experimental design and on assessing initial conditions and the potential for restoration of the structure and functioning of the study system, the Malua Forest Reserve. We estimate residual impacts 22 years after selective logging by comparison with an appropriate neighbouring area of primary forest of similar conditions in Danum Valley. There was no difference in the alpha or beta species diversity of transect plots in the two forest types, probably owing to the selective nature of the logging and potential effects of competitive release. However, despite equal total stem density, forest structure differed as expected, with a deficit of large trees and a surfeit of saplings in selectively logged areas. These impacts on structure have the potential to influence ecosystem functioning. In particular, above-ground biomass and carbon pools in selectively logged areas were only 60 per cent of those in the primary forest, even after 22 years of recovery. Our results establish the initial conditions for the Sabah Biodiversity Experiment and confirm the potential to accelerate restoration by using enrichment planting of dipterocarps to overcome recruitment limitation. What role dipterocarp diversity plays in restoration will become clear only with long-term results. PMID:22006970
Effects of environmental changes on natural selection active on human polygenic traits.
Ulizzi, L
1993-06-01
During the last century, industrialized countries experienced such an improvement in socioeconomic conditions and in sanitation that it is likely that the selective forces active on human metric traits have been modified. Perinatal mortality as a function of birth weight is one of the clearest examples of natural selection in humans. Here, trends over time of stabilizing and directional selection associated with birth weight have been analyzed in Japan from 1969 to 1989. The population of newborns has been subdivided according to gestational age, which is one of the main covariates of birth weight. The results show that in full-term babies both stabilizing and directional selection are coming to an end, whereas in babies born after 8 months of gestation these selective forces are still active, even if at much lower levels than in the past. The peculiar results found in the 7-month-gestation population are probably due to grossly abnormal cases of immaturity.
[The brain mechanisms of emotions].
Simonov, P V
1997-01-01
At the 23rd International Congress of Physiological Sciences (Tokyo, 1965), experimental results led us to the conclusion that emotions are determined by an actual need and the estimated probability (possibility) of its satisfaction. A low probability of need satisfaction leads to negative emotions, which the subject actively minimizes. An increased probability of satisfaction, as compared with the earlier forecast, generates positive emotions, which the subject tries to maximize, that is, to enhance, to prolong, to repeat. We named our concept the Need-Informational Theory of Emotions. According to this theory, motivation, emotion, and estimation of probability have different neuromorphological substrates. Activation of the frontal neocortex through the motivatiogenic structures of the hypothalamus orients behavior toward signals with a high probability of reinforcement, while the hippocampus is necessary for reactions to signals of low-probability events, which are typical for the emotionally excited brain. By comparing motivational excitation with available stimuli or their engrams, the amygdala selects the dominant motivation, destined to be satisfied in the first instance. In classical conditioning and escape reactions, reinforcement was related to the involvement of hypothalamic negative-emotion neurons, whereas in avoidance reactions positive-emotion neurons were involved. The role of the left and right frontal neocortex in the appearance of positive or negative emotions depends on these informational (cognitive) functions.
A reliability-based cost effective fail-safe design procedure
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1976-01-01
The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations are used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated by examples. In particular, the optimum design of a stiffened panel is discussed.
Enhanced CARES Software Enables Improved Ceramic Life Prediction
NASA Technical Reports Server (NTRS)
Janosik, Lesley A.
1997-01-01
The NASA Lewis Research Center has developed award-winning software that enables American industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, graphite) structures in a wide variety of 21st century applications. The CARES (Ceramics Analysis and Reliability Evaluation of Structures) series of software is successfully used by numerous engineers in industrial, academic, and government organizations as an essential element of the structural design and material selection processes. The latest version of this software, CARES/Life, provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. CARES/Life was recently enhanced by adding new modules designed to improve functionality and user-friendliness. In addition, a beta version of the newly developed CARES/Creep program (for determining the creep life of monolithic ceramic components) has just been released to selected organizations.
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
Machine recognition of navel orange worm damage in x-ray images of pistachio nuts
NASA Astrophysics Data System (ADS)
Keagy, Pamela M.; Parvin, Bahram; Schatzki, Thomas F.
1995-01-01
Insect infestation increases the probability of aflatoxin contamination in pistachio nuts. A non-destructive test is currently not available to determine the insect content of pistachio nuts. This paper uses film X-ray images of various types of pistachio nuts to assess the possibility of machine recognition of insect-infested nuts. Histogram parameters of four derived images are used in discriminant functions to select insect-infested nuts from specific processing streams.
Decision theory applied to image quality control in radiology.
Lessa, Patrícia S; Caous, Cristofer A; Arantes, Paula R; Amaro, Edson; de Souza, Fernando M Campello
2008-11-13
The present work applies decision theory to radiological image quality control (QC) in the diagnostic routine. The main problem addressed in the framework of decision theory is whether to accept or reject a film lot from a radiology service. The probability of each decision, for a determined set of variables, was obtained from the selected films. Based on the routine of a radiology service, a decision probability function was determined for each considered group of combined characteristics related to film quality control. These characteristics were framed in a set of 8 possibilities, resulting in 256 possible decision rules. To determine a general utility function for assessing decision risk, we used a single parameter, r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), resulting in eight possible combinations. Depending on the value of r, more or less risk is associated with the decision-making. The utility function was evaluated to determine the probability of a decision, with decisions informed by the opinions of patients and administrators from a radiology service center. The model is a formal quantitative approach to decision-making about medical imaging quality, providing an instrument to discriminate what is really necessary to accept or reject a film or a film lot. The method presented herein can help to assess the risk level of an incorrect radiological diagnosis decision.
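As a toy rendering of the combinatorics just described (a sketch only; the probabilities and utilities below are invented, not taken from the study), the 8 characteristic combinations with a binary accept/reject choice for each yield 2^8 = 256 decision rules, each of which can be scored by its expected utility:

    from itertools import product

    p = [0.20, 0.10, 0.15, 0.05, 0.10, 0.10, 0.20, 0.10]    # P(combination), hypothetical
    u_accept = [1.0, 0.5, -0.2, -1.0, 0.8, 0.3, -0.5, 0.1]  # utility of accepting the lot
    u_reject = [0.0] * 8                                    # baseline utility of rejecting

    rules = list(product([0, 1], repeat=8))                 # all 256 possible decision rules

    def expected_utility(rule):
        return sum(p[c] * (u_accept[c] if rule[c] else u_reject[c]) for c in range(8))

    best = max(rules, key=expected_utility)
    print(len(rules), best, round(expected_utility(best), 3))  # best rule accepts where utility > 0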
Impact of literacy and years of education on the diagnosis of dementia: A population-based study.
Contador, Israel; Del Ser, Teodoro; Llamas, Sara; Villarejo, Alberto; Benito-León, Julián; Bermejo-Pareja, Félix
2017-03-01
The effect of different educational indices on the clinical diagnosis of dementia requires more investigation. We compared the differential influence of two educational indices (EIs), years of schooling and level of education (i.e., null/low literacy, can read and write, primary school, and secondary school), on global cognition, functional performance, and the probability of having a dementia diagnosis. A total of 3,816 participants were selected from the population-based study of older adults "Neurological Disorders in Central Spain" (NEDICES). The 37-item version of the Mini-Mental State Examination (MMSE-37) and Pfeffer's questionnaire were applied to assess cognitive and functional performance, respectively. The diagnosis of dementia was made by expert neurologists according to Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (DSM-IV) criteria. Logistic regression models adjusted for potential confounders were used to test the association between the two EIs and dementia diagnosis. Both EIs were significantly associated with cognitive and functional scores, but individuals with null/low literacy performed significantly worse on the MMSE-37 than literates when these groups were compared at equal years of schooling. The two EIs were also related to an increased probability of dementia diagnosis in the logistic models, but the association was stronger for level of education than for years of schooling. Literacy predicted cognitive performance over and above years of schooling. Lower education increases the probability of having a dementia diagnosis, but the impact of different EIs is not uniform.
A testable model of earthquake probability based on changes in mean event size
NASA Astrophysics Data System (ADS)
Imoto, Masajiro
2003-02-01
We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing these changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The background distribution is symmetric, with a center corresponding to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends toward a decreased b value. The difference between the two distributions was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in a range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, the only event during the period of prospective testing.
The preference of probability over negative values in action selection.
Neyedli, Heather F; Welsh, Timothy N
2015-01-01
It has previously been found that when participants are presented with a pair of motor prospects, they can select the prospect with the largest maximum expected gain (MEG). Many of those decisions, however, were trivial because of large differences in MEG between the prospects. The purpose of the present study was to explore participants' preferences when making non-trivial decisions between two motor prospects. Participants were presented with pairs of prospects that: 1) differed in MEG with either only the values or only the probabilities differing between the prospects; and 2) had similar MEG with one prospect having a larger probability of hitting the target and a higher penalty value and the other prospect a smaller probability of hitting the target but a lower penalty value. In different experiments, participants either had 400 ms or 2000 ms to decide between the prospects. It was found that participants chose the configuration with the larger MEG more often when the probability varied between prospects than when the value varied. In pairs with similar MEGs, participants preferred a larger probability of hitting the target over a smaller penalty value. These results indicate that participants prefer probability information over negative value information in a motor selection task.
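To make the quantities concrete, here is a minimal sketch of maximum expected gain for two motor prospects; the hit probabilities and point values are invented, chosen only so that the two prospects trade hit probability against penalty size while having equal MEG:

    def expected_gain(p_hit, v_hit, p_penalty, v_penalty):
        # expected gain of one target/penalty configuration ("prospect")
        return p_hit * v_hit + p_penalty * v_penalty

    # larger probability of hitting the target, but a harsher penalty value ...
    a = expected_gain(0.9, 100, 0.1, -400)   # 90 - 40 = 50
    # ... versus a smaller hit probability with a milder penalty value
    b = expected_gain(0.7, 100, 0.1, -200)   # 70 - 20 = 50
    print(a, b)                              # equal MEG: a non-trivial choice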
To Eat or Not to Eat? Debris Selectivity by Marine Turtles
Schuyler, Qamar; Hardesty, Britta Denise; Wilcox, Chris; Townsend, Kathy
2012-01-01
Marine debris is a growing problem for wildlife and has been documented to affect more than 267 species worldwide. We investigated the prevalence of marine debris ingestion in 115 sea turtles stranded in Queensland from 2006 to 2011, and assessed how ingestion rates differ between species (Eretmochelys imbricata vs. Chelonia mydas) and by turtle size class (smaller oceanic feeders vs. larger benthic feeders). Concurrently, we conducted 25 beach surveys to estimate the composition of the debris present in the marine environment. Based on this proxy measurement of debris availability, we modeled turtles' debris preferences (color and type) using a resource selection function, a method traditionally used for habitat and food selection. We found no significant difference in the overall probability of ingesting debris between the two species studied, both of which have similar life histories. Curved carapace length, however, was inversely correlated with the probability of ingesting debris; 54.5% of pelagic-sized turtles had ingested debris, whereas only 25% of benthic-feeding turtles were found with debris in their gastrointestinal systems. Benthic- and pelagic-sized turtles also exhibited different selectivity ratios for debris ingestion. Benthic-phase turtles showed a strong selectivity for soft, clear plastic, lending support to the hypothesis that sea turtles ingest debris because it resembles natural prey items such as jellyfish. Pelagic turtles were much less selective in their feeding, though they showed a trend toward selectivity for rubber items such as balloons. Most ingested items were plastic and were positively buoyant. This study highlights the need to address increasing amounts of plastic in the marine environment, and provides evidence for the disproportionate ingestion of balloons by marine turtles. PMID:22829894
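In its simplest form, the selectivity idea reduces to a use-versus-availability ratio; the proportions in the following sketch are invented (the study itself fits a resource selection function rather than raw ratios):

    ingested  = {"soft_clear_plastic": 0.55, "hard_plastic": 0.30, "rubber": 0.15}  # hypothetical
    available = {"soft_clear_plastic": 0.20, "hard_plastic": 0.65, "rubber": 0.15}  # beach surveys

    for debris_type in ingested:
        w = ingested[debris_type] / available[debris_type]  # w > 1 indicates selection for the type
        print(debris_type, round(w, 2))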
Self-imposed length limits in recreational fisheries
Chizinski, Christopher J.; Martin, Dustin R.; Hurley, Keith L.; Pope, Kevin L.
2014-01-01
A primary motivating factor in the decision to harvest a fish among consumptive-orientated anglers is the size of the fish. There is likely a cost-benefit trade-off for the harvest of individual fish that is size and species dependent, which should produce a logistic-type response of fish fate (release or harvest) as a function of fish size and species. We define the self-imposed length limit as the length at which a captured fish had a 50% probability of being harvested, selected because it marks the length at which the probability of harvest becomes greater than the probability of release. We assessed the influences of fish size, catch per unit effort, size distribution of caught fish, and creel limit on the self-imposed length limits for bluegill Lepomis macrochirus, channel catfish Ictalurus punctatus, black crappie Pomoxis nigromaculatus and white crappie Pomoxis annularis combined, white bass Morone chrysops, and yellow perch Perca flavescens at six lakes in Nebraska, USA. As we predicted, the probability of harvest increased with increasing size for all species harvested, which supports the concept of a size-dependent trade-off in the costs and benefits of harvesting individual fish. It was also clear that the probability of harvest was not simply defined by fish length, but rather was likely influenced to various degrees by interactions between species, catch rate, size distribution, creel-limit regulation and fish size. A greater understanding of harvest decisions, within the context of the perceived likelihood that a creel limit will be reached by a given angler party (itself a function of fish availability, harvest regulation, and angler skill and orientation), is needed to predict the influence that anglers have on fish communities and to allow managers to sustainably manage exploited fish populations in recreational fisheries.
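Under the logistic-response framing above, the self-imposed length limit is simply the inflection point of a fitted logistic curve; a minimal sketch with hypothetical coefficients:

    import math

    def p_harvest(length_mm, b0, b1):
        # probability a captured fish of this length is harvested (logistic model)
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * length_mm)))

    b0, b1 = -6.0, 0.04                  # hypothetical coefficients for one species at one lake
    l50 = -b0 / b1                       # self-imposed length limit: P(harvest) = 0.5
    print(l50, p_harvest(l50, b0, b1))   # 150.0 0.5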
Martin, Mario; Contreras-Hernández, Enrique; Béjar, Javier; Esposito, Gennaro; Chávez, Diógenes; Glusman, Silvio; Cortés, Ulises; Rudomin, Pablo
2015-01-01
Previous studies aimed at disclosing the functional organization of the neuronal networks involved in the generation of the spontaneous cord dorsum potentials (CDPs) in the lumbosacral spinal segments used predetermined templates to select specific classes of spontaneous CDPs. Since this procedure was time consuming and required continuous supervision, it was limited to the analysis of two specific types of CDPs (negative CDPs and negative-positive CDPs), thus excluding potentials that may reflect the activation of other neuronal networks of presumed functional relevance. We now present a novel procedure based on machine learning that allows the efficient and unbiased selection of a variety of spontaneous CDPs with different shapes and amplitudes. The reliability and performance of the present method are evaluated by analyzing the effects, on the probabilities of generation of different classes of spontaneous CDPs, of the intradermic injection of small amounts of capsaicin in the anesthetized cat, a procedure known to induce a state of central sensitization leading to allodynia and hyperalgesia. The selection method presently described allowed the detection of spontaneous CDPs with specific shapes and amplitudes that are assumed to represent the activation of functionally coupled sets of dorsal horn neurones that acquire different, structured configurations in response to nociceptive stimuli. These changes are considered responses tending to adjust the transmission of sensory information to specific functional requirements as part of homeostatic adjustments. PMID:26379540
Loxley, P N
2017-10-01
The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a Gaussian copula with Pareto marginal probability density functions.
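For reference, a minimal implementation of the two-dimensional Gabor function discussed above, with explicit size (sigma), spatial frequency, orientation, and phase parameters; the parameter values shown are illustrative, not the learned ones:

    import numpy as np

    def gabor2d(x, y, sigma_x, sigma_y, freq, theta, phi):
        # rotate coordinates to the filter orientation
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 / (2 * sigma_x**2) + yr**2 / (2 * sigma_y**2)))
        carrier = np.cos(2 * np.pi * freq * xr + phi)   # sinusoidal carrier along xr
        return envelope * carrier

    xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    patch = gabor2d(xs, ys, 0.25, 0.25, 4.0, np.pi / 4, 0.0)  # unit aspect ratio case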
Canetta, S; Bolkan, S; Padilla-Coreano, N; Song, L J; Sahn, R; Harrison, N L; Gordon, J A; Brown, A; Kellendonk, C
2016-07-01
Abnormalities in prefrontal gamma aminobutyric acid (GABA)ergic transmission, particularly in fast-spiking interneurons that express parvalbumin (PV), are hypothesized to contribute to the pathophysiology of multiple psychiatric disorders, including schizophrenia, bipolar disorder, anxiety disorders and depression. While primarily histological abnormalities have been observed in patients and in animal models of psychiatric disease, evidence for abnormalities in functional neurotransmission at the level of specific interneuron populations has been lacking in animal models and is difficult to establish in human patients. Using an animal model of a psychiatric disease risk factor, prenatal maternal immune activation (MIA), we found reduced functional GABAergic transmission in the medial prefrontal cortex (mPFC) of adult MIA offspring. Decreased transmission was selective for interneurons expressing PV, resulted from a decrease in release probability and was not observed in calretinin-expressing neurons. This deficit in PV function in MIA offspring was associated with increased anxiety-like behavior and impairments in attentional set shifting, but did not affect working memory. Furthermore, cell-type specific optogenetic inhibition of mPFC PV interneurons was sufficient to impair attentional set shifting and enhance anxiety levels. Finally, we found that in vivo mPFC gamma oscillations, which are supported by PV interneuron function, were linearly correlated with the degree of anxiety displayed in adult mice, and that this correlation was disrupted in MIA offspring. These results demonstrate a selective functional vulnerability of PV interneurons to MIA, leading to affective and cognitive symptoms that have high relevance for schizophrenia and other psychiatric disorders.
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through a comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through the fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
Optimizing DNA assembly based on statistical language modelling.
Fang, Gang; Zhang, Shemin; Dong, Yafei
2017-12-15
By successively assembling genetic parts such as BioBricks according to grammatical models, complex genetic constructs composed of dozens of functional blocks can be built. However, every category of genetic parts usually includes a few or many parts. With an increasing quantity of genetic parts, the process of assembling more than a few sets of these parts can be expensive, time consuming and error prone, and at the last step of assembly it is difficult to decide which part should be selected. Based on a statistical language model, which is a probability distribution P(S) over strings S that attempts to reflect how frequently a string S occurs as a sentence, the most commonly used parts are selected. A dynamic programming algorithm was then designed to find the solution of maximum probability. The algorithm optimizes the results of a genetic design based on a grammatical model and finds an optimal solution. In this way, redundant operations can be reduced and the time and cost required for conducting biological experiments can be minimized. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
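A hedged sketch of the dynamic-programming idea (the part names and probabilities are invented, and the paper's grammar and scoring are richer): given candidate parts per functional slot with unigram usage probabilities, and bigram probabilities P(next | previous) from the language model, a Viterbi-style recursion recovers the maximum-probability assembly:

    def best_assembly(slots, bigram):
        # slots: list of {part: unigram probability}; bigram: {(prev, next): probability}
        best = {part: (prob, [part]) for part, prob in slots[0].items()}
        for slot in slots[1:]:
            nxt = {}
            for part, p_part in slot.items():
                score, path = max(
                    (sc * bigram.get((prev, part), 1e-9) * p_part, pth)
                    for prev, (sc, pth) in best.items()
                )
                nxt[part] = (score, path + [part])
            best = nxt
        return max(best.values())        # (probability, assembled part sequence)

    slots = [{"promoter_A": 0.6, "promoter_B": 0.4},
             {"rbs_1": 0.7, "rbs_2": 0.3},
             {"gfp": 0.9, "rfp": 0.1}]
    bigram = {("promoter_A", "rbs_1"): 0.8, ("promoter_B", "rbs_1"): 0.5,
              ("rbs_1", "gfp"): 0.9}
    print(best_assembly(slots, bigram))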
Resource planning and scheduling of payload for satellite with particle swarm optimization
NASA Astrophysics Data System (ADS)
Li, Jian; Wang, Cheng
2007-11-01
Resource planning and scheduling of the payload is a key technology for realizing automated control of an Earth-observing satellite with limited onboard resources; it arranges the work states of the various payloads to carry out missions by optimizing the use of those resources. The scheduling task is a difficult constraint-optimization problem with varied and changing requests and constraints. Based on an analysis of the satellite's functions and the payload's resource constraints, a proactive planning and scheduling strategy is introduced, based on the availability of consumable and replenishable resources in time order, with the planning and scheduling period divided into several pieces. A particle swarm optimization algorithm with adaptive mutation operator selection is proposed to address the problem: the swarm is divided into groups that employ various mutation operators (differential evolution, Gaussian and random mutation operators) with different probabilities, and the probabilities are adjusted adaptively by comparing the effectiveness of the groups so as to select a proper operator. The simulation results show the feasibility and effectiveness of the method.
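The adaptive operator-selection idea can be sketched as a weighted roulette over mutation operators whose weights grow with recent success; the update rule and constants below are assumptions for illustration, not the authors' exact scheme:

    import random

    weights = {"differential_evolution": 1.0, "gaussian": 1.0, "random_mutation": 1.0}

    def pick_operator():
        # roulette-wheel selection proportional to current operator weights
        r, acc = random.uniform(0, sum(weights.values())), 0.0
        for name, w in weights.items():
            acc += w
            if r <= acc:
                return name
        return name

    def update(name, improved):
        # reward operators whose group improved the best-known schedule
        weights[name] = max(0.1, 0.9 * weights[name] + (1.0 if improved else 0.0))

    op = pick_operator()
    update(op, improved=True)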
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
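A toy Monte Carlo rendering of the closing observation (this is not the paper's test statistic, only the underlying fact): under a hypothesized density f, the probability of drawing a point whose density value is as small as the observed one can be estimated directly:

    import numpy as np
    from scipy import stats

    f = stats.norm(0.0, 1.0)                         # hypothesized density
    x_obs = 3.5                                      # a suspiciously improbable draw
    sims = f.rvs(size=100_000, random_state=0)
    p_value = np.mean(f.pdf(sims) <= f.pdf(x_obs))   # P(f(X) <= f(x_obs)) under f
    print(p_value)                                   # small value: evidence against f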
Sage-grouse habitat selection during winter in Alberta
Carpenter, Jennifer L.; Aldridge, Cameron L.; Boyce, Mark S.
2010-01-01
Greater sage-grouse (Centrocercus urophasianus) are dependent on sagebrush (Artemisia spp.) for food and shelter during winter, yet few studies have assessed winter habitat selection, particularly at scales applicable to conservation planning. Small changes to availability of winter habitats have caused drastic reductions in some sage-grouse populations. We modeled winter habitat selection by sage-grouse in Alberta, Canada, by using a resource selection function. Our purpose was to 1) generate a robust winter habitat-selection model for Alberta sage-grouse; 2) spatially depict habitat suitability in a Geographic Information System to identify areas with a high probability of selection and thus, conservation importance; and 3) assess the relative influence of human development, including oil and gas wells, in landscape models of winter habitat selection. Terrain and vegetation characteristics, sagebrush cover, anthropogenic landscape features, and energy development were important in the top models selected by Akaike's Information Criterion. During winter, sage-grouse selected dense sagebrush cover and homogenous, less rugged areas, and avoided energy development and 2-track truck trails. Sage-grouse avoidance of energy development highlights the need for comprehensive management strategies that maintain suitable habitats across all seasons. © 2010 The Wildlife Society.
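Resource selection functions of this kind are conventionally written in the exponential form w(x) = exp(b1*x1 + ... + bk*xk); a minimal sketch with invented coefficients (sagebrush cover entering positively, ruggedness negatively, distance to wells positively, consistent with the avoidance findings above):

    import math

    def rsf(sagebrush_cover, ruggedness, dist_to_well_km, b=(2.1, -0.8, 0.5)):
        # hypothetical coefficients; w is a relative (not absolute) selection probability
        score = b[0] * sagebrush_cover + b[1] * ruggedness + b[2] * dist_to_well_km
        return math.exp(score)

    # relative likelihood of selecting dense versus sparse sagebrush, other factors equal
    print(rsf(0.6, 0.2, 3.0) / rsf(0.2, 0.2, 3.0))   # exp(2.1 * 0.4) ~ 2.3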
Fixation Probability in a Haploid-Diploid Population.
Bessho, Kazuhiro; Otto, Sarah P
2017-01-01
Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright-Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. Copyright © 2017 by the Genetics Society of America.
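For orientation, the classical diffusion approximation for a fully diploid Wright-Fisher population (Kimura's formula) is sketched below; the paper's contribution is the analogous result when both free-living haploid and diploid phases are present, which this sketch does not reproduce:

    import math

    def fixation_prob(p0, N, s):
        # Kimura's diffusion approximation: u(p0) = (1 - exp(-4*N*s*p0)) / (1 - exp(-4*N*s))
        if s == 0:
            return p0                      # neutral allele fixes with its initial frequency
        return (1 - math.exp(-4 * N * s * p0)) / (1 - math.exp(-4 * N * s))

    N = 1000
    print(fixation_prob(1 / (2 * N), N, 0.01))   # ~2s ~ 0.02 for a new beneficial mutation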
Selected considerations of implementation of the GNSS
NASA Astrophysics Data System (ADS)
Cwiklak, Janusz; Fellner, Andrzej; Fellner, Radoslaw; Jafernik, Henryk; Sledzinski, Janusz
2014-05-01
The article describes an analysis of safety and risk for the implementation of precise approach procedures (Localizer Performance with Vertical Guidance, LPV) with a GNSS sensor at airports in Warsaw and Katowice. Techniques for the identification of threats (including controlled flight into terrain, landing accidents and mid-air collision) were used, together with evaluation methods based on Fault Tree Analysis, risk probability, a safety risk evaluation matrix and Functional Hazard Assessment; safety goals were also determined. The research determined the probabilities of threat occurrence and allowed their comparison with the ILS. As a result of conducting the Preliminary System Safety Assessment (PSSA), the requirements essential to reach the required level of safety were defined. It is worth underlining that the quantitative requirements were defined using FTA.
Studies of uncontrolled air traffic patterns, phase 1
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.; Scharf, L. L.; Ruedger, W. H.; Modi, J. A.; Wheelock, S. L.; Davis, C. M.
1975-01-01
The general aviation air traffic flow patterns at uncontrolled airports are investigated and analyzed and traffic pattern concepts are developed to minimize the midair collision hazard in uncontrolled airspace. An analytical approach to evaluate midair collision hazard probability as a function of traffic densities is established which is basically independent of path structure. Two methods of generating space-time interrelationships between terminal area aircraft are presented; one is a deterministic model to generate pseudorandom aircraft tracks, the other is a statistical model in preliminary form. Some hazard measures are presented for selected traffic densities. It is concluded that the probability of encountering a hazard should be minimized independently of any other considerations and that the number of encounters involving visible-avoidable aircraft should be maximized at the expense of encounters in other categories.
Flux of a Ratchet Model and Applications to Processive Motor Proteins
NASA Astrophysics Data System (ADS)
Li, Jing-Hui
2015-10-01
In this paper, we investigate the stationary probability current (or flux) of a Brownian ratchet model as a function of the flipping rate of the fluctuating potential barrier. It is shown that, by suitably selecting the parameter values of the ratchet system, we can obtain negative resonant activation, positive resonant activation, double resonant activation, and current reversal in the stationary probability current versus the flipping rate. The appearance of these phenomena is the result of the cooperative effects of the potential's dichotomous fluctuations and the internal thermal fluctuations on the evolution of the flux versus the flipping rate of the fluctuating potential barrier. In addition, some applications of our results to motor proteins are discussed. Supported by the K.C. Wong Magna Fund of Ningbo University, China.
Sexual selection targets cetacean pelvic bones
Dines, J. P.; Otárola-Castillo, E.; Ralph, P.; Alas, J.; Daley, T.; Smith, A. D.; Dean, M. D.
2014-01-01
Male genitalia evolve rapidly, probably as a result of sexual selection. Whether this pattern extends to the internal infrastructure that influences genital movements remains unknown. Cetaceans (whales and dolphins) offer a unique opportunity to test this hypothesis: since evolving from land-dwelling ancestors, they lost external hind limbs and evolved a highly reduced pelvis which seems to serve no other function except to anchor muscles that maneuver the penis. Here we create a novel morphometric pipeline to analyze the size and shape evolution of pelvic bones from 130 individuals (29 species) in the context of inferred mating system. We present two main findings: 1) males from species with relatively intense sexual selection (inferred by relative testes size) have evolved relatively large penises and pelvic bones compared to their body size, and 2) pelvic bone shape diverges more quickly in species pairs that have diverged in inferred mating system. Neither pattern was observed in the anterior-most pair of vertebral ribs, which served as a negative control. This study provides evidence that sexual selection can affect internal anatomy that controls male genitalia. These important functions may explain why cetacean pelvic bones have not been lost through evolutionary time. PMID:25186496
Computation of the Complex Probability Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trainer, Amelia Jo; Ledwith, Patrick John
The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of Gauss-Hermite quadrature for the complex probability function.
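A minimal sketch of the quadrature described above, assuming the Faddeeva-function form w(z) = (i/pi) * integral of exp(-t^2)/(z - t) dt for Im z > 0; scipy's wofz is used only as a reference for comparison:

    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.special import wofz          # reference implementation

    def w_gauss_hermite(z, n=32):
        nodes, weights = hermgauss(n)       # roots and weights of the nth Hermite polynomial
        return (1j / np.pi) * np.sum(weights / (z - nodes))

    z = 1.0 + 0.5j
    print(w_gauss_hermite(z), wofz(z))      # close for moderate |z| with Im z > 0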
A multi-scale model for correlation in B cell VDJ usage of zebrafish
NASA Astrophysics Data System (ADS)
Pan, Keyao; Deem, Michael W.
2011-10-01
The zebrafish (Danio rerio) is one of the model animals used for the study of immunology because the dynamics in the adaptive immune system of zebrafish are similar to that in higher animals. In this work, we built a multi-scale model to simulate the dynamics of B cells in the primary and secondary immune responses of zebrafish. We use this model to explain the reported correlation between VDJ usage of B cell repertoires in individual zebrafish. We use a delay ordinary differential equation (ODE) system to model the immune responses in the 6-month lifespan of a zebrafish. This mean field theory gives the number of high-affinity B cells as a function of time during an infection. The sequences of those B cells are then taken from a distribution calculated by a 'microscopic' random energy model. This generalized NK model shows that mature B cells specific to one antigen largely possess a single VDJ recombination. The model allows first-principle calculation of the probability, p, that two zebrafish responding to the same antigen will select the same VDJ recombination. This probability p increases with the B cell population size and the B cell selection intensity. The probability p decreases with the B cell hypermutation rate. The multi-scale model predicts correlations in the immune system of the zebrafish that are highly similar to that from experiment.
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Schleuning, Matthias; Farwig, Nina; Peters, Marcell K; Bergsdorf, Thomas; Bleher, Bärbel; Brandl, Roland; Dalitz, Helmut; Fischer, Georg; Freund, Wolfram; Gikungu, Mary W; Hagen, Melanie; Garcia, Francisco Hita; Kagezi, Godfrey H; Kaib, Manfred; Kraemer, Manfred; Lung, Tobias; Naumann, Clas M; Schaab, Gertrud; Templin, Mathias; Uster, Dana; Wägele, J Wolfgang; Böhning-Gaese, Katrin
2011-01-01
Forest fragmentation and selective logging are two main drivers of global environmental change and modify biodiversity and environmental conditions in many tropical forests. The consequences of these changes for the functioning of tropical forest ecosystems have rarely been explored in a comprehensive approach. In a Kenyan rainforest, we studied six animal-mediated ecosystem processes and recorded species richness and community composition of all animal taxa involved in these processes. We used linear models and a formal meta-analysis to test whether forest fragmentation and selective logging affected ecosystem processes and biodiversity and used structural equation models to disentangle direct from biodiversity-related indirect effects of human disturbance on multiple ecosystem processes. Fragmentation increased decomposition and reduced antbird predation, while selective logging consistently increased pollination, seed dispersal and army-ant raiding. Fragmentation modified species richness or community composition of five taxa, whereas selective logging did not affect any component of biodiversity. Changes in the abundance of functionally important species were related to lower predation by antbirds and higher decomposition rates in small forest fragments. The positive effects of selective logging on bee pollination, bird seed dispersal and army-ant raiding were direct, i.e. not related to changes in biodiversity, and were probably due to behavioural changes of these highly mobile animal taxa. We conclude that animal-mediated ecosystem processes respond in distinct ways to different types of human disturbance in Kakamega Forest. Our findings suggest that forest fragmentation affects ecosystem processes indirectly by changes in biodiversity, whereas selective logging influences processes directly by modifying local environmental conditions and resource distributions. The positive to neutral effects of selective logging on ecosystem processes show that the functionality of tropical forests can be maintained in moderately disturbed forest fragments. Conservation concepts for tropical forests should thus include not only remaining pristine forests but also functionally viable forest remnants.
Wavelength band selection method for multispectral target detection.
Karlholm, Jörgen; Renhorn, Ingmar
2002-11-10
A framework is proposed for the selection of wavelength bands for multispectral sensors by use of hyperspectral reference data. Using results from detection theory, we derive a cost function that is minimized by a set of spectral bands optimal in terms of detection performance for discrimination between a class of small rare targets and clutter with a known spectral distribution. The method may be used, e.g., in the design of multispectral infrared search-and-track and electro-optical missile warning sensors, where a low false-alarm rate and a high detection probability for small targets against a clutter background are of critical importance, but the required high frame rate prevents the use of hyperspectral sensors.
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both the marginal probability distributions and the copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probabilities of exceedance calculated from the copula functions and from the marginal distributions. This study for the first time provides a new approach toward a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation worldwide.
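As a concrete fragment of the copula machinery involved (a sketch under assumed values, not the study's fitted model), the joint exceedance probability of two dependent flood variables follows from any copula C via P(U > u, V > v) = 1 - u - v + C(u, v); here with a Clayton copula:

    def clayton(u, v, theta=2.0):
        # Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0
        return (u**-theta + v**-theta - 1.0) ** (-1.0 / theta)

    def joint_exceedance(u, v, theta=2.0):
        # probability that both variables exceed their marginal design quantiles
        return 1.0 - u - v + clayton(u, v, theta)

    u, v = 0.99, 0.98   # assumed marginal non-exceedance probabilities of a design event
    print(joint_exceedance(u, v))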
Incorporating detection probability into northern Great Plains pronghorn population estimates
Jacques, Christopher N.; Jenks, Jonathan A.; Grovenburg, Troy W.; Klaver, Robert W.; DePerno, Christopher S.
2014-01-01
Pronghorn (Antilocapra americana) abundances commonly are estimated using fixed-wing surveys, but these estimates are likely to be negatively biased because of violations of key assumptions underpinning line-transect methodology. Reducing bias and improving precision of abundance estimates through use of detection probability and mark-resight models may allow for more responsive pronghorn management actions. Given their potential application in population estimation, we evaluated detection probability and mark-resight models for use in estimating pronghorn population abundance. We used logistic regression to quantify probabilities that detecting pronghorn might be influenced by group size, animal activity, percent vegetation, cover type, and topography. We estimated pronghorn population size by study area and year using mixed logit-normal mark-resight (MLNM) models. Pronghorn detection probability increased with group size, animal activity, and percent vegetation; overall detection probability was 0.639 (95% CI = 0.612–0.667) with 396 of 620 pronghorn groups detected. Despite model selection uncertainty, the best detection probability models were 44% (range = 8–79%) and 180% (range = 139–217%) greater than traditional pronghorn population estimates. Similarly, the best MLNM models were 28% (range = 3–58%) and 147% (range = 124–180%) greater than traditional population estimates. Detection probability of pronghorn was not constant but depended on both intrinsic and extrinsic factors. When pronghorn detection probability is a function of animal group size, animal activity, landscape complexity, and percent vegetation, traditional aerial survey techniques will result in biased pronghorn abundance estimates. Standardizing survey conditions, increasing resighting occasions, or accounting for variation in individual heterogeneity in mark-resight models will increase the accuracy and precision of pronghorn population estimates.
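The correction implied above can be sketched in Horvitz-Thompson form: each observed group is inflated by the inverse of its modeled detection probability. The logistic coefficients and sightings below are invented for illustration:

    import math

    def p_detect(group_size, activity, pct_veg, b=(-0.5, 0.12, 0.6, -0.8)):
        # logistic detectability model; coefficients are hypothetical
        z = b[0] + b[1] * group_size + b[2] * activity + b[3] * pct_veg
        return 1.0 / (1.0 + math.exp(-z))

    sightings = [(3, 1, 0.2), (12, 0, 0.5), (6, 1, 0.7)]   # (group size, active?, % vegetation)
    n_hat = sum(g / p_detect(g, a, v) for g, a, v in sightings)
    print(round(n_hat, 1))    # corrected abundance exceeds the raw count of 21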
A Deterministic Approach to Active Debris Removal Target Selection
NASA Astrophysics Data System (ADS)
Lidtke, A.; Lewis, H.; Armellin, R.
2014-09-01
Many decisions with widespread economic, political and legal consequences are being considered based on space debris simulations showing that Active Debris Removal (ADR) may be necessary as concerns about the sustainability of spaceflight increase. The debris environment predictions are based on low-accuracy ephemerides and propagators, which raises doubts about the accuracy of those prognoses and of the potential ADR target lists that are produced. Target selection is considered highly important, as the removal of many objects will increase the overall mission cost. Selecting the most likely candidates as soon as possible would be desirable, as it would enable accurate mission design and allow thorough evaluation of in-orbit validations, which are likely to occur in the near future, before any large investments are made and implementations realized. One of the primary factors that should be used in ADR target selection is the accumulated collision probability of every object. A conjunction detection algorithm, based on the smart sieve method, has been developed. Another algorithm is then applied to the found conjunctions to compute the maximum and true probabilities of collisions taking place. The entire framework has been verified against the Conjunction Analysis Tools in AGI's Systems Tool Kit, and a relative probability error smaller than 1.5% has been achieved in the final maximum collision probability. Two target lists are produced based on the ranking of the objects according to the probability that they will take part in any collision over the simulated time window. These probabilities are computed using the maximum probability approach, which is time-invariant, and estimates of the true collision probability computed with covariance information. The top-priority targets are compared, and the impacts of data accuracy and its decay are highlighted. General conclusions regarding the importance of Space Surveillance and Tracking for the purpose of ADR are also drawn, and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
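The ranking metric itself is simple once per-conjunction probabilities are in hand: assuming independent conjunction events, an object's accumulated probability of being involved in any collision over the window is one minus the product of the per-event survival probabilities (the values below are invented):

    def accumulated_probability(conjunction_probs):
        # P(at least one collision) = 1 - product of (1 - p_i), assuming independence
        survive = 1.0
        for p in conjunction_probs:
            survive *= (1.0 - p)
        return 1.0 - survive

    print(accumulated_probability([1e-4, 5e-5, 2e-4]))   # ranking score for one object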
Yap, Christina; Pettitt, Andrew; Billingham, Lucinda
2013-07-03
As there are limited patients for chronic lymphocytic leukaemia trials, it is important that statistical methodologies in Phase II efficiently select regimens for subsequent evaluation in larger-scale Phase III trials. We propose the screened selection design (SSD), which is a practical multi-stage, randomised Phase II design for two experimental arms. Activity is first evaluated by applying Simon's two-stage design (1989) on each arm. If both are active, the play-the-winner selection strategy proposed by Simon, Wittes and Ellenberg (SWE) (1985) is applied to select the superior arm. A variant of the design, Modified SSD, also allows the arm with the higher response rates to be recommended only if its activity rate is greater by a clinically-relevant value. The operating characteristics are explored via a simulation study and compared to a Bayesian Selection approach. Simulations showed that with the proposed SSD, it is possible to retain the sample size as required in SWE and obtain similar probabilities of selecting the correct superior arm of at least 90%; with the additional attractive benefit of reducing the probability of selecting ineffective arms. This approach is comparable to a Bayesian Selection Strategy. The Modified SSD performs substantially better than the other designs in selecting neither arm if the underlying rates for both arms are desirable but equivalent, allowing for other factors to be considered in the decision making process. Though its probability of correctly selecting a superior arm might be reduced, it still performs reasonably well. It also reduces the probability of selecting an inferior arm. SSD provides an easy to implement randomised Phase II design that selects the most promising treatment that has shown sufficient evidence of activity, with available R codes to evaluate its operating characteristics.
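The Simon two-stage building block used by the SSD can be made concrete with binomial arithmetic; the thresholds below are illustrative, not a recommended design. An arm "passes" if it clears the stage-1 futility bound and then the final bound:

    from scipy.stats import binom

    def pass_probability(p, n1, r1, n, r):
        # P(arm passes Simon's two-stage design) at true response rate p:
        # more than r1 responses among the first n1, and more than r among all n
        total = 0.0
        for x1 in range(r1 + 1, n1 + 1):
            total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
        return total

    print(pass_probability(p=0.35, n1=15, r1=4, n=43, r=14))   # illustrative thresholds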
NASA Astrophysics Data System (ADS)
Yang, P.; Ng, T. L.; Yang, W.
2015-12-01
Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett, are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events used for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
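A minimal percentile-bootstrap sketch of one side of the comparison, applied to a stand-in parameter estimator named fit (the estimator and the lognormal set-up are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def fit(sample):
        # placeholder estimator, e.g. the lognormal shape parameter
        return np.std(np.log(sample), ddof=1)

    data = rng.lognormal(mean=3.0, sigma=0.8, size=100)     # simulated drought severities
    boots = [fit(rng.choice(data, size=data.size, replace=True)) for _ in range(2000)]
    ci = np.percentile(boots, [2.5, 97.5])                  # 95% percentile-bootstrap CI
    print(ci)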
Detection performance in clutter with variable resolution
NASA Astrophysics Data System (ADS)
Schmieder, D. E.; Weathersby, M. R.
1983-07-01
Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal-to-clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
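The clutter definition quoted above lends itself to a few lines of code: tile the scene into cells of roughly target size, average the cell standard deviations, and divide the target contrast by the result to get the SCR. The synthetic scene and contrast value below are invented:

    import numpy as np

    def clutter(scene, cell):
        # average standard deviation over contiguous cells of roughly target size
        h, w = scene.shape
        stds = [scene[i:i + cell, j:j + cell].std()
                for i in range(0, h - cell + 1, cell)
                for j in range(0, w - cell + 1, cell)]
        return float(np.mean(stds))

    rng = np.random.default_rng(1)
    scene = rng.normal(100.0, 8.0, size=(128, 128))   # synthetic background
    scr = 20.0 / clutter(scene, cell=16)              # target contrast of 20 counts
    print(round(scr, 2))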
An Experiment Quantifying The Effect Of Clutter On Target Detection
NASA Astrophysics Data System (ADS)
Weathersby, Marshall R.; Schmieder, David E.
1985-01-01
Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal-to-clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
A functional model of sensemaking in a neurocognitive architecture.
Lebiere, Christian; Pirolli, Peter; Thomson, Robert; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R
2013-01-01
Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment.
Selectivity Mechanism of the Nuclear Pore Complex Characterized by Single Cargo Tracking
Lowe, Alan R.; Siegel, Jake J.; Kalab, Petr; Siu, Merek; Weis, Karsten; Liphardt, Jan T.
2010-01-01
The Nuclear Pore Complex (NPC) mediates all exchange between the cytoplasm and the nucleus. Small molecules can passively diffuse through the NPC, while larger cargos require transport receptors to translocate. How the NPC facilitates the translocation of transport receptor/cargo complexes remains unclear. Here, we track single protein-functionalized Quantum Dot (QD) cargos as they translocate through the NPC. Import proceeds by successive sub-steps comprising cargo capture, filtering and translocation, and release into the nucleus. The majority of QDs are rejected at one of these steps and return to the cytoplasm, including very large cargos that abort at a size-selective barrier. Cargo movement in the central channel is subdiffusive, and cargos that can bind more transport receptors diffuse more freely. Without Ran, cargos still explore the entire NPC, but have a markedly reduced probability of exit into the nucleus, suggesting that NPC entry and exit steps are not equivalent and that the pore is functionally asymmetric to importing cargos. The overall selectivity of the NPC appears to arise from the cumulative action of multiple reversible sub-steps and a final irreversible exit step. PMID:20811366
Selection of forage-fish schools by Murrelets and Tufted Puffins in Prince William Sound, Alaska
Ostrand, William D.; Coyle, Kenneth O.; Drew, Gary S.; Maniscalco, John M.; Irons, David B.
1998-01-01
We collected hydroacoustic and bird-observation data simultaneously along transects in three areas in Prince William Sound, Alaska, 21 July-11 August 1995. The probability of the association of fish schools with Marbled Murrelets (Brachyramphus marmoratus) and Tufted Puffins (Fratercula cirrhata) was determined through the use of resource selection functions based on logistic regression. Mean (± SD) group sizes were small for both species, 1.7 ± 1.1 and 1.2 ± 0.7 for Marbled Murrelets and Tufted Puffins, respectively. Oceanographically, all study areas were stratified, with synchronous thermo- and pycnoclines (layers in which temperature and density, respectively, change rapidly with increasing depth). Our analysis indicated that Tufted Puffins selected fish schools near their colony, whereas Marbled Murrelets selected smaller, denser fish schools in shallower habitats. We suggest that murrelets selected shallower habitats in response to having lower maximum diving depths than puffins. Small feeding-group sizes are discussed in terms of foraging theory and as a consequence of dispersed, low-density food resources.
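Resource selection functions of this kind are typically fit by logistic regression of used versus available schools on habitat covariates. A minimal illustrative sketch, with hypothetical covariates and made-up data (not the authors' analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical covariates per fish school: depth (m), acoustic density,
# distance to colony (km); 'used' marks schools attended by foraging birds.
X = np.array([[12.0, 8.3, 1.2],
              [40.0, 2.1, 5.0],
              [18.0, 6.7, 0.8],
              [55.0, 1.4, 7.5],
              [22.0, 5.9, 2.1],
              [48.0, 1.9, 6.3]])
used = np.array([1, 0, 1, 0, 1, 0])

# Logistic regression yields the resource selection function: the fitted
# coefficients give each covariate's contribution to selection probability.
rsf = LogisticRegression().fit(X, used)
print(rsf.coef_, rsf.predict_proba(X)[:, 1])
```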
NASA Astrophysics Data System (ADS)
Cudlín, Ondřej; Řehák, Zdeněk; Cudlín, Pavel
2016-10-01
The aim of this study was to compare soil characteristics, plant communities and the performance of selected ecosystem functions on reclaimed and unreclaimed plots (left to spontaneous succession) of different ages on spoil heaps. Twelve spoil heaps created after deep coal mining near the town of Kladno, north-west of Prague, were compared, using three circular plots of 12.5 m radius per heap. Five mixed soil samples from organo-mineral horizons in each plot were analysed for total content of carbon, nitrogen and phosphorus. In addition, active soil pH (pHH2O) was determined. Plant diversity was determined from vegetation relevés. The biodiversity value of the habitat was assessed according to the Habitat Valuation Method, and the rate of the evapotranspiration function was determined by the Method of Valuation of Functions and Services of Ecosystems in the Czech Republic. Thicker organo-mineral layers and higher total nitrogen content were found on the older reclaimed and unreclaimed plots than on younger plots. The number of plant species and the total contents of carbon and nitrogen were significantly higher on the unreclaimed plots than on the reclaimed plots. The biodiversity values and evapotranspiration function rate were also higher on unreclaimed plots. From this perspective, it is possible to recommend the use of spontaneous succession, together with routine reclamation methods, to restore habitats after coal mining. Despite the relatively high age of vegetation on some of the selected plots (90 years), neither the reclaimed nor the unreclaimed plots have reached the stage of potential vegetation near the natural climax. The slow development of vegetation was probably due to the unsuitable substrate of the spoil heaps and a lack of plant and tree species of natural forest habitats in this area. However, it is probable that vegetation communities on the observed spoil heaps under both types of management (reclaimed and unreclaimed) will reach the stage of natural climax and will provide ecosystem functions more effectively in the future.
A quantitative model of optimal data selection in Wason's selection task.
Hattori, Masasi
2002-10-01
The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.
Machine recognition of navel orange worm damage in X-ray images of pistachio nuts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keagy, P.M.; Schatzki, T.F.; Parvin, B.
Insect infestation increases the probability of aflatoxin contamination in pistachio nuts. A non-destructive test is currently not available to determine the insect content of pistachio nuts. This paper presents the use of film X-ray images of various types of pistachio nuts to assess the possibility of machine recognition of insect infested nuts. Histogram parameters of four derived images are used in discriminant functions to select insect infested nuts from specific processing streams.
Advanced LPI (Low-Probability-of-Intercept) Intercept Detector Research
1985-11-13
following comparisons, we select a nominal loss figure of -1.5 dB. We note that the above losses pertain to a rectangular BPF; other filter shapes will... this brief exposé on the useful properties of the LLR moment-generating functions, we can now prove the BPF theorem in a rather compact fashion... can be performed following guidelines similar to those in Appendix C, as follows: let the bandpass AWGN n(t) be represented by n(t) = Σ nj(t) cos(...)
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated as Bayesian inference. Gaussian probability density functions are applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian Basin. The model of interpretation is a right vertical cylinder. The parameters of the model are obtained from the minimum problem solved by the Simplex method.
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous possibility distributions, rather than probability distributions. We propose a multi-objective portfolio model and add an entropy objective function to generate a well-diversified asset portfolio within optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
Selection biases in empirical p(z) methods for weak lensing
Gruen, D.; Brimioulle, F.
2017-02-23
To measure the mass of foreground objects with weak gravitational lensing, one needs to estimate the redshift distribution of lensed background sources. This is commonly done in an empirical fashion, i.e. with a reference sample of galaxies of known spectroscopic redshift, matched to the source population. In this paper, we develop a simple decision tree framework that, under the ideal conditions of a large, purely magnitude-limited reference sample, allows an unbiased recovery of the source redshift probability density function p(z), as a function of magnitude and colour. We use this framework to quantify biases in empirically estimated p(z) caused by selection effects present in realistic reference and weak lensing source catalogues, namely (1) complex selection of reference objects by the targeting strategy and success rate of existing spectroscopic surveys and (2) selection of background sources by the success of object detection and shape measurement at low signal-to-noise. For intermediate-to-high redshift clusters, and for depths and filter combinations appropriate for ongoing lensing surveys, we find that (1) spectroscopic selection can cause biases above the 10 per cent level, which can be reduced to ≈5 per cent by optimal lensing weighting, while (2) selection effects in the shape catalogue bias mass estimates at or below the 2 per cent level. Finally, this illustrates the importance of completeness of the reference catalogues for empirical redshift estimation.
Our Selections and Decisions: Inherent Features of the Nervous System?
NASA Astrophysics Data System (ADS)
Rösler, Frank
The chapter summarizes findings on the neuronal bases of decision-making. Taking the phenomenon of selection as an example, it explains that systems built only from excitatory and inhibitory neuron populations have the emergent property of selecting between different alternatives. These considerations suggest that there exists a hierarchical architecture with central selection switches. However, in such a system, functions of selection and decision-making are not localized, but rather emerge from an interaction of several participating networks. These are, on the one hand, networks that process specific input and output representations and, on the other hand, networks that regulate the relative activation/inhibition of the specific input and output networks. These ideas are supported by recent empirical evidence. Moreover, other studies show that rather complex psychological variables, like subjective probability estimates, expected gains and losses, prediction errors, etc., do have biological correlates, i.e., they can be localized in time and space as activation states of neural networks and single cells. These findings suggest that selections and decisions are consequences of an architecture which, seen from a biological perspective, is fully deterministic. However, a transposition of such nomothetic functional principles into the idiographic domain, i.e., using them as elements for comprehensive 'mechanistic' explanations of individual decisions, seems not to be possible because of principled limitations. Therefore, individual decisions will remain predictable by means of probabilistic models alone.
A hierarchical two-phase framework for selecting genes in cancer datasets with a neuro-fuzzy system.
Lim, Jongwoo; Wang, Bohyun; Lim, Joon S
2016-04-29
Finding the minimum number of appropriate biomarkers for specific targets such as lung cancer has been a challenging issue in bioinformatics. We propose a hierarchical two-phase framework for selecting appropriate biomarkers that extracts candidate biomarkers from cancer microarray datasets and then selects the minimum number of appropriate biomarkers from the extracted candidate biomarker datasets with a specific neuro-fuzzy algorithm, called a neural network with weighted fuzzy membership functions (NEWFM). In the first phase, the proposed framework extracts candidate biomarkers by using the Bhattacharyya distance, which measures the similarity of two discrete probability distributions. Finally, the proposed framework is able to reduce the cost of finding biomarkers by not receiving medical supplements and to improve the accuracy of the biomarkers in specific cancer target datasets.
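For reference, the Bhattacharyya distance between two discrete probability distributions p and q, as used in the first phase, has the standard form:

```latex
% Bhattacharyya distance between discrete distributions p and q
D_B(p, q) = -\ln\!\left( \sum_i \sqrt{p_i \, q_i} \right)
```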
NASA Astrophysics Data System (ADS)
Jean, Y. C.; Li, Ying; Liu, Gaung; Chen, Hongmin; Zhang, Junjie; Gadzia, Joseph E.
2006-02-01
Slow positrons and positron annihilation spectroscopy (PAS) have been applied to medical research in searching for positron annihilation selectivity to cancer cells. We report the results of positron lifetime and Doppler broadening energy spectroscopies in human skin samples with and without cancer as a function of positron incident energy (probing up to 8 μm in depth) and found that positronium annihilates at a significantly lower rate and forms with a lower probability in samples having either basal cell carcinoma (BCC) or squamous cell carcinoma (SCC) than in normal skin. The significant selectivity of positron annihilation to skin cancer may open a new research area of developing positron annihilation spectroscopy as a novel medical tool to detect cancer formation externally and non-invasively at the early stages.
Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function
ERIC Educational Resources Information Center
Fennell, John; Baddeley, Roland
2012-01-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…
Beckmann, Matthias; Johansen-Berg, Heidi; Rushworth, Matthew F S
2009-01-28
Whole-brain neuroimaging studies have demonstrated regional variations in function within human cingulate cortex. At the same time, regional variations in cingulate anatomical connections have been found in animal models. It has, however, been difficult to estimate the relationship between connectivity and function throughout the whole cingulate cortex within the human brain. In this study, magnetic resonance diffusion tractography was used to investigate cingulate probabilistic connectivity in the human brain with two approaches. First, an algorithm was used to search for regional variations in the probabilistic connectivity profiles of all cingulate cortex voxels with the whole of the rest of the brain. Nine subregions with distinctive connectivity profiles were identified. It was possible to characterize several distinct areas in the dorsal cingulate sulcal region. Several distinct regions were also found in subgenual and perigenual cortex. Second, the probabilities of connection between cingulate cortex and 11 predefined target regions of interest were calculated. Cingulate voxels with a high probability of connection with the different targets formed separate clusters within cingulate cortex. Distinct connectivity fingerprints characterized the likelihood of connections between the extracingulate target regions and the nine cingulate subregions. Last, a meta-analysis of 171 functional studies reporting cingulate activation was performed. Seven different cognitive conditions were selected and peak activation coordinates were plotted to create maps of functional localization within the cingulate cortex. Regional functional specialization was found to be related to regional differences in probabilistic anatomical connectivity.
NASA Astrophysics Data System (ADS)
Tibell, Lena A. E.; Harms, Ute
2017-11-01
Modern evolutionary theory is both a central theory and an integrative framework of the life sciences. This is reflected in the common references to evolution in modern science education curricula and contexts. In fact, evolution is a core idea that is supposed to support biology learning by facilitating the organization of relevant knowledge. In addition, evolution can function as a pivotal link between concepts and highlight similarities in the complexity of biological concepts. However, empirical studies in many countries have for decades identified deficiencies in students' scientific understanding of evolution, mainly focusing on natural selection. Clearly, there are major obstacles to learning natural selection, and we argue that to overcome them, it is essential to address explicitly the general abstract concepts that underlie the biological processes, e.g., randomness or probability. Hence, we propose a two-dimensional framework for analyzing and structuring the teaching of natural selection. The first, purely biological, dimension embraces the three main principles (variation, heredity, and selection), structured in nine key concepts that form the core idea of natural selection. The second dimension encompasses four so-called thresholds, i.e., general abstract and/or non-perceptual concepts: randomness, probability, spatial scales, and temporal scales. We claim that both of these dimensions must be continuously considered, in tandem, when teaching evolution in order to allow development of a meaningful understanding of the process. Further, we suggest that making the thresholds tangible with the aid of appropriate kinds of visualizations will facilitate grasping of the threshold concepts, and thus help learners to overcome the difficulties in understanding the central theory of life.
Edge Effects in Line Intersect Sampling with Segmented Transects
David L. R. Affleck; Timothy G. Gregoire; Harry T. Valentine
2005-01-01
Transects consisting of multiple, connected segments with a prescribed configuration are commonly used in ecological applications of line intersect sampling. The transect configuration has implications for the probability with which population elements are selected and for how the selection probabilities can be modified by the boundary of the tract being sampled. As...
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
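As a point of reference, a basic kernel-smoothed hazard estimator for right-censored data with fully observed censoring indicators can be sketched as follows (Python; the paper's estimators additionally handle indicators missing at random via regression surrogates, imputation, or inverse probability weighting, which this sketch omits):

```python
import numpy as np

def kernel_hazard(t, times, delta, bandwidth):
    """Kernel-smoothed hazard: a sum of Epanechnikov kernels at event times,
    each weighted by 1 / (number still at risk). times: event/censoring
    times; delta: 1 if death from the cause of interest, else 0."""
    times = np.asarray(times, dtype=float)
    delta = np.asarray(delta)
    order = np.argsort(times)
    times, delta = times[order], delta[order]
    n = len(times)
    at_risk = n - np.arange(n)                   # subjects still at risk at each time
    u = (t - times) / bandwidth
    kern = 0.75 * (1 - u**2) * (np.abs(u) <= 1)  # Epanechnikov kernel
    return np.sum(kern * delta / at_risk) / bandwidth

# Example: hazard at t=5 from a small sample
print(kernel_hazard(5.0, [2, 3, 5, 7, 11], [1, 0, 1, 1, 0], bandwidth=2.0))
```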
Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability
NASA Astrophysics Data System (ADS)
Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto
2016-06-01
Stochastic simulation of large biochemical reaction networks is often computationally expensive due to the disparate reaction rates and the high variability of populations of chemical species. An approach to accelerate the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and the accuracy of this simulation approach. It is even worse when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and the rejection-based mechanism to select next reaction firings. The reaction is ensured to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if this probability is set to one. Our new algorithm improves the computational cost of selecting the next reaction firing and reduces the cost of updating the propensities of reactions.
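The rejection mechanism that the bounded-acceptance algorithm builds on can be sketched compactly. Below is a minimal Python sketch of the exact rejection-based selection step using propensity upper bounds (the paper's refinement, which enforces a lower bound on the acceptance rate, is not shown):

```python
import random

def select_next_reaction(prop_upper, evaluate_propensity):
    """Rejection step of rejection-based SSA: candidates are drawn in
    proportion to propensity upper bounds and accepted with probability
    a_j / upper_j, so accepted reactions follow the exact distribution.
    Assumes a_j <= prop_upper[j] for all j and sum(prop_upper) > 0."""
    total_upper = sum(prop_upper)
    while True:
        r = random.random() * total_upper
        j, acc = 0, prop_upper[0]
        while acc < r:                 # linear search for the candidate
            j += 1
            acc += prop_upper[j]
        if random.random() <= evaluate_propensity(j) / prop_upper[j]:
            return j                   # accepted firing
        # rejected: retry; other propensities need not be recomputed

# Toy usage: three reactions with bounds and exact propensities
bounds = [2.0, 1.0, 0.5]
exact = [1.8, 0.9, 0.4]
print(select_next_reaction(bounds, lambda j: exact[j]))
```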
Warship Combat System Selection Methodology Based on Discrete Event Simulation
2010-09-01
PD Damage Probability; PHit Hit Probability; PKill Kill Probability; RSM Response Surface Model; SAM Surface-Air Missile ... such a large target allows an assumption that the probability of a hit (PHit) is one. This structure can be considered as a bridge; therefore, the
Value and probability coding in a feedback-based learning task utilizing food rewards.
Tricomi, Elizabeth; Lempert, Karolina M
2015-01-01
For the consequences of our actions to guide behavior, the brain must represent different types of outcome-related information. For example, an outcome can be construed as negative because an expected reward was not delivered or because an outcome of low value was delivered. Thus behavioral consequences can differ in terms of the information they provide about outcome probability and value. We investigated the role of the striatum in processing probability-based and value-based negative feedback by training participants to associate cues with food rewards and then employing a selective satiety procedure to devalue one food outcome. Using functional magnetic resonance imaging, we examined brain activity related to receipt of expected rewards, receipt of devalued outcomes, omission of expected rewards, omission of devalued outcomes, and expected omissions of an outcome. Nucleus accumbens activation was greater for rewarding outcomes than devalued outcomes, but activity in this region did not correlate with the probability of reward receipt. Activation of the right caudate and putamen, however, was largest in response to rewarding outcomes relative to expected omissions of reward. The dorsal striatum (caudate and putamen) at the time of feedback also showed a parametric increase correlating with the trialwise probability of reward receipt. Our results suggest that the ventral striatum is sensitive to the motivational relevance, or subjective value, of the outcome, while the dorsal striatum codes for a more complex signal that incorporates reward probability. Value and probability information may be integrated in the dorsal striatum, to facilitate action planning and allocation of effort. Copyright © 2015 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Park, Tae Jae; Jung, Gyu Il; Kim, Euk Hyun; Koo, Sang Man
2017-06-01
Development of mesoporous structures of composite silica particles with various organic functional groups was investigated by using a two-step process, consisting of a one-pot sol-gel process in the presence or absence of ammonium hydroxide and a selective dissolution process with an ethanol-water mixture. Five different organosilanes, including methyltrimethoxysilane (MTMS), 3-mercaptopropyltrimethoxysilane (MPTMS), phenyltrimethoxysilane (PTMS), vinyltrimethoxysilane (VTMS), and 3-aminopropyltrimethoxysilane (APTMS), were employed. The mesoporous ORMOSIL (organically modified silica) particles were obtained even in the absence of ammonium hydroxide when the reaction mixture contained APTMS. The morphology of the particles, however, was different from that of particles prepared with ammonia catalyst and the same organosilane mixtures, probably because the overall hydrolysis/condensation rates became slower. Co-existence of APTMS and VTMS was essential to prepare mesoporous particles from ternary organosilane mixtures. The work presented here demonstrates that organosilica particles with desired functionality and desired mesoporous structures can be obtained by selecting proper types of organosilane monomers and performing a facile and mild process either with or without ammonium hydroxide.
Quantitative trait nucleotide analysis using Bayesian model selection.
Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D
2005-10-01
Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock price in finance.
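As an illustration of the kind of object involved, for overdamped Langevin dynamics dx/dt = f(x) + η(t) with Gaussian white noise of strength D, the path probability functional takes the standard Onsager-Machlup form (a textbook example consistent with the abstract, not the paper's specific result):

```latex
% Onsager-Machlup form of the path probability for overdamped Langevin
% dynamics \dot{x} = f(x) + \eta(t), with white noise of strength D
P[x(\cdot)] \;\propto\; \exp\!\left[ -\int_0^T \frac{\bigl(\dot{x}(t) - f(x(t))\bigr)^2}{4D}\, \mathrm{d}t \right]
```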
Survival and selection of migrating salmon from capture-recapture models with individual traits
Zabel, R.W.; Wagner, T.; Congleton, J.L.; Smith, S.G.; Williams, J.G.
2005-01-01
Capture-recapture studies are powerful tools for studying animal population dynamics, providing information on population abundance, survival rates, population growth rates, and selection for phenotypic traits. In these studies, the probability of observing a tagged individual reflects both the probability of the individual surviving to the time of recapture and the probability of recapturing an animal, given that it is alive. If both of these probabilities are related to the same phenotypic trait, it can be difficult to distinguish effects on survival probabilities from effects on recapture probabilities. However, when animals are individually tagged and have multiple opportunities for recapture, we can properly partition observed trait-related variability into survival and recapture components. We present an overview of capture-recapture models that incorporate individual variability and develop methods to incorporate results from these models into estimates of population survival and selection for phenotypic traits. We conducted a series of simulations to understand the performance of these estimators and to assess the consequences of ignoring individual variability when it exists. In addition, we analyzed a large data set of > 153 000 juvenile chinook salmon (Oncorhynchus tshawytscha) and steelhead (O. mykiss) of known length that were PIT-tagged during their seaward migration. Both our simulations and the case study indicated that the ability to precisely estimate selection for phenotypic traits was greatly compromised when differential recapture probabilities were ignored. Estimates of population survival, however, were far more robust. In the chinook salmon and steelhead study, we consistently found that smaller fish had a greater probability of recapture. We also uncovered length-related survival relationships in over half of the release group/river segment combinations that we observed, but we found both positive and negative relationships between length and survival probability. These results have important implications for the management of salmonid populations. © 2005 by the Ecological Society of America.
Thaker, Maria; Vanak, Abi T; Owen, Cailey R; Ogden, Monika B; Niemann, Sophie M; Slotow, Rob
2011-02-01
Studies that focus on single predator-prey interactions can be inadequate for understanding antipredator responses in multi-predator systems. Yet there is still a general lack of information about the strategies of prey to minimize predation risk from multiple predators at the landscape level. Here we examined the distribution of seven African ungulate species in the fenced Karongwe Game Reserve (KGR), South Africa, as a function of predation risk from all large carnivore species (lion, leopard, cheetah, African wild dog, and spotted hyena). Using observed kill data, we generated ungulate-specific predictions of relative predation risk and of riskiness of habitats. To determine how ungulates minimize predation risk at the landscape level, we explicitly tested five hypotheses consisting of strategies that reduce the probability of encountering predators, and the probability of being killed. All ungulate species avoided risky habitats, and most selected safer habitats, thus reducing their probability of being killed. To reduce the probability of encountering predators, most of the smaller prey species (impala, warthog, waterbuck, kudu) avoided the space use of all predators, while the larger species (wildebeest, zebra, giraffe) only avoided areas where lion and leopard space use were high. The strength of avoidance for the space use of predators generally did not correspond to the relative predation threat from those predators. Instead, ungulates used a simpler behavioral rule of avoiding the activity areas of sit-and-pursue predators (lion and leopard), but not those of cursorial predators (cheetah and African wild dog). In general, selection and avoidance of habitats was stronger than avoidance of the predator activity areas. We expect similar decision rules to drive the distribution pattern of ungulates in other African savannas and in other multi-predator systems, especially where predators differ in their hunting modes.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Improving receiver performance of diffusive molecular communication with enzymes.
Noel, Adam; Cheung, Karen C; Schober, Robert
2014-03-01
This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
This annual report summarizes the work completed during the third year of technical effort on the referenced contract. Principal developments continue to focus on the Probabilistic Finite Element Method (PFEM), which has been under development for three years. Essentially all of the linear capabilities within the PFEM code are in place. Major progress was achieved in the application and verification phase. An EXPERT module architecture was designed and partially implemented. EXPERT is a user interface module which incorporates an expert system shell for the implementation of a rule-based interface utilizing the experience and expertise of the user community. The Fast Probability Integration (FPI) algorithm continues to demonstrate outstanding performance characteristics for the integration of probability density functions of multiple variables. Additionally, an enhanced Monte Carlo simulation algorithm was developed and demonstrated for a variety of numerical strategies.
Drew, L.J.
1979-01-01
In this study the selection of the optimum type of drilling pattern to be used when exploring for elliptically shaped targets is examined. The rhombic pattern is optimal when the targets are known to have a preferred orientation. Situations can also be found where a rectangular pattern is as efficient as the rhombic pattern. A triangular or square drilling pattern should be used when the orientations of the targets are unknown. The way in which the optimum hole spacing varies as a function of (1) the cost of drilling, (2) the value of the targets, (3) the shape of the targets, and (4) the target occurrence probabilities was determined for several examples. Bayes' rule was used to show how target occurrence probabilities can be revised within a multistage pattern drilling scheme. © 1979 Plenum Publishing Corporation.
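The Bayes' rule revision mentioned in the abstract reduces to a one-line posterior update after an unsuccessful drilling stage. A minimal Python sketch with illustrative numbers (not the paper's values):

```python
def revise_occurrence_probability(prior, p_hit_given_present):
    """Posterior probability that a target is present after a stage of
    pattern drilling finds nothing, by Bayes' rule:
    P(present | miss) = P(miss | present) P(present) / P(miss)."""
    p_miss_given_present = 1.0 - p_hit_given_present
    p_miss = p_miss_given_present * prior + 1.0 * (1.0 - prior)
    return p_miss_given_present * prior / p_miss

# Example: 30% prior occurrence probability; the pattern would hit a
# present target 60% of the time. A dry stage revises 0.30 down to ~0.146.
print(revise_occurrence_probability(0.30, 0.60))
```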
Using Geothermal Play Types as an Analogue for Estimating Potential Resource Size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terry, Rachel; Young, Katherine
Blind geothermal systems are becoming increasingly common as more geothermal fields are developed. Geothermal development is known to carry high risk in the early stages of a project because reservoir characteristics are relatively unknown until wells are drilled. Play types (or occurrence models) categorize potential geothermal fields into groups based on geologic characteristics. To aid in lowering exploration risk, these groups' reservoir characteristics can be used as analogues in new site exploration. The play type schemes used in this paper were the Moeck and Beardsmore play types (Moeck et al. 2014) and the Brophy occurrence models (Brophy et al. 2011). Operating geothermal fields throughout the world were classified based on their associated play type, and then reservoir characteristics data were catalogued. The distributions of these characteristics were plotted in histograms to develop probability density functions for each individual characteristic. The probability density functions can be used as input analogues in Monte Carlo estimations of resource potential for similar play types in early exploration phases. A spreadsheet model was created to estimate resource potential in undeveloped fields. The user can choose to input their own values for each reservoir characteristic or to use the probability distribution functions provided for the selected play type. This paper also addresses the United States Geological Survey's 1978 and 2008 assessments of geothermal resources by comparing their estimated values to reported values from post-site development. Information from the collected data was used in the comparison for thirty developed sites in the United States. No significant trends or suggestions for methodologies could be made from the comparison.
Economic choices reveal probability distortion in macaque monkeys.
Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram
2015-02-18
Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.
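The inverted-S weighting functions recovered by the parametric modeling are commonly written in a one-parameter form; a small sketch using the Tversky-Kahneman parameterization (the abstract does not state which functional form the authors fitted):

```python
import numpy as np

def weight_tk(p, gamma=0.61):
    """Tversky-Kahneman one-parameter probability weighting function.
    gamma < 1 yields the inverted-S shape: overweighting of small p,
    underweighting of large p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p = np.linspace(0.01, 0.99, 5)
print(np.round(weight_tk(p), 3))  # w(p) > p for small p, w(p) < p for large p
```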
NASA Astrophysics Data System (ADS)
Zhang, Zhao-yang; Xu, Xue-cheng
2015-08-01
Chemical oxidation is still the major approach to the covalent functionalization of carbon nanotubes (CNTs). Theoretically, the defects on CNTs are more reactive than skeletal hexagons and should be preferentially oxidized, but conventional oxidation methods, e.g., HNO3/H2SO4 treatment, have poor reaction selectivity and inevitably consume the C=C bonds in the hexagonal lattices, leading to structural damage, π-electron loss and weight decrease. In this work, we realized the nondestructive covalent functionalization of CNTs by selective oxidation of the defects. In our method, potassium ferrate, K2FeO4 (Fe in the +6 oxidation state), was employed as an oxidant for CNTs in H2SO4 medium. The CNT samples, before and after K2FeO4/H2SO4 treatment, were characterized by colloid dispersibility, IR, Raman spectroscopy, FESEM and XPS. The results indicated that (i) CNTs could be effectively oxidized by Fe(VI) under mild conditions (60 °C, 3 h), and hydrophilic CNTs with abundant surface -COOH groups were produced; and (ii) Fe(VI) oxidation of CNTs followed a defect-specific oxidation process, that is, only the sp3-hybridized carbon atoms on the CNT surface were oxidized while the C=C bonds remained unaffected. This selective/nondestructive oxidation afforded oxidized CNTs in yields above 100 wt%. This paper shows that K2FeO4/H2SO4 is an effective, nondestructive and green oxidation system for oxidative functionalization of CNTs and probably of other carbon materials as well.
Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar
NASA Astrophysics Data System (ADS)
Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio
2018-03-01
This work proposes a new method for hail precipitation detection and probability estimation, based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground truth observations and atmospheric sounding data, a probability-of-hail index, which provides a simple estimate of the hail potential, has been trained and adapted to the Naples metropolitan study area. The probability of hail has been calculated starting from four different hail detection methods. The first two, based (1) on reflectivity data and temperature measurements and (2) on the vertically integrated liquid density product, respectively, have been selected from the available literature. The other two techniques are based on combined criteria of the above-mentioned methods: the first (3) is based on linear discriminant analysis, whereas the other (4) relies on a fuzzy-logic approach. The latter is an innovative criterion based on a fuzzification step performed through ramp membership functions. The performances of the four methods have been tested using an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index and area under the relative operating characteristic. An example of application of the proposed hail detection and probability products is also presented for a relevant hail event that occurred on 21 July 2014.
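The fuzzification step via ramp membership functions can be sketched compactly. The predictors, thresholds, and weights below are placeholders, not the paper's calibrated values:

```python
def ramp(x, lo, hi):
    """Ramp membership: 0 below lo, 1 above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def hail_probability(reflectivity_dbz, vil_density, freezing_level_km):
    # Placeholder thresholds and weights; the paper calibrates its own
    # membership functions on its radar/sounding dataset.
    mu = [
        ramp(reflectivity_dbz, 45.0, 60.0),    # strong echoes favor hail
        ramp(vil_density, 1.0, 3.5),           # high VIL density favors hail
        ramp(-freezing_level_km, -4.5, -2.0),  # low freezing level favors hail
    ]
    w = [0.4, 0.4, 0.2]
    return sum(wi * mi for wi, mi in zip(w, mu))

print(hail_probability(55.0, 2.8, 3.0))
```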
Long-term effects of steroid withdrawal in kidney transplantation.
Offermann, G; Schwarz, A; Krause, P H
1993-01-01
The long-term graft function after withdrawal of steroids from maintenance immunosuppression was analyzed in 98 kidney recipients (59 on cyclosporin monotherapy, 39 on cyclosporin plus azathioprine) who had not developed an early rejection episode when prednisolone was discontinued. Seven years after steroid withdrawal the probability of an increase in serum creatinine (> 20% of baseline levels) was 51%. The increase in creatinine was associated with sclerosing arteriopathy as a marker of chronic rejection in 29 of 43 graft biopsies. The addition of azathioprine had no effect on the stability of long-term graft function and did not influence the 7-year graft survival rate in this highly selected group of patients.
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
Relevance popularity: A term event model based feature selection scheme for text classification.
Feng, Guozhong; An, Baiguo; Yang, Fengqin; Wang, Han; Zhang, Libiao
2017-01-01
Feature selection is a practical approach for improving the performance of text classification methods by optimizing the feature subsets input to classifiers. In traditional feature selection methods such as information gain and chi-square, the number of documents that contain a particular term (i.e. the document frequency) is often used. However, the frequency with which a given term appears in each document has not been fully investigated, even though it is a promising feature for producing accurate classifications. In this paper, we propose a new feature selection scheme based on a term event multinomial naive Bayes probabilistic model. According to the model assumptions, the matching score function, which is based on the prediction probability ratio, can be factorized. Finally, we derive a feature selection measurement for each term after replacing inner parameters by their estimators. On a benchmark English text dataset (20 Newsgroups) and a Chinese text dataset (MPH-20), numerical experiments with two widely used text classifiers (naive Bayes and support vector machine) demonstrate that our method outperforms representative feature selection methods.
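A simplified two-class version of the naive-Bayes-based term scoring idea, using the log prediction-probability ratio with Laplace smoothing (the paper's factorized measure may differ in detail):

```python
import numpy as np

def nb_term_score(counts_pos, counts_neg):
    """Score each term by the magnitude of its log prediction-probability
    ratio under a two-class multinomial naive Bayes model (Laplace-smoothed).
    counts_pos / counts_neg: total occurrences of each term per class."""
    counts_pos = np.asarray(counts_pos, dtype=float)
    counts_neg = np.asarray(counts_neg, dtype=float)
    v = len(counts_pos)                       # vocabulary size
    p_pos = (counts_pos + 1) / (counts_pos.sum() + v)
    p_neg = (counts_neg + 1) / (counts_neg.sum() + v)
    return np.abs(np.log(p_pos / p_neg))      # larger = more discriminative

scores = nb_term_score([30, 2, 10], [3, 25, 9])
print(scores.argsort()[::-1])                 # terms ranked most-to-least useful
```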
Evolution of conditional cooperation under multilevel selection.
Zhang, Huanren; Perc, Matjaž
2016-03-11
We study the emergence of conditional cooperation in the presence of both intra-group and inter-group selection. Individuals play public goods games within their groups using conditional strategies, which are represented as piecewise linear response functions. Accordingly, groups engage in conflicts with a certain probability. In contrast to previous studies, we consider continuous contribution levels and a rich set of conditional strategies, allowing for a wide range of possible interactions between strategies. We find that the existence of conditional strategies enables the stabilization of cooperation even under strong intra-group selection. The strategy that eventually dominates in the population has two key properties: (i) It is unexploitable with strong intra-group selection; (ii) It can achieve full contribution to outperform other strategies in the inter-group selection. The success of this strategy is robust to initial conditions as well as changes to important parameters. We also investigate the influence of different factors on cooperation levels, including group conflicts, group size, and migration rate. Their effect on cooperation can be attributed to and explained by their influence on the relative strength of intra-group and inter-group selection.
Diversity Order Analysis of Dual-Hop Relaying with Partial Relay Selection
NASA Astrophysics Data System (ADS)
Bao, Vo Nguyen Quoc; Kong, Hyung Yun
In this paper, we study the performance of dual-hop relaying in which the best relay, selected by partial relay selection, helps the source-destination link overcome the channel impairment. Specifically, closed-form expressions for the outage probability, symbol error probability and achievable diversity gain are derived using the statistical characteristics of the signal-to-noise ratio. Numerical investigation shows that the system achieves a diversity of two regardless of the number of relays, and also confirms the correctness of the analytical results. Furthermore, the performance loss due to partial relay selection is investigated.
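The outage behavior analyzed in closed form can be checked numerically. A Monte Carlo sketch of dual-hop relaying with partial relay selection over Rayleigh fading (illustrative parameters; a decode-and-forward min-SNR end-to-end model is assumed):

```python
import numpy as np

rng = np.random.default_rng(7)

def outage_probability(n_relays, avg_snr, threshold, trials=200_000):
    """Partial relay selection: pick the relay with the best first-hop SNR
    only; the end-to-end SNR is min(first hop, that relay's second hop).
    Rayleigh fading -> exponentially distributed instantaneous SNRs."""
    hop1 = rng.exponential(avg_snr, size=(trials, n_relays))
    hop2 = rng.exponential(avg_snr, size=(trials, n_relays))
    best = hop1.argmax(axis=1)                  # selection uses hop 1 only
    e2e = np.minimum(hop1[np.arange(trials), best],
                     hop2[np.arange(trials), best])
    return np.mean(e2e < threshold)

for k in (1, 2, 4, 8):
    print(k, outage_probability(k, avg_snr=10.0, threshold=1.0))
```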
NASA Astrophysics Data System (ADS)
Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.
2016-05-01
wayGoo is a fully functional application whose main functionalities include content geolocation, event scheduling, and indoor navigation. However, significant information about events does not reach users' attention, either because of the volume of this information or because some of it comes from real-time data sources. The purpose of this work is to facilitate event management operations by prioritizing the presented events based on users' interests, using both static and real-time data. Through the wayGoo interface, users select conceptual topics that are interesting to them. These topics constitute a browsing behavior vector which is used for learning users' interests implicitly, without being intrusive. The system then estimates user preferences and returns a list of events sorted from the most preferred to the least. User preferences are modeled via a Naïve Bayesian Network which consists of: a) the 'decision' random variable, corresponding to users' decision on attending an event; b) the 'distance' random variable, modeled by a linear regression that estimates the probability that the distance between a user and each event destination is not discouraging; c) the 'seat availability' random variable, modeled by a linear regression, which estimates the probability that the seat availability is encouraging; and d) the 'relevance' random variable, modeled by clustering-based collaborative filtering, which determines the relevance of each event to users' interests. Finally, experimental results show that the proposed system contributes substantially to assisting users in browsing and selecting events to attend.
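The Naïve Bayesian combination of the three evidence factors described above can be sketched as follows; the likelihood values are illustrative placeholders, not parameters learned by wayGoo:

```python
def attend_posterior(evidence, likelihood_attend, likelihood_skip, prior=0.5):
    """Naive Bayes posterior of 'attend' given independent binary evidence
    factors (distance OK, seats OK, relevant), combined by Bayes' rule
    under the conditional-independence assumption."""
    num = prior
    den = 1.0 - prior
    for e, la, ls in zip(evidence, likelihood_attend, likelihood_skip):
        num *= la if e else (1 - la)
        den *= ls if e else (1 - ls)
    return num / (num + den)

# Distance OK, seats available, but the event is not relevant to the user.
print(attend_posterior([True, True, False],
                       likelihood_attend=[0.8, 0.7, 0.9],
                       likelihood_skip=[0.4, 0.5, 0.3]))
```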
Performance Analysis of Cluster Formation in Wireless Sensor Networks.
Montiel, Edgar Romo; Rivero-Angeles, Mario E; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo
2017-12-13
Clustered-based wireless sensor networks have been extensively used in the literature in order to achieve considerable energy consumption reductions. However, two aspects of such systems have been largely overlooked. Namely, the transmission probability used during the cluster formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to consider that sensor nodes in a clustered-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes, specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes.
Testing hypotheses of bat baculum function with 3D models derived from microCT
Herdina, Anna Nele; Kelly, Diane A; Jahelková, Helena; Lina, Peter H C; Horáček, Ivan; Metscher, Brian D
2015-01-01
The baculum (os penis) has been extensively studied as a taxon-specific character in bats and other mammals but its mechanical function is still unclear. There is a wide consensus in the literature that the baculum is probably a sexually selected character. Using a novel approach combining postmortem manipulation and three-dimensional (3D) imaging, we tested two functional hypotheses in the common noctule bat Nyctalus noctula, the common pipistrelle Pipistrellus pipistrellus, and Nathusius’ pipistrelle Pipistrellus nathusii: (i) whether the baculum can protect the distal urethra and urethral opening from compression during erection and copulation; and (ii) whether the baculum and corpora cavernosa form a functional unit to support both the penile shaft and the more distal glans tip. In freshly dead or frozen and thawed bats, we compared flaccid penises with artificially ‘erect’ penises that were inflated with 10% formalin. Penises were stained with alcoholic iodine and imaged with a lab-based high-resolution x-ray microtomography system. Analysis of the 3D images enabled us to compare the changes in relative positions of the baculum, corpora cavernosa, urethra, and corpus spongiosum with one another between flaccid and ‘erect’ penises. Our results support both functional hypotheses, indicating that the baculum probably performs two different roles during erection. Our approach should prove valuable for comparing and testing the functions of different baculum morphologies in bats and other mammals. Moreover, we have validated an essential component of the groundwork necessary to extend this approach with finite element analysis for quantitative 3D biomechanical modeling of penis function. PMID:25655647
What Are Probability Surveys Used by the National Aquatic Resource Surveys?
The National Aquatic Resource Surveys (NARS) use probability survey designs to assess the condition of the nation's waters. In probability surveys (also known as sample surveys or statistical surveys), sampling sites are selected randomly.
Barnes, J C; Beaver, Kevin M; Miller, J Mitchell
2010-01-01
This study reconsiders the well-known link between gang membership and criminal involvement. Recently developed analytical techniques enabled the approximation of an experimental design to determine whether gang members, after being matched with similarly situated nongang members, exhibited greater involvement in nonviolent and violent delinquency. Findings indicated that while gang membership is a function of self-selection, selection effects alone do not account for the greater involvement in delinquency exhibited by gang members. After propensity score matching was employed, gang members maintained a greater involvement in both nonviolent and violent delinquency when measured cross-sectionally, but only violent delinquency when measured longitudinally. Additional analyses using inverse probability of treatment weights reaffirmed these conclusions. © 2010 Wiley-Liss, Inc.
Vegas-Sanchez-Ferrero, G; Aja-Fernandez, S; Martin-Fernandez, M; Frangi, A F; Palencia, C
2010-01-01
A novel anisotropic diffusion filter is proposed in this work, with application to cardiac ultrasonic images. It includes probabilistic models which describe the probability density function (PDF) of tissues and adapts the diffusion tensor to the image iteratively. For this purpose, a preliminary study is performed to select the probability models that best fit the statistical behavior of each tissue class in cardiac ultrasonic images. Then, the parameters of the diffusion tensor are defined taking into account the statistical properties of the image at each voxel. When the structure tensor of the probability of belonging to each tissue is included in the diffusion tensor definition, better boundary estimates can be obtained than by calculating the boundaries directly from the image. This is the main contribution of this work. Additionally, the proposed method follows the statistical properties of the image in each iteration. This is a second contribution, since state-of-the-art methods assume that the noise and statistical properties of the image do not change during the filtering process.
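The preliminary model-selection step can be pictured as fitting several candidate PDFs to the intensity samples of a tissue class and keeping the best one. A hedged Python sketch (the candidate set, the synthetic data, and the AIC criterion are illustrative, not necessarily the authors' choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical intensity samples for one tissue class (synthetic Gamma data).
samples = rng.gamma(shape=3.0, scale=20.0, size=5000)

candidates = {
    "gamma":    stats.gamma,
    "rayleigh": stats.rayleigh,
    "nakagami": stats.nakagami,
}

best = None
for name, dist in candidates.items():
    params = dist.fit(samples)                    # maximum-likelihood fit
    ll = dist.logpdf(samples, *params).sum()      # log-likelihood of the fit
    aic = 2 * len(params) - 2 * ll                # Akaike information criterion
    print(f"{name:9s} AIC = {aic:.0f}")
    if best is None or aic < best[1]:
        best = (name, aic)
print("selected model:", best[0])
```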
Finding and testing network communities by lumped Markov chains.
Piccardi, Carlo
2011-01-01
Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called "persistence probability" is associated to a cluster, which is then defined as an "α-community" if such a probability is not smaller than α. Consistently, a partition composed of α-communities is an "α-partition." These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each single community. Given its ability in individually assessing each single cluster, this approach can also disclose single well-defined communities even in networks that overall do not possess a definite clusterized structure.
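A compact Python sketch of the persistence probability computation for an undirected graph (the example graph is ours; the quantity is the diagonal entry of the lumped transition matrix described above):

```python
import numpy as np

def persistence_probability(A, community):
    """Persistence probability of a node set: the probability that a
    stationary random walker currently inside the set stays inside it
    at the next step."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    P = A / deg[:, None]              # random-walk transition matrix
    pi = deg / deg.sum()              # stationary distribution (undirected graph)
    c = np.asarray(community)
    stay = (pi[c, None] * P[np.ix_(c, c)]).sum()
    return stay / pi[c].sum()

# Two triangles joined by a single edge: each triangle is a strong community.
A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]])
alpha = persistence_probability(A, [0, 1, 2])
print(f"persistence probability = {alpha:.3f}")  # 6/7: a high-quality alpha-community
```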
Two Hop Adaptive Vector Based Quality Forwarding for Void Hole Avoidance in Underwater WSNs
Javaid, Nadeem; Ahmed, Farwa; Wadud, Zahid; Alrajeh, Nabil; Alabed, Mohamad Souheil; Ilahi, Manzoor
2017-01-01
Underwater wireless sensor networks (UWSNs) facilitate a wide range of aquatic applications in various domains. However, the harsh underwater environment poses challenges such as low bandwidth, long propagation delay, high bit error rate, high deployment cost, and irregular topological structure. Node mobility and the uneven distribution of sensor nodes create void holes in UWSNs. Void hole creation has become a critical issue, as it severely degrades network performance; avoiding void holes yields better coverage of an area, lower energy consumption in the network, and higher throughput. This paper therefore focuses on minimizing void hole probability, particularly in locally sparse regions. The two-hop adaptive hop-by-hop vector-based forwarding (2hop-AHH-VBF) protocol aims to avoid void holes with the help of two-hop neighbor information. The other protocol, quality forwarding adaptive hop-by-hop vector-based forwarding (QF-AHH-VBF), selects an optimal forwarder based on a composite priority function, which improves network goodput. QF-AHH-VBF reduces void hole probability by optimally selecting next-hop forwarders. To attain better network performance, a mathematical problem formulation based on linear programming is presented. Simulation results show that adopting these mechanisms achieves a significant reduction in end-to-end delay and better throughput in the network. PMID:28763014
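A hedged sketch of what a composite-priority forwarder selection might look like (the weights and the three features, advancement toward the sink, residual energy, and neighbor density, are illustrative stand-ins, not the QF-AHH-VBF formula):

```python
import math

def select_forwarder(current, sink, neighbors, w=(0.5, 0.3, 0.2)):
    """Pick the next-hop forwarder by a composite priority score built
    from advancement toward the sink, residual energy fraction, and
    local neighbor count, each roughly normalized to [0, 1]."""
    d0 = math.dist(current["pos"], sink)
    best, best_score = None, -1.0
    for n in neighbors:
        advancement = max(0.0, (d0 - math.dist(n["pos"], sink)) / d0)
        score = (w[0] * advancement
                 + w[1] * n["energy"]          # residual energy fraction
                 + w[2] * n["degree"] / 10.0)  # neighbor count, scaled
        if score > best_score:
            best, best_score = n, score
    return best  # None if no neighbor exists (a potential void hole)

me = {"pos": (0.0, 0.0, 50.0)}
sink = (0.0, 0.0, 0.0)
nbrs = [{"pos": (5.0, 0.0, 40.0), "energy": 0.9, "degree": 4},
        {"pos": (0.0, 5.0, 30.0), "energy": 0.4, "degree": 7}]
print(select_forwarder(me, sink, nbrs))
```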
Anabolic-androgenic steroids and decision making: Probability and effort discounting in male rats.
Wallin, Kathryn G; Alves, Jasmin M; Wood, Ruth I
2015-07-01
Anabolic-androgenic steroid (AAS) abuse is implicated in maladaptive behaviors such as increased aggression and risk taking. Impaired judgment due to changes in the mesocorticolimbic dopamine system may contribute to these behavioral changes. While AAS are known to influence dopamine function in mesocorticolimbic circuitry, the effects on decision making are unknown. This was the focus of the present study. Adolescent male Long-Evans rats were treated chronically with high-dose testosterone (7.5 mg/kg) or vehicle (13% cyclodextrin in water), and tested for cost/benefit decision making in two discounting paradigms. Rats chose between a small reward (1 sugar pellet) and a large discounted reward (3 or 4 pellets). Probability discounting (PD) measures sensitivity to reward uncertainty by decreasing the probability (100, 75, 50, 25, 0%) of receiving the large reward in successive blocks of each daily session. Effort discounting (ED) measures sensitivity to a work cost by increasing the lever presses required to earn the large reward (1, 2, 5, 10, 15 presses). In PD, testosterone-treated rats selected the large/uncertain reward significantly less than vehicle-treated controls. However, during ED, testosterone-treated rats selected the large/high effort reward significantly more than controls. These studies show that testosterone has divergent effects on different aspects of decision making. Specifically, testosterone increases aversion to uncertainty but decreases sensitivity to the output of effort for reward. These results have implications for understanding maladaptive behavioral changes in human AAS users. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)
2001-01-01
In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function that measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity, and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values, with roughly comparable results. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median-filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.
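A toy Python sketch of the variational machinery: a quadratic 2D-Var-style cost function with background and observation terms, minimized in closed form via the normal equations (the sizes, operators, and covariances are illustrative assumptions, and the real system adds filtering and dynamical constraint terms):

```python
import numpy as np

# J(x) = (x - xb)^T B^{-1} (x - xb) + (H x - y)^T R^{-1} (H x - y)
# The ambiguity closest in direction to the minimizing analysis would
# then be selected.
def cost(x, xb, Binv, y, H, Rinv):
    dxb = x - xb
    dy = H @ x - y
    return dxb @ Binv @ dxb + dy @ Rinv @ dy

n, m = 4, 2                      # toy state and observation sizes
xb = np.zeros(n)                 # background wind field (flattened)
y = np.array([1.0, 0.5])         # observations
H = np.random.default_rng(2).uniform(size=(m, n))  # observation operator
Binv = np.eye(n) * 0.5           # inverse background error covariance
Rinv = np.eye(m) * 2.0           # inverse observation error covariance

# Analytic minimizer of this quadratic cost (normal equations):
A = Binv + H.T @ Rinv @ H
b = Binv @ xb + H.T @ Rinv @ y
xa = np.linalg.solve(A, b)
print("analysis:", xa, " J(xa) =", cost(xa, xb, Binv, y, H, Rinv))
```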
[Portal hypertension. Evidence-based guide].
Mercado, Miguel Angel; Orozco Zepeda, Héctor; Plata-Muñoz, Juan José
2004-01-01
Treatment of portal hypertension has evolved widely during the last decades. Advances in pathophysiology have allowed better application of therapeutic options and have made it possible to understand the natural history of varices and variceal bleeding, predicting which patients have a higher risk of bleeding and allowing treatment to be planned accordingly. According to liver function and underlying liver disease, different alternatives can be offered within the three possible scenarios (primary prophylaxis, acute bleeding episode, and secondary prophylaxis). For primary prophylaxis, pharmacotherapy offers the best choice. Endoscopic banding is also gaining ground in this scenario and will probably be accepted in the near future. For the acute bleeding episode, endoscopic therapy (sclerosis and/or banding) and/or pharmacologic therapy (octreotide, terlipressin) represent the best choice, with TIPS as a rescue option. Surgery is not used routinely in this scenario in most centers. For secondary prophylaxis, pharmacologic and endoscopic therapy are first-line treatments, while TIPS and surgery are second-line treatments. TIPS is mainly used in patients on a waiting list for liver transplantation. Surgery offers good results for low-risk patients with good liver function, using portal blood-flow preserving procedures (selective shunts, extensive devascularizations). Liver transplantation is recommended for patients with poor liver function because, together with portal hypertension, it treats the underlying liver disease.
Selection in a subdivided population with local extinction and recolonization.
Cherry, Joshua L
2003-01-01
In a subdivided population, local extinction and subsequent recolonization affect the fate of alleles. Of particular interest is the interaction of this force with natural selection. The effect of selection can be weakened by this additional source of stochastic change in allele frequency. The behavior of a selected allele in such a population is shown to be equivalent to that of an allele with a different selection coefficient in an unstructured population with a different size. This equivalence allows use of established results for panmictic populations to predict such quantities as fixation probabilities and mean times to fixation. The magnitude of the quantity N(e)s(e), which determines fixation probability, is decreased by extinction and recolonization. Thus deleterious alleles are more likely to fix, and advantageous alleles less likely to do so, in the presence of extinction and recolonization. Computer simulations confirm that the theoretical predictions of both fixation probabilities and mean times to fixation are good approximations. PMID:12807797
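Because of this equivalence, the standard panmictic diffusion result can be reused with effective parameters. A small Python sketch using Kimura's fixation probability formula (the mapping from extinction/recolonization parameters to the effective N_e and s_e is in the paper and is not reproduced here):

```python
import numpy as np

def fixation_probability(p0, N, s):
    """Kimura's diffusion approximation for a panmictic population:
    u(p0) = (1 - exp(-4 N s p0)) / (1 - exp(-4 N s)).
    The paper's equivalence means this same formula applies with an
    effective size N_e and coefficient s_e absorbing extinction and
    recolonization."""
    if s == 0:
        return p0                    # neutral allele: u equals initial frequency
    return -np.expm1(-4 * N * s * p0) / -np.expm1(-4 * N * s)

N = 1000
p0 = 1 / (2 * N)                     # a single new mutant copy in a diploid population
for s in (1e-4, 1e-3):
    print(f"s={s:g}:  u = {fixation_probability(p0, N, s):.5f}"
          f"  (neutral baseline {p0:.5f})")
```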
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
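For reference, the classical Ewens sampling formula that these results generalize can be evaluated directly. A short Python sketch (the example partitions of a sample of size 4 are ours):

```python
from math import factorial, prod

def ewens_probability(a, theta):
    """Ewens sampling formula for the neutral infinitely-many-alleles model:
    probability of the allelic partition with a[j-1] allele types represented
    j times each, in a sample of size n = sum(j * a_j)."""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))          # rising factorial theta^(n)
    weight = prod((theta / j) ** aj / factorial(aj)
                  for j, aj in enumerate(a, start=1))
    return factorial(n) / rising * weight

# All allelic partitions of a sample of n = 4 genes, e.g. (2, 1, 0, 0)
# means two singleton alleles plus one allele seen twice.
theta = 1.0
configs = [(4,0,0,0), (2,1,0,0), (0,2,0,0), (1,0,1,0), (0,0,0,1)]
total = 0.0
for a in configs:
    p = ewens_probability(a, theta)
    total += p
    print(a, f"{p:.4f}")
print("sum over all partitions of 4:", round(total, 10))  # should be 1.0
```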
Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; He, Fei; Ma, Chris Y. T.
In several critical infrastructures, the cyber and physical parts are correlated, so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function, and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
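A toy Python rendering of such a provider/attacker game (the survival function and cost constants are invented stand-ins for the paper's model; the search simply enumerates pure-strategy mutual best responses):

```python
import numpy as np

# The provider reinforces r of n components, the attacker attacks a of them.
n = 10
def survival(r, a):
    # Illustrative survival probability: rises with reinforcement, falls with attacks.
    return max(0.0, 1.0 - a / (r + n))

c_r, c_a = 0.02, 0.03                 # unit costs of reinforcing / attacking
R = A = range(n + 1)
U_p = np.array([[ survival(r, a) - c_r * r for a in A] for r in R])  # provider utility
U_a = np.array([[-survival(r, a) - c_a * a for a in A] for r in R])  # attacker utility

# Pure-strategy Nash equilibria are mutual best responses.
for r in R:
    for a in A:
        if (U_p[r, a] >= U_p[:, a].max() - 1e-12
                and U_a[r, a] >= U_a[r, :].max() - 1e-12):
            print(f"NE: reinforce {r}, attack {a}, survival = {survival(r, a):.3f}")
```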
Fingerprint Ridge Density as a Potential Forensic Anthropological Tool for Sex Identification.
Dhall, Jasmine Kaur; Kapoor, Anup Kumar
2016-03-01
In cases of partial or poor print recovery and lack of database/suspect print, fingerprint evidence is generally neglected. In light of such constraints, this study was designed to examine whether ridge density can aid in narrowing down the investigation for sex identification. The study was conducted on the right-hand index digit of 245 males and 246 females belonging to the Punjabis of Delhi region. Five ridge density count areas, namely upper radial, radial, ulnar, upper ulnar, and proximal, were selected and designated. Probability of sex origin was calculated, and stepwise discriminant function analysis was performed to determine the discriminating ability of the selected areas. Females were observed with a significantly higher ridge density than males in all the five areas. Discriminant function analysis and logistic regression exhibited 96.8% and 97.4% accuracy, respectively, in sex identification. Hence, fingerprint ridge density is a potential tool for sex identification, even from partial prints. © 2015 American Academy of Forensic Sciences.
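The "probability of sex origin" idea can be sketched with Bayes' theorem and Gaussian class likelihoods. The means and standard deviations below are invented for illustration and are not the study's sample statistics:

```python
from scipy.stats import norm

# Hypothetical ridge-density summary statistics (ridges per unit area).
mu_f, sd_f = 14.0, 1.5      # females: higher ridge density, as the study found
mu_m, sd_m = 12.0, 1.5      # males

def p_female(ridge_density, prior_f=0.5):
    """Posterior probability of female origin given an observed ridge
    density, via Bayes' theorem with Gaussian class likelihoods."""
    lf = norm.pdf(ridge_density, mu_f, sd_f) * prior_f
    lm = norm.pdf(ridge_density, mu_m, sd_m) * (1 - prior_f)
    return lf / (lf + lm)

for rd in (11, 13, 15):
    print(f"ridge density {rd}: P(female) = {p_female(rd):.2f}")
```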
Detection of content adaptive LSB matching: a game theory approach
NASA Astrophysics Data System (ADS)
Denemark, Tomáš; Fridrich, Jessica
2014-02-01
This paper is an attempt to analyze the interaction between Alice and the Warden in steganography using game theory. We focus on the modern steganographic embedding paradigm based on minimizing an additive distortion function. The strategies of both players comprise the probabilistic selection channel. The Warden is granted knowledge of the payload and the embedding costs, and detects embedding using the likelihood ratio. In particular, the Warden is ignorant about the embedding probabilities chosen by Alice. When adopting a simple multivariate Gaussian model for the cover, the payoff function, in the form of the Warden's detection error, can be numerically evaluated for a mutually independent embedding operation. We demonstrate on the example of a two-pixel cover that the Nash equilibrium is different from the traditional strategy of Alice that minimizes the KL divergence between cover and stego objects under an omnipotent Warden. Practical implications of this case study include computing the per-pixel loss of the Warden's ability to detect embedding due to her ignorance about the selection channel.
Quantifying recent erosion and sediment delivery using probability sampling: A case study
Jack Lewis
2002-01-01
Abstract - Estimates of erosion and sediment delivery have often relied on measurements from locations that were selected to be representative of particular terrain types. Such judgement samples are likely to overestimate or underestimate the mean of the quantity of interest. Probability sampling can eliminate the bias due to sample selection, and it permits the...
Neighbourhood inequalities in health and health-related behaviour: results of selective migration?
van Lenthe, Frank J; Martikainen, Pekka; Mackenbach, Johan P
2007-03-01
We hypothesised that neighbourhood inequalities in health and health-related behaviour are due to selective migration between neighbourhoods. Ten-year follow-up data of 25-74-year-old participants in a Dutch city (Eindhoven) showed an increased probability of both upward and downward migration in 25-34-year-old participants, and in single and divorced participants. Women and those highly educated showed an increased probability of upward migration from the most deprived neighbourhoods; lower educated showed an increased probability of moving downwards. Adjusted for these factors, health and health-related behaviour were weakly associated with migration. Over 10 years of follow-up, selective migration will hardly contribute to neighbourhood inequalities in health and health-related behaviour.
End-to-end distance and contour length distribution functions of DNA helices
NASA Astrophysics Data System (ADS)
Zoli, Marco
2018-06-01
I present a computational method to evaluate the end-to-end and contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule, and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, I derive the room temperature distribution functions as a function of the sequence length. The obtained values for the most probable end-to-end distance and contour length, providing a measure of the global molecule size, are used to examine DNA flexibility at short length scales. It is found that, even in molecules with fewer than ~60 base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.
Hazard Function Estimation with Cause-of-Death Data Missing at Random
Wang, Qihua; Dinse, Gregg E.; Liu, Chunling
2010-01-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874
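A hedged Python sketch of the inverse-probability-weighting idea for kernel hazard estimation (here the missingness probability is taken as known rather than estimated, and the Gaussian kernel and bandwidth are arbitrary choices, not the paper's data-driven selection):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
T = rng.exponential(1.0, n)            # true event times (constant hazard = 1)
C = rng.exponential(2.0, n)            # censoring times
X = np.minimum(T, C)                   # observed times
delta = (T <= C).astype(float)         # censoring indicator

# Make some indicators missing at random, with probability depending on X.
pi = 1 / (1 + np.exp(-(1.5 - X)))      # P(indicator observed | X), assumed known here
xi = (rng.uniform(size=n) < pi).astype(float)

def ipw_kernel_hazard(t, X, delta, xi, pi, h=0.3):
    """IPW kernel hazard estimate at time t: each observed death is
    up-weighted by 1 / P(its indicator is observed)."""
    K = np.exp(-0.5 * ((t - X) / h) ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    at_risk = (X[None, :] >= X[:, None]).sum(axis=1)             # Y(X_i), number at risk
    w = delta * xi / pi                                          # IPW-completed indicator
    return (K * w / at_risk).sum() / h

for t in (0.5, 1.0, 1.5):
    print(f"lambda_hat({t}) = {ipw_kernel_hazard(t, X, delta, xi, pi):.3f}  (true 1.0)")
```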
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Keywords: granular physics, probability density functions, Fourier transforms.
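One such transform pair can be checked numerically: in 2-D, an exponential force-magnitude distribution with isotropic directions yields a Cartesian component density proportional to the modified Bessel function K0. A Monte Carlo verification in Python (the parameter lam and binning are arbitrary choices):

```python
import numpy as np
from scipy.special import k0

# If |f| ~ Exponential(lam) and the direction is uniform on the circle,
# the Cartesian component f_x has density (lam / pi) * K0(lam * |f_x|).
rng = np.random.default_rng(4)
lam = 1.0
f = rng.exponential(1 / lam, 200_000)            # force magnitudes
phi = rng.uniform(0, 2 * np.pi, f.size)          # isotropic directions
fx = f * np.cos(phi)                             # Cartesian component

hist, edges = np.histogram(fx, bins=81, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = lam / np.pi * k0(lam * np.abs(centers) + 1e-12)  # avoid K0(0) singularity
err = np.abs(hist - theory)[np.abs(centers) > 0.1].max()
print("max abs deviation away from the K0 singularity:", round(err, 3))
```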
ERIC Educational Resources Information Center
Maher, Nicole; Muir, Tracey
2014-01-01
This paper reports on one aspect of a wider study that investigated a selection of final year pre-service primary teachers' responses to four probability tasks. The tasks focused on foundational ideas of probability including sample space, independence, variation and expectation. Responses suggested that strongly held intuitions appeared to…
A diffusion model for the fate of tandem gene duplicates in diploids.
O'Hely, Martin
2007-06-01
Suppose one chromosome in one member of a population somehow acquires a duplicate copy of the gene, fully linked to the original gene's locus. Preservation is the event that eventually every chromosome in the population is a descendant of the one which initially carried the duplicate. For a haploid population in which the absence of all copies of the gene is lethal, the probability of preservation has recently been estimated via a diffusion approximation. That approximation is shown to carry over to the case of diploids and arbitrary strong selection against the absence of the gene. The techniques used lead to some new results. In the large population limit, it is shown that the relative probability that descendants of a small number of individuals carrying multiple copies of the gene fix in the population is proportional to the number of copies carried. The probability of preservation is approximated when chromosomes carrying two copies of the gene are subject to additional, fully non-functionalizing mutations, thereby modelling either an additional cost of replicating a longer genome, or a partial duplication of the gene. In the latter case the preservation probability depends only on the mutation rate to null for the duplicated portion of the gene.
Spawning habitat associations and selection by fishes in a flow-regulated prairie river
Brewer, S.K.; Papoulias, D.M.; Rabeni, C.F.
2006-01-01
We used histological features to identify the spawning chronologies of river-dwelling populations of slenderhead darter Percina phoxocephala, suckermouth minnow Phenacobius mirabilis, stonecat Noturus flavus, and red shiner Cyprinella lutrensis and to relate their reproductive status to microhabitat associations. We identified spawning and nonspawning differences in habitat associations resulting from 1 year of field data via logistic regression modeling and identified shifts in microhabitat selection via frequency-of-use and availability histograms. Each species demonstrated different habitat associations between spawning and nonspawning periods. The peak spawning period for slenderhead darters was April to May in high-velocity microhabitats containing cobble. Individuals were associated with similar microhabitats during the postspawn summer and began migrating to deeper habitats in the fall. Most suckermouth minnow spawned from late March through early May in shallow microhabitats. The probability of the presence of these fish in shallow habitats declined postspawn, as fish apparently shifted to deeper habitats. Stonecats conducted prespawn activities in nearshore microhabitats containing large substrates but probably moved to deeper habitats during summer to spawn. Microhabitats with shallow depths containing cobble were associated with the presence of spawning red shiners during the summer. Prespawn fish selected low-velocity microhabitats during the spring, whereas postspawn fish selected habitats similar to the spawning habitat but added a shallow depth component. Hydraulic variables had the most influence on microhabitat models for all of these species, emphasizing the importance of flow in habitat selection by river-dwelling fishes. Histological analyses allowed us to more precisely document the time periods when habitat use is critical to species success. Without evidence demonstrating the functional mechanisms behind habitat associations, protective flows implemented for habitat protection are unlikely to be effective. © Copyright by the American Fisheries Society 2006.
47 CFR 1.1623 - Probability calculation.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 47 (Telecommunication), Federal Communications Commission, General Practice and Procedure; Random Selection Procedures for Mass Media Services, General Procedures; § 1.1623 Probability calculation. (a) All calculations shall be...
Executive dysfunction, brain aging, and political leadership.
Fisher, Mark; Franklin, David L; Post, Jerrold M
2014-01-01
Decision-making is an essential component of executive function, and a critical skill of political leadership. Neuroanatomic localization studies have established the prefrontal cortex as the critical brain site for executive function. In addition to the prefrontal cortex, white matter tracts as well as subcortical brain structures are crucial for optimal executive function. Executive function shows a significant decline beginning at age 60, and this is associated with age-related atrophy of prefrontal cortex, cerebral white matter disease, and cerebral microbleeds. Notably, age-related decline in executive function appears to be a relatively selective cognitive deterioration, generally sparing language and memory function. While an individual may appear to be functioning normally with regard to relatively obvious cognitive functions such as language and memory, that same individual may lack the capacity to integrate these cognitive functions to achieve normal decision-making. From a historical perspective, global decline in cognitive function of political leaders has been alternatively described as a catastrophic event, a slowly progressive deterioration, or a relatively episodic phenomenon. Selective loss of executive function in political leaders is less appreciated, but increased utilization of highly sensitive brain imaging techniques will likely bring greater appreciation to this phenomenon. Former Israeli Prime Minister Ariel Sharon was an example of a political leader with a well-described neurodegenerative condition (cerebral amyloid angiopathy) that creates a neuropathological substrate for executive dysfunction. Based on the known neuroanatomical and neuropathological changes that occur with aging, we should probably assume that a significant proportion of political leaders over the age of 65 have impairment of executive function.
Modeling the effect of reward amount on probability discounting.
Myerson, Joel; Green, Leonard; Morris, Joshua
2011-03-01
The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
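A sketch of the proposed functional form, a hyperboloid in the odds against the reward with an exponent that grows as a power of amount (the parameter values below are illustrative, not the fitted estimates):

```python
def discounted_value(amount, p, h=1.0, s0=0.02, k=0.35):
    """Hyperboloid probability discounting with an amount-dependent
    exponent: V = A / (1 + h * odds)^s, where odds = (1 - p) / p and
    s = s0 * A**k, so larger amounts are discounted more steeply."""
    odds = (1 - p) / p
    s = s0 * amount ** k
    return amount / (1 + h * odds) ** s

for A in (20, 10_000, 10_000_000):
    rel = discounted_value(A, p=0.5) / A      # value relative to the nominal amount
    print(f"A = ${A:>10,}: V/A at p = 0.5 -> {rel:.3f}")
# Relative value falls with amount, reproducing the qualitative amount effect.
```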
Reply to Efford on ‘Integrating resource selection information with spatial capture-recapture’
Royle, Andy; Chandler, Richard; Sun, Catherine C.; Fuller, Angela K.
2014-01-01
3. A key point of Royle et al. (Methods in Ecology and Evolution, 2013, 4) was that active resource selection induces heterogeneity in encounter probability which, if unaccounted for, should bias estimates of population size or density. The models of Royle et al. (Methods in Ecology and Evolution, 2013, 4) and Efford (Methods in Ecology and Evolution, 2014, 000, 000) merely amount to alternative models of resource selection, and hence varying amounts of heterogeneity in encounter probability.
Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.
Saveljev, Vladimir; Kim, Sung-Kyu
2015-10-05
The probability of the moiré effect in LCD displays is estimated as a function of angle based on experimental data; a theoretical function (node spacing) is proposed based on the distance between nodes. The two functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used to minimize the moiré effect in visual displays, especially in autostereoscopic 3D displays.
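For readers unfamiliar with it, Thomae's function is large exactly at simple rationals, which is why it connects naturally to moiré visibility at low-order grating orientations. A small Python sketch (the denominator cap is a numerical stand-in for distinguishing rationals from irrationals):

```python
from fractions import Fraction

def thomae(x, max_den=1000):
    """Thomae's function: f(p/q) = 1/q for rational p/q in lowest terms,
    f(x) = 0 for irrational x (approximated here by a denominator cap)."""
    frac = Fraction(x).limit_denominator(max_den)
    if abs(float(frac) - x) < 1e-12:      # x is (numerically) rational
        return 1.0 / frac.denominator
    return 0.0

# Large values at simple rationals, tiny or zero elsewhere, mirroring why
# the moire effect is most probable at low-order grating orientations.
for x in (0.0, 0.5, 1/3, 2/7, 0.123456789):
    print(f"thomae({x:.9f}) = {thomae(x):.6f}")
```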
Ruddy, Barbara C.; Stevens, Michael R.; Verdin, Kristine
2010-01-01
This report presents a preliminary emergency assessment of the debris-flow hazards from drainage basins burned by the Fourmile Creek fire in Boulder County, Colorado, in 2010. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of debris-flow occurrence and the volumes of debris flows for selected drainage basins. Data for the models include burn severity, rainfall total and intensity for a 25-year-recurrence, 1-hour-duration rainstorm, and topographic and soil property characteristics. Several of the selected drainage basins in Fourmile Creek and Gold Run were identified as having probabilities of debris-flow occurrence greater than 60 percent, and many more with probabilities greater than 45 percent, in response to the 25-year-recurrence, 1-hour rainfall. None of the selected Fourmile Canyon Creek drainage basins had probabilities greater than 45 percent. Throughout the Gold Run area and the Fourmile Creek area upstream from Gold Run, the higher probabilities tend to be in basins with southerly aspects (southeast, south, and southwest slopes). Many basins along the perimeter of the fire area were identified as having a low probability of debris-flow occurrence. Volumes of debris flows predicted from drainage basins with probabilities of occurrence greater than 60 percent ranged from 1,200 to 9,400 m3. The moderately high probabilities and some of the larger volumes predicted for the modeled storm indicate a potential for substantial debris-flow effects on buildings, roads, bridges, culverts, and reservoirs located both within these drainages and immediately downstream from the burned area. However, even small debris flows that affect structures at the basin outlets could cause considerable damage.
Bamdad, M; David, L; Grolière, C A
1995-12-01
A study of the toxicity of epinigericin, an ionophore antibiotic, towards the ciliate Tetrahymena pyriformis showed that this molecule stopped cell division, increased cell volume, and led to a more basic intracellular pH. The action of epinigericin is probably linked to its function as an ionophore. The ionic selectivity of this molecule is still not known. The raising of the intracellular pH of ciliates by this antibiotic may be linked to its toxic action and its ion-transport mechanism in Tetrahymena.
Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, F. E.; Malamud, B. D.
2012-04-01
During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts, and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (A_L) were randomly selected from a three-parameter inverse-gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of A_L and an exponential rollover for small values of A_L; the rollover (maximum probability) occurs at about A_L = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (N_L) selected for each triggered event iteration was chosen to give an average density of 1 landslide km-2, i.e. N_L = 400 landslide areas chosen randomly for each iteration, based on several existing triggered landslide event inventories. A simple road network was chosen, in a 'T' shape configuration: one road of 1 x 4000 cells (5 m x 20 km) joined with another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (A_BL), and number of road blockages (N_BL) recorded. This process was performed 500 times (iterations) in a Monte Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, N_BL, ranges from 0 to 7. The average blockage area over the 500 iterations is about 3000 m2, which closely matches the average landslide area of the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(N_BL), as a function of N_BL follows reasonably well a three-parameter inverse-gamma probability density distribution with an exponential rollover (i.e., the most frequent value) at N_BL = 1.3. In this paper we have begun to calculate the probability of a number of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid in both long-term and disaster management for road networks by allowing probabilistic assessment of potential road network damage during different magnitude landslide triggering event scenarios.
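A condensed Python version of this type of Monte Carlo experiment (the inverse-gamma parameters, circular landslide shapes, and single straight road are simplifying assumptions, not the model's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_landslide_areas(n, rollover=400.0, shape=1.4):
    """Draw landslide areas (m^2) from an inverse-gamma distribution whose
    density has a power-law tail (exponent -(shape + 1) = -2.4) and a
    rollover at small areas. The scale is chosen so the mode sits near
    400 m^2; these are illustrative, not the three fitted parameters."""
    scale = rollover * (shape + 1)        # mode of inverse-gamma = scale / (shape + 1)
    return scale / rng.gamma(shape, 1.0, n)

def one_event(n_landslides=400, region_km=20.0, road_x_km=10.0):
    """Drop circular landslides uniformly on the region; count how many
    overlap a straight north-south road at x = road_x_km."""
    areas = sample_landslide_areas(n_landslides)
    radii = np.sqrt(areas / np.pi)                    # metres
    x = rng.uniform(0, region_km * 1000, n_landslides)
    return (np.abs(x - road_x_km * 1000) < radii).sum()

blocks = np.array([one_event() for _ in range(500)])
print("mean road blocks per event:", blocks.mean(), " max:", blocks.max())
```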
Decision making generalized by a cumulative probability weighting function
NASA Astrophysics Data System (ADS)
dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto
2018-01-01
Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments have confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk-seeking, and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and is supported by phenomenological models.
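Two widely used parametric forms that such a generalized w(p) would need to recover as special cases (the parameter values are conventional illustrative choices, not the paper's):

```python
import numpy as np

def weight_tk(p, gamma=0.61):
    """Tversky-Kahneman (1992) one-parameter weighting function:
    w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1 / gamma)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def weight_prelec(p, alpha=0.65, beta=1.0):
    """Prelec (1998) weighting function: w(p) = exp(-beta * (-ln p)^alpha)."""
    return np.exp(-beta * (-np.log(p)) ** alpha)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p = {p:>5}:  TK w(p) = {weight_tk(p):.3f}   Prelec w(p) = {weight_prelec(p):.3f}")
# Both overweight small probabilities (w(p) > p) and underweight large ones.
```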
Selective binding behavior of humic acid removal by aluminum coagulation.
Jin, Pengkang; Song, Jina; Yang, Lei; Jin, Xin; Wang, Xiaochang C
2018-02-01
The reactivity characteristics of humic acid (HA) with aluminum coagulants at different pH values were investigated. The results revealed that a linear complexation reaction occurred between aluminum and humic acid at pH < 7, and the reaction rate increased as the pH increased from 2 to 6. At pH = 7, most of the dosed aluminum existed in the form of free aluminum and remained unreacted in the presence of HA until the concentration was high enough to trigger Al(OH)3(s) formation. Differentiating the change of functional groups of HA by 1H nuclear magnetic resonance spectroscopy and X-ray photoelectron spectroscopy, the analyses showed that there was selective complexation between HA and Al at lower Al dosages at pH 5, probably due to coordination of the activated functional groups onto aluminum, while almost all components were removed proportionally, without selectivity, by sweep adsorption at pH 7, as well as at higher Al dosages at pH 5. This study provides a promising pathway for analyzing the mechanism of the interaction between HA and metal coagulants in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
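An illustrative reading of the quantification step (the simple product form below is our assumption, not the patented procedure):

```python
def risk_reduction(p_fail, p_mitigate):
    """Quantify risk reduction when a mitigation can intervene before the
    failure limit is reached. Assumed model: residual risk is the
    probability the failure mode reaches its limit AND the mitigation
    fails, i.e. p_fail * (1 - p_mitigate)."""
    baseline = p_fail
    residual = p_fail * (1 - p_mitigate)
    return baseline - residual

# e.g. a failure mode 20% likely to reach its limit, mitigation succeeding 75% of the time
print(f"risk reduced by {risk_reduction(0.20, 0.75):.2f} (absolute probability)")
```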
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Jackson, Andrew
Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function relating to MRI-derived portal venous perfusion values. The global function was fitted to the logarithm of the indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting it to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion translated into a lower local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
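A two-point logistic reconstruction of the local function curve from the quoted anchor values (the paper's fitted sigmoid may differ; indeed this reconstruction gives about 0.09 at 38 mL/(100 g · min), where the paper reports about 0.03, so the published curve is somewhat steeper):

```python
import numpy as np

# Calibrate a logistic local-function curve from two anchors in the abstract:
# f(68.6) = 0.5 and f(98) = 0.9, perfusion in mL/(100 g min).
x0 = 68.6                                   # midpoint: f(x0) = 0.5
slope = (98.0 - x0) / np.log(0.9 / 0.1)     # solves f(98) = 0.9

def local_function(perfusion):
    return 1.0 / (1.0 + np.exp(-(perfusion - x0) / slope))

def global_function(perfusions):
    """Parallel-architecture global function: sum of subunit probabilities."""
    return local_function(np.asarray(perfusions)).sum()

for p in (38.0, 68.6, 98.0):
    print(f"perfusion {p:5.1f} -> local function probability {local_function(p):.3f}")
```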
Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere
2011-01-01
Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004
Impact of high-risk conjunctions on Active Debris Removal target selection
NASA Astrophysics Data System (ADS)
Lidtke, Aleksander A.; Lewis, Hugh G.; Armellin, Roberto
2015-10-01
Space debris simulations show that if current space launches continue unchanged, spacecraft operations might become difficult in the congested space environment. It has been suggested that Active Debris Removal (ADR) might be necessary in order to prevent such a situation. Selection of objects to be targeted by ADR is considered important because removal of non-relevant objects will unnecessarily increase the cost of ADR. One of the factors to be used in this ADR target selection is the collision probability accumulated by every object. This paper shows the impact of high-probability conjunctions on the collision probability accumulated by individual objects as well as the probability of any collision occurring in orbit. Such conjunctions cannot be predicted far in advance and, consequently, not all the objects that will be involved in such dangerous conjunctions can be removed through ADR. Therefore, a debris remediation method that would address such events at short notice, and thus help prevent likely collisions, is suggested.
ERIC Educational Resources Information Center
Yevdokimov, Oleksiy
2009-01-01
This article presents a problem set which includes a selection of probability problems. Probability theory started essentially as an empirical science and developed on the mathematical side later. The problems featured in this article demonstrate diversity of ideas and different concepts of probability, in particular, they refer to Laplace and…
Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei
2016-11-10
Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system employing binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best-path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived, and the performance of the optical mesh network is analyzed in detail. A Monte Carlo simulation is also conducted to verify the results for the average bit error rate (ABER) and outage probability. The numerical results show that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when using the multi-hop parallel network in FSO links. Furthermore, using a larger number of hops and cooperative paths can improve communication quality.
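The best-path selection logic can be sketched by Monte Carlo: each multi-hop path is limited by its weakest hop, and outage requires every parallel path to fail (a decode-and-forward-style simplification; misalignment fading and BPSK detection are omitted, and the turbulence parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def gamma_gamma_samples(alpha, beta, n):
    """Gamma-gamma turbulence: irradiance I = X * Y, with X ~ Gamma(alpha)
    and Y ~ Gamma(beta), each scaled to unit mean."""
    return rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

def outage_probability(n_paths, hops_per_path, threshold,
                       alpha=4.2, beta=1.4, n=200_000):
    """Best-path selection: outage occurs only when the weakest hop of
    every parallel path falls below the irradiance threshold."""
    best = np.zeros(n)
    for _ in range(n_paths):
        hops = np.stack([gamma_gamma_samples(alpha, beta, n)
                         for _ in range(hops_per_path)])
        best = np.maximum(best, hops.min(axis=0))
    return (best < threshold).mean()

for m in (1, 2, 3):
    print(f"{m} parallel path(s): outage ~ {outage_probability(m, 2, 0.3):.4f}")
```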
Ziegler, Andreas; Dohr, Gottfried; Uchanska-Ziegler, Barbara
2002-07-01
Polymorphic genes of the human major histocompatibility complex [MHC; human leukocyte antigen (HLA)] are probably important in determining resistance to parasites and avoidance of inbreeding. We investigated whether HLA-associated sexual selection could also involve HLA-linked olfactory receptor (OR) genes, which might participate not only in olfaction-guided mate choice, but also in selection processes within the testis. The testicular expression status of HLA class I molecules (by immunohistology) and HLA-linked OR genes (by transcriptional analysis) was determined. Various HLA class I heavy chains, but not beta2-microglobulin (beta2m), were expressed, mainly at the spermatocyte I stage. Of 17 HLA-linked OR genes analyzed, eight were found to be transcribed in the testis. They exhibited varying numbers of 5'- or 3'-non-coding exons as well as differential splicing. We suggest that testis-expressed polymorphic HLA and OR proteins are functionally connected and serve the selection of spermatozoa, enabling them to distinguish 'self' from 'non-self' [the sperm-receptor-selection (SRS) hypothesis].
A re-evaluation of a case-control model with contaminated controls for resource selection studies
Christopher T. Rota; Joshua J. Millspaugh; Dylan C. Kesler; Chad P. Lehman; Mark A. Rumble; Catherine M. B. Jachowski
2013-01-01
A common sampling design in resource selection studies involves measuring resource attributes at sample units used by an animal and at sample units considered available for use. Few models can estimate the absolute probability of using a sample unit from such data, but such approaches are generally preferred over statistical methods that estimate a relative probability...
Organic Over-the-Horizon Targeting for the 2025 Surface Fleet
2015-06-01
Acronyms defined in the thesis include Phit (probability of hit), Pk (probability of kill), PLAN (People's Liberation Army Navy), and PMEL (Pacific Marine Environmental Laboratory). The indexed excerpt also references a top-level functional flow block diagram laying out the high-level functions of the project's system of systems, including achieving a probability of hit (Phit).
Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.
Fennell, John; Baddeley, Roland
2012-10-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
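The core mechanism can be sketched in a few lines: treat the stated probability as notional evidence and combine it with a Beta prior; the posterior mean then shows the two classic distortions (the trial count and the uniform prior are illustrative assumptions, not the paper's blog-estimated priors):

```python
def bayesian_weight(p, n=10, a=1.0, b=1.0):
    """Posterior mean of a probability after combining a stated value p
    (treated as k = p * n successes in n notional trials) with a
    Beta(a, b) prior via Bayes' rule."""
    k = p * n
    return (k + a) / (n + a + b)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"stated p = {p:>5}:  w(p) = {bayesian_weight(p):.3f}")
# w(0.01) ~ 0.09 > 0.01 and w(0.99) ~ 0.91 < 0.99: overweighting of small
# probabilities and underweighting of large ones, as in empirical w(p).
```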
1978-03-01
Evaluation of the Three-Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials. AFIT/GAE thesis. The indexed excerpt concerns an equation for the risk of rupture of a unidirectionally laminated composite subjected to pure bending, which the thesis notes can be simplified further.
Statistical analysis of field data for aircraft warranties
NASA Astrophysics Data System (ADS)
Lakey, Mary J.
Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, was also a determining factor in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
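As a rough illustration of the kind of analysis described, maximum likelihood fitting of a failure-time distribution with a goodness-of-fit check can be done with scipy; the Weibull family and the parameter values below are assumptions for demonstration, not the study's equipment families.

```python
import numpy as np
from scipy.stats import weibull_min, kstest

# Hypothetical failure ages (hours); each equipment family in the study
# would get its own fitted distribution.
times = weibull_min.rvs(1.8, scale=1200.0, size=300, random_state=0)

shape, loc, scale = weibull_min.fit(times, floc=0)        # maximum likelihood
print(f"shape={shape:.2f}, scale={scale:.0f}")
print(kstest(times, weibull_min(shape, loc, scale).cdf))  # goodness-of-fit
```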
Awual, Md Rabiul; Yaita, Tsuyoshi; Taguchi, Tomitsugu; Shiwaku, Hideaki; Suzuki, Shinichi; Okamoto, Yoshihiro
2014-08-15
Conjugate materials can provide chemical functionality, enabling the assembly of ligand complexation ability toward metal ions important for applications such as separation and removal devices. In this study, we developed a ligand-immobilized conjugate adsorbent for selective cesium (Cs) removal from wastewater. The adsorbent was synthesized by direct immobilization of dibenzo-24-crown-8 ether onto inorganic mesoporous silica. The effective parameters, such as solution pH, contact time, initial Cs concentration, and ionic strength of Na and K ion concentrations, were evaluated and optimized systematically. The adsorbent exhibited a high surface-area-to-volume ratio and uniformly shaped pore cavities, and its active sites remained openly functional for taking up Cs. The results revealed that the adsorbent had high selectivity toward Cs even in the presence of high concentrations of Na and K, probably due to the Cs-π interaction of the benzene ring. The proposed adsorbent was successfully applied to radioactive Cs removal, making it a potential candidate for Fukushima nuclear wastewater treatment. The adsorbed Cs was eluted with a suitable eluent, and the adsorbent was simultaneously regenerated into its initial form for the next removal operation after rinsing with water. The adsorbent retained functionality over several sorption-elution-regeneration cycles. Copyright © 2014 Elsevier B.V. All rights reserved.
Lagasse, Fabrice; Moreno, Celine; Preat, Thomas; Mery, Frederic
2012-01-01
Memory is a complex and dynamic process that is composed of different phases. Its evolution under natural selection probably depends on a balance between fitness benefits and costs. In Drosophila, two separate forms of consolidated memory phases can be generated experimentally: anaesthesia-resistant memory (ARM) and long-term memory (LTM). In recent years, several studies have focused on the differences between these long-lasting memory types and have found that, at the functional level, ARM and LTM are antagonistic. How this functional relationship will affect their evolutionary dynamics remains unknown. We selected for flies with either improved ARM or improved LTM over several generations, and found that flies selected specifically for improvement of one consolidated memory phase show reduced performance in the other memory phase. We also found that improved LTM was linked to decreased longevity in male flies but not in females. Conversely, males with improved ARM had increased longevity. We found no correlation between either improved ARM or LTM and other phenotypic traits. This is, to our knowledge, the first evidence of a symmetrical evolutionary trade-off between two memory phases for the same learning task. Such trade-offs may have an important impact on the evolution of cognitive capacities. On a neural level, these results support the hypothesis that mechanisms underlying these forms of consolidated memory are, to some degree, antagonistic. PMID:22859595
Martin, Petra; Leighl, Natasha B
2017-06-01
This article considers the use of pretest probability in non-small cell lung cancer (NSCLC) and how its use in EGFR testing has helped establish clinical guidelines on selecting patients for EGFR testing. With an ever-increasing number of molecular abnormalities being identified, and often limited tissue available for testing, the use of pretest probability will need to be considered increasingly in the future when selecting investigations and treatments for patients. In addition, we review new mutations that have the potential to affect clinical practice.
Escribano-Ávila, Gema; Pías, Beatriz; Sanz-Pérez, Virginia; Virgós, Emilio; Escudero, Adrián; Valladares, Fernando
2013-10-01
Seed dispersal is typically performed by a diverse array of species with different behavioral and morphological traits that determine dispersal quality (DQ, defined as the probability of recruitment of a dispersed seed). The fate of ecosystems under ongoing environmental change depends critically on dispersal, and mainly on DQ in novel scenarios. We assess here the DQ, i.e., the multiplicative effect of germination probability and survival probability through the first 3 years of life, for seeds dispersed by several bird species (Turdus spp.) and carnivores (Vulpes vulpes, Martes foina) in mature woodland remnants of Spanish juniper (Juniperus thurifera) and in old fields being colonized by this species. Results showed that DQ was similar in mature woodlands and old fields. Germination rates for seeds dispersed by carnivores (11.5%) and thrushes (9.12%) were similar but interacted with microhabitat suitability. Seeds dispersed by carnivores reached their maximum germination rate under shrubs (16%), whereas seeds dispersed by thrushes did so under female juniper canopies (15.5%), indicating that each group of dispersers performed directed dispersal. This directional effect was diluted when survival probability was considered: thrushes selected smaller seeds, which suffered higher mortality at the seedling stage (70%) relative to seedlings from carnivore-dispersed seeds (40%). Overall, thrushes proved to be low-quality dispersers, providing a probability of recruitment of 2.5%, whereas a seed dispersed by carnivores had a probability of recruitment of 6.5%. Our findings show that generalist dispersers (i.e., carnivores) can provide a higher probability of recruitment than specialized dispersers (i.e., Turdus spp.). However, generalist species are usually opportunistic dispersers, as their role as seed dispersers depends on the availability of trophic resources and on species feeding preferences. As a result, the J. thurifera dispersal community is composed of two functional groups of dispersers: specialized, low-quality but trustworthy dispersers and generalist, high-quality but opportunistic dispersers. The maintenance of both generalist and specialist dispersers in the assemblage assures dispersal services and increases the opportunities for regeneration and colonization of degraded areas under a land-use change scenario.
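The multiplicative DQ calculation reduces to simple arithmetic; the sketch below reconstructs the reported figures from the stage probabilities given in the abstract, taking seedling survival as one minus mortality.

```python
# Dispersal quality as a product of stage probabilities, using the figures
# reported in the abstract (seedling survival = 1 - mortality).
germination = {"thrush": 0.0912, "carnivore": 0.115}
survival = {"thrush": 1 - 0.70, "carnivore": 1 - 0.40}

for disperser in germination:
    dq = germination[disperser] * survival[disperser]
    print(f"{disperser}: DQ ≈ {dq:.3f}")
# thrush: ≈ 0.027 (~2.5% reported); carnivore: ≈ 0.069 (~6.5% reported)
```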
Bayesian network representing system dynamics in risk analysis of nuclear systems
NASA Astrophysics Data System (ADS)
Varuttamaseni, Athi
2011-12-01
A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. The surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure, together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using standard techniques.
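A schematic of the sampling-and-propagation step might look like the following; the surrogate coefficients, input distributions, and temperature-to-damage ramp are all hypothetical placeholders, since the actual relationships come from the ACE fits to RELAP5 output.

```python
import numpy as np

rng = np.random.default_rng(0)

def clad_temperature_surrogate(scram_delay, coolant_temp, pressure):
    """Hypothetical stand-in for the ACE-derived RELAP5 surrogates; the
    coefficients below are placeholders, not fitted values."""
    return (600.0 + 25.0 * scram_delay
            + 0.8 * (coolant_temp - 550.0)
            - 5.0e-6 * (pressure - 15.0e6))

def core_damage_probability(peak_clad_temp, t_low=1200.0, t_high=1477.0):
    """Simple linear ramp linking peak clad temperature to damage probability."""
    return np.clip((peak_clad_temp - t_low) / (t_high - t_low), 0.0, 1.0)

# Sample the uncertain inputs and propagate through the surrogate.
n = 100_000
scram_delay = rng.uniform(0.0, 30.0, n)       # s, assumed range
coolant_temp = rng.normal(550.0, 10.0, n)     # K, assumed distribution
pressure = rng.normal(15.0e6, 0.5e6, n)       # Pa, assumed distribution
temps = clad_temperature_surrogate(scram_delay, coolant_temp, pressure)
print("mean core damage probability:", core_damage_probability(temps).mean())
```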
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
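In the weak-correlation limit the memory functions reduce to normalized two-point correlation functions, so a first-order version of the decomposition can be sketched directly; this toy implementation for binary sequences is an assumption-laden simplification of the full multilinear expansion.

```python
import numpy as np

def memory_function_estimate(seq, order):
    """Weak-correlation sketch of the decomposition: approximate
    P(a_t = 1 | past) ≈ p + sum_r F(r) * (a_{t-r} - p), with the memory
    function F(r) taken as the normalized two-point correlation function."""
    a = np.asarray(seq, dtype=float)
    p, var = a.mean(), a.var()
    F = np.array([np.mean((a[:-r] - p) * (a[r:] - p)) / var
                  for r in range(1, order + 1)])
    def cond_prob(history):
        past = np.asarray(history[-order:], dtype=float)[::-1]  # lag 1 first
        return float(np.clip(p + F[:len(past)] @ (past - p), 0.0, 1.0))
    return cond_prob

# Persistent binary chain: the next symbol repeats with probability 0.7.
rng = np.random.default_rng(1)
seq = [0]
for _ in range(20_000):
    seq.append(seq[-1] if rng.random() < 0.7 else 1 - seq[-1])
cp = memory_function_estimate(seq, order=3)
print(cp([1]), cp([0]))   # ≈ 0.7 and ≈ 0.3 for this chain
```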
Impact of signal scattering and parametric uncertainties on receiver operating characteristics
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.
2017-05-01
The receiver operating characteristic (ROC) curve, which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterizes the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between the selection of suitable distributions for the uncertain parameters and Bayesian adaptive methods for inferring the parameters.
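The tail-raising effect can be reproduced in a few lines by averaging a Gaussian detector's ROC over sampled signal-to-noise states. The log-normal SNR spread and the 64-state environmental ensemble below are illustrative choices echoing the sample size mentioned, not the paper's scattering model.

```python
import numpy as np
from scipy.stats import norm

def roc_with_uncertain_snr(snr_db_mean=10.0, snr_db_sd=3.0, n_env=64, seed=0):
    """Average a Gaussian detector's ROC over sampled environmental states.
    Spread in the mean signal power raises the low-Pfa tail of the curve."""
    rng = np.random.default_rng(seed)
    pfa = np.logspace(-6, 0, 200)               # probability of false alarm
    thresholds = norm.isf(pfa)                  # detector threshold per Pfa
    snr = 10 ** (rng.normal(snr_db_mean, snr_db_sd, n_env) / 10)
    d = np.sqrt(snr)                            # deflection per environment
    pd = norm.sf(thresholds[None, :] - d[:, None]).mean(axis=0)
    return pfa, pd

pfa, pd = roc_with_uncertain_snr()
print(f"Pd at Pfa=1e-6: {pd[0]:.3f}, at Pfa=1: {pd[-1]:.3f}")
```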
Genetic Algorithms and Classification Trees in Feature Discovery: Diabetes and the NHANES database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heredia-Langner, Alejandro; Jarman, Kristin H.; Amidan, Brett G.
2013-09-01
This paper presents a feature selection methodology that can be applied to datasets containing a mixture of continuous and categorical variables. Using a Genetic Algorithm (GA), this method explores a dataset and selects a small set of features relevant for the prediction of a binary (1/0) response. Binary classification trees and an objective function based on conditional probabilities are used to measure the fitness of a given subset of features. The method is applied to health data in order to find factors useful for the prediction of diabetes. Results show that our algorithm is capable of narrowing down the set of predictors to around 8 factors that can be validated using reputable medical and public health resources.
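A compact GA of the kind described can be assembled with scikit-learn; here a public dataset stands in for NHANES, and cross-validated tree accuracy stands in for the authors' conditional-probability objective, so every detail below is an illustrative assumption rather than the paper's implementation.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer  # public stand-in for NHANES
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated tree accuracy for a feature subset (stand-in objective)."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(max_depth=4, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Minimal GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.random((30, n_features)) < 0.2          # sparse initial subsets
for generation in range(20):
    scores = np.array([fitness(ind) for ind in pop])
    winners = [max(rng.choice(len(pop), 3), key=lambda i: scores[i])
               for _ in range(len(pop))]
    parents = pop[winners]
    cross = rng.random(parents.shape) < 0.5
    children = np.where(cross, parents, np.roll(parents, 1, axis=0))
    pop = children ^ (rng.random(children.shape) < 1.0 / n_features)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```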
Experimental and simulation study results of an Adaptive Video Guidance System /AVGS/
NASA Technical Reports Server (NTRS)
Schappell, R. T.; Knickerbocker, R. L.
1975-01-01
Studies relating to stellar-body exploration programs have pointed out the need for an adaptive guidance scheme capable of providing automatic real-time guidance and site-selection capability. For a planetary lander without such guidance, targeting is limited to what are believed to be generally benign areas in order to ensure a reasonable landing-success probability. Typically, the Mars Viking Lander will be jeopardized by obstacles exceeding 22 centimeters in diameter. The benefits of on-board navigation and real-time selection of a landing site with obstacle avoidance have been demonstrated by the Apollo lunar landings, in which man performed the surface sensing and steering functions. Therefore, an Adaptive Video Guidance System (AVGS) has been developed, breadboarded, and flown on a six-degree-of-freedom simulator.
Numerical algebraic geometry for model selection and its application to the life sciences
Gross, Elizabeth; Davis, Brent; Ho, Kenneth L.; Bates, Daniel J.
2016-01-01
Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology. PMID:27733697
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
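The core scoring idea can be sketched compactly: under the true CDF, the transformed order statistics are Beta distributed, which yields a quasi-log-likelihood for any trial CDF. This simplified version omits the paper's sample-size-invariant scaling of the score.

```python
import numpy as np
from scipy.stats import beta, norm

def order_statistic_score(data, trial_cdf):
    """Quasi-log-likelihood from single order statistics: if trial_cdf were
    the true CDF, u_(k) = F(x_(k)) would follow Beta(k, n+1-k). Higher
    scores mean the trial CDF explains the sample better."""
    u = np.clip(trial_cdf(np.sort(data)), 1e-12, 1 - 1e-12)
    n = len(u)
    k = np.arange(1, n + 1)
    return beta.logpdf(u, k, n + 1 - k).sum()

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(order_statistic_score(x, norm(0, 1).cdf))    # true model: higher score
print(order_statistic_score(x, norm(0.5, 1).cdf))  # misspecified: lower score
```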
Identification and analysis of student conceptions used to solve chemical equilibrium problems
NASA Astrophysics Data System (ADS)
Voska, Kirk William
This study identified and quantified chemistry conceptions students use when solving chemical equilibrium problems requiring the application of Le Chatelier's principle, and explored the feasibility of designing a paper-and-pencil test for this purpose. It also demonstrated the utility of conditional probabilities for assessing test quality. A 10-item paper-and-pencil, two-tier diagnostic instrument, the Test to Identify Student Conceptualizations (TISC), was developed and administered to 95 second-semester university general chemistry students after they received regular course instruction concerning equilibrium in homogeneous aqueous, heterogeneous aqueous, and homogeneous gaseous systems. The content validity of TISC was established through a review of TISC by a panel of experts; construct validity was established through semi-structured interviews and conditional probabilities. Nine students were then selected from a stratified random sample for interviews to validate TISC. The probability that TISC correctly identified an answer given by a student in an interview was p = .64, while the probability that TISC correctly identified a reason given by a student in an interview was p = .49. Each TISC item contained two parts. In the first part the student selected the correct answer to a problem from a set of four choices. In the second part students wrote reasons for their answer to the first part. TISC questions were designed to identify students' conceptions concerning the application of Le Chatelier's principle, the constancy of the equilibrium constant, K, and the effect of a catalyst. Eleven prevalent incorrect conceptions were identified. This study found students consistently selected correct answers more frequently (53% of the time) than they provided correct reasons (33% of the time). The association between student answers and respective reasons on each TISC item was quantified using conditional probabilities calculated from logistic regression coefficients. The probability that a student provided correct reasoning (B) when the student selected a correct answer (A) ranged from P(B|A) = .32 to P(B|A) = .82. However, the probability that a student selected a correct answer when they provided correct reasoning ranged from P(A|B) = .96 to P(A|B) = 1. The K-R 20 reliability for TISC was .79.
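The association measure used here is straightforward to reproduce: fit a logistic regression of reason correctness on answer correctness and convert the coefficients to a conditional probability. The item responses below are hypothetical, purely to show the mechanics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical item responses: A = 1 if the answer was correct,
# B = 1 if the written reason was correct (not the study's data).
A = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0])
B = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0])

lr = LogisticRegression().fit(A.reshape(-1, 1), B)
b0, b1 = lr.intercept_[0], lr.coef_[0, 0]
p_B_given_A = 1.0 / (1.0 + np.exp(-(b0 + b1)))  # P(B=1 | A=1) from coefficients
print(round(float(p_B_given_A), 2))
```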
Quantum fluctuation theorems and generalized measurements during the force protocol.
Watanabe, Gentaro; Venkatesh, B Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter
2014-03-01
Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010); Phys. Rev. E 83, 041114 (2011)], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.
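For orientation, the two fluctuation relations whose validity conditions are analyzed have the standard forms below, with β the inverse temperature, ΔF the free-energy difference, and p_F and p_B the forward and backward work probability densities:

```latex
\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}
\quad \text{(Jarzynski equality)}, \qquad
\frac{p_F(W)}{p_B(-W)} = e^{\beta (W - \Delta F)}
\quad \text{(Crooks relation)}.
```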
Sampling considerations for disease surveillance in wildlife populations
Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.
2008-01-01
Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory.
1980-11-01
Quantile functions are advocated by Parzen (1979) as providing an approach to probability-based data analysis. [Only fragments of this record survive; the extract also cites: Gnanadesikan, R. (1968). Probability Plotting Methods for the Analysis of Data, Biometrika, 55, 1-17.]
Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh
2013-01-01
In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the FMRIB automated segmentation tool integrated in the FSL software (FSL-FAST), developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and testing. The SVM, as a powerful discriminator, is able to handle nonlinear classification problems; however, it cannot provide posterior probabilities. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) data generated with the BrainWeb MRI simulator and from real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparison to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to the corresponding ground truth. PMID:24696800
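The sigmoid mapping from SVM margins to probabilities is essentially Platt scaling; a minimal sketch with hypothetical decision values follows (the paper's exact calibration procedure may differ).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_calibrate(svm_scores, labels):
    """Fit a sigmoid P(y=1|s) = 1 / (1 + exp(-(A*s + B))) to decision values,
    the standard way to turn SVM margins into posterior probabilities."""
    lr = LogisticRegression()
    lr.fit(np.asarray(svm_scores).reshape(-1, 1), labels)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]

# Hypothetical LS-SVM decision values for GM (1) vs WM (0) voxels:
scores = np.array([-2.1, -0.3, 0.2, 1.8, 2.5])
labels = np.array([0, 0, 1, 1, 1])
to_prob = platt_calibrate(scores, labels)
print(to_prob(scores).round(3))
```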
Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L
2011-12-01
To develop a modelling framework that can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted to an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1,000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months over the simulation length of 70 years. The incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time for patients with AS. Results obtained from the simulation are plausible.
Investigation of MHD flow structure and fluctuations by potassium lineshape fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauman, L.E.
1993-12-31
Multiple potassium D-line emission absorption spectra from a high temperature, coal-fired flow have been fit to a radiative transfer, boundary layer flow model. The results of fitting spectra from the aerodynamic duct of the Department of Energy Coal-Fired Flow Facility provide information about the thickness and shape of the thermal boundary layer and the bulk potassium seed atom density in a simulated magnetohydrodynamic channel flow. Probability distribution functions for the entire set of more than six thousand spectra clearly indicate the typical values and magnitude of fluctuations for the flow: core temperature of 2538 ± 20 K, near-wall temperature of 1945 ± 135 K, boundary layer width of about 1 cm, and potassium seed atom density of (5.1 ± 0.8) × 10^22 m^-3. Probability distribution functions for selected times during the eight hours of measurements indicate occasional periods of unstable combustion. In addition, broadband particle parameters during the unstable start of the test may be related to differing particle and gas temperatures. The results clearly demonstrate the ability of lineshape fitting to provide valuable data for diagnosing the high speed turbulent flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papastergis, Emmanouil; Giovanelli, Riccardo; Haynes, Martha P.
We use a sample of ≈6000 galaxies detected by the Arecibo Legacy Fast ALFA (ALFALFA) 21 cm survey to measure the clustering properties of H I-selected galaxies. We find no convincing evidence for a dependence of clustering on galactic atomic hydrogen (H I) mass over the range M_HI ≈ 10^8.5-10^10.5 M_☉. We show that previously reported results of weaker clustering for low H I mass galaxies are probably due to finite-volume effects. In addition, we compare the clustering of ALFALFA galaxies with optically selected samples drawn from the Sloan Digital Sky Survey (SDSS). We find that H I-selected galaxies cluster more weakly than even relatively optically faint galaxies when no color selection is applied. Conversely, when SDSS galaxies are split based on their color, we find that the correlation function of blue optical galaxies is practically indistinguishable from that of H I-selected galaxies. At the same time, SDSS galaxies with red colors are found to cluster significantly more than H I-selected galaxies, a fact that is evident in both the projected as well as the full two-dimensional correlation function. A cross-correlation analysis further reveals that gas-rich galaxies 'avoid' being located within ≈3 Mpc of optical galaxies with red colors. Next, we consider the clustering properties of halo samples selected from the Bolshoi ΛCDM simulation. A comparison with the clustering of ALFALFA galaxies suggests that galactic H I mass is not tightly related to host halo mass and that a sizable fraction of subhalos do not host H I galaxies. Lastly, we find that we can recover fairly well the correlation function of H I galaxies by just excluding halos with low spin parameter. This finding lends support to the hypothesis that halo spin plays a key role in determining the gas content of galaxies.
Paolucci, Francesco; Schut, Erik; Beck, Konstantin; Gress, Stefan; Van de Voorde, Carine; Zmora, Irit
2007-04-01
As the share of supplementary health insurance (SI) in health care finance is likely to grow, SI may become an increasingly attractive tool for risk-selection in basic health insurance (BI). In this paper, we develop a conceptual framework to assess the probability that insurers will use SI for favourable risk-selection in BI. We apply our framework to five countries in which risk-selection via SI is feasible: Belgium, Germany, Israel, the Netherlands, and Switzerland. For each country, we review the available evidence of SI being used as selection device. We find that the probability that SI is and will be used for risk-selection substantially varies across countries. Finally, we discuss several strategies for policy makers to reduce the chance that SI will be used for risk-selection in BI markets.
MicroRNA-integrated and network-embedded gene selection with diffusion distance.
Huang, Di; Zhou, Xiaobo; Lyon, Christopher J; Hsueh, Willa A; Wong, Stephen T C
2010-10-29
Gene network information has been used to improve gene selection in microarray-based studies by selecting marker genes based both on their expression and on the coordinated expression of genes within their gene network under a given condition. Here we propose a new network-embedded gene selection model. In this model, we first address the limitations of microarray data. Microarray data, although widely used for gene selection, measure only mRNA abundance, which does not always reflect the ultimate gene phenotype, since it does not account for post-transcriptional effects. To overcome this important (critical in certain cases) limitation, which is ignored in almost all existing studies, we design a new strategy to integrate microarray data with information on microRNA, the major post-transcriptional regulatory factor. We also address the challenges posed by the gene collaboration mechanism. To incorporate the biological facts that genes without direct interactions may work closely due to signal transduction and that two genes may be functionally connected through multiple paths, we adopt the concept of diffusion distance. This concept permits us to simulate biological signal propagation and therefore to estimate the collaboration probability for all gene pairs, directly or indirectly connected, according to the multiple paths connecting them. We demonstrate, using type 2 diabetes (DM2) as an example, that the proposed strategies can enhance the identification of functional gene partners, which is the key issue in a network-embedded gene selection model. More importantly, we show that our gene selection model outperforms related ones. Genes selected by our model 1) have improved classification capability; 2) agree with biological evidence of DM2-association; and 3) are involved in many well-known DM2-associated pathways.
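A minimal sketch of diffusion distance on a gene network: raise the random-walk transition matrix to a power t and compare the resulting distributions row-wise. The toy adjacency matrix and the scale t are assumptions for illustration, not the paper's network.

```python
import numpy as np

def diffusion_distance(adjacency, t=3):
    """Diffusion distance at scale t from a random walk on the network:
    d_t(i, j) = || P^t[i, :] - P^t[j, :] ||, with P the transition matrix.
    Genes without a direct edge can still be close if many short paths
    connect them, which is the point of the construction."""
    A = np.asarray(adjacency, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)
    Pt = np.linalg.matrix_power(P, t)
    diff = Pt[:, None, :] - Pt[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy 5-gene chain network (hypothetical adjacency; self-loops keep rows nonzero):
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1]])
print(diffusion_distance(A, t=3).round(3))
```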
Rao, Uma; Sidhartha, Tanuj; Harker, Karen R.; Bidesi, Anup S.; Chen, Li-Ann; Ernst, Monique
2010-01-01
Purpose: The goal of the study was to assess individual differences in risk-taking behavior among adolescents in the laboratory. A second aim was to evaluate whether the laboratory-based risk-taking behavior is associated with other behavioral and psychological measures associated with risk-taking behavior. Methods: Eighty-two adolescents with no personal history of psychiatric disorder completed a computerized decision-making task, the Wheel of Fortune (WOF). By offering choices between clearly defined probabilities and real monetary outcomes, this task assesses risk preferences when participants are confronted with potential rewards and losses. The participants also completed a variety of behavioral and psychological measures associated with risk-taking behavior. Results: Performance on the task varied based on the probability and anticipated outcomes. In the winning sub-task, participants selected the low probability-high magnitude reward (high-risk choice) less frequently than the high probability-low magnitude reward (low-risk choice). In the losing sub-task, participants selected the low probability-high magnitude loss more often than the high probability-low magnitude loss. On average, the selection of probabilistic rewards was optimal and similar to performance in adults. There were, however, individual differences in performance, and one-third of the adolescents made the high-risk choice more frequently than the low-risk choice while selecting a reward. After controlling for sociodemographic and psychological variables, high-risk choice on the winning task predicted "real-world" risk-taking behavior and substance-related problems. Conclusions: These findings highlight individual differences in risk-taking behavior. Preliminary data on the face validity of the WOF task suggest that it might be a valuable laboratory tool for studying behavioral and neurobiological processes associated with risk-taking behavior in adolescents. PMID:21257113
Arístegui, Javier; Gasol, Josep M.; Herndl, Gerhard J.
2012-01-01
We analyzed the regional distribution of bulk heterotrophic prokaryotic activity (leucine incorporation) and selected single-cell parameters (cell viability and nucleic acid content) as parameters for microbial functioning, as well as bacterial and archaeal community structure in the epipelagic (0 to 200 m) and mesopelagic (200 to 1,000 m) subtropical Northeast Atlantic Ocean. We selectively sampled three contrasting regions covering a wide range of surface productivity and oceanographic properties within the same basin: (i) the eddy field south of the Canary Islands, (ii) the open-ocean NE Atlantic Subtropical Gyre, and (iii) the upwelling filament off Cape Blanc. In the epipelagic waters, a high regional variation in hydrographic parameters and bacterial community structure was detected, accompanied, however, by a low variability in microbial functioning. In contrast, mesopelagic microbial functioning was highly variable between the studied regions despite the homogeneous abiotic conditions found therein. More microbial functioning parameters indicated differences among the three regions within the mesopelagic (i.e., viability of cells, nucleic acid content, cell-specific heterotrophic activity, nanoflagellate abundance, prokaryote-to-nanoflagellate abundance ratio) than within the epipelagic (i.e., bulk activity, nucleic acid content, and nanoflagellate abundance) waters. Our results show that the mesopelagic realm in the Northeast Atlantic is, in terms of microbial activity, more heterogeneous than its epipelagic counterpart, probably linked to mesoscale hydrographical variations. PMID:22344670
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios, which can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled with probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, which is designed to produce user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth of the kernel function on the basis of the input parameters and the shapefile geometry, and it can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
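The kernel-density step described can be sketched as follows; the grid, bandwidths, weights, and synthetic vent and fissure locations are all placeholders, and the final non-homogeneous Poisson modeling step is omitted.

```python
import numpy as np

def gaussian_kernel_pdf(grid_xy, events_xy, bandwidth):
    """2-D Gaussian kernel density over a grid from past event locations,
    normalized so the values sum to 1 (a spatial probability per cell)."""
    d2 = ((grid_xy[:, None, :] - events_xy[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * bandwidth ** 2)).sum(1)
    return k / k.sum()

rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), -1).reshape(-1, 2)
vents = rng.uniform(0, 10, (20, 2))       # hypothetical past vents
fissures = rng.uniform(0, 10, (10, 2))    # hypothetical eruptive fissures

# Weighted summation of per-dataset PDFs into a total susceptibility map.
susceptibility = 0.7 * gaussian_kernel_pdf(grid, vents, 1.0) \
               + 0.3 * gaussian_kernel_pdf(grid, fissures, 1.5)
print(susceptibility.sum())  # ≈ 1
```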
Reproductive maturation and senescence in the female brown bear
Schwartz, Charles C.; Keating, Kim A.; Reynolds III, Harry V.; Barnes, Victor G.; Sellers, Richard A.; Swenson, J.E.; Miller, Sterling D.; McLellan, B.N.; Keay, Jeffrey A.; McCann, Robert; Gibeau, Michael; Wakkinen, Wayne F.; Mace, Richard D.; Kasworm, Wayne; Smith, Rodger; Herrero, Steven
2003-01-01
Changes in age-specific reproductive rates can have important implications for managing populations, but the number of female brown (grizzly) bears (Ursus arctos) observed in any one study is usually inadequate to quantify such patterns, especially for older females and in hunted areas. We examined patterns of reproductive maturation and senescence in female brown bears by combining data from 20 study areas from Sweden, Alaska, Canada, and the continental United States. We assessed reproductive performance based on 4,726 radiocollared years for free-ranging female brown bears (age ≥3); 482 of these were for bears ≥20 years of age. We modeled the age-specific probability of litter production using extreme value distributions to describe probabilities for the young- and old-age classes, and a power distribution function to describe probabilities for prime-aged animals. We then fit 4 models to pooled observations from our 20 study areas. We used Akaike's Information Criterion (AIC) to select the best model. Inflection points suggest that major shifts in litter production occur at 4-5 and 28-29 years of age. The estimated model asymptote (0.332, 95% CI = 0.319-0.344) was consistent with the expected reproductive cycle of a cub litter every 3 years (0.333). We discuss assumptions and biases in data collection relative to the shape of the model curve. Our results conform to senescence theory and suggest that the female age structure in contemporary brown bear populations is considerably younger than would be expected in the absence of modern man. This implies that selective pressures today differ from those that influenced brown bear evolution.
Evans, T M; LeChevallier, M W; Waarvick, C E; Seidler, R J
1981-01-01
The species of total coliform bacteria isolated from drinking water and untreated surface water by the membrane filter (MF), the standard most-probable-number (S-MPN), and modified most-probable-number (M-MPN) techniques were compared. Each coliform detection technique selected for a different profile of coliform species from both types of water samples. The MF technique indicated that Citrobacter freundii was the most common coliform species in water samples. However, the fermentation tube techniques displayed selectivity towards the isolation of Escherichia coli and Klebsiella. The M-MPN technique selected for more C. freundii and Enterobacter spp. from untreated surface water samples and for more Enterobacter and Klebsiella spp. from drinking water samples than did the S-MPN technique. The lack of agreement between the number of coliforms detected in a water sample by the S-MPN, M-MPN, and MF techniques was a result of the selection for different coliform species by the various techniques. PMID:7013706
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
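To illustrate the third interval type, a bare-bones random-walk Metropolis sampler (far simpler than the DREAM algorithm UCODE_2014 uses) shows how credible intervals fall out of posterior samples; the toy Gaussian posterior below is an assumption for demonstration.

```python
import numpy as np

def metropolis(log_post, x0, steps=50_000, scale=0.5, seed=0):
    """Minimal random-walk Metropolis sampler (not DREAM), illustrating
    how MCMC samples yield Bayesian credible intervals."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((steps, x.size))
    for i in range(steps):
        proposal = x + rng.normal(0.0, scale, x.size)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = proposal, lp_prop
        chain[i] = x
    return chain

log_post = lambda th: -0.5 * np.sum(((th - 2.0) / 0.3) ** 2)  # toy posterior
chain = metropolis(log_post, x0=[0.0])[10_000:]               # drop burn-in
print(np.percentile(chain, [2.5, 97.5]))  # ~95% credible interval around 2.0
```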
Reranking candidate gene models with cross-species comparison for improved gene prediction
Liu, Qian; Crammer, Koby; Pereira, Fernando CN; Roos, David S
2008-01-01
Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models. PMID:18854050
Radial Domany-Kinzel models with mutation and selection
NASA Astrophysics Data System (ADS)
Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.
2013-01-01
We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.
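The survival-probability measurement for a linear (flat-front) expansion can be sketched with a biased voter model, a member of the same directed-percolation class the paper discusses; the lattice size, selective advantage, and trial count below are illustrative assumptions, and the radial (inflating) geometry is not modeled here.

```python
import numpy as np

def mutant_survival_probability(width=200, generations=500, s=0.05,
                                trials=200, seed=0):
    """Biased voter model on a line: each site copies its left or right
    neighbor each generation, with mutant parents favored by advantage s.
    Returns the fraction of trials in which a single mutant survives."""
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(trials):
        front = np.zeros(width, dtype=bool)
        front[width // 2] = True                   # one advantageous mutant
        for _ in range(generations):
            left, right = np.roll(front, 1), np.roll(front, -1)
            p_left = np.full(width, 0.5)
            p_left[left & ~right] = 0.5 * (1 + s)  # mutant on the left favored
            p_left[~left & right] = 0.5 * (1 - s)  # mutant on the right favored
            front = np.where(rng.random(width) < p_left, left, right)
            if not front.any():
                break
        survived += bool(front.any())
    return survived / trials

print(mutant_survival_probability())
```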
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
Sexual selection for indicators of intelligence.
Miller, G
2000-01-01
Many traits in many species have evolved through sexual selection specifically to function as 'fitness indicators' that reveal good genes and good health. Sexually selected fitness indicators typically show (1) higher coefficients of phenotypic and genetic variation than survival traits, (2) at least moderate genetic heritabilities and (3) positive correlations with many aspects of an animal's general condition, including body size, body symmetry, parasite resistance, longevity and freedom from deleterious mutations. These diagnostic criteria also appear to describe human intelligence (the g factor). This paper argues that during human evolution, mate choice by both sexes focused increasingly on intelligence as a major heritable component of biological fitness. Many human-specific behaviours (such as conversation, music production, artistic ability and humour) may have evolved principally to advertise intelligence during courtship. Though these mental adaptations may be modular at the level of psychological functioning, their efficiencies may be tightly intercorrelated because they still tap into common genetic and neurophysiological variables associated with fitness itself. Although the g factor (like the superordinate factor of fitness itself) probably exists in all animal species, humans evolved an unusually high degree of interest in assessing each other's intelligence during courtship and other social interactions--and, consequently, a unique suite of highly g-loaded mental adaptations for advertising their intelligence to one another through linguistic and cultural interaction. This paper includes nine novel, testable predictions about human intelligence derived from sexual selection theory.
Optimal Sampling to Provide User-Specific Climate Information.
NASA Astrophysics Data System (ADS)
Panturat, Suwanna
The types of weather-related world problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes that lead to acid rain in the northeastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks that are based on customer value systems, and at abstracting from data sets the information that is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production, and (iii) the AGEHYD streamflow model selected as 'a black box' for reservoir management. A state-of-the-art nonlinear programming (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of various sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of a Type I error (alpha), the probability of a Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of ambient sulfur dioxide to less than a certain percentage of the mean is found; and finally, the policy of maintaining pre-storm flood pools at selected levels is examined given information from the optimal sampling network as defined by the study.
Miklós, István; Zádori, Zoltán
2012-02-01
HD amino acid duplex has been found in the active center of many different enzymes. The dyad plays remarkably different roles in their catalytic processes that usually involve metal coordination. An HD motif is positioned directly on the amyloid beta fragment (Aβ) and on the carboxy-terminal region of the extracellular domain (CAED) of the human amyloid precursor protein (APP) and a taxonomically well defined group of APP orthologues (APPOs). In human Aβ HD is part of a presumed, RGD-like integrin-binding motif RHD; however, neither RHD nor RXD demonstrates reasonable conservation in APPOs. The sequences of CAEDs and the position of the HD are not particularly conserved either, yet we show with a novel statistical method using evolutionary modeling that the presence of HD on CAEDs cannot be the result of neutral evolutionary forces (p<0.0001). The motif is positively selected along the evolutionary process in the majority of APPOs, despite the fact that HD motif is underrepresented in the proteomes of all species of the animal kingdom. Position migration can be explained by high probability occurrence of multiple copies of HD on intermediate sequences, from which only one is kept by selective evolutionary forces, in a similar way as in the case of the "transcription binding site turnover." CAED of all APP orthologues and homologues are predicted to bind metal ions including Amyloid-like protein 1 (APLP1) and Amyloid-like protein 2 (APLP2). Our results suggest that HDs on the CAEDs are most probably key components of metal-binding domains, which facilitate and/or regulate inter- or intra-molecular interactions in a metal ion-dependent or metal ion concentration-dependent manner. The involvement of naturally occurring mutations of HD (Tottori (D7N) and English (H6R) mutations) in early onset Alzheimer's disease gives additional support to our finding that HD has an evolutionary preserved function on APPOs.
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
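To make the product-form structure concrete, here is a minimal numerical sketch. The survival function (a contest-style ratio x/(x+y)), the exponential cost terms, and the cost parameters are all illustrative assumptions, not the authors' model; the point is only that product-form utilities give first-order conditions that determine a Nash Equilibrium directly.

```python
# Hedged sketch, assuming product-form utilities with a contest-style
# survival probability p(x, y) = x / (x + y), where x = components
# reinforced and y = components attacked (treated as continuous):
#   U_provider = p(x, y) * exp(-c_d * x)
#   U_attacker = (1 - p(x, y)) * exp(-c_a * y)
# Setting d(ln U)/dx = 0 and d(ln U)/dy = 0 yields simple first-order
# Nash Equilibrium conditions, solved numerically below.
from scipy.optimize import fsolve

c_d, c_a = 0.5, 0.8   # assumed unit costs of reinforcing / attacking

def first_order_conditions(v):
    x, y = v
    return [y / (x * (x + y)) - c_d,   # provider: d(ln U_provider)/dx = 0
            x / (y * (x + y)) - c_a]   # attacker: d(ln U_attacker)/dy = 0

x_star, y_star = fsolve(first_order_conditions, [1.0, 1.0])
print(f"NE: x* = {x_star:.3f} reinforced, y* = {y_star:.3f} attacked, "
      f"survival probability = {x_star / (x_star + y_star):.3f}")
```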
Phytoplankton global mapping from space with a support vector machine algorithm
NASA Astrophysics Data System (ADS)
de Boissieu, Florian; Menkes, Christophe; Dupouy, Cécile; Rodier, Martin; Bonnet, Sophie; Mangeas, Morgan; Frouin, Robert J.
2014-11-01
In recent years great progress has been made in global mapping of phytoplankton from space. Two main trends have emerged, the recognition of phytoplankton functional types (PFT) based on reflectance normalized to chlorophyll-a concentration, and the recognition of phytoplankton size class (PSC) based on the relationship between cell size and chlorophyll-a concentration. However, PFTs and PSCs are not decorrelated, and one approach can complement the other in a recognition task. In this paper, we explore the recognition of several dominant PFTs by combining reflectance anomalies, chlorophyll-a concentration and other environmental parameters, such as sea surface temperature and wind speed. Remote sensing pixels are labeled thanks to coincident in-situ pigment data from the GeP&CO, NOMAD and MAREDAT datasets, covering various oceanographic environments. The recognition is made with a supervised Support Vector Machine classifier trained on the labeled pixels. This algorithm enables a non-linear separation of the classes in the input space and is especially adapted for small training datasets such as those available here. Moreover, it provides a class probability estimate, allowing one to enhance the robustness of the classification results through the choice of a minimum probability threshold. A greedy feature selection associated with a 10-fold cross-validation procedure is applied to select the most discriminative input features and evaluate the classification performance. The best classifiers are finally applied on daily remote sensing datasets (SeaWIFS, MODISA) and the resulting dominant PFT maps are compared with other studies. Several conclusions are drawn: (1) the feature selection highlights the weight of the temperature, chlorophyll-a and wind speed variables in phytoplankton recognition; (2) the classifiers show good results and dominant PFT maps in agreement with phytoplankton distribution knowledge; (3) classification on MODISA data seems to perform better than on SeaWIFS data; and (4) the probability threshold correctly screens the areas of smallest confidence, such as the interclass regions.
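For readers who want the shape of this pipeline, the sketch below reproduces its three ingredients on synthetic data: an SVM with class-probability output, greedy forward feature selection scored by 10-fold cross-validation, and a minimum-probability threshold. The feature names and the 0.6 threshold are placeholder assumptions, not values from the study.

```python
# Hedged sketch of the classification pipeline described above; data and
# feature names are placeholders, not the GeP&CO/NOMAD/MAREDAT records.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
features = ["chl_anomaly", "sst", "wind_speed", "chl", "par", "salinity"]

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:                                  # greedy forward selection
    scores = {f: cross_val_score(SVC(kernel="rbf"), X[:, selected + [f]],
                                 y, cv=10).mean() for f in remaining}
    f, s = max(scores.items(), key=lambda kv: kv[1])
    if s <= best_score:                           # stop when no feature helps
        break
    selected.append(f); remaining.remove(f); best_score = s
print("selected:", [features[i] for i in selected], "CV accuracy:", best_score)

# Class-probability output with a minimum-probability threshold (assumed 0.6):
clf = SVC(kernel="rbf", probability=True).fit(X[:, selected], y)
proba = clf.predict_proba(X[:, selected])
confident = proba.max(axis=1) >= 0.6
print("confident classifications:", confident.sum(), "of", len(confident))
```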
Ferrante, Oscar; Patacca, Alessia; Di Caro, Valeria; Della Libera, Chiara; Santandrea, Elisa; Chelazzi, Leonardo
2018-05-01
The cognitive system has the capacity to learn and make use of environmental regularities - known as statistical learning (SL), including for the implicit guidance of attention. For instance, it is known that attentional selection is biased according to the spatial probability of targets; similarly, changes in distractor filtering can be triggered by the unequal spatial distribution of distractors. Open questions remain regarding the cognitive/neuronal mechanisms underlying SL of target selection and distractor filtering. Crucially, it is unclear whether the two processes rely on shared neuronal machinery, with unavoidable cross-talk, or whether they are fully independent, an issue that we directly addressed here. In a series of visual search experiments, participants had to discriminate a target stimulus, while ignoring a task-irrelevant salient distractor (when present). We systematically manipulated spatial probabilities of either one or the other stimulus, or both. We then measured performance to evaluate the direct effects of the applied contingent probability distribution (e.g., effects on target selection of the spatial imbalance in target occurrence across locations) as well as its indirect or "transfer" effects (e.g., effects of the same spatial imbalance on distractor filtering across locations). By this approach, we confirmed that SL of both target and distractor location implicitly biases attention. Most importantly, we described substantial indirect effects, with the unequal spatial probability of the target affecting filtering efficiency and, vice versa, the unequal spatial probability of the distractor affecting target selection efficiency across locations. The observed cross-talk demonstrates that SL of target selection and distractor filtering are instantiated via (at least partly) shared neuronal machinery, as further corroborated by strong correlations between direct and indirect effects at the level of individual participants. Our findings are compatible with the notion that both kinds of SL adjust the priority of specific locations within attentional priority maps of space. Copyright © 2017 Elsevier Ltd. All rights reserved.
Formal properties of the probability of fixation: identities, inequalities and approximations.
McCandlish, David M; Epstein, Charles L; Plotkin, Joshua B
2015-02-01
The formula for the probability of fixation of a new mutation is widely used in theoretical population genetics and molecular evolution. Here we derive a series of identities, inequalities and approximations for the exact probability of fixation of a new mutation under the Moran process (equivalent results hold for the approximate probability of fixation under the Wright-Fisher process, after an appropriate change of variables). We show that the logarithm of the fixation probability has particularly simple behavior when the selection coefficient is measured as a difference of Malthusian fitnesses, and we exploit this simplicity to derive inequalities and approximations. We also present a comprehensive comparison of both existing and new approximations for the fixation probability, highlighting those approximations that induce a reversible Markov chain when used to describe the dynamics of evolution under weak mutation. To demonstrate the power of these results, we consider the classical problem of determining the total substitution rate across an ensemble of biallelic loci and prove that, at equilibrium, a strict majority of substitutions are due to drift rather than selection. Copyright © 2014 Elsevier Inc. All rights reserved.
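As a concrete anchor for the quantities discussed, the following sketch evaluates the standard exact fixation probability of a single new mutant under the Moran process with constant relative fitness r, taking r = e^s with s a difference of Malthusian fitnesses as in the abstract; the large-N limit 1 - e^(-s) for beneficial mutants illustrates the kind of simple form such identities take. Parameter values are assumptions for illustration, and the paper's specific inequalities are not reproduced.

```python
# Minimal sketch: exact Moran fixation probability of one new mutant with
# relative fitness r, rho = (1 - 1/r) / (1 - r**-N); neutral limit 1/N.
import numpy as np

def moran_fixation(r, N):
    if r == 1.0:
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - r ** -N)

N = 1000
for s in (0.001, 0.005, 0.01, 0.05):
    r = np.exp(s)                  # s = difference of Malthusian fitnesses
    exact = moran_fixation(r, N)
    large_N = 1.0 - np.exp(-s)     # large-N limit for a beneficial mutant
    print(f"s={s:.3f}  exact={exact:.4e}  1-exp(-s)={large_N:.4e}")
```

Note how the two columns converge once Ns is large, which is the regime where the logarithm of the fixation probability behaves simply in Malthusian terms.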
A Time-Dependent Quantum Dynamics Study of the H2 + CH3 yields H + CH4 Reaction
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
We present a time-dependent wave-packet propagation calculation for the H2 + CH3 yields H + CH4 reaction in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented in this study. The cumulative reaction probability (CRP) is obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation to account for the contribution of degrees of freedom missing in the 6D calculation is employed to obtain an approximate full-dimensional CRP. The thermal rate constant is compared with different experimental results.
Streamflow characteristics at streamgages in northern Afghanistan and selected locations
Olson, Scott A.; Williams-Sether, Tara
2010-01-01
Statistical summaries of streamflow data for 79 historical streamgages in Northern Afghanistan and other selected historical streamgages are presented in this report. The summaries for each streamgage include (1) station description, (2) graph of the annual mean discharge for the period of record, (3) statistics of monthly and annual mean discharges, (4) monthly and annual flow duration, (5) probability of occurrence of annual high discharges, (6) probability of occurrence of annual low discharges, (7) probability of occurrence of seasonal low discharges, (8) annual peak discharges for the period of record, and (9) monthly and annual mean discharges for the period of record.
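As an illustration of the probability-of-occurrence summaries listed above, the sketch below assigns annual exceedance probabilities to a series of annual peak discharges using the Weibull plotting position P = m/(n + 1), where m is the rank of the peak (largest = 1); the peak values are invented, not data from the report.

```python
# Hedged sketch: empirical annual exceedance probabilities from annual
# peak discharges via the Weibull plotting position. Values are synthetic.
import numpy as np

peaks = np.array([412., 380., 298., 515., 244., 391., 467., 330., 286., 450.])
order = np.argsort(peaks)[::-1]                  # indices, descending peaks
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(peaks) + 1)      # rank 1 = largest peak
exceed_prob = ranks / (len(peaks) + 1.0)         # Weibull plotting position
for q, p in sorted(zip(peaks, exceed_prob), reverse=True):
    print(f"peak {q:6.1f} m^3/s  annual exceedance probability {p:.2f}")
```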
Martin, Petra; Leighl, Natasha B.
2017-01-01
This article considers the use of pretest probability in non-small cell lung cancer (NSCLC) and how its use in EGFR testing has helped establish clinical guidelines on selecting patients for EGFR testing. With an ever-increasing number of molecular abnormalities being identified and often limited tissue available for testing, the use of pretest probability will need to be increasingly considered in the future for selecting investigations and treatments in patients. In addition we review new mutations that have the potential to affect clinical practice. PMID:28607579
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. The two methods give effectively identical results, providing a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
Vulnerability of manned spacecraft to crew loss from orbital debris penetration
NASA Technical Reports Server (NTRS)
Williamsen, J. E.
1994-01-01
Orbital debris growth threatens the survival of spacecraft systems from impact-induced failures. Whereas the probability of debris impact and spacecraft penetration may currently be calculated, another parameter of great interest to safety engineers is the probability that debris penetration will cause actual spacecraft or crew loss. Quantifying the likelihood of crew loss following a penetration allows spacecraft designers to identify those design features and crew operational protocols that offer the highest improvement in crew safety for available resources. Within this study, a manned spacecraft crew survivability (MSCSurv) computer model is developed that quantifies the conditional probability of losing one or more crew members, P_loss/pen, following the remote likelihood of an orbital debris penetration into an eight-module space station. Contributions to P_loss/pen are quantified from three significant penetration-induced hazards: pressure wall rupture (explosive decompression), fragment-induced injury, and 'slow' depressurization. Sensitivity analyses are performed using alternate assumptions for hazard-generating functions, crew vulnerability thresholds, and selected spacecraft design and crew operations parameters. These results are then used to recommend modifications to the spacecraft design and expected crew operations that quantitatively increase crew safety from orbital debris impacts.
Modeling uncertainty in producing natural gas from tight sands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chermak, J.M.; Dahl, C.A.; Patrick, R.H
1995-12-31
Since accurate geologic, petroleum engineering, and economic information are essential ingredients in making profitable production decisions for natural gas, we combine these ingredients in a dynamic framework to model natural gas reservoir production decisions. We begin with the certainty case before proceeding to consider how uncertainty might be incorporated in the decision process. Our production model uses dynamic optimal control to combine economic information with geological constraints to develop optimal production decisions. To incorporate uncertainty into the model, we develop probability distributions on geologic properties for the population of tight gas sand wells and perform a Monte Carlo study to select a sample of wells. Geological production factors, completion factors, and financial information are combined into the hybrid economic-petroleum reservoir engineering model to determine the optimal production profile, initial gas stock, and net present value (NPV) for an individual well. To model the probability of the production abandonment decision, the NPV data is converted to a binary dependent variable. A logit model is used to model this decision as a function of the above geological and economic data to give probability relationships. Additional ways to incorporate uncertainty into the decision process include confidence intervals and utility theory.
Quantum Dynamics Study of the Isotopic Effect on Capture Reactions: HD, D2 + CH3
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
Time-dependent wave-packet-propagation calculations are reported for the isotopic reactions, HD + CH3 and D2 + CH3, in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented in this study. This study shows that excitation of HD (D2) enhances reactivity, whereas excitation of the CH3 umbrella mode has the opposite effect. This is consistent with the H2 + CH3 reaction. The comparison of these three isotopic reactions also shows the isotopic effects in the initial-state-selected reaction probabilities. The cumulative reaction probabilities (CRPs) are obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation to account for the contribution of degrees of freedom missing in the six-dimensional calculation is employed to obtain approximate full-dimensional CRPs. The rate constant comparison shows that the H2 + CH3 reaction has the highest reactivity, followed by HD + CH3, with D2 + CH3 the lowest.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form c x^k (1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
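The linearity result has a familiar conjugate-family reading. As a hedged illustration with generic symbols (a, b, n, k are not the paper's notation): with a Beta(a, b) prior on the rate and binomially distributed jump counts, the posterior mean, which is the MMSE estimate, is affine in the observed count.

```python
# Illustrative sketch of why the MMSE estimate is linear in this setting:
# Beta(a, b) prior on the rate, k jumps observed in n slots (binomial)
# gives posterior Beta(a + k, b + n - k), whose mean is affine in k.
def mmse_rate_estimate(a, b, n, k):
    """Posterior mean E[rate | k jumps in n slots] = (a + k) / (a + b + n)."""
    return (a + k) / (a + b + n)

a, b, n = 2.0, 5.0, 20
for k in (0, 5, 10, 20):
    print(k, mmse_rate_estimate(a, b, n, k))   # increases linearly in k
```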
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
The statistical properties of the phase difference of oscillations in speckle fields at two points in the far-field diffraction region are investigated for different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of the numerical experiments. (Laser applications and other topics in quantum electronics.)
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
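The word-error computation referred to here is typically the bounded-distance decoding sum: for an (n, k) RS code correcting t symbol errors with input symbol error probability p, P_w = sum_{i=t+1}^{n} C(n, i) p^i (1 - p)^(n-i). The sketch below evaluates it for the familiar deep-space (255, 223) code with t = 16; the p values are illustrative assumptions rather than outputs of this analysis.

```python
# Hedged sketch: RS word error probability under bounded-distance decoding,
# for the (255, 223) code correcting t = 16 symbol errors.
from math import comb

def rs_word_error(p, n=255, t=16):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

for p in (0.001, 0.005, 0.01, 0.02):
    print(f"input symbol error p={p}: word error {rs_word_error(p):.3e}")
```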
Evaluation of statistical distributions to analyze the pollution of Cd and Pb in urban runoff.
Toranjian, Amin; Marofi, Safar
2017-05-01
Heavy metal pollution in urban runoff causes severe environmental damage. Identification of these pollutants and their statistical analysis is necessary to provide management guidelines. In this study, 45 continuous probability distribution functions were selected to fit the Cd and Pb data in the runoff events of an urban area during October 2014-May 2015. The sampling was conducted at the outlet of the city basin during seven precipitation events. For evaluation and ranking of the functions, we used the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests. The results of the Cd analysis showed that the Hyperbolic Secant, Wakeby and Log-Pearson 3 (LP3) functions are suitable for frequency analysis of the event mean concentration (EMC), the instantaneous concentration series (ICS) and the instantaneous concentration of each event (ICEE), respectively. In addition, the LP3, Wakeby and Generalized Extreme Value functions were chosen for the EMC, ICS and ICEE related to Pb contamination.
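A minimal sketch of this fit-and-rank procedure follows, using a handful of scipy.stats candidates and the Kolmogorov-Smirnov statistic (the study also used Anderson-Darling and a far larger candidate set); the concentration series is synthetic, not the Cd or Pb data.

```python
# Hedged sketch: fit candidate distributions to a concentration series and
# rank them by the Kolmogorov-Smirnov statistic (smaller = better fit).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
conc = rng.lognormal(mean=-1.0, sigma=0.6, size=60)   # synthetic series

candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma,
              "pearson3": stats.pearson3, "genextreme": stats.genextreme}
ranking = []
for name, dist in candidates.items():
    params = dist.fit(conc)                           # maximum likelihood fit
    ks = stats.kstest(conc, name, args=params).statistic
    ranking.append((ks, name))
for ks, name in sorted(ranking):
    print(f"{name:10s} KS statistic {ks:.3f}")
```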
Hwang, Cheng-An
2009-05-01
The objectives of this study were to examine and model the probability of growth of Listeria monocytogenes in cooked salmon containing salt and a smoke (phenol) compound and stored at various temperatures. A growth probability model was developed, and the model was compared to a model developed from tryptic soy broth (TSB) to assess the possibility of using TSB as a substitute for salmon. A 6-strain mixture of L. monocytogenes was inoculated into minced cooked salmon and TSB containing 0-10% NaCl and 0-34 ppm phenol to levels of 10^2-10^3 cfu/g, and the samples were vacuum-packed and stored at 0-25 degrees C for up to 42 days. A total of 32 treatments, each with 16 samples, selected by central composite designs were tested. Logistic regression was used to model the probability of growth of L. monocytogenes as a function of the concentrations of salt and phenol and the storage temperature. The resulting models showed that the probabilities of growth of L. monocytogenes in both salmon and TSB decreased when the salt and/or phenol concentrations increased, and at lower storage temperatures. In general, the growth probabilities of L. monocytogenes were affected more profoundly by salt and storage temperature than by phenol. The growth probabilities of L. monocytogenes estimated by the TSB model were higher than those by the salmon model at the same salt/phenol concentrations and storage temperatures. The growth probabilities predicted by the salmon and TSB models were comparable at higher storage temperatures, indicating that TSB may be a suitable substitute for salmon in studying the growth behavior of L. monocytogenes only when the temperatures of interest are higher (e.g., >12 degrees C). The model for salmon demonstrated the effects of salt, phenol, and storage temperature and their interactions on the growth probabilities of L. monocytogenes, and may be used to determine the growth probability of L. monocytogenes in smoked seafood.
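A minimal sketch of a growth/no-growth logistic model of this kind is shown below. The data are fabricated and the coefficients arbitrary; only the structure, the probability of growth as a logistic function of salt, phenol, storage temperature and an interaction term, mirrors the study.

```python
# Hedged sketch: logistic growth-probability model on fabricated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
salt = rng.uniform(0, 10, 512)       # % NaCl
phenol = rng.uniform(0, 34, 512)     # ppm
temp = rng.uniform(0, 25, 512)       # degrees C
X = np.column_stack([salt, phenol, temp, salt * temp])  # one interaction

# Assumed "true" model used only to simulate growth outcomes:
logit = -1.0 * salt - 0.05 * phenol + 0.5 * temp - 4.0
grow = rng.random(512) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, grow)
p = model.predict_proba([[3.0, 10.0, 8.0, 24.0]])[0, 1]
print(f"P(growth | 3% salt, 10 ppm phenol, 8 C) = {p:.2f}")
```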
A Search Model for Imperfectly Detected Targets
NASA Technical Reports Server (NTRS)
Ahumada, Albert
2012-01-01
Under the assumptions that 1) the search region can be divided into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme-case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is otherwise done perfectly, the searcher must search the N sub-regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and skewness of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
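The moment computation can be checked directly. Writing the number of searches as T = N·F + U, with F the number of complete failed sweeps (geometric, success probability P per sweep) and U uniform on the integers 1 to N, independence gives closed forms for the mean and variance; the parameter values below are assumptions for illustration.

```python
# Hedged sketch: analytic mean/variance of T = N*F + U versus simulation.
import numpy as np

N, P = 50, 0.8
mean = N * (1 - P) / P + (N + 1) / 2                 # E[T]
var = N**2 * (1 - P) / P**2 + (N**2 - 1) / 12        # Var[T], independence

rng = np.random.default_rng(2)
F = rng.geometric(P, 200_000) - 1    # numpy counts trials; we want failures
U = rng.integers(1, N + 1, 200_000)
T = N * F + U
print(f"analytic  mean {mean:.2f}  var {var:.2f}")
print(f"simulated mean {T.mean():.2f}  var {T.var():.2f}")
```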
Dissociation in decision bias mechanism between probabilistic information and previous decision
Kaneko, Yoshiyuki; Sakai, Katsuyuki
2015-01-01
Target detection performance is known to be influenced by events in the previous trials. It has not been clear, however, whether this bias effect is due to the previous sensory stimulus, motor response, or decision. It also remains open whether or not the previous trial effect emerges via the same mechanism as the effect of knowledge about the target probability. In the present study, we asked normal human subjects to make a decision about the presence or absence of a visual target. We presented a pre-cue indicating the target probability before the stimulus, and also a decision-response mapping cue after the stimulus so as to tease apart the effect of decision from that of motor response. We found that the target detection performance was significantly affected by the probability cue in the current trial and also by the decision in the previous trial. While the information about the target probability modulated the decision criteria, the previous decision modulated the sensitivity to target-relevant sensory signals (d-prime). Using functional magnetic resonance imaging (fMRI), we also found that activation in the left intraparietal sulcus (IPS) was decreased when the probability cue indicated a high probability of the target. By contrast, activation in the right inferior frontal gyrus (IFG) was increased when the subjects made a target-present decision in the previous trial, but this change was observed specifically when the target was present in the current trial. Activation in these regions was associated with individual differences in the decision computation parameters. We argue that the previous decision biases the target detection performance by modulating the processing of target-selective information, and this mechanism is distinct from modulation of decision criteria due to expectation of a target. PMID:25999844
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
A small-group functional balance intervention for individuals with Alzheimer disease: a pilot study.
Ries, Julie D; Drake, Jamie Michelle; Marino, Christopher
2010-03-01
Individuals with Alzheimer disease (AD) have a higher risk of falls than their cognitively intact peers. This pilot study was designed to assess the feasibility and effectiveness of a small-group balance exercise program for individuals with AD in a day center environment. Seven participants met the inclusion criteria: diagnosis of AD or probable AD, medical stability, and ability to walk (with or without assistive device). We used an exploratory pre- and post-test study design. Participants engaged in a functional balance exercise program in two 45-minute sessions each week for eight weeks. Balance activities were functional and concrete, and the intervention was organized into constant, blocked, massed practice. Outcome measures included the Berg Balance Scale (BBS), Timed Up and Go (TUG), and gait speed (GS; self-selected and fast, assessed by an instrumented walkway). Data were analyzed by comparing individual change scores with previously identified minimal detectable change scores at the 90% confidence level (MDC90). Pre- and post-test data were acquired for five participants (two participants withdrew). The BBS improved in all five participants, and improved ≥ 6.4 points (the MDC90 for the BBS) in three participants. Four participants improved their performance on the TUG, and three participants improved ≥ 4.09 seconds (the MDC90 for the TUG). Self-selected GS increased ≥ 9.44 cm/sec (the MDC90 for gait speed) in three participants. Two participants demonstrated post-test self-selected GS comparable with their pretest fast GS. This pilot study suggests that a small-group functional balance intervention for individuals with AD is feasible and effective. Although participants had no explicit memory of the program, four of five improved in at least two outcome measures. Larger-scale functional balance intervention studies with individuals with AD are warranted.
On defense strategies for system of systems using aggregated correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.
2017-04-01
We consider a System of Systems (SoS) wherein each system S_i, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of the SoS given the failure of an individual system. We formulate the problem of ensuring the survival of the SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
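The paper separates the two Gaussians with Pearson's Method of Moments; as a hedged stand-in for the same decomposition, this sketch fits the two-component mixture to a synthetic pixel-intensity history by expectation-maximization (sklearn's GaussianMixture) and reads off the background component as the tighter of the two.

```python
# Hedged sketch: decompose a pixel's intensity histogram into background and
# foreground Gaussians by EM (not the paper's Method of Moments).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
background = rng.normal(90, 5, 800)      # stable background intensity
foreground = rng.normal(140, 25, 200)    # transient foreground activity
history = np.concatenate([background, foreground]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(history)
bg = np.argmin(gmm.covariances_.ravel())  # background = tighter component
print(f"background mean {gmm.means_[bg, 0]:.1f}, "
      f"sd {np.sqrt(gmm.covariances_.ravel()[bg]):.1f}, "
      f"weight {gmm.weights_[bg]:.2f}")
```

Successive background estimates from windows of the video stream can then be compared: a shift in the background component's mean flags a candidate scene change.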
Darlington, P J
1972-02-01
Mathematical biologists have failed to produce a satisfactory general model for evolution of altruism, i.e., of behaviors by which "altruists" benefit other individuals but not themselves; kin selection does not seem to be a sufficient explanation of nonreciprocal altruism. Nonmathematical (but mathematically acceptable) models are now proposed for evolution of negative altruism in dual-determinant and of positive altruism in tri-determinant systems. Peck orders, territorial systems, and an ant society are analyzed as examples. In all models, evolution is primarily by individual selection, probably supplemented by group selection. Group selection is differential extinction of populations. It can act only on populations preformed by selection at the individual level, but can either cancel individual selective trends (effecting evolutionary homeostasis) or supplement them; its supplementary effect is probably increasingly important in the evolution of increasingly organized populations.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path-integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path-integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames, and the two show good agreement.
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of a nonstationary random process possessing an evolutionary power spectral density. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
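For a numerical check of such representations, the sketch below evaluates the familiar Poisson-mixture series for the non-central χ² pdf, truncated at a fixed number of terms (so this is not the paper's finite-sum form), against scipy's reference implementation.

```python
# Hedged sketch: truncated Poisson-mixture series for the ncx2 pdf,
# f(x) = sum_j exp(-lam/2) (lam/2)^j / j! * chi2.pdf(x, df + 2j),
# compared with scipy.stats.ncx2.pdf.
from math import exp, factorial
from scipy import stats

def ncx2_pdf_series(x, df, lam, terms=80):
    return sum(exp(-lam / 2) * (lam / 2) ** j / factorial(j)
               * stats.chi2.pdf(x, df + 2 * j) for j in range(terms))

x, df, lam = 7.5, 4, 3.0
print(ncx2_pdf_series(x, df, lam))     # series approximation
print(stats.ncx2.pdf(x, df, lam))      # scipy reference
```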
Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure
Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas
2015-01-01
Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
Skyrme RPA description of γ-vibrational states in rare-earth nuclei
NASA Astrophysics Data System (ADS)
Nesterenko, V. O.; Kartavenko, V. G.; Kleinig, W.; Kvasil, J.; Repko, A.; Jolos, R. V.; Reinhard, P.-G.
2016-01-01
The lowest γ-vibrational states with K^π = 2^+_γ in well-deformed Dy, Er and Yb isotopes are investigated within the self-consistent separable quasiparticle random-phase-approximation (QRPA) approach based on the Skyrme functional. The energies E_γ and reduced transition probabilities B(E2)_γ of the states are calculated with the Skyrme force SV-mas10. We demonstrate the strong effect of pairing blocking on the energies of γ-vibrational states. It is also shown that the collectivity of γ-vibrational states is strictly determined by keeping the Nilsson selection rules in the corresponding lowest 2qp configurations.
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
Universal portfolios generated by the Bregman divergence
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Kuang, Kee Seng
2017-04-01
The Bregman divergence of two probability vectors is a stronger form of the f-divergence introduced by Csiszar. Two versions of the Bregman universal portfolio are presented by exploiting the mean-value theorem. The explicit form of the Bregman universal portfolio generated by a function of a convex polynomial is derived and studied empirically. This portfolio can be regarded as another generalization of the well-known Helmbold portfolio. By running the portfolios on selected stock-price data sets from the local stock exchange, it is shown that it is possible to increase the wealth of the investor by using the portfolios in investment.
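For orientation, the Helmbold portfolio that this construction generalizes is the exponentiated-gradient update sketched below on synthetic price relatives; the learning rate eta and the data are assumptions for illustration.

```python
# Hedged sketch: exponentiated-gradient (Helmbold et al.) portfolio update,
# w_{t+1,i} proportional to w_{t,i} * exp(eta * x_{t,i} / <w_t, x_t>).
import numpy as np

rng = np.random.default_rng(4)
T, m, eta = 250, 5, 0.05
X = rng.uniform(0.97, 1.04, (T, m))      # synthetic daily price relatives

w = np.full(m, 1.0 / m)                  # start from the uniform portfolio
wealth = 1.0
for x in X:
    wealth *= w @ x                      # realize one period's return
    w = w * np.exp(eta * x / (w @ x))    # EG update
    w /= w.sum()                         # renormalize to the simplex
print(f"final wealth multiplier: {wealth:.3f}")
```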
Monte Carlo PDF method for turbulent reacting flow in a jet-stirred reactor
NASA Astrophysics Data System (ADS)
Roekaerts, D.
1992-01-01
A stochastic algorithm for the solution of the modeled scalar probability density function (PDF) transport equation for single-phase turbulent reacting flow is described. Cylindrical symmetry is assumed. The PDF is represented by ensembles of N representative values of the thermochemical variables in each cell of a nonuniform finite-difference grid and operations on these elements representing convection, diffusion, mixing and reaction are derived. A simplified model and solution algorithm which neglects the influence of turbulent fluctuations on mean reaction rates is also described. Both algorithms are applied to a selectivity problem in a real reactor.
NASA Astrophysics Data System (ADS)
Feng, Jian-xin; Tang, Jia-fu; Wang, Guang-xing
2007-04-01
On the basis of an analysis of clustering algorithms previously proposed for MANETs, a novel clustering strategy is proposed in this paper. Trust is defined by statistical hypothesis testing in probability theory, and the cluster head is selected according to node trust and node mobility. This strategy can detect malicious nodes, a function neglected by other clustering algorithms, and it overcomes a deficiency of the MOBIC algorithm, which cannot compute the relative mobility metric of corresponding nodes because the received power of two consecutive HELLO packets cannot be measured. It is an effective solution for clustering a MANET securely.
NASA Technical Reports Server (NTRS)
Antaki, P. J.
1981-01-01
The joint probability distribution function (pdf), which is a modification of the bivariate Gaussian pdf, is discussed and results are presented for a global reaction model using the joint pdf. An alternative joint pdf is discussed. A criterion which permits the selection of temperature pdf's in different regions of turbulent, reacting flow fields is developed. Two principal approaches to the determination of reaction rates in computer programs containing detailed chemical kinetics are outlined. These models represent a practical solution to the modeling of species reaction rates in turbulent, reacting flows.
Learning with imperfectly labeled patterns
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of learning in pattern recognition using imperfectly labeled patterns is considered. The performance of the Bayes and nearest-neighbor classifiers with imperfect labels is discussed using a probabilistic model for the mislabeling of the training patterns. Schemes for training the classifier using both parametric and nonparametric techniques are presented. Methods for the correction of imperfect labels are developed. To gain an understanding of the learning process, expressions are derived for the success probability as a function of training time for a one-dimensional increment error correction classifier with imperfect labels. Feature selection with imperfectly labeled patterns is also described.
Sloma, Michael F.; Mathews, David H.
2016-01-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924
Does Mother Know Best? Treatment Adherence as a Function of Anticipated Treatment Benefit
Glymour, M. Maria; Nguyen, Quynh; Matsouaka, Roland; Tchetgen Tchetgen, Eric J.; Schmidt, Nicole M.; Osypuk, Theresa L.
2016-01-01
Background We describe bias resulting from individualized treatment selection, which occurs when treatment has heterogeneous effects and individuals selectively choose treatments of greatest benefit to themselves. This pernicious bias may confound estimates from observational studies and lead to important misinterpretation of intent-to-treat analyses of randomized trials. Despite the potentially serious threat to inferences, individualized treatment selection has rarely been formally described or assessed. Methods The Moving to Opportunity (MTO) trial randomly assigned subsidized rental vouchers to low-income families in high-poverty public housing. We assessed the Kessler-6 psychological distress and Behavior Problems Index outcomes for 2,829 adolescents 4-7 years after randomization. Among families randomly assigned to receive vouchers, we estimated the probability of moving (treatment), predicted by pre-randomization characteristics (c-statistic=0.63). We categorized families into tertiles of this estimated probability of moving, and compared instrumental variable effect estimates for moving on the Behavior Problems Index and Kessler-6 across tertiles. Results Instrumental variable estimated effects of moving on the Behavior Problems Index were most adverse for boys least likely to move (b=0.93; 95% CI: 0.33, 1.53) compared to boys most likely to move (b=0.14; 95% CI: −0.15, 0.44; p=.02 for treatment*tertile interaction). Effects on the Kessler-6 were more beneficial for girls least likely to move compared to girls most likely to move (−0.62 vs. 0.02; interaction p=.03). Conclusions Evidence of individualized treatment selection differed by child gender and outcome and should be evaluated in randomized trial reports, especially when heterogeneous treatment effects are likely and non-adherence is common. PMID:26628424
Askari-Saryazdi, Ghasem; Hejazi, Mir Jalil; Ferguson, J Scott; Rashidi, Mohammad-Reza
2015-10-01
The vegetable leafminer (VLM), Liriomyza sativae (Diptera: Agromyzidae), is a serious pest of vegetable crops and ornamentals worldwide. In cropping systems with inappropriate management strategies, development of resistance to insecticides in leafminers is probable. Chlorpyrifos is a commonly used pesticide for controlling leafminers in Iran, but resistance to this insecticide in leafminers has not been characterized. In order to develop strategies to minimize resistance in the field and greenhouse, a laboratory-selected chlorpyrifos-resistant strain of L. sativae was used to characterize resistance and determine the rate of development and stability of resistance. Selecting for resistance in the laboratory after 23 generations yielded a chlorpyrifos-resistant selected strain (CRSS) with a resistance ratio of 40.34, determined on the larval stage. CRSS exhibited no cross-resistance to other tested insecticides except for diazinon. Synergism and biochemical assays indicated that esterases (EST) had a key role in metabolic resistance to chlorpyrifos, but glutathione S-transferase (GST) and mixed function oxidase (MFO) were not mediators in this resistance. In CRSS, acetylcholinesterase (AChE) was more active than in the susceptible strain, Sharif (SH). AChE in CRSS was also less sensitive to inhibition by propoxur. The kinetic parameters (Km and Vmax) of AChE indicated that the affinities and hydrolyzing efficiencies of this enzyme in CRSS were higher than in SH. Susceptibility to chlorpyrifos in L. sativae was regained in the absence of insecticide pressure. Synergism, biochemical and cross-resistance assays revealed that overactivity of metabolic enzymes and reduction in target site sensitivity are probably joint factors in chlorpyrifos resistance. An effective insecticide resistance management program is necessary to prevent fast resistance development in crop systems. Copyright © 2015 Elsevier Inc. All rights reserved.
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.
Variable selection in subdistribution hazard frailty models with competing risks data
Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo
2014-01-01
The proportional subdistribution hazards model (i.e. the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872
Performance Analysis of Relay Subset Selection for Amplify-and-Forward Cognitive Relay Networks
Qureshi, Ijaz Mansoor; Malik, Aqdas Naveed; Zubair, Muhammad
2014-01-01
Cooperative communication is regarded as a key technology in wireless networks, including cognitive radio networks (CRNs); it increases the diversity order of the signal to combat the unfavorable effects of fading channels by allowing distributed terminals to collaborate through sophisticated signal processing. Underlay CRNs impose strict interference constraints on the secondary users (SUs) active in the frequency band of the primary users (PUs), which limit the SUs' transmit power and coverage area. Relay selection offers a potential solution to the challenges faced by underlay networks, by selecting either a single best relay or a subset of the potential relay set under different design requirements and assumptions. The best relay selection schemes proposed in the literature for amplify-and-forward (AF) based underlay cognitive relay networks have been well studied in terms of outage probability (OP) and bit error rate (BER), an analysis that is still lacking for multiple relay selection schemes. The novelty of this work is to study the outage behavior of multiple relay selection in the underlay CRN and derive closed-form expressions for the OP and BER through the cumulative distribution function (CDF) of the SNR received at the destination. The effectiveness of relay subset selection is shown through simulation results. PMID:24737980
Summer habitat selection by Dall’s sheep in Wrangell-St. Elias National Park and Preserve, Alaska
Roffler, Gretchen H.; Adams, Layne G.; Hebblewhite, Mark
2017-01-01
Sexual segregation occurs frequently in sexually dimorphic species, and it may be influenced by differential habitat requirements between sexes or by social or evolutionary mechanisms that maintain separation of sexes regardless of habitat selection. Understanding the degree of sex-specific habitat specialization is important for management of wildlife populations and the design of monitoring and research programs. Using mid-summer aerial survey data for Dall’s sheep (Ovis dalli dalli) in southern Alaska during 1983–2011, we assessed differences in summer habitat selection by sex and reproductive status at the landscape scale in Wrangell-St. Elias National Park and Preserve (WRST). Males and females were highly segregated socially, as were females with and without young. Resource selection function (RSF) models containing rugged terrain, intermediate values of the normalized difference vegetation index (NDVI), and open landcover types best explained resource selection by each sex, female reproductive classes, and all sheep combined. For male and all female models, most coefficients were similar, suggesting little difference in summer habitat selection between sexes at the landscape scale. A combined RSF model therefore may be used to predict the relative probability of resource selection by Dall’s sheep in WRST regardless of sex or reproductive status.
Beck, Valerie M; Hollingworth, Andrew
2017-02-01
The content of visual working memory (VWM) guides attention, but whether this interaction is limited to a single VWM representation or functional for multiple VWM representations is under debate. To test this issue, we developed a gaze-contingent search paradigm to directly manipulate selection history and examine the competition between multiple cue-matching saccade target objects. Participants first saw a dual-color cue followed by two pairs of colored objects presented sequentially. For each pair, participants selectively fixated an object that matched one of the cued colors. Critically, for the second pair, the cued color from the first pair was presented either with a new distractor color or with the second cued color. In the latter case, if two cued colors in VWM interact with selection simultaneously, we expected the second cued color object to generate substantial competition for selection, even though the first cued color was used to guide attention in the immediately previous pair. Indeed, in the second pair, selection probability of the first cued color was substantially reduced in the presence of the second cued color. This competition between cue-matching objects provides strong evidence that both VWM representations interacted simultaneously with selection.
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the wave function it provides leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent, in contrast to many of the probability density functions on which the received theory of finance is based.
Schwark, Jeremy D; Dolgov, Igor; Sandry, Joshua; Volkman, C Brooks
2013-10-01
Recent theories of attention have proposed that selection history is a separate, dissociable source of information that influences attention. The current study sought to investigate the simultaneous involvement of selection history and working-memory on attention during visual search. Experiments 1 and 2 used target feature probability to manipulate selection history and found significant effects of both working-memory and selection history, although working-memory dominated selection history when they cued different locations. Experiment 3 eliminated the contribution of voluntary refreshing of working-memory and replicated the main effects, although selection history became dominant. Using the same methodology, but with reduced probability cue validity, both effects were present in Experiment 4 and did not significantly differ in their contribution to attention. Effects of selection history and working-memory never interacted. These results suggest that selection history and working-memory are separate influences on attention and have little impact on each other. Theoretical implications for models of attention are discussed.
Delay, Probability, and Social Discounting in a Public Goods Game
ERIC Educational Resources Information Center
Jones, Bryan A.; Rachlin, Howard
2009-01-01
A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual…
Macrophage Autophagy and Bacterial Infections
Bah, Aïcha; Vergne, Isabelle
2017-01-01
Autophagy is a well-conserved lysosomal degradation pathway that plays key roles in bacterial infections. One of the most studied is probably xenophagy, the selective capture and degradation of intracellular bacteria by lysosomes. However, the impact of autophagy goes beyond xenophagy and involves intensive cross-talks with other host defense mechanisms. In addition, autophagy machinery can have non-canonical functions such as LC3-associated phagocytosis. In this review, we intend to summarize the current knowledge on the many functions of autophagy proteins in cell defenses with a focus on bacteria–macrophage interaction. We also present the strategies developed by pathogens to evade or to exploit this machinery in order to establish a successful infection. Finally, we discuss the opportunities and challenges of autophagy manipulation in improving therapeutics and vaccines against bacterial pathogens. PMID:29163544
Optogenetic Acidification of Synaptic Vesicles and Lysosomes
Grauel, M. Katharina; Wozny, Christian; Bentz, Claudia; Blessing, Anja; Rosenmund, Tanja; Jentsch, Thomas J.; Schmitz, Dietmar; Hegemann, Peter; Rosenmund, Christian
2016-01-01
Acidification is required for the function of many intracellular organelles, but methods to acutely manipulate their intraluminal pH have not been available. Here we present a targeting strategy to selectively express the light-driven proton pump Arch3 on synaptic vesicles. Our new tool, pHoenix, can functionally replace endogenous proton pumps, enabling optogenetic control of vesicular acidification and neurotransmitter accumulation. Under physiological conditions, glutamatergic vesicles are nearly full, as additional vesicle acidification with pHoenix only slightly increased the quantal size. By contrast, we found that incompletely filled vesicles exhibited a lower release probability than full vesicles, suggesting preferential exocytosis of vesicles with high transmitter content. Our subcellular targeting approach can be transferred to other organelles, as demonstrated for a pHoenix variant that allows light-activated acidification of lysosomes. PMID:26551543
The genomic signature of dog domestication reveals adaptation to a starch-rich diet.
Axelsson, Erik; Ratnakumar, Abhirami; Arendt, Maja-Louise; Maqbool, Khurram; Webster, Matthew T; Perloski, Michele; Liberg, Olof; Arnemo, Jon M; Hedhammar, Ake; Lindblad-Toh, Kerstin
2013-03-21
The domestication of dogs was an important episode in the development of human civilization. The precise timing and location of this event is debated and little is known about the genetic changes that accompanied the transformation of ancient wolves into domestic dogs. Here we conduct whole-genome resequencing of dogs and wolves to identify 3.8 million genetic variants and, from these, 36 genomic regions that probably represent targets for selection during dog domestication. Nineteen of these regions contain genes important in brain function, eight of which belong to nervous system development pathways and potentially underlie behavioural changes central to dog domestication. Ten genes with key roles in starch digestion and fat metabolism also show signals of selection. We identify candidate mutations in key genes and provide functional support for an increased starch digestion in dogs relative to wolves. Our results indicate that novel adaptations allowing the early ancestors of modern dogs to thrive on a diet rich in starch, relative to the carnivorous diet of wolves, constituted a crucial step in the early domestication of dogs.
Morphological integration in the appendicular skeleton of two domestic taxa: the horse and donkey.
Hanot, Pauline; Herrel, Anthony; Guintard, Claude; Cornette, Raphaël
2017-10-11
Organisms are organized into suites of anatomical structures that typically covary when developmentally or functionally related, and this morphological integration plays a determinant role in evolutionary processes. Artificial selection on domestic species causes strong morphological changes over short time spans, frequently resulting in a wide and exaggerated phenotypic diversity. This raises the question of whether integration constrains the morphological diversification of domestic species and how natural and artificial selection may impact integration patterns. Here, we study the morphological integration in the appendicular skeleton of domestic horses and donkeys, using three-dimensional geometric morphometrics on 75 skeletons. Our results indicate that a strong integration is inherited from developmental mechanisms which interact with functional factors. This strong integration reveals a specialization in the locomotion of domestic equids, partly for running abilities. We show that the integration is stronger in horses than in donkeys, probably because of a greater degree of specialization and predictability of their locomotion. Thus, the constraints imposed by integration are weak enough to allow important morphological changes and the phenotypic diversification of domestic species.
Electron number probability distributions for correlated wave functions.
Francisco, E; Martín Pendás, A; Blanco, M A
2007-03-07
Efficient formulas for computing the probability of finding exactly an integer number of electrons in an arbitrarily chosen volume are only known for single-determinant wave functions [E. Cances et al., Theor. Chem. Acc. 111, 373 (2004)]. In this article, an algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes. The derived expressions are applied to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules. Results for a series of test molecules are presented, paying particular attention to the effects of electron correlation and of some numerical approximations on the computed probabilities.
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Starting from the insolvability of complex systems, the idea of incomplete statistics is utilized and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, of power-law form, and of the product form between a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can fully replace the equal-probability hypothesis. Since the power-law distribution and the distribution of the product form, which cannot be derived from the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, we conclude that the maximum entropy principle is the more basic principle: it embodies concepts more extensively and reveals the laws of motion of objects more fundamentally. At the same time, the principle reveals the intrinsic link between Nature and the different objects of human society and the principles they all obey.
A Water-Stable Metal-Organic Framework for Highly Sensitive and Selective Sensing of Fe3+ Ion.
Hou, Bing-Lei; Tian, Dan; Liu, Jiang; Dong, Long-Zhang; Li, Shun-Li; Li, Dong-Sheng; Lan, Ya-Qian
2016-10-17
A new metal-organic framework [Zn5(hfipbb)4(trz)2(H2O)2] (NNU-1) [H2hfipbb = 4,4'-(hexafluoroisopropylidene)bis(benzoic acid), Htrz = 1H-1,2,3-triazole] was assembled by hydrothermal synthesis. Single-crystal X-ray diffraction analysis reveals that NNU-1 displays a twofold interpenetrating three-dimensional (3D) framework with a {4^24·6^4}-bcu topology. Interestingly, the 3D framework contains a two-dimensional (2D) layered structure that consists of alternating left- and right-handed double helical chains. On the basis of the hydrophobic -CF3 groups from the H2hfipbb ligand, NNU-1 possesses excellent stability in water. It is worth noting that NNU-1 not only shows a highly selective fluorescence quenching effect to the Fe3+ ion in aqueous solution but also resists the interference of other metals including the Fe2+ ion. Accordingly, NNU-1 may function as a promising fluorescence sensor for detecting Fe3+ ions with high sensitivity and selectivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y Cao; X Jin; H Huang
The TrkH/TrkG/KtrB proteins mediate K+ uptake in bacteria and probably evolved from simple K+ channels by multiple gene duplications or fusions. Here we present the crystal structure of a TrkH from Vibrio parahaemolyticus. TrkH is a homodimer, and each protomer contains an ion permeation pathway. A selectivity filter, similar in architecture to those of K+ channels but significantly shorter, is lined by backbone and side-chain oxygen atoms. Functional studies showed that TrkH is selective for permeation of K+ and Rb+ over smaller ions such as Na+ or Li+. Immediately intracellular to the selectivity filter are an intramembrane loop and an arginine residue, both highly conserved, which constrict the permeation pathway. Substituting the arginine with an alanine significantly increases the rate of K+ flux. These results reveal the molecular basis of K+ selectivity and suggest a novel gating mechanism for this large and important family of membrane transport proteins.
Population viability and connectivity of the Louisiana black bear (Ursus americanus luteolus)
Laufenberg, Jared S.; Clark, Joseph D.
2014-01-01
From April 2010 to April 2012, global positioning system (GPS) radio collars were placed on 8 female and 23 male bears ranging from 1 to 11 years of age to develop a step-selection function model to predict routes and rates of interchange. For both males and females, the probability of a step being selected increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. Of 4,000 correlated random walks, the least potential interchange was between TRB and TRC and between UARB and LARB, but the relative potential for natural interchange between UARB and TRC was high. The step-selection model predicted that dispersals between the LARB and UARB populations were infrequent but possible for males and nearly nonexistent for females. No evidence of natural female dispersal between subpopulations has been documented thus far, which is also consistent with model predictions.
NASA Astrophysics Data System (ADS)
Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.
2018-04-01
We consider the practical realization of a new optical probing method for random media, defined as reference-free path-length interferometry with analysis of intensity moments. A peculiarity in the statistics of the spectrally selected fluorescence radiation from a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was applied as the doping fluorescent agent for ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for the random medium was reconstructed.
Hindersin, Laura; Traulsen, Arne
2015-11-01
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
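Birth-death updating as described above is straightforward to simulate. The following minimal sketch (all names hypothetical) estimates the fixation probability of a single mutant with fitness r on an undirected Erdős–Rényi graph; comparing the estimate with the well-mixed Moran benchmark, rho = (1 - 1/r) / (1 - 1/r**n), indicates whether a sampled graph amplifies or suppresses selection:

```python
import random

def erdos_renyi(n, p, rng):
    """Undirected Erdos-Renyi random graph as adjacency lists."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def bd_fixation_probability(adj, r=1.1, trials=20000, seed=0):
    """Birth-death updating: a reproducer is chosen with probability
    proportional to fitness; a random neighbor is replaced by its
    offspring. Assumes a connected graph."""
    rng = random.Random(seed)
    n = len(adj)
    fixed = 0
    for _ in range(trials):
        mutant = [False] * n
        mutant[rng.randrange(n)] = True   # one mutant at a random node
        count = 1
        while 0 < count < n:
            weights = [r if m else 1.0 for m in mutant]
            parent = rng.choices(range(n), weights=weights, k=1)[0]
            if not adj[parent]:
                continue                  # isolated node: nothing to replace
            child = rng.choice(adj[parent])
            if mutant[child] != mutant[parent]:
                count += 1 if mutant[parent] else -1
                mutant[child] = mutant[parent]
        fixed += count == n
    return fixed / trials
```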
Optimal nonlinear codes for the perception of natural colours.
von der Twer, T; MacLeod, D I
2001-08-01
We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for precision representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
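The cube-root rule above translates directly into a construction for the optimal response function: its gradient is set proportional to pdf(x)**(1/3), so the response itself is the normalized cumulative integral of that quantity. A minimal numerical sketch, assuming a Gaussian input distribution purely for illustration:

```python
import numpy as np

# Optimal compressive nonlinearity under the cube-root rule:
# gradient of the response function proportional to pdf(x)**(1/3).
x = np.linspace(-4, 4, 1001)                 # input axis (e.g., one colour dimension)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # assumed environmental PDF
gradient = pdf ** (1.0 / 3.0)                # matched gradient
response = np.cumsum(gradient)
response = (response - response[0]) / (response[-1] - response[0])  # scale to [0, 1]
```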
NASA Astrophysics Data System (ADS)
Pande, Saket; Sharma, Ashish
2014-05-01
This study is motivated by the need to robustly specify, identify, and forecast runoff generation processes for hydroelectricity production. At a minimum, this requires identifying the significant predictors of runoff generation and the influence of each such predictor on the runoff response. To this end, we compare two non-parametric algorithms for predictor subset selection. One is based on information theory and assesses predictor significance (and hence selection) using the Partial Information (PI) rationale of Sharma and Mehrotra (2014). The other is based on a frequentist approach that uses the bounds on probability of error concept of Pande (2005), assesses all possible predictor subsets on the go, and converges to a predictor subset in a computationally efficient manner. Both algorithms approximate the underlying system by locally constant functions and select predictor subsets corresponding to these functions. The performance of the two algorithms is compared on a set of synthetic case studies as well as a real-world case study of inflow forecasting. References: Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 49, doi:10.1002/2013WR013845. Pande, S. (2005), Generalized local learning in water resource management, PhD dissertation, Utah State University, UT, USA, 148 pp.
Kraschnewski, Jennifer L; Keyserling, Thomas C; Bangdiwala, Shrikant I; Gizlice, Ziya; Garcia, Beverly A; Johnston, Larry F; Gustafson, Alison; Petrovic, Lindsay; Glasgow, Russell E; Samuel-Hodge, Carmen D
2010-01-01
Studies of type 2 translation, the adaption of evidence-based interventions to real-world settings, should include representative study sites and staff to improve external validity. Sites for such studies are, however, often selected by convenience sampling, which limits generalizability. We used an optimized probability sampling protocol to select an unbiased, representative sample of study sites to prepare for a randomized trial of a weight loss intervention. We invited North Carolina health departments within 200 miles of the research center to participate (N = 81). Of the 43 health departments that were eligible, 30 were interested in participating. To select a representative and feasible sample of 6 health departments that met inclusion criteria, we generated all combinations of 6 from the 30 health departments that were eligible and interested. From the subset of combinations that met inclusion criteria, we selected 1 at random. Of 593,775 possible combinations of 6 counties, 15,177 (3%) met inclusion criteria. Sites in the selected subset were similar to all eligible sites in terms of health department characteristics and county demographics. Optimized probability sampling improved generalizability by ensuring an unbiased and representative sample of study sites.
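The sampling step described here is easy to reproduce: enumerate all C(30, 6) = 593,775 combinations, keep the roughly 3% that meet the inclusion criteria, and draw one uniformly at random. A minimal sketch (the inclusion-criteria callable is hypothetical):

```python
import itertools
import random

def sample_site_combination(sites, k=6, meets_criteria=lambda c: True, seed=1):
    """Optimized probability sampling: enumerate all k-site combinations,
    keep those satisfying the inclusion criteria, pick one at random."""
    valid = [c for c in itertools.combinations(sites, k) if meets_criteria(c)]
    return random.Random(seed).choice(valid)
```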
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
Threshold-selecting strategy for best possible ground state detection with genetic algorithms
NASA Astrophysics Data System (ADS)
Lässig, Jörg; Hoffmann, Karl Heinz
2009-04-01
Genetic algorithms are a standard heuristic to find states of low energy in complex state spaces as given by physical systems such as spin glasses, but also in combinatorial optimization. The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show for a large class of quality measures that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. This means that uniform probabilities are assigned to a group of individuals with the lowest energies in the population, and zero probability to individuals whose energy values lie above a fixed cutoff, corresponding to a certain rank in the energy-sorted population. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization and makes only a few assumptions on the underlying principles, and hence applies to a large class of algorithms.
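In code, threshold selecting amounts to sorting the current population by energy and drawing crossover candidates uniformly from the lowest-energy group. A minimal sketch (names and the cutoff-rank parameter are illustrative, not taken from the paper):

```python
import random

def threshold_select(population, energies, cutoff_rank, rng=random):
    """Threshold selecting: uniform probability over the cutoff_rank
    lowest-energy individuals, zero probability for all others."""
    order = sorted(range(len(population)), key=lambda i: energies[i])
    pool = order[:cutoff_rank]  # indices of the best individuals
    return population[rng.choice(pool)]
```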
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.
Prospect theory reflects selective allocation of attention.
Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph
2018-02-01
There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice.
Eaton, Mitchell J.; Hughes, Phillip T.; Hines, James E.; Nichols, James D.
2014-01-01
Metapopulation ecology is a field that is richer in theory than in empirical results. Many existing empirical studies use an incidence function approach based on spatial patterns and key assumptions about extinction and colonization rates. Here we recast these assumptions as hypotheses to be tested using 18 years of historic detection survey data combined with four years of data from a new monitoring program for the Lower Keys marsh rabbit. We developed a new model to estimate probabilities of local extinction and colonization in the presence of nondetection, while accounting for estimated occupancy levels of neighboring patches. We used model selection to identify important drivers of population turnover and estimate the effective neighborhood size for this system. Several key relationships related to patch size and isolation that are often assumed in metapopulation models were supported: patch size was negatively related to the probability of extinction and positively related to colonization, and estimated occupancy of neighboring patches was positively related to colonization and negatively related to extinction probabilities. This latter relationship suggested the existence of rescue effects. In our study system, we inferred that coastal patches experienced higher probabilities of extinction and colonization than interior patches. Interior patches exhibited higher occupancy probabilities and may serve as refugia, permitting colonization of coastal patches following disturbances such as hurricanes and storm surges. Our modeling approach should be useful for incorporating neighbor occupancy into future metapopulation analyses and in dealing with other historic occupancy surveys that may not include the recommended levels of sampling replication.
2013-01-01
Background: Gene expression data could be of great help in developing efficient platforms for cancer diagnosis and classification. Recently, many researchers have analyzed gene expression data using diverse computational intelligence methods to select a small subset of informative genes for cancer classification. Many computational methods face difficulties in selecting small subsets because of the small number of samples relative to the huge number of genes (high dimensionality), irrelevant genes, and noisy genes. Methods: We propose an enhanced binary particle swarm optimization to perform the selection of small subsets of informative genes, which is significant for cancer classification. Particle speed, a new rule, and a modified sigmoid function are introduced in the proposed method to increase the probability of bits in a particle's position being zero. The method was empirically applied to a suite of ten well-known benchmark gene expression data sets. Results: The performance of the proposed method proved superior to previous related work, including the conventional version of binary particle swarm optimization (BPSO), in terms of classification accuracy and the number of selected genes. The proposed method also requires lower computational time than BPSO. PMID:23617960
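For illustration, the standard BPSO bit-update rule squashes a particle's velocity through a sigmoid to obtain the probability that a bit (here, whether a gene is selected) equals 1; the enhancement described above biases this probability toward 0 so that fewer genes are selected. A minimal sketch of the standard rule (the exact form of the modified sigmoid is not given here and would replace the plain sigmoid):

```python
import math
import random

def bpso_bit(velocity, rng=random):
    """Standard BPSO transfer rule: P(bit = 1) = sigmoid(velocity).
    The enhanced method modifies this function to push P(bit = 1)
    down, so fewer genes end up selected (exact form assumed)."""
    p_one = 1.0 / (1.0 + math.exp(-velocity))
    return 1 if rng.random() < p_one else 0
```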
Sexual segregation in North American elk: the role of density dependence
Stewart, Kelley M; Walsh, Danielle R; Kie, John G; Dick, Brian L; Bowyer, R Terry
2015-01-01
We investigated how density-dependent processes and subsequent variation in nutritional condition of individuals influenced both timing and duration of sexual segregation and selection of resources. During 1999–2001, we experimentally created two population densities of North American elk (Cervus elaphus), a high-density population at 20 elk/km², and a low-density population at 4 elk/km², to test hypotheses relative to timing and duration of sexual segregation and variation in selection of resources. We used multi-response permutation procedures to investigate patterns of sexual segregation, and resource selection functions to document differences in selection of resources by individuals in high- and low-density populations during sexual segregation and aggregation. The duration of sexual segregation was 2 months longer in the high-density population and likely was influenced by individuals in poorer nutritional condition, which corresponded with later conception and parturition, than at low density. Males and females in the high-density population overlapped in selection of resources to a greater extent than in the low-density population, probably resulting from density-dependent effects of increased intraspecific competition and lower availability of resources. PMID:25691992
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
Emergence and Evolution of Hominidae-Specific Coding and Noncoding Genomic Sequences
Saber, Morteza Mahmoudi; Adeyemi Babarinde, Isaac; Hettiarachchi, Nilmini; Saitou, Naruya
2016-01-01
Family Hominidae, which includes humans and great apes, is recognized for unique complex social behavior and intellectual abilities. Despite the increasing genome data, however, the genomic origin of its phenotypic uniqueness has remained elusive. Clade-specific genes and highly conserved noncoding sequences (HCNSs) are among the high-potential evolutionary candidates involved in driving clade-specific characters and phenotypes. On this premise, we analyzed whole genome sequences along with gene orthology data retrieved from major DNA databases to find Hominidae-specific (HS) genes and HCNSs. We discovered that Down syndrome critical region 4 (DSCR4) is the only experimentally verified gene uniquely present in Hominidae. DSCR4 has no structural homology to any known protein and was inferred to have emerged in several steps through LTR/ERV1, LTR/ERVL retrotransposition, and transversion. Using genomic distance as a neutral evolution threshold, we identified 1,658 HS HCNSs. Polymorphism coverage and derived allele frequency analysis of HS HCNSs showed that these HCNSs are under purifying selection, indicating that they may harbor important functions. They are overrepresented in promoters/untranslated regions, in close proximity to genes involved in sensory perception of sound and developmental process, and also showed a significantly lower nucleosome occupancy probability. Interestingly, many ancestral sequences of the HS HCNSs showed very high evolutionary rates. This suggests that new functions emerged through some kind of positive selection, and then purifying selection started to operate to keep these functions. PMID:27289096
Chemtob, Claude M; Pat-Horenczyk, Ruth; Madan, Anita; Pitman, Seth R; Wang, Yanping; Doppelt, Osnat; Burns, Kelly Dugan; Abramovitz, Robert; Brom, Daniel
2011-12-01
In this study, we examined the relationships among terrorism exposure, functional impairment, suicidal ideation, and probable partial or full posttraumatic stress disorder (PTSD) from exposure to terrorism in adolescents continuously exposed to this threat in Israel. A convenience sample of 2,094 students, aged 12 to 18, was drawn from 10 Israeli secondary schools. In terms of demographic factors, older age was associated with increased risk for suicidal ideation, OR = 1.33, 95% CI [1.09, 1.62], p < .01, but was protective against probable partial or full PTSD, OR = 0.72, 95% CI [0.54, 0.95], p < .05; female gender was associated with greater likelihood of probable partial or full PTSD, OR = 1.57, 95% CI [1.02, 2.40], p < .05. Exposure to trauma due to terrorism was associated with increased risk for each of the measured outcomes including probable partial or full PTSD, functional impairment, and suicidal ideation. When age, gender, level of exposure to terrorism, probable partial or full PTSD, and functional impairment were examined together, only terrorism exposure and functional impairment were associated with suicidal ideation. This study underscores the importance and feasibility of examining exposure to terrorism and functional impairment as risk factors for suicidal ideation.
Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD
Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne
2014-01-01
We recorded visual event-related brain potentials (ERPs) from 32 adult male participants (16 high-functioning participants diagnosed with Autism Spectrum Disorder (ASD) and 16 control participants, ranging in age from 18 to 53 years) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability conditions, whereas the probability of a third non-target stimulus was held constant in all conditions. P3 amplitude to target stimuli was more sensitive to probability in ASD than in control participants, whereas P3 amplitude to non-target stimuli was less responsive to probability in ASD participants. This suggests that neural responses to changes in event probability are attention-dependent in high-functioning ASD. The implications of these findings for higher-level behaviors such as prediction and planning are discussed. PMID:24488156
Wilson, Ryan R.; Prichard, Alexander K.; Parrett, Lincoln S.; Person, Brian T.; Carroll, Geoffry M.; Smith, Melanie A.; Rea, Caryn L.; Yokel, David A.
2012-01-01
Many caribou (Rangifer tarandus) populations are declining worldwide in part due to disturbance from human development. Prior to human development, important areas of habitat should be identified to help managers minimize adverse effects. Resource selection functions can help identify these areas by providing a link between space use and landscape attributes. We estimated resource selection during five summer periods at two spatial scales for the Teshekpuk Caribou Herd in northern Alaska prior to industrial development to identify areas of high predicted use for the herd. Additionally, given the strong influence parturition and insect harassment have on space use, we determined how selection differed between parturient and non-parturient females, and between periods with and without insect harassment. We used location data acquired between 2004–2010 for 41 female caribou to estimate resource selection functions. Patterns of selection varied through summer but caribou consistently avoided patches of flooded vegetation and selected areas with a high density of sedge-grass meadow. Predicted use by parturient females during calving was almost entirely restricted to the area surrounding Teshekpuk Lake presumably due to high concentration of sedge-grass meadows, whereas selection for this area by non-parturient females was less strong. When insect harassment was low, caribou primarily selected the areas around Teshekpuk Lake but when it was high, caribou used areas having climates where insect abundance would be lower (i.e., coastal margins, gravel bars). Areas with a high probability of use were predominately restricted to the area surrounding Teshekpuk Lake except during late summer when high use areas were less aggregated because of more general patterns of resource selection. Planning is currently underway for establishing where oil and gas development can occur in the herd’s range, so our results provide land managers with information that can help predict and minimize impacts of development on the herd. PMID:23144932
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm.

In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may still be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective values), the algorithm can converge on the globally optimal configuration.

The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm: a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference is that in the SA algorithm a single path, or trajectory, is taken in parameter space from the starting point to the globally optimal solution, while in the RBSA algorithm many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include:

1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored;

2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and

3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
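The conventional SA acceptance step described above can be written compactly; the following minimal sketch (names hypothetical, and omitting the shrinking search region and trust-region features) gives the single trajectory that RBSA would branch into many:

```python
import math
import random

def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.99,
                        steps=10000, seed=0):
    """Conventional SA: always accept better configurations; accept
    worse ones with probability exp(-delta / T), lowering T each step."""
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)          # propose a nearby configuration
        fy = objective(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy             # adopt the new configuration
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                  # anneal the temperature
    return best, fbest
```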
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Complete Numerical Solution of the Diffusion Equation of Random Genetic Drift
Zhao, Lei; Yue, Xingye; Waxman, David
2013-01-01
A numerical method is presented to solve the diffusion equation for the random genetic drift that occurs at a single unlinked locus with two alleles. The method was designed to conserve probability, and the resulting numerical solution represents a probability distribution whose total probability is unity. We describe solutions of the diffusion equation whose total probability is unity as complete. Thus the numerical method introduced in this work produces complete solutions, and such solutions have the property that whenever fixation and loss can occur, they are automatically included within the solution. This feature demonstrates that the diffusion approximation can describe not only internal allele frequencies, but also the boundary frequencies zero and one. The numerical approach presented here constitutes a single inclusive framework from which to perform calculations for random genetic drift. It has a straightforward implementation, allowing it to be applied to a wide variety of problems, including those with time-dependent parameters, such as changing population sizes. As tests and illustrations of the numerical method, it is used to determine: (i) the probability density and time-dependent probability of fixation for a neutral locus in a population of constant size; (ii) the probability of fixation in the presence of selection; and (iii) the probability of fixation in the presence of selection and demographic change, the latter in the form of a changing population size. PMID:23749318
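The paper's finite-difference scheme is not reproduced here, but the property it guarantees, a distribution whose total probability stays exactly one while fixation and loss accumulate at the boundaries, can be illustrated with the discrete Wright-Fisher chain that the diffusion approximates. A minimal sketch for the neutral, constant-size case (an illustrative stand-in, not the paper's method):

```python
import numpy as np
from scipy.stats import binom

def wright_fisher_evolve(n_pop, start_count, generations):
    """Evolve the full probability distribution over allele counts
    0..n_pop under neutral Wright-Fisher drift. Total probability is
    exactly conserved; the absorbing states 0 and n_pop (loss and
    fixation) are part of the same distribution."""
    counts = np.arange(n_pop + 1)
    # T[j, i] = P(next count = j | current count = i)
    T = binom.pmf(counts[:, None], n_pop, counts[None, :] / n_pop)
    phi = np.zeros(n_pop + 1)
    phi[start_count] = 1.0
    for _ in range(generations):
        phi = T @ phi
    return phi  # phi[0] = loss probability, phi[-1] = fixation probability
```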
Levy, Ifat; Rosenberg Belmaker, Lior; Manson, Kirk; Tymula, Agnieszka; Glimcher, Paul W
2012-09-19
Most of the choices we make have uncertain consequences. In some cases the probabilities for different possible outcomes are precisely known, a condition termed "risky". In other cases when probabilities cannot be estimated, this is a condition described as "ambiguous". While most people are averse to both risk and ambiguity (1,2), the degree of those aversions varies substantially across individuals, such that the subjective value of the same risky or ambiguous option can be very different for different individuals. We combine functional MRI (fMRI) with an experimental economics-based method (3) to assess the neural representation of the subjective values of risky and ambiguous options (4). This technique can now be used to study these neural representations in different populations, such as different age groups and different patient populations. In our experiment, subjects make consequential choices between two alternatives while their neural activation is tracked using fMRI. On each trial subjects choose between lotteries that vary in their monetary amount and in either the probability of winning that amount or the ambiguity level associated with winning. Our parametric design allows us to use each individual's choice behavior to estimate their attitudes towards risk and ambiguity, and thus to estimate the subjective values that each option held for them. Another important feature of the design is that the outcome of the chosen lottery is not revealed during the experiment, so that no learning can take place, and thus the ambiguous options remain ambiguous and risk attitudes are stable. Instead, at the end of the scanning session one or a few trials are randomly selected and played for real money. Since subjects do not know beforehand which trials will be selected, they must treat each and every trial as if it and it alone was the one trial on which they will be paid. This design ensures that we can estimate the true subjective value of each option to each subject. We then look for areas in the brain whose activation is correlated with the subjective value of risky options and for areas whose activation is correlated with the subjective value of ambiguous options.
Quantum fluctuation theorems and generalized measurements during the force protocol
NASA Astrophysics Data System (ADS)
Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter
2014-03-01
Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010), 10.1103/PhysRevLett.105.140601; Phys. Rev. E 83, 041114 (2011), 10.1103/PhysRevE.83.041114], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed that can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but they tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient between predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations. We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher-quality models, which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust depends strongly on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
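A consensus step in the spirit of QMEANclust can be sketched as follows: pre-filter models with a single-model score, then rank every model by its mean structural similarity to the filtered subset. This is a simplified illustration, not the published algorithm; the pairwise similarity matrix (e.g., GDT_TS-like values) is assumed precomputed, and all names are hypothetical:

```python
import numpy as np

def consensus_scores(single_scores, similarity, top_fraction=0.2):
    """single_scores: (n,) per-model quality estimates (higher = better).
    similarity: (n, n) precomputed pairwise structural similarity.
    Returns a consensus score per model: mean similarity to the
    subset of models with the best single-model scores."""
    n = len(single_scores)
    k = max(1, int(top_fraction * n))
    reference = np.argsort(single_scores)[::-1][:k]  # pre-filtered subset
    return similarity[:, reference].mean(axis=1)
```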
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
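The patent abstract pairs a fitted residual PDF with a sequential hypothesis test. The sketch below illustrates that general idea only, not the patented method: it fits a Gaussian to training residuals and runs Wald's sequential probability ratio test (SPRT) for an assumed mean shift. The shift size, error rates, and data are hypothetical.

```python
# Hedged sketch: fit a Gaussian PDF to residuals from normal operation,
# then run Wald's SPRT for a mean shift during surveillance.
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(0.0, 0.5, 1000)      # residuals under normal operation
mu0, sigma = train.mean(), train.std()  # "numerically fit" the PDF

m = 1.0                                 # assumed fault mean shift
alpha, beta_err = 0.01, 0.01            # false-alarm and missed-alarm rates
upper = np.log((1 - beta_err) / alpha)
lower = np.log(beta_err / (1 - alpha))

llr = 0.0
stream = rng.normal(0.8, 0.5, 200)      # surveillance data with a real shift
for t, x in enumerate(stream):
    # log-likelihood ratio increment for N(mu0+m, sigma) vs N(mu0, sigma)
    llr += (m / sigma**2) * (x - mu0 - m / 2.0)
    if llr >= upper:
        print(f"fault declared at sample {t}")
        break
    if llr <= lower:
        llr = 0.0                       # accept normal operation, restart
```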
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed to study and evaluate interference with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network from unwanted sources such as the emerging terrestrial High Density Fixed Service.
The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.
2013-01-01
We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, such as the selection of different or additional product ions. The reported procedure, in combination with criteria for relative ion abundances, results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.
Nielsen, Bjørn G; Jensen, Morten Ø; Bohr, Henrik G
2003-01-01
The structure of enkephalin, a small neuropeptide with five amino acids, has been simulated on computers using molecular dynamics. Such simulations exhibit a few stable conformations, which have also been identified experimentally. The simulations make it possible to perform cluster analysis in the space defined by potentially pharmacophoric measures such as dihedral angles, side-chain orientation, etc. By analyzing the statistics of the resulting clusters, the probability distribution of the side-chain conformations may be determined. These probabilities allow us to predict the selectivity of [Leu]enkephalin and [Met]enkephalin to the known mu- and delta-type opiate receptors to which they bind as agonists. Other plausible consequences of these probability distributions are discussed in relation to the way in which they may influence the dynamics of the synapse. Copyright 2003 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 71: 577-592, 2003
Application of a multipurpose unequal probability stream survey in the Mid-Atlantic Coastal Plain
Ator, S.W.; Olsen, A.R.; Pitchford, A.M.; Denver, J.M.
2003-01-01
A stratified, spatially balanced sample with unequal probability selection was used to design a multipurpose survey of headwater streams in the Mid-Atlantic Coastal Plain. Objectives for the survey include unbiased estimates of regional stream conditions, and adequate coverage of unusual but significant environmental settings to support empirical modeling of the factors affecting those conditions. The design and field application of the survey are discussed in light of these multiple objectives. A probability (random) sample of 175 first-order nontidal streams was selected for synoptic sampling of water chemistry and benthic and riparian ecology during late winter and spring 2000. Twenty-five streams were selected within each of seven hydrogeologic subregions (strata) that were delineated on the basis of physiography and surficial geology. In each subregion, unequal inclusion probabilities were used to provide an approximately even distribution of streams along a gradient of forested to developed (agricultural or urban) land in the contributing watershed. Alternate streams were also selected. Alternates were included in groups of five in each subregion when field reconnaissance demonstrated that primary streams were inaccessible or otherwise unusable. Despite the rejection and replacement of a considerable number of primary streams during reconnaissance (up to 40 percent in one subregion), the desired land use distribution was maintained within each hydrogeologic subregion without sacrificing the probabilistic design.
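As a rough illustration of unequal inclusion probabilities used to even out a gradient (not the survey's actual design), the sketch below assigns each stream an inclusion probability inversely proportional to the frequency of its land-use class and draws a Poisson sample; the frame, class count, and target sample size are made up.

```python
# Minimal sketch of unequal-probability selection that flattens a
# land-use gradient. Frame and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
developed = rng.beta(0.5, 2.0, size=2000)   # hypothetical frame: fraction
                                            # developed land per watershed
bins = np.minimum((developed * 10).astype(int), 9)  # 10 land-use classes
counts = np.bincount(bins, minlength=10)

target_n = 25
# Inverse-frequency weights give each land-use class equal expected
# representation; rescale so inclusion probabilities sum to target_n.
w = 1.0 / counts[bins]
pi = np.clip(target_n * w / w.sum(), 0.0, 1.0)  # inclusion probabilities
sample = np.flatnonzero(rng.random(2000) < pi)  # Poisson sampling draw

print(f"selected {sample.size} streams; "
      f"class spread: {np.bincount(bins[sample], minlength=10)}")
```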
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
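A compact sketch of the weighting mechanics described above, on simulated data: a logistic model for remaining uncensored yields stabilized inverse probability-of-censoring weights for the uncensored records. The covariate, model, and rates are illustrative assumptions, not the MACS analysis itself.

```python
# Hedged sketch of stabilized inverse probability-of-censoring weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=(n, 1))                     # common predictor
p_cens = 1 / (1 + np.exp(-(-1.0 + 1.5 * z[:, 0])))
censored = rng.random(n) < p_cens               # artificial censoring

model = LogisticRegression().fit(z, censored.astype(int))
p_uncens = 1.0 - model.predict_proba(z)[:, 1]   # P(uncensored | z)

# Stabilized weight: marginal probability over conditional probability;
# only uncensored records carry weight in the analysis.
sw = (1.0 - censored.mean()) / p_uncens
weights = np.where(~censored, sw, 0.0)

print(f"mean stabilized weight among uncensored: "
      f"{weights[~censored].mean():.3f}")       # should be near 1
```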
A global logrank test for adaptive treatment strategies based on observational studies.
Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara
2014-02-28
In studying adaptive treatment strategies, a natural question of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not affect the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is √n-consistent. Finite sample performance of the test is assessed via a simulation study. We also show in the simulation that the test can be fairly robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.
Sloma, Michael F; Mathews, David H
2016-12-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. © 2016 Sloma and Mathews; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Butterfly survival on an isolated island by improved grip
Duplouy, Anne; Hanski, Ilkka
2013-01-01
On small isolated islands, natural selection is expected to reduce the dispersal capacity of organisms, as short distances do not require a high rate of dispersal, which might lead to accidental emigration from the population. In addition, individuals foregoing the high cost of maintaining flight capacity may instead allocate resources to other functions. However, in butterflies and many other insects, flight is necessary not only for dispersal but also for most other activities. A weakly flying individual would probably do worse and have an elevated rather than reduced probability of accidental emigration. Here, we report results consistent with the hypothesis that a butterfly population on an isolated island, instead of having lost its flight capacity, has evolved better grip to resist the force of wind and to avoid being blown off the island. Our study suggests that local adaptation has occurred in this population in spite of its very small size (Ne ∼ 100), complete isolation, low genetic variation and high genetic load. PMID:23445946
A K(+)-selective CNG channel orchestrates Ca(2+) signalling in zebrafish sperm.
Fechner, Sylvia; Alvarez, Luis; Bönigk, Wolfgang; Müller, Astrid; Berger, Thomas K; Pascal, Rene; Trötschel, Christian; Poetsch, Ansgar; Stölting, Gabriel; Siegfried, Kellee R; Kremmer, Elisabeth; Seifert, Reinhard; Kaupp, U Benjamin
2015-12-09
Calcium in the flagellum controls sperm navigation. In sperm of marine invertebrates and mammals, Ca(2+) signalling has been intensely studied, whereas for fish little is known. In sea urchin sperm, a cyclic nucleotide-gated K(+) channel (CNGK) mediates a cGMP-induced hyperpolarization that evokes Ca(2+) influx. Here, we identify in sperm of the freshwater fish Danio rerio a novel CNGK family member featuring non-canonical properties. It is located in the sperm head rather than the flagellum and is controlled by intracellular pH, but not cyclic nucleotides. Alkalization hyperpolarizes sperm and produces Ca(2+) entry. Ca(2+) induces spinning-like swimming, different from swimming of sperm from other species. The "spinning" mode probably guides sperm into the micropyle, a narrow entrance on the surface of fish eggs. A picture is emerging of sperm channel orthologues that employ different activation mechanisms and serve different functions. The channel inventories probably reflect adaptations to species-specific challenges during fertilization.
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for non-deterministic multidisciplinary optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of a combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
Infomax Strategies for an Optimal Balance Between Exploration and Exploitation
NASA Astrophysics Data System (ADS)
Reddy, Gautam; Celani, Antonio; Vergassola, Massimo
2016-06-01
Proper balance between exploitation and exploration is what makes good decisions that achieve high reward, such as payoff or evolutionary fitness. The Infomax principle postulates that maximization of information directs the function of diverse systems, from living systems to artificial neural networks. While specific applications turn out to be successful, the validity of information as a proxy for reward remains unclear. Here, we consider the multi-armed bandit decision problem, which features arms (slot machines) with unknown probabilities of success and a player trying to maximize cumulative payoff by choosing the sequence of arms to play. We show that an Infomax strategy (Info-p), which optimally gathers information on the highest probability of success among the arms, saturates known optimal bounds and compares favorably to existing policies. Conversely, gathering information on the identity of the best arm in the bandit leads to a strategy that is vastly suboptimal in terms of payoff. The nature of the quantity selected for Infomax acquisition is thus crucial for effective tradeoffs between exploration and exploitation.
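The setting is the Bernoulli multi-armed bandit. As a self-contained point of reference, the sketch below implements Thompson sampling, a standard exploration/exploitation baseline; it is not the paper's Info-p strategy, and the arm probabilities are made up.

```python
# Thompson sampling baseline for a Bernoulli bandit (illustrative only;
# this is NOT the Info-p policy discussed in the abstract above).
import numpy as np

rng = np.random.default_rng(4)
p_true = np.array([0.3, 0.5, 0.7])        # unknown success probabilities
alpha = np.ones(3)                        # Beta posterior: successes + 1
beta = np.ones(3)                         # Beta posterior: failures + 1

payoff = 0
for t in range(10_000):
    theta = rng.beta(alpha, beta)         # one posterior draw per arm
    arm = int(np.argmax(theta))           # play the apparently best arm
    reward = rng.random() < p_true[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
    payoff += reward

print(f"cumulative payoff: {payoff}, "
      f"posterior means: {alpha / (alpha + beta)}")
```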
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
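A minimal sketch of the kernel-based susceptibility estimate described above: past vent locations define a Gaussian KDE, and two weighted PDFs are combined into a total map. The coordinates, the default bandwidth rule, and the weights are illustrative; QVAST's own bandwidth selectors and Poisson-process modeling are not reproduced here.

```python
# Hedged sketch: Gaussian-kernel spatial PDFs of vent opening combined
# by weighted summation into a total susceptibility surface.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
vents = rng.normal([0, 0], [5, 3], size=(40, 2)).T      # past vents (2, n)
fissures = rng.normal([2, 1], [4, 4], size=(25, 2)).T   # second data set

kde_v = gaussian_kde(vents)       # bandwidth via Scott's rule by default
kde_f = gaussian_kde(fissures)

xx, yy = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))
grid = np.vstack([xx.ravel(), yy.ravel()])

# Weighted combination of the two PDFs into a total susceptibility map.
w_v, w_f = 0.7, 0.3
susceptibility = (w_v * kde_v(grid) + w_f * kde_f(grid)).reshape(xx.shape)
print(f"peak spatial density: {susceptibility.max():.4e} per unit area")
```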
Iodine intake in human nutrition: a systematic literature review
Gunnarsdottir, Ingibjörg; Dahl, Lisbeth
2012-01-01
The present literature review is a part of the NNR5 project with the aim of reviewing and updating the scientific basis of the 4th edition of the Nordic Nutrition Recommendations (NNR) issued in 2004. The main objective of the review is to assess the influence of different intakes of iodine at different life stages (infants, children, adolescents, adults, elderly, and during pregnancy and lactation) in order to estimate the requirement for adequate growth, development, and maintenance of health. The literature search resulted in 1,504 abstracts. Out of those, 168 papers were identified as potentially relevant. Full paper selection resulted in 40 papers that were quality assessed (A, B, or C). The grade of evidence was classified as convincing, probable, suggestive, and no conclusion. We found suggestive evidence for improved maternal iodine status and thyroid function by iodine supplementation during pregnancy. Suggestive evidence was found for the relationship between improved thyroid function (used as an indicator of iodine status) during pregnancy and cognitive function in the offspring up to 18 months of age. Moderately to severely iodine-deficient children will probably benefit from iodine supplementation or improved iodine status in order to improve their cognitive function, while only one study showed improved cognitive function following iodine supplementation in children from a mildly iodine-deficient area (no conclusion). No conclusions can be drawn related to other outcomes included in our review. There are no new data supporting changes in dietary reference values for children or adults. The rationale for increasing the dietary reference values for pregnant and lactating women in the NNR5 needs to be discussed in a broader perspective, taking iodine status of pregnant women in the Nordic countries into account. PMID:23060737
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and presents a novel detection algorithm based on QR-decomposition and the modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we report the results of experiments using the large-scale test sets provided by TRECVID 2006, which include assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
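For concreteness, the two forms that emerged as the most common best-fitting models can be written down directly; the sketch below evaluates Prelec's two-parameter function and the linear-in-log-odds form at illustrative parameter values.

```python
# The two weighting-function forms named above, written out explicitly.
import numpy as np

def prelec2(p, alpha, beta):
    """Prelec-2: w(p) = exp(-beta * (-ln p)**alpha)."""
    return np.exp(-beta * (-np.log(p)) ** alpha)

def llo(p, gamma, delta):
    """Linear in log odds: w(p) = d*p^g / (d*p^g + (1-p)^g)."""
    return delta * p**gamma / (delta * p**gamma + (1 - p) ** gamma)

p = np.linspace(0.01, 0.99, 5)
print("p       :", np.round(p, 2))
print("Prelec-2:", np.round(prelec2(p, alpha=0.65, beta=1.0), 3))
print("LLO     :", np.round(llo(p, gamma=0.6, delta=0.8), 3))
```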
NASA Astrophysics Data System (ADS)
Novikova, Tatyana; Babeyko, Andrey; Papadopoulos, Gerassimos
2017-04-01
Greece and adjacent coastal areas are characterized by a high population exposure to tsunami hazard. The Hellenic Arc is the most active geotectonic structure for the generation of earthquakes and tsunamis. We performed probabilistic tsunami hazard assessment for selected locations of Greek coastlines which are the forecasting points officially used in the tsunami warning operations by the Hellenic National Tsunami Warning Center and the NEAMTWS/IOC/UNESCO. In our analysis we considered seismic sources for tsunami generation along the western, central and eastern segments of the Hellenic Arc. We first created a synthetic catalog spanning 10,000 years for all significant earthquakes with magnitudes in the range from 6.0 to 8.5, with the real events included in this catalog. For each event in the synthetic catalog a tsunami was generated and propagated using a Boussinesq model. The probability of occurrence for each event was determined by the Gutenberg-Richter magnitude-frequency distribution. The results of our study are expressed as hazard curves and hazard maps. The hazard curves were obtained for the selected sites and present the annual probability of exceedance as a function of peak coastal tsunami amplitude. Hazard maps represent the distribution of peak coastal tsunami amplitudes corresponding to a fixed annual probability. In such forms our results can be easily compared to the ones obtained in other studies and further employed for the development of tsunami risk management plans. This research is a contribution to the EU-FP7 tsunami research project ASTARTE (Assessment, Strategy And Risk Reduction for Tsunamis in Europe), grant agreement no: 603839, 2013-10-30.
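A schematic of the hazard-curve pipeline described above, with every ingredient simplified: Gutenberg-Richter magnitudes fill a synthetic catalog, an assumed (purely illustrative) scaling law maps magnitude to peak coastal amplitude, and exceedance counts become annual probabilities. None of the parameter values come from the study.

```python
# Hedged sketch of a hazard curve from a synthetic Gutenberg-Richter
# catalog. The amplitude scaling law is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(6)
years, rate = 10_000, 0.2                 # catalog length, events/year
b, m_min, m_max = 1.0, 6.0, 8.5

n = rng.poisson(rate * years)
u = rng.random(n)                         # truncated G-R inverse transform
mags = -np.log10(10**(-b*m_min) - u*(10**(-b*m_min) - 10**(-b*m_max))) / b

amps = 10 ** (0.8 * (mags - 6.0)) * 0.05  # assumed amplitude scaling (m)

amp_grid = np.logspace(-2, 1, 50)
annual_rate = np.array([(amps >= a).sum() for a in amp_grid]) / years
annual_prob = 1.0 - np.exp(-annual_rate)  # Poisson rate -> annual probability

for a, pr in zip(amp_grid[::10], annual_prob[::10]):
    print(f"amplitude >= {a:6.2f} m: annual probability {pr:.2e}")
```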
Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior
Borrero, Carrie S.W; Borrero, John C
2008-01-01
We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
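A small simulated illustration of the comparative probability logic described above: the probability of the precursor in the 1-s interval preceding problem behavior is compared with its unconditional probability. The event streams and rates are made-up assumptions, not the study's data.

```python
# Hedged sketch: conditional vs unconditional precursor probability.
import numpy as np

rng = np.random.default_rng(7)
T = 36_000                                   # 1-s bins in a 10-h record
precursor = rng.random(T) < 0.02             # baseline precursor rate
problem = rng.random(T) < 0.005              # baseline problem rate
# Make problem behavior tend to follow a precursor by 1 s.
problem[1:] |= precursor[:-1] & (rng.random(T - 1) < 0.5)

p_uncond = precursor.mean()
preceded = problem[1:] & precursor[:-1]      # precursor in the prior bin
p_given_problem = preceded.sum() / max(problem.sum(), 1)

print(f"P(precursor)                 : {p_uncond:.3f}")
print(f"P(precursor | problem, -1 s) : {p_given_problem:.3f}")
```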
A SVM-based quantitative fMRI method for resting-state functional network detection.
Song, Xiaomu; Chen, Nan-kuei
2014-09-01
Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in the connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to variation of data characteristics across sessions and subjects, and generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, the resting-state network mapping is formulated as an outlier detection process that is implemented using a one-class support vector machine (SVM). The results are refined by using a spatial-feature domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of being functionally connected and unconnected instead of using a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data. A comparison study was also performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.
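A condensed sketch of the two-stage idea on simulated features: a one-class SVM flags candidate voxels as outliers, and a two-class SVM trained on those labels returns per-voxel probabilities in place of a fixed threshold. The prototype-selection refinement step is omitted, and all data are synthetic assumptions.

```python
# Hedged sketch: one-class SVM outlier detection followed by two-class
# SVM reclassification with per-voxel probabilities.
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(8)
background = rng.normal(0.0, 1, size=(900, 3))    # unconnected voxels
connected = rng.normal(2.5, 1, size=(100, 3))     # functionally connected
X = np.vstack([background, connected])

stage1 = OneClassSVM(nu=0.1, gamma="scale").fit(X)
outlier = stage1.predict(X) == -1                 # candidate network voxels

# Stage 2: retrain on the stage-1 labels, then compare per-voxel class
# probabilities instead of thresholding a map.
stage2 = SVC(probability=True).fit(X, outlier.astype(int))
p_connected = stage2.predict_proba(X)[:, 1]
decision = p_connected > 0.5

print(f"stage-1 outliers: {outlier.sum()}, "
      f"stage-2 connected voxels: {decision.sum()}")
```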
Falck, Ryan S; Landry, Glenn J; Best, John R; Davis, Jennifer C; Chiu, Bryan K; Liu-Ambrose, Teresa
2017-10-01
Mild cognitive impairment (MCI) represents a transition between normal cognitive aging and dementia and may represent a critical time frame for promoting cognitive health through behavioral strategies. Current evidence suggests that physical activity (PA) and sedentary behavior are important for cognition. However, it is unclear whether there are differences in PA and sedentary behavior between people with probable MCI and people without MCI or whether the relationships of PA and sedentary behavior with cognitive function differ by MCI status. The aims of this study were to examine differences in PA and sedentary behavior between people with probable MCI and people without MCI and whether associations of PA and sedentary behavior with cognitive function differed by MCI status. This was a cross-sectional study. Physical activity and sedentary behavior in adults dwelling in the community (N = 151; at least 55 years old) were measured using a wrist-worn actigraphy unit. The Montreal Cognitive Assessment was used to categorize participants with probable MCI (scores of <26/30) and participants without MCI (scores of ≥26/30). Cognitive function was indexed using the Alzheimer Disease Assessment Scale-Cognitive-Plus (ADAS-Cog Plus). Physical activity and sedentary behavior were compared based on probable MCI status, and relationships of ADAS-Cog Plus with PA and sedentary behavior were examined by probable MCI status. Participants with probable MCI (n = 82) had lower PA and higher sedentary behavior than participants without MCI (n = 69). Higher PA and lower sedentary behavior were associated with better ADAS-Cog Plus performance in participants without MCI (β = -.022 and β = .012, respectively) but not in participants with probable MCI (β < .001 for both). This study was cross-sectional and therefore could not establish whether conversion to MCI attenuated the relationships of PA and sedentary behavior with cognitive function. The diagnosis of MCI was not confirmed with a physician; therefore, this study could not conclude how many of the participants categorized as having probable MCI would actually have been diagnosed with MCI by a physician. Participants with probable MCI were less active and more sedentary. The relationships of these behaviors with cognitive function differed by MCI status; associations were found only in participants without MCI. © 2017 American Physical Therapy Association
Automated measurement of spatial preference in the open field test with transmitted lighting.
Kulikov, Alexander V; Tikhonova, Maria A; Kulikov, Victor A
2008-05-30
A new modification of the open field test was designed to improve automation of the test. The main innovations were: (1) transmitted lighting and (2) estimation of the probability of finding pixels associated with an animal in a selected region of the arena as an objective index of spatial preference. Transmitted (inverted) lighting significantly improved the contrast between an animal and the arena and made it possible to track white animals as effectively as colored ones. Probability as a measure of preference for a selected region was mathematically proved and experimentally verified. A good correlation between probability and classic indices of spatial preference (number of region entries and time spent therein) was shown. The algorithm for calculating the probability of finding pixels associated with an animal in the selected region was implemented in the EthoStudio software. Significant interstrain differences in locomotion and in central zone preference (an index of anxiety) were shown using the inverted lighting and the EthoStudio software in mice of six inbred strains. The effects of arena shape (circle or square) and the presence of a novel object in the center of the arena on open field behavior in mice were also studied.
Yu, Zhi; Xu, Bin
2016-08-25
Abundant clinical practice has shown that acupuncture therapy has some distinct advantages in the treatment of chronic functional constipation (CFC), such as faster onset of positive effect, shorter course of treatment, and long-term post-treatment effect. In the present paper, the authors review progress in clinical research on the treatment of CFC with acupuncture therapy in recent years. Results of clinical trials indicate that among the three types of constipation (slow transit, outlet obstruction and mixed type), acupuncture therapy shows a better effect for slow transit constipation by improving severity, increasing defecation frequency, reducing abdominal distension, easing patients' psychological discomfort and raising daily life activity, probably by ameliorating colonic motility, enteric nervous system function and neurotransmitter secretion (vasoactive intestinal peptide, acetylcholine, substance P, nitric oxide, etc.). Most of the chosen acupoints (ST 25, SP 15, SP 14, CV 6, BL 25, BL 23, etc.) are located in the projection region of the colon. For outlet obstruction constipation, the effect of acupuncture is relatively better for the chalasia type, although the overall efficacy is poorer. Most of the selected acupoints (GV 1, BL 32, BL 33, BL 30, etc.) are located near the pelvic floor region. In addition, the clinical therapeutic effects of acupuncture need to be confirmed by more large-sample, multicenter randomized controlled trials.
Cam, E.; Monnat, J.-Y.
2000-01-01
1. Many studies have provided evidence that first-time breeders have a lower survival, a lower probability of success, or of breeding, in the following year. Hypotheses based on reproductive costs have often been proposed to explain this. However, because of the intrinsic relationship between age and experience, the apparent inferiority of first-time breeders at the population level may result from selection, and experience may not influence performance within each individual. In this paper we address the question of phenotypic correlations between fitness components. This addresses differences in individual quality, a prerequisite for a selection process to occur. We also test the hypothesis of an influence of experience on these components while taking age and reproductive success into account: two factors likely to play a key role in a selection process. 2. Using data from a long-term study on the kittiwake, we found that first-time breeders have a lower probability of success, a lower survival and a lower probability of breeding in the next year than experienced breeders. However, neither experienced nor inexperienced breeders have a lower survival or a lower probability of breeding in the following year than birds that skipped a breeding opportunity. This suggests heterogeneity in quality among individuals. 3. Failed birds have a lower survival and a lower probability of breeding in the following year regardless of experience. This can be interpreted in the light of the selection hypothesis. The inferiority of inexperienced breeders may be linked to a higher proportion of lower-quality individuals in younger age classes. When age and breeding success are controlled for, there is no evidence of an influence of experience on survival or future breeding probability. 4. Using data from individuals whose reproductive life lasted the same number of years, we investigated the influence of experience on reproductive performance within individuals. There is no strong evidence that a process operating within individuals explains the improvement in performance observed at the population level.
Examining speed versus selection in connectivity models using elk migration as an example
Brennan, Angela; Hanks, Ephraim M.; Merkle, Jerod A.; Cole, Eric K.; Dewey, Sarah R.; Courtemanch, Alyson B.; Cross, Paul C.
2018-01-01
Context: Landscape resistance is vital to connectivity modeling and frequently derived from resource selection functions (RSFs). RSFs estimate relative probability of use and tend to focus on understanding habitat preferences during slow, routine animal movements (e.g., foraging). Dispersal and migration, however, can produce rarer, faster movements, in which case models of movement speed rather than resource selection may be more realistic for identifying habitats that facilitate connectivity. Objective: To compare two connectivity modeling approaches applied to resistance estimated from models of movement rate and resource selection. Methods: Using movement data from migrating elk, we evaluated continuous time Markov chain (CTMC) and movement-based RSF models (i.e., step selection functions [SSFs]). We applied circuit theory and shortest random path (SRP) algorithms to CTMC, SSF and null (i.e., flat) resistance surfaces to predict corridors between elk seasonal ranges. We evaluated prediction accuracy by comparing model predictions to empirical elk movements. Results: All connectivity models predicted elk movements well, but models applied to CTMC resistance were more accurate than models applied to SSF and null resistance. Circuit theory models were more accurate on average than SRP models. Conclusions: CTMC can be more realistic than SSFs for estimating resistance for fast movements, though SSFs may demonstrate some predictive ability when animals also move slowly through corridors (e.g., stopover use during migration). High null model accuracy suggests seasonal range data may also be critical for predicting direct migration routes. For animals that migrate or disperse across large landscapes, we recommend incorporating CTMC into the connectivity modeling toolkit.
Does Wyoming's Core Area Policy Protect Winter Habitats for Greater Sage-Grouse?
Smith, Kurt T; Beck, Jeffrey L; Pratt, Aaron C
2016-10-01
Conservation reserves established to protect important habitat for wildlife species are used worldwide as a wildlife conservation measure. Effective reserves must adequately protect year-round habitats to maintain wildlife populations. Wyoming's Sage-Grouse Core Area policy was established to protect breeding habitats for greater sage-grouse (Centrocercus urophasianus). Protecting only one important seasonal habitat could result in loss or degradation of other important habitats and potential declines in local populations. The purpose of our study was to identify the timing of winter habitat use and the extent to which individuals breeding in Core Areas used winter habitats, and to develop resource selection functions to assess the effectiveness of Core Areas in conserving sage-grouse winter habitats in portions of 5 Core Areas in central and north-central Wyoming during winters 2011-2015. We found that use of winter habitats occurred over a longer period than current Core Area winter timing stipulations and that a substantial amount of winter habitat outside of Core Areas was used by individuals that bred in Core Areas, particularly in smaller Core Areas. Resource selection functions for each study area indicated that sage-grouse were selecting landscapes dominated by big sagebrush and flatter topography, consistent with other research on sage-grouse winter habitat selection. The substantial portion of sage-grouse locations and predicted probability of selection during winter outside small Core Areas illustrate that winter requirements for sage-grouse are not adequately met by existing Core Areas. Consequently, further considerations for identifying and managing important winter sage-grouse habitats under Wyoming's Core Area policy are warranted.
OPTIMAL TIME-SERIES SELECTION OF QUASARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Nathaniel R.; Bloom, Joshua S.
2011-03-15
We present a novel method for the optimal selection of quasars using time-series observations in a single photometric bandpass. Utilizing the damped random walk model of Kelly et al., we parameterize the ensemble quasar structure function in Sloan Stripe 82 as a function of observed brightness. The ensemble model fit can then be evaluated rigorously for and calibrated with individual light curves with no parameter fitting. This yields a classification in two statistics, one describing the fit confidence and the other describing the probability of a false alarm, which can be tuned, a priori, to achieve high quasar detection fractions (99% completeness with default cuts), given an acceptable rate of false alarms. We establish the typical rate of false alarms due to known variable stars as ≲3% (high purity). Applying the classification, we increase the sample of potential quasars relative to those known in Stripe 82 by as much as 29%, and by nearly a factor of two in the redshift range 2.5 < z < 3, where selection by color is extremely inefficient. This represents 1875 new quasars in a 290 deg² field. The observed rates of both quasars and stars agree well with the model predictions, with >99% of quasars exhibiting the expected variability profile. We discuss the utility of the method at high redshift and in the regime of noisy and sparse data. Our time-series selection complements well independent selection based on quasar colors and has strong potential for identifying high-redshift quasars for Baryon Acoustic Oscillations and other cosmology studies in the LSST era.
NASA Technical Reports Server (NTRS)
Mah, Robert W. (Inventor)
2005-01-01
System and method for performing one or more relevant measurements at a target site in an animal body, using a probe. One or more of a group of selected internal measurements is performed at the target site, optionally combined with one or more selected external measurements, and optionally combined with one or more selected heuristic information items, in order to reduce the probable medical conditions associated with the target site to a relatively small number. One or more of the internal measurements is optionally used to navigate the probe to the target site. Neural net information processing is performed to provide a reduced set of probable medical conditions associated with the target site.
Negative values of quasidistributions and quantum wave and number statistics
NASA Astrophysics Data System (ADS)
Peřina, J.; Křepelka, J.
2018-04-01
We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.
Xu, X J; Wang, L L; Zhou, N
2016-02-23
To explore the characteristics of ecological executive function in school-aged children with idiopathic or probably symptomatic epilepsy and examine the effects of executive function on social adaptive function. A total of 51 school-aged children with idiopathic or probably symptomatic epilepsy aged 5-12 years at our hospital and 37 normal ones of the same gender, age and educational level were included. The differences in ecological executive function and social adaptive function were compared between the two groups with the Behavior Rating Inventory of Executive Function (BRIEF) and the Child Adaptive Behavior Scale; Pearson's correlation test and multiple stepwise linear regression were used to explore the impact of executive function on social adaptive function. The scores of school-aged children with idiopathic or probably symptomatic epilepsy in global executive composite (GEC), behavioral regulation index (BRI) and metacognition index (MI) of BRIEF (62±12, 58±13 and 63±12, respectively) were significantly higher than those of the control group (47±7, 44±6 and 48±8, respectively) (P<0.01). The scores of school-aged children with idiopathic or probably symptomatic epilepsy in adaptive behavior quotient (ADQ), independence, cognition and self-control (86±22, 32±17, 49±14 and 41±16, respectively) were significantly lower than those of the control group (120±12, 59±14, 59±7 and 68±10, respectively) (P<0.01). Pearson's correlation test showed that the scores of BRIEF, such as GEC, BRI, MI, inhibition, emotional control, monitoring, initiation and working memory, had significantly negative correlations with the scores of ADQ, independence and self-control (r = -0.313 to -0.741, P<0.05). Also, GEC, inhibition, MI, initiation, working memory, plan, organization and monitoring had significantly negative correlations with the score of cognition (r = -0.335 to -0.437, P<0.05). Multiple stepwise linear regression analysis showed that BRI, inhibition and working memory were closely related to the social adaptive function of school-aged children with idiopathic or probably symptomatic epilepsy. School-aged children with idiopathic or probably symptomatic epilepsy may have significant impairment of ecological executive function and reduction of social adaptive function. The aspects of BRI, inhibition and working memory in ecological executive function are significantly related to social adaptive function in school-aged children with epilepsy.
Kirova, Anna-Mariya; Bays, Rebecca B; Lagalwar, Sarita
2015-01-01
Alzheimer's disease (AD) is a progressive neurodegenerative disease marked by deficits in episodic memory, working memory (WM), and executive function. Examples of executive dysfunction in AD include poor selective and divided attention, failed inhibition of interfering stimuli, and poor manipulation skills. Although episodic deficits during disease progression have been widely studied and are the benchmark of a probable AD diagnosis, more recent research has investigated WM and executive function decline during mild cognitive impairment (MCI), also referred to as the preclinical stage of AD. MCI is a critical period during which cognitive restructuring and neuroplasticity such as compensation still occur; therefore, cognitive therapies could have a beneficial effect on decreasing the likelihood of AD progression during MCI. Monitoring performance on working memory and executive function tasks to track cognitive function may signal progression from normal cognition to MCI to AD. The present review tracks WM decline through normal aging, MCI, and AD to highlight the behavioral and neurological differences that distinguish these three stages in an effort to guide future research on MCI diagnosis, cognitive therapy, and AD prevention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halvorson, J.J.; Smith, J.L.; Bolton, H. Jr.
1995-09-01
Geostatistics are often calculated for a single variable at a time, even though many natural phenomena are functions of several variables. The objective of this work was to demonstrate a nonparametric approach for assessing the spatial characteristics of multiple-variable phenomena. Specifically, we analyzed the spatial characteristics of resource islands in the soil under big sagebrush (Artemisia tridentata Nutt.), a dominant shrub in the intermountain western USA. For our example, we defined resource islands as a function of six soil variables representing concentrations of soil resources, populations of microorganisms, and soil microbial physiological variables. By collectively evaluating the indicator transformations of these individual variables, we created a new data set, termed a multiple-variable indicator transform or MVIT. Alternate MVITs were obtained by varying the selection criteria. Each MVIT was analyzed with variography to characterize spatial continuity, and with indicator kriging to predict the combined probability of their occurrence at unsampled locations in the landscape. Simple graphical analysis and variography demonstrated spatial dependence for all individual soil variables. Maps derived from ordinary kriging of MVITs suggested that the combined probabilities for encountering zones of above-median resources were greatest near big sagebrush. 51 refs., 5 figs., 1 tab.
A Statistical Treatment of Bioassay Pour Fractions
NASA Technical Reports Server (NTRS)
Barengoltz, Jack; Hughes, David W.
2014-01-01
The binomial probability distribution is used to treat the statistics of a microbiological sample that is split into two parts, with only one part evaluated for spore count. One wishes to estimate the total number of spores in the sample based on the counts obtained from the part that is evaluated (pour fraction). Formally, the binomial distribution is recharacterized as a function of the observed counts (successes), with the total number (trials) an unknown. The pour fraction is the probability of success per spore (trial). This distribution must be renormalized in terms of the total number. Finally, the new renormalized distribution is integrated and mathematically inverted to yield the maximum estimate of the total number as a function of a desired level of confidence.
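A hedged numerical reading of that inversion: given an observed count k and pour fraction f, scan total counts N and return the largest N whose binomial probability of producing at most k observed spores exceeds 1 − confidence. This is a straightforward frequentist rendering, not necessarily the report's exact renormalization.

```python
# Hedged sketch: upper estimate of total spores by scanning the
# binomial CDF in the unknown total count N.
from scipy.stats import binom

def max_total(k: int, pour_fraction: float, confidence: float = 0.95) -> int:
    """Largest total N with P(X <= k | N, pour_fraction) > 1 - confidence."""
    n = k
    while binom.cdf(k, n + 1, pour_fraction) > 1.0 - confidence:
        n += 1
    return n

print(max_total(k=5, pour_fraction=0.5))   # -> 17 at 95% confidence
```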
The database search problem: a question of rational decision making.
Gittelson, S; Biedermann, A; Bozza, S; Taroni, F
2012-10-10
This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
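The decision-theoretic core reduces to comparing expected losses. The toy function below individualizes only when the expected loss of doing so falls below that of refraining, given the probability that the selected person is the source; the loss values are illustrative assumptions, not the paper's.

```python
# Toy expected-loss comparison for the individualization decision.
def decide(p_source: float, loss_false_individualization: float = 100.0,
           loss_missed_individualization: float = 1.0) -> str:
    exp_loss_individualize = (1 - p_source) * loss_false_individualization
    exp_loss_refrain = p_source * loss_missed_individualization
    return ("individualize" if exp_loss_individualize < exp_loss_refrain
            else "do not individualize")

for p in (0.9, 0.99, 0.999):
    print(p, decide(p))   # only the highest probability warrants the call
```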
Delay, probability, and social discounting in a public goods game.
Jones, Bryan A; Rachlin, Howard
2009-01-01
A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual social, delay, and probability discount functions for a hypothetical $75 reward; participants also indicated how much of an initial $100 endowment they would contribute to a common investment in a public good. Steepness of discounting correlated, across participants, among all three discount dimensions. However, only social and probability discounting were correlated with the public-good contribution; high public-good contributors were more altruistic and also less risk averse than low contributors. Experiment 2 obtained social discount functions with hypothetical $75 rewards and delay discount functions with hypothetical $1,000 rewards, as well as public-good contributions. The results replicated those of Experiment 1; steepness of the two forms of discounting correlated with each other across participants but only social discounting correlated with the public-good contribution. Most participants in Experiment 2 predicted that the average contribution would be lower than their own contribution.
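All three discounting dimensions above share the hyperbolic form v = V/(1 + kD), with D a delay, odds-against, or social distance. The sketch below fits k to hypothetical social-distance crossover points for the $75 reward; the data points are made up for illustration.

```python
# Hedged sketch: fit a hyperbolic social discount function to
# hypothetical crossover values.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(D, k):
    """Discounted value of a $75 reward at social distance D."""
    return 75.0 / (1.0 + k * D)

social_distance = np.array([1, 2, 5, 10, 20, 50, 100])
crossover = np.array([70, 65, 55, 45, 30, 15, 8])     # hypothetical data

k_fit, _ = curve_fit(hyperbolic, social_distance, crossover, p0=[0.05])
print(f"fitted social discount rate k = {k_fit[0]:.3f}")
```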
Peng, Bin; Zhu, Dianwen; Ander, Bradley P; Zhang, Xiaoshuai; Xue, Fuzhong; Sharp, Frank R; Yang, Xiaowei
2013-01-01
The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge exists when dealing with the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses face difficulty incorporating correlational, structural, or functional structures amongst the molecular measures. For microarray gene expression data, we first summarize solutions in dealing with 'large p, small n' problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point of view of systems biology, iBVS enables the user to directly target the joint effects of multiple genes and pathways in a hierarchical modeling diagram to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilistic and biological interpretations. Both simulated data and a set of microarray data in predicting stroke status are used in validating the performance of iBVS in a probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiencies are also discussed.
Rebollar, Eria A; Antwis, Rachael E; Becker, Matthew H; Belden, Lisa K; Bletz, Molly C; Brucker, Robert M; Harrison, Xavier A; Hughey, Myra C; Kueneman, Jordan G; Loudon, Andrew H; McKenzie, Valerie; Medina, Daniel; Minbiole, Kevin P C; Rollins-Smith, Louise A; Walke, Jenifer B; Weiss, Sophie; Woodhams, Douglas C; Harris, Reid N
2016-01-01
Emerging infectious diseases in wildlife are responsible for massive population declines. In amphibians, chytridiomycosis caused by Batrachochytrium dendrobatidis, Bd, has severely affected many amphibian populations and species around the world. One promising management strategy is probiotic bioaugmentation of antifungal bacteria on amphibian skin. In vivo experimental trials using bioaugmentation strategies have had mixed results, and therefore a more informed strategy is needed to select successful probiotic candidates. Metagenomic, transcriptomic, and metabolomic methods, colloquially called "omics," are approaches that can better inform probiotic selection and optimize selection protocols. The integration of multiple omic data using bioinformatic and statistical tools and in silico models that link bacterial community structure with bacterial defensive function can allow the identification of species involved in pathogen inhibition. We recommend using 16S rRNA gene amplicon sequencing and methods such as indicator species analysis, the Kolmogorov-Smirnov Measure, and co-occurrence networks to identify bacteria that are associated with pathogen resistance in field surveys and experimental trials. In addition to 16S amplicon sequencing, we recommend approaches that give insight into symbiont function such as shotgun metagenomics, metatranscriptomics, or metabolomics to maximize the probability of finding effective probiotic candidates, which can then be isolated in culture and tested in persistence and clinical trials. An effective mitigation strategy to ameliorate chytridiomycosis and other emerging infectious diseases is necessary; the advancement of omic methods and the integration of multiple omic data provide a promising avenue toward conservation of imperiled species.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
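For readers who want these quantities today without the Fortran, most of the distributions and special functions named above have direct equivalents in SciPy (shown with arbitrary argument values; the pairing with the report's routines is approximate):

```python
import numpy as np
from scipy import stats, special

x = 1.5
print("normal CDF       ", stats.norm.cdf(x))
print("chi-square sf    ", stats.chi2.sf(x, df=4))
print("gamma ppf        ", stats.gamma.ppf(0.95, a=2.0))
print("Pearson III cdf  ", stats.pearson3.cdf(x, skew=0.5))
print("Weibull cdf      ", stats.weibull_min.cdf(x, c=1.3))
print("Student t sf     ", stats.t.sf(x, df=10))
print("Snedecor F sf    ", stats.f.sf(x, dfn=3, dfd=12))
print("Kolmogorov D     ", stats.kstwobign.sf(x))   # asymptotic K-S statistic
print("Bessel I_0       ", special.i0(x))
print("log-gamma        ", special.gammaln(x))
print("error function   ", special.erf(x))
print("exp. integral E1 ", special.exp1(x))
rng = np.random.default_rng(0)                       # uniform / normal generators
print("uniform, normal  ", rng.random(), rng.normal())
```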
Katsube, Takayuki; Wajima, Toshihiro; Ishibashi, Toru; Arjona Ferreira, Juan Camilo; Echols, Roger
2017-01-01
Cefiderocol, a novel parenteral siderophore cephalosporin, exhibits potent efficacy against most Gram-negative bacteria, including carbapenem-resistant strains. Since cefiderocol is excreted primarily via the kidneys, this study was conducted to develop a population pharmacokinetics (PK) model to determine dose adjustment based on renal function. Population PK models were developed based on data for cefiderocol concentrations in plasma, urine, and dialysate with a nonlinear mixed-effects model approach. Monte-Carlo simulations were conducted to calculate the probability of target attainment (PTA) for the fraction of time during the dosing interval where the free drug concentration in plasma exceeds the MIC (fT>MIC) for an MIC range of 0.25 to 16 μg/ml. For the simulations, dose regimens were selected to compare cefiderocol exposure among groups with different levels of renal function. The developed models well described the PK of cefiderocol for each renal function group. A dose of 2 g every 8 h with 3-h infusions provided >90% PTA for 75% fT>MIC for an MIC of ≤4 μg/ml for patients with normal renal function, while a more frequent dose (every 6 h) could be used for patients with augmented renal function. A reduced dose and/or extended dosing interval was selected for patients with impaired renal function. A supplemental dose immediately after intermittent hemodialysis was proposed for patients requiring intermittent hemodialysis. The PK of cefiderocol could be adequately modeled, and the modeling-and-simulation approach suggested dose regimens based on renal function, ensuring drug exposure with adequate bactericidal effect. Copyright © 2016 American Society for Microbiology.
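The published covariate model is not given in the abstract; the sketch below only shows the generic shape of such a PTA calculation: a one-compartment intravenous-infusion model with log-normal between-subject variability in clearance, integrated numerically to obtain %fT>MIC over a steady-state dosing interval. Every PK value here (V, CL, free fraction) is an invented placeholder, not a cefiderocol parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj = 2000
dose, t_inf, tau = 2000.0, 3.0, 8.0        # mg, h, h (2 g q8h, 3-h infusion)
V, fu = 18.0, 0.42                          # volume (L) and free fraction -- placeholders
CL = 5.0 * np.exp(rng.normal(0.0, 0.3, n_subj))   # L/h, log-normal variability

mics = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
dt, t_end = 0.02, 6 * tau                   # run 6 intervals to approach steady state
C = np.zeros(n_subj)
above = np.zeros((len(mics), n_subj))
n_obs = 0
for t in np.arange(0.0, t_end, dt):
    rate = dose / t_inf if (t % tau) < t_inf else 0.0
    C += dt * (rate / V - (CL / V) * C)     # simple Euler step of dC/dt
    if t >= t_end - tau:                    # score only the last interval
        above += fu * C[None, :] > mics[:, None]
        n_obs += 1
ft_mic = above / n_obs                      # per-subject fT>MIC
for mic, pta in zip(mics, (ft_mic >= 0.75).mean(axis=1)):
    print(f"MIC {mic:5.2f} mg/L   PTA(75% fT>MIC) = {100 * pta:5.1f}%")
```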
Sampling in epidemiological research: issues, hazards and pitfalls.
Tyrer, Stephen; Heyman, Bob
2016-04-01
Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficient number of those selected actually complete a survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research.
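A toy simulation makes the editorial's central point concrete: if the propensity to respond is correlated with the quantity being measured, a convenience sample is biased while a simple random sample is not. The response model below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
age = rng.normal(45, 15, N).clip(18, 90)
# Assume (for illustration only) that younger people respond to
# text/email recruitment more often than older people.
p_respond = 1 / (1 + np.exp((age - 40) / 8))
convenience = age[rng.random(N) < p_respond][:1000]
random_sample = rng.choice(age, 1000, replace=False)
print(f"population mean age  {age.mean():.1f}")
print(f"random sample        {random_sample.mean():.1f}")
print(f"convenience sample   {convenience.mean():.1f}  (biased low)")
```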
Singular solution of the Feller diffusion equation via a spectral decomposition
NASA Astrophysics Data System (ADS)
Gan, Xinjun; Waxman, David
2015-01-01
Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.
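A quick numerical illustration of the boundary delta function: integrating a Feller-type diffusion dX = sqrt(X) dW by Euler-Maruyama with irreversible absorption at zero, a finite fraction of paths accumulates exactly at the boundary, and that fraction is the weight the spectral solution assigns to the Dirac delta. The normalization and parameters below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, dt, t_end, x0 = 50_000, 2e-3, 2.0, 0.5
x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
for _ in range(int(t_end / dt)):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x[alive] += np.sqrt(x[alive]) * dw[alive]   # dX = sqrt(X) dW
    hit = alive & (x <= 0.0)
    x[hit] = 0.0                                # absorb irreversibly at the boundary
    alive &= ~hit
# For this normalization the absorbed mass at time t is exp(-2*x0/t) ~ 0.61 here.
print(f"mass at x=0 (delta-function weight) ~ {1.0 - alive.mean():.3f}")
print(f"remaining smooth density integrates to ~ {alive.mean():.3f}")
```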
Cullen, Kate; Talbot, Daniel; Gillmor, Julie; McGrath, Colin; OʼDonnell, Rory; Baily-Scanlan, Maria; Broderick, Julie
2017-07-01
Anxiety and depression are prevalent comorbidities in people with chronic respiratory diseases (CRDs). This study sought to quantify the influence of varying degrees of anxiety and depression on functional performance and disease impact in a population with CRDs following pulmonary rehabilitation (PR) intervention. The Hospital Anxiety and Depression Scale (HADS), the 6-Minute Walk Test (6MWT), and the Chronic Obstructive Pulmonary Disease Assessment Test (CAT) were assessed pre- and post-PR. Participants were categorized into 3 groups (None, Probable, and Present) based on their level of anxiety and depression. Functional performance and disease impact outcomes were compared pre- and post-PR. A total of 134 program completers (72 males, 62 females; mean age = 67.8 years) were included. Significant improvements in functional performance with regard to 6MWT scores were observed across all groups postintervention (P < .05). The Present group, in both the anxiety and depression domains, failed to reach a minimal clinically important difference postintervention. The Probable and Present groups achieved a significant improvement in CAT scores postintervention (P < .05). This study showed that symptoms of anxiety and depression in people with CRDs were significantly related to lower exercise tolerance levels and higher levels of disease impact. People with increased levels of anxiety and depression have the potential to significantly improve disease impact outcomes post-PR. The results demonstrated that the detection and treatment of anxiety and depression symptoms in people with CRDs are likely to be clinically important.
Yin, Guangjun; Xu, Hongliang; Xiao, Shuyang; Qin, Yajuan; Li, Yaxuan; Yan, Yueming; Hu, Yingkao
2013-10-03
WRKY genes encode one of the most abundant groups of transcription factors in higher plants, and their members regulate important biological processes such as growth, development, and responses to biotic and abiotic stresses. Although the soybean genome sequence has been published, functional studies on soybean genes still lag behind those of other species. We identified a total of 133 WRKY members in the soybean genome. According to structural features of their encoded proteins and to the phylogenetic tree, the soybean WRKY family could be classified into three groups (groups I, II, and III). A majority of WRKY genes (76.7%; 102 of 133) were segmentally duplicated and 13.5% (18 of 133) of the genes were tandemly duplicated. This pattern was not apparent in Arabidopsis or rice. The transcriptome atlas revealed notable differential expression in either transcript abundance or in expression patterns under normal growth conditions, which indicated wide functional divergence in this family. Furthermore, some critical amino acids were detected using DIVERGE v2.0 in specific comparisons, suggesting that these sites have contributed to functional divergence among groups or subgroups. In addition, site model and branch-site model analyses of positive Darwinian selection (PDS) showed that different selection regimes could have affected the evolution of these groups. Sites with high probabilities of having been under PDS were found in groups I, IIc, IIe, and III. In this work, all the WRKY genes, which were generated mainly through segmental duplication, were identified in the soybean genome. Moreover, differential expression and functional divergence of the duplicated WRKY genes were two major features of this family throughout their evolutionary history. Positive selection analysis revealed that the different groups have different evolutionary rates. Together, these results contribute to a detailed understanding of the molecular evolution of the WRKY gene family in soybean.
Galactic Cosmic Ray Event-Based Risk Model (GERM) Code
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.
2013-01-01
This software describes the transport and energy deposition of the passage of galactic cosmic rays in astronaut tissues during space travel, or heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density of events in order to correlate DNA and oxidative damage with non-targeted effects (signaling, bystander effects, etc.). These effects are ignored or impossible to treat in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of beam lines, shielding of target samples, and sample holders; and estimation of basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated. Similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles. Additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.
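One of the simplest biophysical outputs listed, the Poisson distribution of particle traversals for a specified cellular area, is easy to reproduce: for fluence F (particles per μm²) and nuclear area A (μm²), the hit count is Poisson with mean F·A. The numbers below are illustrative, not NSRL beam data.

```python
from math import exp, factorial

F = 0.05   # particle fluence, 1/um^2 (illustrative)
A = 100.0  # nuclear area, um^2 (illustrative)
mean_hits = F * A
for k in range(6):
    p = exp(-mean_hits) * mean_hits**k / factorial(k)
    print(f"P({k} traversals) = {p:.3f}")
print(f"P(>=1 traversal) = {1 - exp(-mean_hits):.3f}")
```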
NASA Astrophysics Data System (ADS)
Tsuzuki, Satori; Yanagisawa, Daichi; Nishinari, Katsuhiro
2018-04-01
This study proposes a model of a totally asymmetric simple exclusion process on a single-channel lane with functions of site assignments along the pit lane. The model attempts to insert a new particle at the leftmost site with a certain probability, randomly selecting one of the empty sites in the pit lane and reserving it for the particle. Thereafter, the particle is directed to stop at that site exactly once during its travel. Recently, the system was found to show a self-deflection effect, in which the site usage distribution spontaneously biases toward the leftmost site, and the throughput becomes maximal when the site usage distribution is slightly biased toward the rightmost site. Our exact analysis describes this deflection effect and shows good agreement with simulations.
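The pit-lane assignment rules are the paper's contribution and are not sketched here; the code below implements only the single-channel open-boundary TASEP substrate on which the model is built (random-sequential updates, entry probability alpha, exit probability beta, deterministic bulk hops). All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
L, alpha, beta, steps = 100, 0.3, 0.9, 200_000
lane = np.zeros(L, dtype=bool)          # True = site occupied
throughput = 0
for _ in range(steps):
    i = rng.integers(-1, L)             # -1 selects the entry move
    if i == -1:
        if not lane[0] and rng.random() < alpha:
            lane[0] = True              # inject at the leftmost site
    elif i == L - 1:
        if lane[i] and rng.random() < beta:
            lane[i] = False             # particle exits on the right
            throughput += 1
    elif lane[i] and not lane[i + 1]:
        lane[i], lane[i + 1] = False, True   # bulk hop to the right
print(f"mean density {lane.mean():.2f}, exits per update attempt {throughput / steps:.3f}")
```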
Dynamics of morphological evolution in experimental Escherichia coli populations.
Cui, F; Yuan, B
2016-08-30
Here, we applied a two-stage clonal expansion model of morphological (cell-size) evolution to a long-term evolution experiment with Escherichia coli. Using this model, we derived the incidence function of the appearance of cell-size stability, the waiting time until this morphological stability, and the conditional and unconditional probabilities of morphological stability. After assessing the parameter values, we verified that the calculated waiting time was consistent with the experimental results, demonstrating the effectiveness of the two-stage model. According to the relative contributions of parameters to the incidence function and the waiting time, cell-size evolution is largely determined by the promotion rate, i.e., the clonal expansion rate of selectively advantageous organisms. This rate plays a prominent role in the evolution of cell size in experimental populations, whereas all other evolutionary forces were found to be less influential.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
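The described two-step recipe translates almost line for line into code: color white Gaussian noise in the Fourier domain with the square root of the target PSD, then map the Gaussian marginal through its CDF and the inverse CDF of the target amplitude distribution. The specific PSD and target PDF below are arbitrary choices; note, as the authors imply by calling this an engineering approach, that the nonlinear mapping distorts the PSD somewhat.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 256
white = rng.normal(size=(n, n))

# Step 1: impose a target PSD (here an illustrative Gaussian low-pass)
# by filtering white Gaussian noise in the Fourier domain.
fx = np.fft.fftfreq(n)
kx, ky = np.meshgrid(fx, fx)
psd = np.exp(-(kx**2 + ky**2) / (2 * 0.05**2))
field = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
field /= field.std()

# Step 2: transform the colored Gaussian marginal to the desired
# amplitude PDF via u = Phi(g), x = F^{-1}(u); target here: unit exponential.
u = np.clip(stats.norm.cdf(field), 1e-12, 1 - 1e-12)
target = stats.expon.ppf(u)
print(f"mean {target.mean():.2f}, std {target.std():.2f}  (both ~1 for unit exponential)")
```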
[Determinants of pride and shame: outcome, expected success and attribution].
Schützwohl, A
1991-01-01
In two experiments we investigated the relationship between subjective probability of success and pride and shame. According to Atkinson (1957), pride (the incentive of success) is an inverse linear function of the probability of success, shame (the incentive of failure) being a negative linear function. Attribution theory predicts an inverse U-shaped relationship between subjective probability of success and pride and shame. The results presented here are at variance with both theories: Pride and shame do not vary with subjective probability of success. However, pride and shame are systematically correlated with internal attributions of action outcome.
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is used, at first, to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and angular-dependent externally incident flux upon the medium from one side and with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results of the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
Feed-forward frequency offset estimation for 32-QAM optical coherent detection.
Xiao, Fei; Lu, Jianing; Fu, Songnian; Xie, Chenhui; Tang, Ming; Tian, Jinwen; Liu, Deming
2017-04-17
Due to the non-rectangular distribution of the constellation points, traditional fast Fourier transform based frequency offset estimation (FFT-FOE) is no longer suitable for 32-QAM signals. Here, we report a modified FFT-FOE technique that selects and digitally amplifies the inner QPSK ring of 32-QAM after adaptive equalization, which we define as QPSK-selection assisted FFT-FOE. Simulation results show that no FOE error occurs with an FFT size of only 512 symbols when the signal-to-noise ratio (SNR) is above 17.5 dB using the proposed technique, whereas the error probability of the traditional FFT-FOE scheme for 32-QAM remains unacceptably high. Finally, the proposed FOE scheme functions well for a 10 Gbaud dual-polarization (DP) 32-QAM signal at the 20% forward error correction (FEC) threshold of BER = 2×10^-2, under the scenario of back-to-back (B2B) transmission.
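The inner ring of 32-QAM is QPSK, so once those symbols are selected the classic fourth-power FFT estimator applies: raising QPSK to the fourth power strips the modulation and leaves a spectral line at four times the frequency offset. A minimal baseband sketch of that final stage (QPSK only; symbol rate, offset, and noise level are arbitrary, and the estimate is quantized to the FFT bin spacing divided by 4):

```python
import numpy as np

rng = np.random.default_rng(5)
n, baud, f_off = 512, 10e9, 120e6          # symbols, symbol rate, true offset (Hz)
bits = rng.integers(0, 4, n)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))    # unit-ring QPSK symbols
t = np.arange(n) / baud
rx = qpsk * np.exp(2j * np.pi * f_off * t)            # apply frequency offset
rx += 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

spec = np.fft.fft(rx**4)                   # 4th power removes QPSK modulation
freqs = np.fft.fftfreq(n, d=1 / baud)
f_est = freqs[np.argmax(np.abs(spec))] / 4 # tone sits at 4 x offset
print(f"true {f_off/1e6:.1f} MHz, estimated {f_est/1e6:.1f} MHz")
```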
Buelow, Melissa T; Frakey, Laura L; Grace, Janet; Friedman, Joseph H
2014-02-01
Impairments in executive functioning are commonly found in Parkinson's disease (PD); however, the research into risky decision making has been mixed. The present study sought to investigate three potential hypotheses: difficulty learning the task probabilities, levodopa equivalent dose (LED), and the presence of apathy. Twenty-four individuals with idiopathic PD and 13 healthy controls completed the Frontal Systems Behavior Scale to assess current apathy, the Iowa Gambling Task, and the Balloon Analog Risk Task (BART). Results indicated that individuals with PD selected more from Deck B, a disadvantageous deck. However, with an additional set of trials, participants with PD and apathy selected more from the most risky deck (Deck A). Apathy was not related to the BART, and LED was not related to either task. Results indicate that apathy is associated with decision-making in PD, and providing additional learning trials can improve decision-making in PD without apathy.
A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.
Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco
2005-02-01
Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n_g ≈ 100-150 genes). In the present investigation, a large random European population sample (average n_g ≈ 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.
User's Manual for Space Debris Surfaces (SD_SURF)
NASA Technical Reports Server (NTRS)
Elfer, N. C.
1996-01-01
A unique collection of computer codes, Space Debris Surfaces (SD_SURF), have been developed to assist in the design and analysis of space debris protection systems. SD_SURF calculates and summarizes a vehicle's vulnerability to space debris as a function of impact velocity and obliquity. An SD_SURF analysis will show which velocities and obliquities are most likely to cause a penetration. This determination can help the analyst select a shield design which is best suited to the predominant penetration mechanism. The analysis also indicates the most suitable parameters for development or verification testing. The SD_SURF programs offer the option of either FORTRAN programs or Microsoft EXCEL spreadsheets and macros. The FORTRAN programs work with BUMPERII version 1.2a or 1.3 (Cosmic released). The EXCEL spreadsheets and macros can be used independently or with selected output from the SD_SURF FORTRAN programs.
A Novel Property of DNA – As a Bioflotation Reagent in Mineral Processing
Vasanthakumar, Balasubramanian; Ravishankar, Honnavar; Subramanian, Sankaran
2012-01-01
Environmental concerns regarding the use of certain chemicals in the froth flotation of minerals have led investigators to explore biological entities as potential substitutes for the reagents in vogue. Despite the fact that several microorganisms have been used for the separation of a variety of mineral systems, a detailed characterization of the biochemical molecules involved therein has not been reported so far. In this investigation, the selective flotation of sphalerite from a sphalerite-galena mineral mixture has been achieved using the cellular components of Bacillus species. The key constituent primarily responsible for the flotation of sphalerite has been identified as DNA, which functions as a bio-collector. Furthermore, using reconstitution studies, the obligatory need for the presence of non-DNA components as bio-depressants for galena has been demonstrated. A probable model involving these entities in the selective flotation of sphalerite from the mineral mixture has been discussed. PMID:22768298
Analysis of various descent trajectories for a hypersonic-cruise, cold-wall research airplane
NASA Technical Reports Server (NTRS)
Lawing, P. L.
1975-01-01
The probable descent operating conditions for a hypersonic air-breathing research airplane were examined. Descents selected were cruise angle of attack, high dynamic pressure, high lift coefficient, turns, and descents with drag brakes. The descents were parametrically exercised and compared from the standpoint of cold-wall (367 K) aircraft heat load. The descent parameters compared were total heat load, peak heating rate, time to landing, time to end of heat pulse, and range. Trends in total heat load as a function of cruise Mach number, cruise dynamic pressure, angle-of-attack limitation, pull-up g-load, heading angle, and drag-brake size are presented.
On the unity of activity in galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowan-Robinson, M.
1977-05-01
A scheme is presented which unites quasars, radio galaxies, N galaxies, and Seyfert galaxies into a single picture of activity in galaxies. Probability functions are given for optical and radio cores, and extended radio sources (in the case of ellipticals), for both spirals and ellipticals. Activity occurs in galaxies of all luminosities, but its strength is made proportional to galaxy luminosity. It is assumed that there is dust surrounding the optical cores, to explain the strong infrared emission in Seyferts. Quasars may, in this picture, occur in both spirals and ellipticals, and in fact most optically selected QSOs are predicted to be in spirals.
Yurtkuran, Alkın; Emel, Erdal
2016-01-01
The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process in an adaptive manner. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinct characteristics are employed using predetermined search probabilities. By implementing the new solution acceptance rule and the probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature.
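A minimal sketch of the acceptance rule alone, under the assumption (ours, not necessarily the paper's exact schedule) that the probability of accepting a worse candidate decays nonlinearly with iteration count; shown on a toy sphere-function search rather than inside a full ABC loop:

```python
import numpy as np

rng = np.random.default_rng(6)

def accept(f_old, f_new, it, max_it, p0=0.3, power=2.0):
    """Always accept improvements; accept worse candidates with a
    probability that decays nonlinearly from p0 to ~0 (illustrative)."""
    if f_new <= f_old:                       # minimization
        return True
    return rng.random() < p0 * (1 - it / max_it) ** power

f = lambda x: float(np.sum(x**2))            # toy sphere objective
x = rng.normal(size=5)
fx = f(x)
for it in range(1000):
    cand = x + rng.normal(scale=0.3, size=5)
    fc = f(cand)
    if accept(fx, fc, it, 1000):
        x, fx = cand, fc
print(f"best objective ~ {fx:.4f}")
```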
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
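The objective being maximized is easy to estimate by brute force: fix a budget allocation, simulate sample means, and count how often the observed top-m set matches the true one. The toy means, variances, and budgets below are invented; a real OCBA procedure would choose the allocation sequentially.

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([5.0, 4.6, 4.4, 4.3, 3.0, 2.0])   # true means; true top-2 = {0, 1}
sigma, m, reps = 2.0, 2, 20_000
true_top = set(range(m))

def p_correct_selection(n_per_design):
    """Monte Carlo estimate of P(observed top-m == true top-m)."""
    hits = 0
    for _ in range(reps):
        xbar = np.array([rng.normal(mu_i, sigma / np.sqrt(n_i))
                         for mu_i, n_i in zip(mu, n_per_design)])
        hits += set(np.argsort(-xbar)[:m]) == true_top
    return hits / reps

equal = np.full(6, 20)                           # total budget 120
focused = np.array([30, 30, 30, 20, 5, 5])       # same budget, shifted to the boundary
print("equal   P(CS) ~", p_correct_selection(equal))
print("focused P(CS) ~", p_correct_selection(focused))
```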
Gu, Yong; Angelaki, Dora E; DeAngelis, Gregory C
2014-07-01
Trial-by-trial covariations between neural activity and perceptual decisions (quantified by choice probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined both by selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays important roles.
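Choice probability itself is just the area under the ROC curve comparing a neuron's response distributions conditioned on the animal's two choices, which equals the Mann-Whitney U statistic divided by the product of trial counts. A minimal computation on synthetic firing rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
rates_pref = rng.normal(22, 6, 120)   # spikes/s on preferred-choice trials (synthetic)
rates_null = rng.normal(20, 6, 130)   # spikes/s on the other-choice trials (synthetic)

u, _ = stats.mannwhitneyu(rates_pref, rates_null, alternative="two-sided")
cp = u / (len(rates_pref) * len(rates_null))   # ROC area = U / (n1 * n2)
print(f"choice probability = {cp:.3f}  (0.5 = no choice-related signal)")
```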
NASA Astrophysics Data System (ADS)
El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.
2006-11-01
Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.
Forced to remember: when memory is biased by salient information.
Santangelo, Valerio
2015-04-15
Recent decades have seen rapidly growing attempts to understand the key factors involved in the internal memory representation of the external world. Visual salience has been found to provide a major contribution in predicting the probability for an item/object embedded in a complex setting (i.e., a natural scene) to be encoded and then remembered later on. Here I review the existing literature highlighting the impact of perceptual-related salience (based on low-level sensory features) and semantics-related salience (based on high-level knowledge) on short-term memory representation, along with the neural mechanisms underpinning the interplay between these factors. The available evidence reveals that both perceptual- and semantics-related factors affect attention selection mechanisms during the encoding of natural scenes. By biasing internal memory representation, both perceptual and semantic factors increase the probability of remembering high-saliency items to the detriment of low-saliency ones. The available evidence also highlights an interplay between these factors, with a reduced impact of perceptual-related salience in biasing memory representation as a function of the increasing availability of semantics-related salient information. The neural mechanisms underpinning this interplay involve the activation of different portions of the frontoparietal attention control network. Ventral regions support the assignment of selection/encoding priorities based on high-level semantics, while the involvement of dorsal regions reflects priority assignment based on low-level sensory features. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Vio, R.; Vergès, C.; Andreani, P.
2017-08-01
The matched filter (MF) is one of the most popular and reliable techniques for detecting signals of known structure whose amplitude is smaller than the level of the contaminating noise. Under the assumption of stationary Gaussian noise, the MF maximizes the probability of detection subject to a constant probability of false detection or false alarm (PFA). This property relies upon a priori knowledge of the position of the searched signals, which is usually not available. Recently, it has been shown that when applied in its standard form, the MF may severely underestimate the PFA. As a consequence, the statistical significance of features that belong to noise is overestimated and the resulting detections are actually spurious. For this reason, an alternative method of computing the PFA has been proposed that is based on the probability density function (PDF) of the peaks of an isotropic Gaussian random field. In this paper we further develop this method. In particular, we discuss the statistical meaning of the PFA and show that, although useful as a preliminary step in a detection procedure, it is not able to quantify the actual reliability of a specific detection. For this reason, a new quantity is introduced, called the specific probability of false alarm (SPFA), which is able to carry out this computation. We show how this method works in targeted simulations and apply it to a few interferometric maps taken with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Australia Telescope Compact Array (ATCA). We select a few potential new point sources and assign an accurate detection reliability to these sources.
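The known-position case the abstract starts from is compact enough to verify numerically: with stationary Gaussian noise, the normalized MF output is standard normal under the null, so the PFA at threshold γ is the Gaussian upper tail. The paper's point is that a blind search over a whole map invalidates this simple tail; the demo below covers only the known-position baseline (all numbers illustrative).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, sigma = 64, 1.0
template = np.exp(-0.5 * ((np.arange(n) - n / 2) / 3.0) ** 2)   # known signal shape

gamma = 3.0
pfa_analytic = stats.norm.sf(gamma)            # Gaussian upper tail at threshold

trials = 100_000
noise = rng.normal(0.0, sigma, size=(trials, n))
# normalized matched-filter output at the (assumed known) signal position
mf = noise @ template / (sigma * np.linalg.norm(template))
print(f"analytic PFA  {pfa_analytic:.2e}")
print(f"empirical PFA {np.mean(mf > gamma):.2e}")
```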
Fixation Probability in a Two-Locus Model by the Ancestral Recombination–Selection Graph
Lessard, Sabin; Kermany, Amir R.
2012-01-01
We use the ancestral influence graph (AIG) for a two-locus, two-allele selection model in the limit of a large population size to obtain an analytic approximation for the probability of ultimate fixation of a single mutant allele A. We assume that this new mutant is introduced at a given locus into a finite population in which a previous mutant allele B is already segregating with a wild type at another linked locus. We deduce that the fixation probability increases as the recombination rate increases if allele A is either in positive epistatic interaction with B and allele B is beneficial or in no epistatic interaction with B and then allele A itself is beneficial. This holds at least as long as the recombination fraction and the selection intensity are small enough and the population size is large enough. In particular this confirms the Hill–Robertson effect, which predicts that recombination renders more likely the ultimate fixation of beneficial mutants at different loci in a population in the presence of random genetic drift even in the absence of epistasis. More importantly, we show that this is true from weak negative epistasis to positive epistasis, at least under weak selection. In the case of deleterious mutants, the fixation probability decreases as the recombination rate increases. This supports Muller’s ratchet mechanism to explain the accumulation of deleterious mutants in a population lacking recombination. PMID:22095080
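A useful single-locus baseline for these two-locus results is the Wright-Fisher fixation probability of a new mutant under selection, which the diffusion approximation gives as (1 - e^(-2s))/(1 - e^(-2Ns)) for a haploid population of size N. The sketch below checks that by simulation; recombination and epistasis, the paper's actual subject, are deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(10)
N, s, reps = 500, 0.01, 20_000          # haploid Wright-Fisher, one locus
fixed = 0
for _ in range(reps):
    p = 1.0 / N                          # a single new mutant copy
    while 0.0 < p < 1.0:
        w = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection step
        p = rng.binomial(N, w) / N                   # drift (binomial sampling)
    fixed += p == 1.0
kimura = (1 - np.exp(-2 * s)) / (1 - np.exp(-2 * N * s))
print(f"simulated fixation prob {fixed / reps:.4f}, diffusion approx {kimura:.4f}")
```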
SEXUAL SELECTION THROUGH FEMALE CHOICE IN LAWES' PAROTIA, A LEK-MATING BIRD OF PARADISE.
Pruett-Jones, S G; Pruett-Jones, M A
1990-05-01
We studied sexual selection in Lawes' Parotia, a lek-mating bird of paradise, during 1981-1983 in Papua New Guinea. There was a high variance in mating success among males, with fewer than half of the individuals mating in any one year. This variance was independent of male-male interactions and disruptions. A role of female choice in sexual selection was suggested by the patterns of female visitation to courts and statistical correlations across males between phenotypic traits and mating success. Females repeatedly visited most males in their home ranges and began visiting males up to six weeks before mating. In one or more years, six aspects of male behavior and one morphological variable were positively correlated with mating success, but the probability values were not significant using a simultaneous inference test. Calculation of combined probability values across all three years revealed that one aspect of male display behavior, the probability of display, positively and significantly influenced mating status. The probability of display was also significantly correlated with relative mating success among males. Females showed strong fidelity to mates, both within and between seasons. Display sites of male Lawes' Parotia are variably dispersed, but mating success did not differ for grouped and solitary males. These data confirm an important role of female choice in sexual selection in birds of paradise but also suggest that female choice may be unrelated to the process of lek-initiation in this species. © 1990 The Society for the Study of Evolution.
Randomness and diversity matter in the maintenance of the public resources
NASA Astrophysics Data System (ADS)
Liu, Aizhi; Zhang, Yanling; Chen, Xiaojie; Sun, Changyin
2017-03-01
Most previous models of the public goods game assume only two possible strategies, i.e., investing all or nothing. The real-life situation is rarely all or nothing. In this paper, we consider multiple strategies adopted in a well-mixed population, where each strategy represents an investment toward producing the public goods. Past efforts have found that randomness matters in the evolution of fairness in the ultimatum game. In a framework involving no other mechanisms, we study how diversity and randomness influence the average investment of the population, defined as the mean value of all individuals' strategies. The level of diversity is increased by increasing the strategy number, and the level of randomness is increased by increasing the mutation probability, or by decreasing the population size or the selection intensity. We find that a higher level of diversity and a higher level of randomness lead to larger average investment and favor the evolution of cooperation more. Under weak selection, the average investment changes very little with the strategy number, the population size, and the mutation probability. Under strong selection, the average investment changes very little with the strategy number and the population size, but changes a lot with the mutation probability. Under intermediate selection, the average investment increases significantly with the strategy number and the mutation probability, and decreases significantly with the population size. These findings are meaningful for studying how to maintain public resources.
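This kind of dynamics can be sketched with assumptions of ours filled in where the abstract is silent (Fermi pairwise-comparison updates with selection intensity beta, payoffs estimated over a few random groups); the long-run average investment can then be read off for different beta, mutation probability, population size, and strategy number.

```python
import numpy as np

rng = np.random.default_rng(11)
Z, g, r, beta, mu = 100, 5, 3.0, 1.0, 0.01   # population, group size, multiplier, ...
levels = 11
inv = rng.integers(0, levels, Z) / (levels - 1)   # investments 0.0, 0.1, ..., 1.0

def payoff(i, samples=8):
    """Average public-goods payoff of player i over random groups (sketch)."""
    tot = 0.0
    for _ in range(samples):
        others = rng.choice(Z, g - 1, replace=False)   # approximate grouping
        pool = inv[others].sum() + inv[i]
        tot += r * pool / g - inv[i]
    return tot / samples

trace = []
for step in range(10_000):
    i = rng.integers(Z)
    if rng.random() < mu:                             # mutation: random strategy
        inv[i] = rng.integers(0, levels) / (levels - 1)
    else:                                             # Fermi imitation of j
        j = rng.integers(Z)
        if rng.random() < 1.0 / (1.0 + np.exp(-beta * (payoff(j) - payoff(i)))):
            inv[i] = inv[j]
    if step >= 5_000:
        trace.append(inv.mean())
print(f"long-run average investment ~ {np.mean(trace):.3f}")
```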
Using a Calendar and Explanatory Instructions to Aid Within-Household Selection in Mail Surveys
ERIC Educational Resources Information Center
Stange, Mathew; Smyth, Jolene D.; Olson, Kristen
2016-01-01
Although researchers can easily select probability samples of addresses using the U.S. Postal Service's Delivery Sequence File, randomly selecting respondents within households for surveys remains challenging. Researchers often place within-household selection instructions, such as the next or last birthday methods, in survey cover letters to…
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were obtained; their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinogradskiy, Y; Miyasaka, Y; Kadoya, N
Purpose: CT-ventilation is an exciting new imaging modality that uses 4DCTs to calculate lung ventilation. Studies have proposed to use 4DCT-ventilation imaging for functional avoidance radiotherapy, which implies designing treatment plans to spare functional portions of the lung. Although retrospective studies have been performed to evaluate the dosimetric gains to functional lung, no work has been done to translate the dosimetric gains to an improvement in pulmonary toxicity. The purpose of our work was to evaluate the potential reduction in toxicity for 4DCT-ventilation based functional avoidance. Methods: 70 lung cancer patients with 4DCT imaging were used for the study. CT-ventilation maps were calculated using the patient's 4DCT, deformable image registrations, and a density-change-based algorithm. Radiation pneumonitis was graded using imaging and clinical information. Log-likelihood methods were used to fit a normal-tissue-complication-probability (NTCP) model predicting grade 2+ radiation pneumonitis as a function of doses (mean and V20) to functional lung (>15% ventilation). For 20 patients a functional plan was generated that reduced dose to functional lung while meeting RTOG 0617-based constraints. The NTCP model was applied to the functional plan to determine the reduction in toxicity with functional planning. Results: The mean dose to functional lung was 16.8 and 17.7 Gy with the functional and clinical plans, respectively. The corresponding grade 2+ pneumonitis probability was 26.9% with the clinically-used plan and 24.6% with the functional plan (8.5% reduction). The V20-based grade 2+ pneumonitis probability was 23.7% with the clinically-used plan and reduced to 19.6% with the functional plan (20.9% reduction). Conclusion: Our results revealed a reduction of 9-20% in complication probability with functional planning. To our knowledge this is the first study to apply complication probability to convert dosimetric results to toxicity improvement. The results presented in the current work provide seminal data for prospective clinical trials in functional avoidance. YV discloses funding from State of Colorado. TY discloses National Lung Cancer Partnership; Young Investigator Research grant.
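The NTCP fit described (maximum likelihood of grade 2+ pneumonitis as a function of dose to functional lung) has a compact generic form: a logistic dose-response fitted by minimizing the negative Bernoulli log-likelihood. The cohort below is synthetic and the D50/slope values are invented, but the abstract's 16.8 vs 17.7 Gy comparison slots directly into the fitted curve.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(12)
# Synthetic cohort: mean dose to functional lung (Gy) and pneumonitis outcome.
dose = rng.uniform(5, 30, 70)
true_d50, true_k = 28.0, 6.0
pneum = rng.random(70) < 1 / (1 + np.exp(-(dose - true_d50) / true_k))

def neg_loglik(theta):
    d50, k = theta
    p = 1 / (1 + np.exp(-(dose - d50) / k))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(pneum * np.log(p) + (~pneum) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[20.0, 5.0], method="Nelder-Mead")
d50, k = fit.x
ntcp = lambda d: 1 / (1 + np.exp(-(d - d50) / k))
print(f"fitted D50 = {d50:.1f} Gy, slope k = {k:.1f}")
print(f"NTCP at 16.8 Gy vs 17.7 Gy: {ntcp(16.8):.3f} vs {ntcp(17.7):.3f}")
```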
Functional-diversity indices can be driven by methodological choices and species richness.
Poos, Mark S; Walker, Steven C; Jackson, Donald A
2009-02-01
Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated about whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive, and that patterns in sensitivity were related to alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species into two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.
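FD in the dendrogram-based sense is the total branch length of the functional-trait dendrogram, so its dependence on methodological choices is easy to exhibit: the same trait matrix yields different FD values under different clustering (linkage) choices. Trait data below are random, and the branch length is computed from the SciPy linkage matrix.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(13)
n_species = 12
traits = rng.normal(size=(n_species, 4))      # 12 species x 4 functional traits

def total_branch_length(Z, n):
    """Sum of all branch lengths of the dendrogram encoded by linkage matrix Z."""
    height = np.zeros(2 * n - 1)              # merge height of each node (leaves: 0)
    height[n:] = Z[:, 2]
    tbl = 0.0
    for a, b, h, _ in Z:
        tbl += (h - height[int(a)]) + (h - height[int(b)])
    return tbl

d = pdist(traits)
for method in ("single", "complete", "average", "ward"):
    Z = linkage(d, method=method)
    print(f"{method:9s} FD (total branch length) = {total_branch_length(Z, n_species):.2f}")
```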
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Lava-flow hazard on the SE flank of Mt. Etna (Southern Italy)
NASA Astrophysics Data System (ADS)
Crisci, G. M.; Iovine, G.; Di Gregorio, S.; Lupiano, V.
2008-11-01
A method for mapping lava-flow hazard on the SE flank of Mt. Etna (Sicily, Southern Italy) by applying the Cellular Automata model SCIARA-fv is described, together with the techniques employed for calibration and validation through a parallel Genetic Algorithm. The study area is partly urbanised; it has repeatedly been affected by lava flows from flank eruptions in historical time, and shows evidence of a dominant SSE-trending fracture system. Moreover, a dormant deep-seated gravitational deformation, associated with a larger volcano-tectonic phenomenon, affects the whole south-eastern flank of the volcano. The Etnean 2001 Mt. Calcarazzi lava-flow event has been selected for model calibration, while validation has been performed by considering the 2002 Linguaglossa and the 1991-93 Valle del Bove events — suitable data for back analysis being available for these recent eruptions. Quantitative evaluation of the simulations, with respect to the real events, has been performed by means of a couple of fitness functions, which consider either the areas affected by the lava flows, or areas and eruption duration. Sensitivity analyses are in progress for thoroughly evaluating the role of parameters, topographic input data, and mesh geometry on model performance, though preliminary results already indicate good model robustness. In order to evaluate lava-flow hazard in the study area, a regular grid of 340 possible vents, uniformly covering the study area and located at 500 m intervals, has been hypothesised. For each vent, a statistically significant number of simulations has been planned, by adopting combinations of durations, lava volumes, and effusion-rate functions, selected by considering available volcanological data. Performed simulations have been stored in a GIS environment for successive analyses and map elaboration. Probabilities of activation, empirically based on past behaviour of the volcano, can be assigned to each vent of the grid, by considering its elevation, location with respect to the volcanic edifice, and proximity to its main weakness zones. Similarly, different probabilities can be assigned to the simulated event types (combinations of durations and lava volumes, and the effusion-rate functions considered). In such a way, an implicit assumption is made that the volcanic style will not dramatically change in the near future. Depending on the adopted criteria for probability evaluation, different maps of lava-flow hazard can be compiled, by taking into account both the overlapping of the simulated lava flows and their assumed probabilities, and by finally ranking computed values into a few relative classes. The adopted methodology allows changes in lava-flow hazard to be explored rapidly, as a function of varying probabilities of occurrence, by simply re-processing the database of the simulations stored in the GIS. For Civil Protection purposes, if a vent is expected to open imminently in a given sector of the volcano, re-processing may help in forecasting, in real time, the areas presumably affected, and thus in better managing the eruptive crisis. Moreover, further simulations can be added to the GIS database whenever new event types are recognised to be of interest. In this paper, three examples of maps of lava-flow hazard for the SE flank of Mt.
Etna are presented: the first has been realised without assigning any probability to the performed simulations, by simply counting the frequencies of lava flows affecting each site; in the second map, information on past eruptions is taken into account, and probabilities are empirically attributed to each simulation based on location of vents and types of eruption; in the third one, a stronger role is ascribed to the main SSE-trending weakness zone, which crosses the study area between Nicolosi and Trecastagni, associated with the right flank of the above-cited deep-seated deformation. Despite being only preliminary (as based on a sub-set of the overall planned simulations), the maps clearly depict the most hazardous sectors of the volcano, which have been identified by applying the coupled modelling-GIS method here described.
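To make the map-compilation step concrete, the following is a minimal sketch (not the authors' SCIARA-fv/GIS code) of how stored simulation footprints and assumed probabilities could be combined and ranked into a few relative hazard classes; all names and the quantile-based ranking are assumptions:

```python
# Minimal sketch of compiling a relative hazard map from stored
# simulations: per-cell overlap of footprints weighted by assumed
# probabilities, then quantile ranking into a few classes.
import numpy as np

def compile_hazard_map(footprints, probabilities, grid_shape, n_classes=5):
    """footprints: boolean arrays, one per simulation;
    probabilities: assumed probability weight of each simulation."""
    hazard = np.zeros(grid_shape)
    for mask, p in zip(footprints, probabilities):
        hazard += p * mask                     # probability-weighted overlap
    classes = np.zeros(grid_shape, dtype=int)  # 0 = never affected
    hit = hazard > 0
    if hit.any():
        edges = np.quantile(hazard[hit], np.linspace(0, 1, n_classes + 1)[1:-1])
        classes[hit] = 1 + np.searchsorted(edges, hazard[hit])
    return hazard, classes

# Re-running with different vent/event probabilities only repeats this
# call over the stored database, which is what makes updates fast.
```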
NASA Astrophysics Data System (ADS)
Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.
2013-04-01
During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network, and the network-accessibility implications of this, across a region. This was performed in the open-source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km2 test region in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and previously determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse-gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. The number of landslide areas selected for each triggered-event iteration was chosen to give an average density of 1 landslide km-2, i.e. 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted or rejected according to the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and a length-to-width ratio. Landslide intersections with roads were then assessed, and indices such as the location, number and size of road blockages were recorded. The GRASS GIS model was run 1000 times in a Monte-Carlo-type simulation. Initial results show that for a landslide triggering event of 1 landslide km-2 over a 79 km2 region with 404 km of road, the number of road blockages ranges from 6 to 17, resulting in one road blockage every 24-67 km of road. The average length of road blocked was 33 m. As we progress with model development and more sophisticated network analysis, we believe this semi-stochastic modelling approach will help civil protection agencies estimate the probability of road-network damage (road-block number and extent) resulting from landslide-triggering events of different magnitudes.
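As an illustration of the semi-stochastic dropping step, the sketch below samples landslide areas from a three-parameter inverse-gamma distribution and accepts or rejects random locations against a susceptibility map. The grid, the parameter values and the acceptance rule are placeholders, not the authors' GRASS GIS implementation:

```python
# Sketch of one iteration of the semi-stochastic dropping: sample
# landslide areas from a three-parameter inverse-gamma pdf, then accept
# or reject random locations against the susceptibility map.
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(42)
susceptibility = rng.random((100, 100))   # placeholder 0-1 susceptibility map
n_landslides = 79                         # ~1 landslide km^-2 over 79 km^2

# Placeholder inverse-gamma parameters (shape, location, scale; km^2),
# chosen to mimic a power-law tail with a rollover at small areas.
areas_km2 = invgamma.rvs(a=1.4, loc=-1.32e-4, scale=1.28e-3,
                         size=n_landslides, random_state=rng)
areas_km2 = np.maximum(areas_km2, 1e-6)   # guard against draws near loc

placed = []
for area in areas_km2:
    while True:                           # rejection sampling on location
        i = rng.integers(susceptibility.shape[0])
        j = rng.integers(susceptibility.shape[1])
        if rng.random() < susceptibility[i, j]:
            placed.append((i, j, area))
            break
# Intersections of these footprints with the road network would then be
# assessed and blockage indices recorded.
```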
Chrzanowski, Frank
2008-01-01
Two numerical methods, Decision Analysis (DA) and Potential Problem Analysis (PPA), are presented as alternative selection methods to the logical method presented in Part I. In DA, properties are weighted and outcomes are scored. The weighted scores for each candidate are totaled, and final selection is based on the totals; higher scores indicate better candidates. In PPA, potential problems are assigned a seriousness factor, and test outcomes are used to define the probability of occurrence. The seriousness-probability products are totaled, and forms with the lowest scores are preferred. DA and PPA had never been compared with the logical-elimination method. Additional data were available for two forms of McN-5707, providing complete preformulation data for five candidate forms. Weight and seriousness factors (independent variables) were obtained from a survey of experienced formulators. Scores and probabilities (dependent variables) were provided independently by Preformulation. The rankings of the five candidate forms, best to worst, were similar for all three methods. These results validate the applicability of DA and PPA to candidate form selection. DA and PPA are particularly applicable where there are many candidate forms and each form has some degree of unfavorable properties.
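A minimal sketch of the two scoring schemes follows; the weights, scores, seriousness factors and probabilities are invented for demonstration (in the study they came from formulator surveys and preformulation data):

```python
# Toy illustration of DA and PPA candidate-form scoring.
def decision_analysis(weights, scores):
    """Higher weighted-score totals indicate better candidate forms."""
    return {form: sum(w * s for w, s in zip(weights, vals))
            for form, vals in scores.items()}

def potential_problem_analysis(seriousness, probabilities):
    """Lower seriousness-times-probability totals are preferred."""
    return {form: sum(s * p for s, p in zip(seriousness, vals))
            for form, vals in probabilities.items()}

da = decision_analysis([0.5, 0.3, 0.2], {"Form A": [8, 6, 9], "Form B": [7, 9, 5]})
ppa = potential_problem_analysis([3, 5], {"Form A": [0.2, 0.1], "Form B": [0.4, 0.05]})
best_da = max(da, key=da.get)    # DA: highest total wins
best_ppa = min(ppa, key=ppa.get) # PPA: lowest total wins
```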
Kallio, Eva R.; Koskela, Esa; Lonn, Eija
2017-01-01
The loci arginine vasopressin receptor 1a (avpr1a) and oxytocin receptor (oxtr) have evolutionarily conserved roles in vertebrate social and sexual behaviour. Allelic variation at a microsatellite locus in the 5′ regulatory region of these genes is associated with fitness in the bank vole Myodes glareolus. Given the low frequency of long and short alleles at these microsatellite loci in wild bank voles, we used breeding trials to determine whether selection acts against long and short alleles. Female bank voles with intermediate length avpr1a alleles had the highest probability of breeding, while male voles whose avpr1a alleles were very different in length had reduced probability of breeding. Moreover, there was a significant interaction between male and female oxtr genotypes, where potential breeding pairs with dissimilar length alleles had reduced probability of breeding. These data show how genetic variation at microsatellite loci associated with avpr1a and oxtr is associated with fitness, and highlight complex patterns of selection at these loci. More widely, these data show how stabilizing selection might act on allele length frequency distributions at gene-associated microsatellite loci. PMID:29237850
Maximizing Information Diffusion in the Cyber-physical Integrated Network †
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Our living environment is now embedded with smart objects, such as smart sensors, smart watches and smart phones. Their abundant sensing, communication and computation capabilities integrate cyberspace and physical space, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects is selected as forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed algorithm for maximizing the probability of information diffusion (DMPID) in the cyber-physical integrated network. Unlike previous approaches, which consider only the size of the CDS, DMPID also considers the information-spread probability, which depends on the weight of links. To weaken the effects of excessively weighted links, we also present an optimization strategy that properly balances the two factors. Extensive simulation results show that DMPID can nearly double the information diffusion probability while keeping a reasonably sized selection with low overhead in different distributed networks. PMID:26569254
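The following is a simplified, centralized caricature of the selection idea (DMPID itself is distributed): a connected dominating set is grown greedily, scoring each candidate by the spread probability on links to newly covered neighbours. The graph, the weights and the scoring rule are assumptions for illustration only:

```python
# Simplified, centralized caricature of weighted-CDS selection; edge
# attribute 'p' stands in for a link's information-spread probability.
import networkx as nx

def greedy_weighted_cds(G):
    start = max(G, key=G.degree)                 # seed at the highest-degree node
    cds = {start}
    covered = {start} | set(G[start])
    while covered != set(G):
        frontier = [v for v in G if v not in cds and any(u in cds for u in G[v])]
        def gain(v):                             # weight of newly covered links
            return sum(G[v][u].get("p", 1.0) for u in set(G[v]) - covered)
        best = max(frontier, key=gain)
        cds.add(best)                            # stays connected: best touches cds
        covered |= {best} | set(G[best])
    return cds

G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 0.9), (1, 2, 0.4), (2, 3, 0.7), (1, 3, 0.5), (3, 4, 0.8)], weight="p")
print(greedy_weighted_cds(G))
```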
Accounting for Selection Bias in Studies of Acute Cardiac Events.
Banack, Hailey R; Harper, Sam; Kaufman, Jay S
2018-06-01
In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse-probability-of-censoring-weighted model. In unweighted multivariable analyses, the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the weighted model were substantially greater than the unweighted, adjusted estimates across all risk-factor categories. These results demonstrate the magnitude of selection bias due to pre-hospital mortality and its impact on estimates of the effect of CVD risk factors on mortality. Moreover, they highlight the utility of this method for addressing a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
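A hedged sketch of the two-step weighting procedure on simulated data follows; the variable names, the censoring model and the use of freq_weights are illustrative simplifications, not the ARIC analysis:

```python
# Two-step IPCW sketch: (1) model the probability of being observed
# (surviving to hospital), (2) fit a weighted log-binomial model for
# mortality among the observed subjects.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({"risk_factors": rng.integers(0, 6, n)})
p_obs = 1.0 / (1.0 + np.exp(-(1.5 - 0.3 * df.risk_factors)))  # P(not censored)
df["observed"] = rng.random(n) < p_obs
df["died"] = rng.random(n) < (0.10 + 0.03 * df.risk_factors)

X = sm.add_constant(df.risk_factors)
cens = sm.GLM(df.observed.astype(float), X,
              family=sm.families.Binomial()).fit()
w = 1.0 / cens.predict(X)          # inverse probability of censoring weights

obs = df.observed.to_numpy()
rr = sm.GLM(df.died[obs].astype(float), X[obs],
            family=sm.families.Binomial(link=sm.families.links.Log()),
            freq_weights=np.asarray(w)[obs]).fit()
print(np.exp(rr.params))           # exponentiated coefficients ~ risk ratios
```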
5 CFR 532.215 - Establishments included in regular appropriated fund surveys.
Code of Federal Regulations, 2010 CFR
2010-01-01
... in surveys shall be selected under standard probability sample selection procedures. In areas with... establishment list drawn under statistical sampling procedures. [55 FR 46142, Nov. 1, 1990] ...
Selection of a cardiac surgery provider in the managed care era.
Shahian, D M; Yip, W; Westcott, G; Jacobson, J
2000-11-01
Many health planners promote the use of competition to contain cost and improve quality of care. Using a standard econometric model, we examined the evidence for "value-based" cardiac surgery provider selection in eastern Massachusetts, where there is significant competition and managed care penetration. McFadden's conditional logit model was used to study cardiac surgery provider selection among 6952 patients and eight metropolitan Boston hospitals in 1997. Hospital predictor variables included beds, cardiac surgery case volume, objective clinical and financial performance, reputation (percent out-of-state referrals, cardiac residency program), distance from patient's home to hospital, and historical referral patterns. Subgroup analyses were performed for each major payer category. Distance from patient's home to hospital (odds ratio 0.90; P =.000) and the historical referral pattern from each patient's hometown (z = 45.305; P =.000) were important predictors in all models. A cardiac surgery residency enhanced the probability of selection (odds ratio 5.25; P =.000), as did percent out-of-state referrals (odds ratio 1.10; P =.001). Higher mortality rates were associated with decreased probability of selection (odds ratio 0.51; P =.027), but higher length of stay was paradoxically associated with greater probability (odds ratio 1.72; P =.000). Total hospital costs were irrelevant (odds ratio 1.00; P =.179). When analyzed by payer subgroup, Medicare patients appeared to select hospitals with both low mortality (odds ratio 0.43; P =.176) and short length of stay (odds ratio 0.76; P =.213), although the results did not achieve statistical significance. The commercial managed care subgroup exhibited the least "value-based" behavior. The odds ratio for length of stay was the highest of any group (odds ratio = 2.589; P =.000) and there was a subset of hospitals for which higher mortality was actually associated with greater likelihood of selection. The observable determinants of cardiac surgery provider selection are related to hospital reputation, historical referral patterns, and patient proximity, not objective clinical or cost performance. The paradoxic behavior of commercial managed care probably results from unobserved choice factors that are not primarily based on objective provider performance.
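A minimal sketch of McFadden's conditional logit on simulated data illustrates the model structure (each patient chooses among eight hospitals whose attributes enter a linear utility); the attributes, coefficients and sample size are invented placeholders, not the study's data:

```python
# McFadden conditional-logit sketch: maximize the log-likelihood of
# observed hospital choices given per-choice attributes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_patients, n_hosp, n_attr = 500, 8, 3
X = rng.normal(size=(n_patients, n_hosp, n_attr))   # per-choice attributes
true_beta = np.array([-1.0, -0.5, 0.8])             # e.g. distance, mortality, reputation
utility = X @ true_beta + rng.gumbel(size=(n_patients, n_hosp))
choice = utility.argmax(axis=1)                     # observed hospital choices

def neg_loglik(beta):
    v = X @ beta
    v -= v.max(axis=1, keepdims=True)               # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_patients), choice].sum()

beta_hat = minimize(neg_loglik, np.zeros(n_attr)).x
odds_ratios = np.exp(beta_hat)   # comparable in spirit to the reported odds ratios
```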
Optimized Vertex Method and Hybrid Reliability
NASA Technical Reports Server (NTRS)
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
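For reference, the basic vertex method that the OVM refines can be sketched as below: at each alpha-cut the fuzzy inputs become intervals, and the response interval is bounded by function values at the interval vertices (which assumes the extrema occur at vertices). The OVM's optimization step, which prunes function evaluations, is omitted, and the example function and intervals are placeholders:

```python
# Plain vertex method at a single alpha-cut.
import itertools

def vertex_method(f, cuts):
    """cuts: per-variable (low, high) intervals at a given alpha level.
    Assumes f attains its extrema at the interval vertices."""
    values = [f(*v) for v in itertools.product(*cuts)]
    return min(values), max(values)

# Response interval of a placeholder function at one alpha-cut; repeating
# this over alpha levels traces out the response membership function.
lo, hi = vertex_method(lambda a, b: a * b + a ** 2, [(1.0, 2.0), (0.5, 1.5)])
```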
Redel, P; Bublak, P; Sorg, C; Kurz, A; Förstl, H; Müller, H J; Schneider, W X; Perneczky, R; Finke, K
2012-01-01
Visual selective attention was assessed with a partial-report task in patients with probable Alzheimer's disease (AD), amnestic mild cognitive impairment (MCI), and healthy elderly controls. Based on Bundesen's "theory of visual attention" (TVA), two parameters were derived: top-down control of attentional selection, representing task-related attentional weighting for prioritizing relevant visual objects, and spatial distribution of attentional weights across the left and right hemifields. Compared with controls, MCI patients showed significantly reduced top-down controlled selection, which deteriorated further in AD patients. Moreover, attentional weighting was significantly unbalanced across hemifields in MCI and tended to be more lateralized in AD. Across MCI and AD patients, carriers of the apolipoprotein E ε4 allele (ApoE4) displayed a leftward spatial bias that was more pronounced the younger the ApoE4-positive patients were and the earlier the disease onset. These results indicate that impaired top-down control may be linked to early dysfunction of fronto-parietal networks. An early temporo-parietal interhemispheric asymmetry might cause a pathological spatial bias that is associated with the ApoE4 genotype and may therefore serve as an early cognitive marker of incipient AD. Copyright © 2012 Elsevier Inc. All rights reserved.
Emergence and Evolution of Hominidae-Specific Coding and Noncoding Genomic Sequences.
Saber, Morteza Mahmoudi; Babarinde, Isaac Adeyemi; Hettiarachchi, Nilmini; Saitou, Naruya
2016-07-12
Family Hominidae, which includes humans and great apes, is recognized for unique complex social behavior and intellectual abilities. Despite the growing body of genome data, however, the genomic origin of its phenotypic uniqueness has remained elusive. Clade-specific genes and highly conserved noncoding sequences (HCNSs) are among the strongest evolutionary candidates for driving clade-specific characters and phenotypes. On this premise, we analyzed whole-genome sequences along with gene orthology data retrieved from major DNA databases to find Hominidae-specific (HS) genes and HCNSs. We discovered that Down syndrome critical region 4 (DSCR4) is the only experimentally verified gene uniquely present in Hominidae. DSCR4 has no structural homology to any known protein and was inferred to have emerged in several steps through LTR/ERV1 and LTR/ERVL retrotransposition and transversion. Using genomic distance as a neutral-evolution threshold, we identified 1,658 HS HCNSs. Polymorphism coverage and derived-allele-frequency analysis showed that these HCNSs are under purifying selection, indicating that they may harbor important functions. They are overrepresented in promoters/untranslated regions and in close proximity to genes involved in sensory perception of sound and in developmental processes, and they also showed a significantly lower nucleosome occupancy probability. Interestingly, many ancestral sequences of the HS HCNSs showed very high evolutionary rates. This suggests that new functions emerged through some kind of positive selection, after which purifying selection began to operate to preserve these functions. © The Author(s) 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Martínez-Castilla, León Patricio; Alvarez-Buylla, Elena R.
2003-01-01
Gene duplication is a substrate of evolution. However, the relative importance of positive selection versus relaxation of constraints in the functional divergence of gene copies is still under debate. Plant MADS-box genes encode transcriptional regulators key in various aspects of development and have undergone extensive duplications to form a large family. We recovered 104 MADS sequences from the Arabidopsis genome. Bayesian phylogenetic trees recover type II lineage as a monophyletic group and resolve a branching sequence of monophyletic groups within this lineage. The type I lineage is comprised of several divergent groups. However, contrasting gene structure and patterns of chromosomal distribution between type I and II sequences suggest that they had different evolutionary histories and support the placement of the root of the gene family between these two groups. Site-specific and site-branch analyses of positive Darwinian selection (PDS) suggest that different selection regimes could have affected the evolution of these lineages. We found evidence for PDS along the branch leading to flowering time genes that have a direct impact on plant fitness. Sites with high probabilities of having been under PDS were found in the MADS and K domains, suggesting that these played important roles in the acquisition of novel functions during MADS-box diversification. Detected sites are targets for further experimental analyses. We argue that adaptive changes in MADS-domain protein sequences have been important for their functional divergence, suggesting that changes within coding regions of transcriptional regulators have influenced phenotypic evolution of plants. PMID:14597714
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
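Schematically, the construction referred to above replaces the Fisher information matrix in the classical Jeffreys rule with the Bures metric; the display below is a sketch of the standard definitions, not a result of the paper:

```latex
% Classical Jeffreys prior from the Fisher information matrix I(\theta),
% and its quantum analogue from the Bures metric g^{B}(\theta):
p_{J}(\theta) \propto \sqrt{\det I(\theta)}
\quad\longrightarrow\quad
p_{J}^{Q}(\theta) \propto \sqrt{\det g^{B}(\theta)} .
```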
Malešević, Jovana; Štrbac, Matija; Isaković, Milica; Kojić, Vladimir; Konstantinović, Ljubica; Vidaković, Aleksandra; Dedijer Dujović, Suzana; Kostić, Miloš; Keller, Thierry
2017-11-01
The goal of this study was to investigate surface motor activation zones and their temporal variability using an advanced multi-pad functional electrical stimulation system. With this system, motor responses are elicited through the concurrent activation of electrode-matrix pads, collectively termed "virtual electrodes" (VEs), with appropriate stimulation parameters. We observed the VEs used to produce selective wrist, finger, and thumb extension movements in 20 therapy sessions of 12 hemiplegic stroke patients. The VEs producing these three selective movements were created manually on the ergonomic multi-pad electrode by experienced clinicians, based on visual inspection of the muscle responses. Individual results indicated that changes in VE configuration were required in each session for all patients and that overlap in joint movements was evident between some VEs. However, by analyzing the group data, we defined the probability distribution over the electrode surface for the three VEs of interest. Furthermore, through Bayesian logic we obtained preferred stimulation zones that are in accordance with our previously reported, heuristically obtained results. We also analyzed the number of active pads and the stimulation amplitudes for these three VEs. The presented results provide a basis for an automated electrode calibration algorithm built on a priori knowledge, or a starting point for the manual selection of stimulation points. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
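As a toy illustration of building a probability distribution over electrode pads and refining it with Bayesian logic, the sketch below updates a per-pad posterior across sessions; the pad count, the likelihood values and the simulated session data are invented placeholders, not the study's analysis:

```python
# Toy Bayesian update of a probability map over electrode pads.
import numpy as np

n_pads = 24
posterior = np.full(n_pads, 1.0 / n_pads)          # uniform prior over pads

def update(prior, in_successful_ve):
    """in_successful_ve: boolean mask of pads used in a successful VE."""
    likelihood = np.where(in_successful_ve, 0.8, 0.2)  # placeholder likelihood
    post = prior * likelihood
    return post / post.sum()

rng = np.random.default_rng(3)
for session_mask in rng.random((20, n_pads)) < 0.3:    # 20 simulated sessions
    posterior = update(posterior, session_mask)
# High-posterior pads indicate preferred stimulation zones that could seed
# an automated calibration algorithm or manual pad selection.
```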
Blaimer, Bonnie B.; Schmitt, Thomas
2017-01-01
Cuticular hydrocarbons (CHCs) cover the cuticles of virtually all insects, serving as a waterproofing agent and as a communication signal. The causes for the high CHC variation between species, and the factors influencing CHC profiles, are scarcely understood. Here, we compare CHC profiles of ant species from seven biogeographic regions, searching for physiological constraints and for climatic and biotic selection pressures. Molecule length constrained CHC composition: long-chain profiles contained fewer linear alkanes, but more hydrocarbons with disruptive features in the molecule. This is probably owing to selection on the physiology to build a semi-fluid cuticular layer, which is necessary for waterproofing and communication. CHC composition also depended on the precipitation in the ants' habitats. Species from wet climates had more alkenes and fewer dimethyl alkanes than those from drier habitats, which can be explained by different waterproofing capacities of these compounds. By contrast, temperature did not affect CHC composition. Mutualistically associated (parabiotic) species possessed profiles highly distinct from non-associated species. Our study is, to our knowledge, the first to show systematic impacts of physiological, climatic and biotic factors on quantitative CHC composition across a global, multi-species dataset. We demonstrate how they jointly shape CHC profiles, and advance our understanding of the evolution of this complex functional trait in insects. PMID:28298343
van Son, Dana; Schalbroeck, Rik; Angelidis, Angelos; van der Wee, Nic J A; van der Does, Willem; Putman, Peter
2018-05-21
Spontaneous EEG theta/beta ratio (TBR) probably marks prefrontal cortical (PFC) executive control and its regulation of attentional threat-bias. Caffeine at moderate doses may strengthen executive control through increased PFC catecholamine action, depending on basal PFC function. The aim of this study was to test whether caffeine affects threat-bias, moderated by baseline frontal TBR and trait anxiety. A pictorial emotional Stroop task was used to assess threat-bias in forty female participants in a cross-over, double-blind study after placebo and 200 mg caffeine. At baseline and after placebo, comparable relations were observed for negative pictures: high TBR was related to low threat-bias in low trait-anxious people. Caffeine had opposite effects on threat-bias in low trait-anxious people with low versus high TBR. This further supports TBR as a marker of executive control and highlights the importance of taking baseline executive function into account when studying the effects of caffeine on executive functions. Copyright © 2018 Elsevier B.V. All rights reserved.
Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.
2016-01-01
Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerable dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both the mTPI and BOIN. These trends were more pronounced with increasing number of dose levels. PMID:27435150
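The sketch below illustrates the two summary measures in simplified form. The accuracy index follows the general recipe described in the text (distances between true toxicity and target, weighted by selections or allocations); the exact published formula may differ, and the toxicity curve and selection counts are placeholders:

```python
# Sketch of percentage of correct selection (PCS) and a generic
# accuracy index for dose-finding simulations.
import numpy as np

true_tox = np.array([0.05, 0.12, 0.30, 0.45, 0.60])  # placeholder toxicity curve
target = 0.30
mtd = int(np.argmin(np.abs(true_tox - target)))      # true MTD (index 2 here)

def pcs(selected):
    """Fraction of simulated trials selecting the true MTD."""
    return float(np.mean(np.asarray(selected) == mtd))

def accuracy_index(counts):
    """1 = all selections/allocations at the MTD; 0 = uniform spread."""
    counts = np.asarray(counts, dtype=float)
    d = np.abs(true_tox - target)                    # distance from target
    return 1.0 - len(counts) * (d * counts).sum() / (d.sum() * counts.sum())

picks = [2, 1, 2, 2, 3, 2, 2, 0, 2, 2]               # placeholder MTD picks
print(pcs(picks), accuracy_index(np.bincount(picks, minlength=len(true_tox))))
```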
Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function
Clark, Joseph D.; Laufenberg, Jared S.; Davidson, Maria; Murrow, Jennifer L.
2015-01-01
Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. Successful dispersals for males were dependent on natural land cover in private ownership. The addition of hypothetical 1,000-m- or 3,000-m-wide corridors between the 4 study areas had minimal effects on connectivity among subpopulations. For females, our model suggested that habitat between subpopulations would probably have to be permanently occupied for demographic rescue to occur. Thus, the establishment of stepping-stone populations, such as the reintroduced population that we studied, may be a more effective conservation measure than long corridors without a population presence in between.
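A hedged sketch of the simulation idea follows: a step selection function built from conditional-logistic coefficients weights the candidate steps of a correlated random walk. The coefficients, landscape covariates and step geometry are invented placeholders, not the fitted Louisiana model:

```python
# SSF-weighted correlated random walk sketch.
import numpy as np

rng = np.random.default_rng(11)
beta = np.array([-0.8, -0.6, 0.4])  # e.g. dist. to cover, to agriculture, from roads

def covariates_at(xy):
    """Placeholder smooth landscape standing in for the three distances."""
    x, y = xy / 1000.0
    return np.array([np.sin(x) ** 2, np.cos(y) ** 2, (np.sin(x + y) + 1) / 2])

def ssf_weight(xy):
    return np.exp(covariates_at(xy) @ beta)          # exp(linear predictor)

def correlated_random_walk(start, n_steps, step_len=500.0, n_cand=10):
    pos, heading = np.asarray(start, dtype=float), rng.uniform(0, 2 * np.pi)
    path = [pos.copy()]
    for _ in range(n_steps):
        heads = heading + rng.normal(0, 0.6, n_cand)  # correlated turning angles
        ends = pos + step_len * np.c_[np.cos(heads), np.sin(heads)]
        w = np.array([ssf_weight(e) for e in ends])
        k = rng.choice(n_cand, p=w / w.sum())         # SSF-weighted step choice
        pos, heading = ends[k], heads[k]
        path.append(pos.copy())
    return np.array(path)

# A simulated dispersal "succeeds" if its path intersects an adjacent
# subpopulation polygon; the proportion of successes estimates connectivity.
path = correlated_random_walk((0.0, 0.0), 200)
```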
NASA Astrophysics Data System (ADS)
Kim, Hannah; Hong, Helen
2014-03-01
We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using coronal slab-average projection and a cumulative probability map. First, to identify coronal images showing a marked distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and the negatively skewed images are selected. A coronal slab-average-projection image is then computed from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering it is detected in the slab-average-projection image using the Hough ellipse transform. Finally, to separate the nipple from the areola region, 3D Otsu thresholding is applied to the elliptical ROI, and a cumulative probability map is generated within the ROI by assigning high probability to low-intensity regions. Small falsely detected components are eliminated using morphological opening, and the center point of the detected nipple region is calculated. Experimental results show that our method provides a 94.4% nipple detection rate.
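Two of the steps above (skewness-based slice selection and the low-intensity-to-high-probability mapping) can be sketched as follows; the volume, the ROI coordinates and the empirical-CDF mapping are illustrative assumptions, not the authors' implementation:

```python
# Sketch of slice selection by skewness and a cumulative probability
# map that assigns high probability to low intensities.
import numpy as np
from scipy.stats import skew

volume = np.random.default_rng(5).random((60, 256, 256))  # placeholder 3D ABUS stack

# Keep coronal slices whose intensity histograms are negatively skewed,
# then form the slab-average projection.
slice_skew = np.array([skew(s.ravel()) for s in volume])
selected = volume[slice_skew < 0]
slab = selected.mean(axis=0) if len(selected) else volume.mean(axis=0)

# Within a detected elliptical ROI (placeholder box here), map low
# intensity to high probability via the empirical CDF of inverted values.
roi = slab[100:140, 100:140]
inv = roi.max() - roi
ranks = inv.ravel().argsort().argsort()                   # 0..N-1 rank per pixel
cum_prob = (ranks / ranks.max()).reshape(roi.shape)       # cumulative probability map
```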
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers the potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
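A minimal sketch of a time-dependent conditional-probability surface of the kind described, using a logistic form in placeholder morphology, material and rainfall variables (the functional form is an assumption and all coefficients are invented, not calibrated values):

```python
# Hourly, time-dependent conditional probability of soil slip per map cell.
import numpy as np

rng = np.random.default_rng(9)
slope_deg = rng.uniform(10, 45, (50, 50))     # placeholder hillside morphology
strength = rng.uniform(0.5, 1.5, (50, 50))    # placeholder material-property proxy

def p_soil_slip(rain_rate_mm_h, duration_h):
    b0, b_slope, b_str, b_rate, b_dur = -8.0, 0.12, -2.0, 0.15, 0.3
    z = (b0 + b_slope * slope_deg + b_str * strength
         + b_rate * rain_rate_mm_h + b_dur * duration_h)
    return 1.0 / (1.0 + np.exp(-z))           # per-cell conditional probability

# One map per hour of a storm; each could be displayed in a GIS.
hourly_maps = [p_soil_slip(rate, t) for t, rate in enumerate([5, 10, 20, 15], start=1)]
```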