Sample records for maximized sequential probability

  1. Probability matching and strategy availability.

    PubMed

    Koehler, Derek J; James, Greta

    2010-09-01

    Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
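
    As a minimal illustration of the asymmetry described above (not taken from the paper), the sketch below compares the expected accuracy of matching and maximizing for an assumed binary outcome that occurs with probability p; matching is strictly worse whenever p differs from 0.5.

      def matching_accuracy(p):
          # A matcher predicts the frequent outcome on a proportion p of trials
          # and the rare outcome on the remaining 1 - p of trials.
          return p * p + (1 - p) * (1 - p)

      def maximizing_accuracy(p):
          # A maximizer always predicts the more frequent outcome.
          return max(p, 1 - p)

      for p in (0.6, 0.7, 0.8):
          print(f"p={p}: matching={matching_accuracy(p):.2f}, maximizing={maximizing_accuracy(p):.2f}")
      # p=0.7 gives matching accuracy of about 0.58 versus 0.70 for maximizing.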

  2. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, prior multivariate normal distributions of the model parameters, and prior probabilities of each model being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
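
    A hedged sketch of the selection idea, assuming only two rival linear models with equal, known error variance (in which case the Kullback-Leibler divergence between their Gaussian predictions reduces to the squared prediction difference over twice the variance). The report's procedure additionally weights by posterior model probabilities and parameter uncertainty, which this sketch omits; the variance, parameter values, and candidate range below are illustrative.

      import numpy as np

      # Two rival linear models y = a + b*x with Gaussian error of known variance.
      # For equal variances, KL(N(m0, s^2) || N(m1, s^2)) = (m0 - m1)^2 / (2 s^2),
      # so the most discriminating next experiment maximizes the squared
      # difference of the model predictions.
      sigma2 = 1.0                       # assumed error variance
      model_a = lambda x: 1.0 + 0.5 * x  # illustrative parameter values
      model_b = lambda x: 0.5 + 1.0 * x

      candidate_x = np.linspace(-2.0, 2.0, 41)
      kl = (model_a(candidate_x) - model_b(candidate_x)) ** 2 / (2.0 * sigma2)
      best = candidate_x[np.argmax(kl)]
      print(f"next design point: x = {best:.2f}")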

  3. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E [Livermore, CA]; Valentine, John D [Redwood City, CA]; Beauchamp, Brock R [San Ramon, CA]

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
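
    A minimal sketch of a Wald SPRT for Poisson count data, with illustrative background and elevated rates and error targets; the patented system additionally estimates the background dynamically and adapts its models, which is not shown here.

      from math import log

      def poisson_sprt(counts, lam0, lam1, alpha=0.05, beta=0.05):
          # Wald SPRT for Poisson counts: H0 rate lam0 (background) vs H1 rate lam1 (elevated).
          upper = log((1.0 - beta) / alpha)   # accept H1 (elevated radiation)
          lower = log(beta / (1.0 - alpha))   # accept H0 (background only)
          llr = 0.0
          for n, k in enumerate(counts, start=1):
              # Per-sample log-likelihood ratio for a Poisson observation k.
              llr += k * log(lam1 / lam0) - (lam1 - lam0)
              if llr >= upper:
                  return "elevated", n
              if llr <= lower:
                  return "background", n
          return "undecided", len(counts)

      print(poisson_sprt([6, 9, 7, 11, 8], lam0=5.0, lam1=8.0))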

  4. Taking the easy way out? Increasing implementation effort reduces probability maximizing under cognitive load.

    PubMed

    Schulze, Christin; Newell, Ben R

    2016-07-01

    Cognitive load has previously been found to have a positive effect on strategy selection in repeated risky choice. Specifically, whereas inferior probability matching often prevails under single-task conditions, optimal probability maximizing sometimes dominates when a concurrent task competes for cognitive resources. We examined the extent to which this seemingly beneficial effect of increased task demands hinges on the effort required to implement each of the choice strategies. Probability maximizing typically involves a simple repeated response to a single option, whereas probability matching requires choice proportions to be tracked carefully throughout a sequential choice task. Here, we flipped this pattern by introducing a manipulation that made the implementation of maximizing more taxing and, at the same time, allowed decision makers to probability match via a simple repeated response to a single option. The results from two experiments showed that increasing the implementation effort of probability maximizing resulted in decreased adoption rates of this strategy. This was the case both when decision makers simultaneously learned about the outcome probabilities and responded to a dual task (Exp. 1) and when these two aspects were procedurally separated in two distinct stages (Exp. 2). We conclude that the effort involved in implementing a choice strategy is a key factor in shaping repeated choice under uncertainty. Moreover, highlighting the importance of implementation effort casts new light on the sometimes surprising and inconsistent effects of cognitive load that have previously been reported in the literature.

  5. Probability matching in risky choice: the interplay of feedback and strategy availability.

    PubMed

    Newell, Ben R; Koehler, Derek J; James, Greta; Rakow, Tim; van Ravenzwaaij, Don

    2013-04-01

    Probability matching in sequential decision making is a striking violation of rational choice that has been observed in hundreds of experiments. Recent studies have demonstrated that matching persists even in described tasks in which all the information required for identifying a superior alternative strategy-maximizing-is present before the first choice is made. These studies have also indicated that maximizing increases when (1) the asymmetry in the availability of matching and maximizing strategies is reduced and (2) normatively irrelevant outcome feedback is provided. In the two experiments reported here, we examined the joint influences of these factors, revealing that strategy availability and outcome feedback operate on different time courses. Both behavioral and modeling results showed that while availability of the maximizing strategy increases the choice of maximizing early during the task, feedback appears to act more slowly to erode misconceptions about the task and to reinforce optimal responding. The results illuminate the interplay between "top-down" identification of choice strategies and "bottom-up" discovery of those strategies via feedback.

  6. Analyzing multicomponent receptive fields from neural responses to natural stimuli

    PubMed Central

    Rowekamp, Ryan; Sharpee, Tatyana O

    2011-01-01

    The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916

  7. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    PubMed

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  8. A framework for sensitivity analysis of decision trees.

    PubMed

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
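
    A toy illustration (not the authors' framework or software) of the stability check described above: find the strategy that maximizes expected value at the nominal probabilities, then ask whether that choice survives pessimistic and optimistic perturbations. The payoffs, nominal probability, and perturbation radius are assumed for illustration only.

      # A toy two-action decision: a risky option paying 100 with probability p
      # (else 0) versus a safe option paying 45.
      p_nominal, delta = 0.5, 0.1
      payoff_risky, payoff_safe = 100.0, 45.0

      def best_action(p):
          return "risky" if p * payoff_risky > payoff_safe else "safe"

      nominal_choice = best_action(p_nominal)
      # Check whether the nominal EV-maximizing strategy survives pessimistic,
      # optimistic, and mode-favoring (here: nominal) perturbations of p.
      perturbed = [p_nominal - delta, p_nominal, p_nominal + delta]
      stable = all(best_action(p) == nominal_choice for p in perturbed)
      print(nominal_choice, "stable under +/-", delta, ":", stable)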

  9. Optimal decision making on the basis of evidence represented in spike trains.

    PubMed

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by the Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.

  10. Nomogram predicting response after chemoradiotherapy in rectal cancer using sequential PETCT imaging: a multicentric prospective study with external validation.

    PubMed

    van Stiphout, Ruud G P M; Valentini, Vincenzo; Buijsen, Jeroen; Lammering, Guido; Meldolesi, Elisa; van Soest, Johan; Leccisotti, Lucia; Giordano, Alessandro; Gambacorta, Maria A; Dekker, Andre; Lambin, Philippe

    2014-11-01

    To develop and externally validate a predictive model for pathologic complete response (pCR) for locally advanced rectal cancer (LARC) based on clinical features and early sequential (18)F-FDG PETCT imaging. Prospective data (i.a. THUNDER trial) were used to train (N=112, MAASTRO Clinic) and validate (N=78, Università Cattolica del S. Cuore) the model for pCR (ypT0N0). All patients received long-course chemoradiotherapy (CRT) and surgery. Clinical parameters were age, gender, clinical tumour (cT) stage and clinical nodal (cN) stage. PET parameters were SUVmax, SUVmean, metabolic tumour volume (MTV) and maximal tumour diameter, for which response indices between pre-treatment and intermediate scan were calculated. Using multivariate logistic regression, three probability groups for pCR were defined. The pCR rates were 21.4% (training) and 23.1% (validation). The selected predictive features for pCR were cT-stage, cN-stage, response index of SUVmean and maximal tumour diameter during treatment. The models' performances (AUC) were 0.78 (training) and 0.70 (validation). The high probability group for pCR resulted in 100% correct predictions for training and 67% for validation. The model is available on the website www.predictcancer.org. The developed predictive model for pCR is accurate and externally validated. This model may assist in treatment decisions during CRT to select complete responders for a wait-and-see policy, good responders for extra RT boost and bad responders for additional chemotherapy. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
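
    A hedged sketch of the modeling step only, using scikit-learn and randomly generated placeholder data in place of the trial data; the four feature columns stand in for the selected predictors (cT stage, cN stage, response index of SUVmean, and maximal tumour diameter during treatment).

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      # Hypothetical feature matrix and outcomes; values are random placeholders.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(112, 4))
      y = rng.integers(0, 2, size=112)          # 1 = pathologic complete response

      model = LogisticRegression().fit(X, y)
      probs = model.predict_proba(X)[:, 1]
      print("training AUC:", round(roc_auc_score(y, probs), 2))
      # Patients can then be binned into low / intermediate / high probability
      # groups by thresholding `probs`.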

  11. Distributed Immune Systems for Wireless Network Information Assurance

    DTIC Science & Technology

    2010-04-26

    ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the ... using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio ... the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability

  12. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.

  13. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for a spatial-temporal k-anonymity dataset. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
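
    A minimal sketch of the rough-prediction idea, assuming an illustrative single-step count matrix mined from sequential rules; rows are normalized to transition probabilities and raised to the power n for n-step prediction.

      import numpy as np

      # Single-step transition counts between anonymized location regions
      # (illustrative numbers, e.g. mined from sequential rules).
      counts = np.array([[2., 5., 3.],
                         [4., 1., 5.],
                         [6., 2., 2.]])

      # Row-normalize to obtain the one-step transition probability matrix.
      P = counts / counts.sum(axis=1, keepdims=True)

      # Treating the requester's mobility as a stationary Markov chain, the
      # n-step transition probabilities are the matrix power P^n.
      n = 3
      P_n = np.linalg.matrix_power(P, n)
      current_location = 0
      print("most likely location after", n, "steps:", P_n[current_location].argmax())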

  14. Not all (possibly) “random” sequences are created equal

    PubMed Central

    Pincus, Steve; Kalman, Rudolf E.

    1997-01-01

    The need to assess the randomness of a single sequence, especially a finite sequence, is ubiquitous, yet is unaddressed by axiomatic probability theory. Here, we assess randomness via approximate entropy (ApEn), a computable measure of sequential irregularity, applicable to single sequences of both (even very short) finite and infinite length. We indicate the novelty and facility of the multidimensional viewpoint taken by ApEn, in contrast to classical measures. Furthermore and notably, for finite length, finite state sequences, one can identify maximally irregular sequences, and then apply ApEn to quantify the extent to which given sequences differ from maximal irregularity, via a set of deficit (defm) functions. The utility of these defm functions which we show allows one to considerably refine the notions of probabilistic independence and normality, is featured in several studies, including (i) digits of e, π, √2, and √3, both in base 2 and in base 10, and (ii) sequences given by fractional parts of multiples of irrationals. We prove companion analytic results, which also feature in a discussion of the role and validity of the almost sure properties from axiomatic probability theory insofar as they apply to specified sequences and sets of sequences (in the physical world). We conclude by relating the present results and perspective to both previous and subsequent studies. PMID:11038612
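
    A small sketch of the standard ApEn(m, r) computation for a finite sequence (a generic plug-in implementation, not the authors' code); for binary sequences a tolerance r below 1 amounts to exact template matching.

      from math import log

      def apen(u, m, r):
          # Approximate entropy ApEn(m, r) of a finite sequence u.
          def phi(m):
              n = len(u) - m + 1
              templates = [u[i:i + m] for i in range(n)]
              total = 0.0
              for t_i in templates:
                  # C_i^m(r): fraction of templates within tolerance r of template i.
                  c = sum(1 for t_j in templates
                          if max(abs(a - b) for a, b in zip(t_i, t_j)) <= r) / n
                  total += log(c)
              return total / n
          return phi(m) - phi(m + 1)

      # A strictly periodic binary string is maximally regular (ApEn near 0),
      # while a less regular string scores higher.
      print(apen([0, 1] * 16, m=2, r=0.5))
      print(apen([0, 1, 1, 0, 1, 0, 0, 1] * 4, m=2, r=0.5))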

  15. Auctions with Dynamic Populations: Efficiency and Revenue Maximization

    NASA Astrophysics Data System (ADS)

    Said, Maher

    We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.

  16. Near real-time adverse drug reaction surveillance within population-based health networks: methodology considerations for data accrual.

    PubMed

    Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S

    2013-05-01

    This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to historical branded product at four health plans from April to October 2009. Outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with a 1.5-month delay, and nonsequential (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7% respectively with the 1.5-month delay. We did not identify signals for either outcome regardless of extract timeframe, and slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by delay in analysis, when using a historical control group, differential accrual between exposure and outcomes may theoretically bias near real-time risk estimates towards the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.
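
    A hedged sketch of the Poisson form of the maximized sequential probability ratio test with a historical expectation, using made-up monthly counts; the critical value below is a placeholder that in practice comes from exact calculations for the chosen alpha and surveillance horizon.

      from math import log

      def poisson_maxsprt_llr(c, mu):
          # Log-likelihood ratio of the Poisson maxSPRT after observing c events
          # when mu events are expected under H0 (e.g. from the historical comparator).
          if c <= mu:
              return 0.0          # only elevated risk (RR > 1) counts as a signal
          return c * log(c / mu) - (c - mu)

      # Cumulative expected and observed event counts at each monthly look
      # (illustrative numbers, not the study's data).
      expected = [1.5, 3.2, 5.0, 7.1, 9.4]
      observed = [2,   5,   8,   12,  15]
      critical_value = 2.85       # placeholder threshold

      for month, (c, mu) in enumerate(zip(observed, expected), start=1):
          llr = poisson_maxsprt_llr(c, mu)
          print(f"look {month}: LLR = {llr:.2f}", "SIGNAL" if llr >= critical_value else "")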

  17. Hold it! The influence of lingering rewards on choice diversification and persistence.

    PubMed

    Schulze, Christin; van Ravenzwaaij, Don; Newell, Ben R

    2017-11-01

    Learning to choose adaptively when faced with uncertain and variable outcomes is a central challenge for decision makers. This study examines repeated choice in dynamic probability learning tasks in which outcome probabilities changed either as a function of the choices participants made or independently of those choices. This presence/absence of sequential choice-outcome dependencies was implemented by manipulating a single task aspect between conditions: the retention/withdrawal of reward across individual choice trials. The study addresses how people adapt to these learning environments and to what extent they engage in 2 choice strategies often contrasted as paradigmatic examples of striking violation of versus nominal adherence to rational choice: diversification and persistent probability maximizing, respectively. Results show that decisions approached adaptive choice diversification and persistence when sufficient feedback was provided on the dynamic rules of the probabilistic environments. The findings of divergent behavior in the 2 environments indicate that diversified choices represented a response to the reward retention manipulation rather than to the mere variability of outcome probabilities. Choice in both environments was well accounted for by the generalized matching law, and computational modeling-based strategy analyses indicated that adaptive choice arose mainly from reliance on reinforcement learning strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Optimal immunization cocktails can promote induction of broadly neutralizing Abs against highly mutable pathogens.

    PubMed

    Shaffer, J Scott; Moore, Penny L; Kardar, Mehran; Chakraborty, Arup K

    2016-10-24

    Strategies to elicit Abs that can neutralize diverse strains of a highly mutable pathogen are likely to result in a potent vaccine. Broadly neutralizing Abs (bnAbs) against HIV have been isolated from patients, proving that the human immune system can evolve them. Using computer simulations and theory, we study immunization with diverse mixtures of variant antigens (Ags). Our results show that particular choices for the number of variant Ags and the mutational distances separating them maximize the probability of inducing bnAbs. The variant Ags represent potentially conflicting selection forces that can frustrate the Darwinian evolutionary process of affinity maturation. An intermediate level of frustration maximizes the chance of evolving bnAbs. A simple model makes vivid the origin of this principle of optimal frustration. Our results, combined with past studies, suggest that an appropriately chosen permutation of immunization with an optimally designed mixture (using the principles that we describe) and sequential immunization with variant Ags that are separated by relatively large mutational distances may best promote the evolution of bnAbs.

  19. Optimal immunization cocktails can promote induction of broadly neutralizing Abs against highly mutable pathogens

    PubMed Central

    Shaffer, J. Scott; Moore, Penny L.; Kardar, Mehran; Chakraborty, Arup K.

    2016-01-01

    Strategies to elicit Abs that can neutralize diverse strains of a highly mutable pathogen are likely to result in a potent vaccine. Broadly neutralizing Abs (bnAbs) against HIV have been isolated from patients, proving that the human immune system can evolve them. Using computer simulations and theory, we study immunization with diverse mixtures of variant antigens (Ags). Our results show that particular choices for the number of variant Ags and the mutational distances separating them maximize the probability of inducing bnAbs. The variant Ags represent potentially conflicting selection forces that can frustrate the Darwinian evolutionary process of affinity maturation. An intermediate level of frustration maximizes the chance of evolving bnAbs. A simple model makes vivid the origin of this principle of optimal frustration. Our results, combined with past studies, suggest that an appropriately chosen permutation of immunization with an optimally designed mixture (using the principles that we describe) and sequential immunization with variant Ags that are separated by relatively large mutational distances may best promote the evolution of bnAbs. PMID:27791170

  20. Memory and decision making: Effects of sequential presentation of probabilities and outcomes in risky prospects.

    PubMed

    Millroth, Philip; Guath, Mona; Juslin, Peter

    2018-06-07

    The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability- and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This holds also for monetary-incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
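
    For context, a small sketch of the Cumulative Prospect Theory certainty equivalent for a simple gamble (win x with probability p, else nothing), using the Tversky and Kahneman (1992) functional forms; the alpha and gamma values are their published gain-domain estimates, used here only for illustration.

      alpha, gamma = 0.88, 0.61

      def weight(p):
          # Inverse-S probability weighting function for gains.
          return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

      def certainty_equivalent(x, p):
          value = weight(p) * x ** alpha      # subjective value of the gamble
          return value ** (1 / alpha)         # invert v(x) = x**alpha

      print(certainty_equivalent(100, 0.10))  # low probabilities are overweighted
      print(certainty_equivalent(100, 0.90))  # high probabilities are underweighted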

  1. Mining of high utility-probability sequential patterns from uncertain databases

    PubMed Central

    Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting

    2017-01-01

    High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847

  2. Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test

    NASA Technical Reports Server (NTRS)

    Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara

    2016-01-01

    The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. By using a modified version of Wald's sequential probability ratio test we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.

  3. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with a Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested groundwater optimal monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contamination transport simulations. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a Genetic Algorithm (GA) to identify the optimal monitoring network design from several candidate monitoring locations. The monitoring network design model uses Genetic Algorithms with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach, capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions, will be discussed. However, to ensure the efficiency of the solution process and global optimality of the solution obtained using GA, it is necessary that appropriate GA parameter values be specified. The sensitivity analysis of genetic algorithm parameters such as random number, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.

  4. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  5. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.

  6. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…

  7. Treatment Utility of the Kaufman Assessment Battery for Children: Effects of Matching Instruction and Student Processing Strength.

    ERIC Educational Resources Information Center

    Good, Roland H, III; And Others

    1993-01-01

    Tested hypothesis that achievement would be maximized by matching student's Kaufman Assessment Battery for Children-identified processing strength with sequential or simultaneous instruction. Findings from analyses of data from three students with strengths in sequential processing and three students with strengths in simultaneous processing…

  8. A detailed description of the sequential probability ratio test for 2-IMU FDI

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.

  9. Bayes factor design analysis: Planning for compelling evidence.

    PubMed

    Schönbrodt, Felix D; Wagenmakers, Eric-Jan

    2018-02-01

    A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs, (a) a fixed-n design, (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either hypothesis, and (c) a modified SBF design that defines a maximal sample size where data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
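
    A hedged sketch of a BFDA-style Monte Carlo for an open-ended sequential design, using a textbook binomial Bayes factor (uniform prior versus a point null at 0.5) purely for illustration; the threshold, true effect, and maximum sample size are assumptions, not values from the paper.

      import math, random

      def bf10_binomial(k, n):
          # Bayes factor for H1: theta ~ Uniform(0,1) against H0: theta = 0.5,
          # after k successes in n Bernoulli trials.
          marginal_h1 = 1.0 / (n + 1)                 # integral of the binomial likelihood
          marginal_h0 = math.comb(n, k) * 0.5 ** n
          return marginal_h1 / marginal_h0

      def sbf_trial(theta_true, threshold=10.0, n_max=200):
          # One simulated open-ended SBF run: add observations until BF10 > threshold,
          # BF10 < 1/threshold, or n_max is reached.
          k = 0
          for n in range(1, n_max + 1):
              k += random.random() < theta_true
              bf = bf10_binomial(k, n)
              if bf > threshold or bf < 1.0 / threshold:
                  return n, bf
          return n_max, bf

      random.seed(1)
      runs = [sbf_trial(theta_true=0.65) for _ in range(1000)]
      sizes = [n for n, _ in runs]
      misleading = sum(bf <= 0.1 for _, bf in runs) / len(runs)
      print("median n:", sorted(sizes)[len(sizes) // 2], "P(misleading evidence):", misleading)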

  10. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.

  11. Inverse sequential detection of parameter changes in developing time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1992-01-01

    Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability which always falls between 0.5 and 0 is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.

  12. Particle Filter with State Permutations for Solving Image Jigsaw Puzzles

    PubMed Central

    Yang, Xingwei; Adluru, Nagesh; Latecki, Longin Jan

    2016-01-01

    We deal with an image jigsaw puzzle problem, which is defined as reconstructing an image from a set of square and non-overlapping image patches. It is known that a general instance of this problem is NP-complete, and it is also challenging for humans, since in the considered setting the original image is not given. Recently a graphical model has been proposed to solve this and related problems. The target label probability function is then maximized using loopy belief propagation. We also formulate the problem as maximizing a label probability function and use exactly the same pairwise potentials. Our main contribution is a novel inference approach in the sampling framework of Particle Filter (PF). Usually in the PF framework it is assumed that the observations arrive sequentially, e.g., the observations are naturally ordered by their time stamps in the tracking scenario. Based on this assumption, the posterior density over the corresponding hidden states is estimated. In the jigsaw puzzle problem all observations (puzzle pieces) are given at once without any particular order. Therefore, we relax the assumption of having ordered observations and extend the PF framework to estimate the posterior density by exploring different orders of observations and selecting the most informative permutations of observations. This significantly broadens the scope of applications of the PF inference. Our experimental results demonstrate that the proposed inference framework significantly outperforms the loopy belief propagation in solving the image jigsaw puzzle problem. In particular, the extended PF inference triples the accuracy of the label assignment compared to that using loopy belief propagation. PMID:27795660

  13. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2010-01-01

    When facing a conjunction between space objects, decision makers must choose whether to maneuver for collision avoidance or not. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.

  14. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.

  15. Technical Reports Prepared Under Contract N00014-76-C-0475.

    DTIC Science & Technology

    1987-05-29

    Technical Report No. / Title / Author / Date: 264 Approximations to Densities in Geometric Probability, H. Solomon, M.A. Stephens, 10/27/78; 265 Sequential ...; ... Certain Multivariate Normal Probabilities, S. Iyengar, 8/12/82; 323 EDF Statistics for Testing for the Gamma Distribution with ..., M.A. Stephens, 8/13/82; ... 20-85 ... Nets; 360 Random Sequential Coding By Hamming Distance, Yoshiaki Itoh, Herbert Solomon, 07-11-85; 361 Transforming Censored Samples And Testing Fit

  16. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  17. Robust parameter design for automatically controlled systems and nanostructure synthesis

    NASA Astrophysics Data System (ADS)

    Dasgupta, Tirthankar

    2007-12-01

    This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, at large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a Multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte-Carlo simulations. The second part of the research deals with development of an experimental design methodology, tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring the best process conditions for synthesis of nanowires. The SMED is a novel approach to generate sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.

  18. Two-IMU FDI performance of the sequential probability ratio test during shuttle entry

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.

  19. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.

  20. Delay test generation for synchronous sequential circuits

    NASA Astrophysics Data System (ADS)

    Devadas, Srinivas

    1989-05-01

    We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan, and synthesis algorithms are presented.

  1. Buffer management for sequential decoding. [block erasure probability reduction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.

  2. A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization

    NASA Astrophysics Data System (ADS)

    Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan

    2011-03-01

    We introduce a novel approach for evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated out of experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
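
    A simple plug-in sketch of the underlying quantity, the mutual information between a binned stimulus variable and spike occurrence, estimated from paired samples with made-up data; this is a generic histogram estimator, not the authors' method.

      import numpy as np

      def mutual_information(stimulus, spikes, n_bins=10):
          # Plug-in estimate of I(stimulus; spike) in bits from paired samples,
          # with the stimulus discretized into n_bins.
          s_bins = np.digitize(stimulus, np.histogram_bin_edges(stimulus, bins=n_bins)[1:-1])
          joint = np.zeros((n_bins, 2))
          for s, r in zip(s_bins, spikes):
              joint[s, int(r)] += 1
          joint /= joint.sum()
          p_s = joint.sum(axis=1, keepdims=True)
          p_r = joint.sum(axis=0, keepdims=True)
          with np.errstate(divide="ignore", invalid="ignore"):
              terms = joint * np.log2(joint / (p_s * p_r))
          return np.nansum(terms)

      # Toy data: spike probability rises with the stimulus variable.
      rng = np.random.default_rng(0)
      x = rng.normal(size=5000)
      spikes = rng.random(5000) < 1 / (1 + np.exp(-2 * x))
      print(round(mutual_information(x, spikes), 3))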

  3. Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.

    PubMed

    Remillard, Gilbert

    2011-07-01

    There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.

  4. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    NASA Astrophysics Data System (ADS)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.

  5. The Role of Orthotactic Probability in Incidental and Intentional Vocabulary Acquisition L1 and L2

    ERIC Educational Resources Information Center

    Bordag, Denisa; Kirschenbaum, Amit; Rogahn, Maria; Tschirner, Erwin

    2017-01-01

    Four experiments were conducted to examine the role of orthotactic probability, i.e. the sequential letter probability, in the early stages of vocabulary acquisition by adult native speakers and advanced learners of German. The results show different effects for orthographic probability in incidental and intentional vocabulary acquisition: Whereas…

  6. Proceedings of the Conference on the Design of Experiments in Army Research, Development and Testing (29th)

    DTIC Science & Technology

    1984-06-01

    The proceedings include a session on sequential testing featuring the contributed paper "A Truncated Sequential Probability Ratio Test." Indexed subject terms include suicide, optical data, operational testing, reliability, random numbers, bootstrap methods, missing data, sequential testing, fire support, complex computer models, and carcinogenesis studies; the scope of the contributed papers can be ascertained from their titles.

  7. Prioritization of engineering support requests and advanced technology projects using decision support and industrial engineering models

    NASA Technical Reports Server (NTRS)

    Tavana, Madjid

    1995-01-01

    The evaluation and prioritization of Engineering Support Requests (ESR's) is a particularly difficult task at the Kennedy Space Center (KSC) -- Shuttle Project Engineering Office. This difficulty is due to the complexities inherent in the evaluation process and the lack of structured information. The evaluation process must consider a multitude of relevant pieces of information concerning Safety, Supportability, O&M Cost Savings, Process Enhancement, Reliability, and Implementation. Various analytical and normative models developed in the past have helped decision makers at KSC utilize large volumes of information in the evaluation of ESR's. The purpose of this project is to build on the existing methodologies and develop a multiple criteria decision support system that captures the decision maker's beliefs through a series of sequential, rational, and analytical processes. The model utilizes the Analytic Hierarchy Process (AHP), subjective probabilities, the entropy concept, and the Maximize Agreement Heuristic (MAH) to enhance the decision maker's intuition in evaluating a set of ESR's.
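    The record names AHP among the methods without detailing it; a minimal sketch of the standard AHP step (deriving priority weights as the principal eigenvector of a pairwise comparison matrix, plus a consistency index) is shown below. The three-criterion matrix is purely illustrative and is not KSC's actual model.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix
    (principal right eigenvector, normalized to sum to 1)."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                  # index of the principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

# Illustrative comparison of three hypothetical ESR criteria
# (e.g., Safety vs. Cost Savings vs. Reliability).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam = ahp_weights(A)
n = A.shape[0]
ci = (lam - n) / (n - 1)                      # consistency index
print("weights:", np.round(w, 3), " CI:", round(ci, 3))
```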

  8. Homeostatic Agent for General Environment

    NASA Astrophysics Data System (ADS)

    Yoshida, Naoto

    2018-03-01

    One of the essential aspects of biological agents is dynamic stability. This aspect, called homeostasis, has been widely discussed in ethology, neuroscience, and the early stages of artificial intelligence. Ashby's homeostats are general-purpose learning machines for stabilizing essential variables of the agent in the face of general environments. However, despite their generality, the original homeostats couldn't be scaled because they searched their parameters randomly. In this paper, we first re-define the objective of homeostats as the maximization of a multi-step survival probability from the viewpoint of sequential decision theory and probability theory. Then we show that this optimization problem can be treated by using reinforcement learning algorithms with special agent architectures and theoretically derived intrinsic reward functions. Finally we empirically demonstrate that agents with our architecture automatically learn to survive in a given environment, including environments with visual stimuli. Our survival agents can learn to eat food, avoid poison, and stabilize essential variables through a single, theoretically derived intrinsic reward formulation.
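    As a toy illustration of the "survival as reward" framing only (not the paper's architecture or its theoretically derived intrinsic reward), the sketch below runs tabular Q-learning on a one-dimensional "energy" variable, where the agent earns a reward for every step it keeps the essential variable in a viable range. The environment, reward, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_ENERGY = 11                  # discretized essential variable: 0 (dead) .. 10
ACTIONS = (0, 1)               # 0 = rest, 1 = eat
q = np.zeros((N_ENERGY, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(energy, action):
    """Energy drains each step; eating sometimes replenishes it."""
    energy -= 1
    if action == 1 and rng.random() < 0.7:
        energy = min(energy + 3, N_ENERGY - 1)
    alive = energy > 0
    # Reward of 1 per surviving step: a crude stand-in for maximizing
    # a multi-step survival probability.
    return max(energy, 0), (1.0 if alive else 0.0), alive

for episode in range(3000):
    e = N_ENERGY // 2
    for t in range(200):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(q[e]))
        e2, r, alive = step(e, a)
        target = r + (gamma * np.max(q[e2]) if alive else 0.0)
        q[e, a] += alpha * (target - q[e, a])
        e = e2
        if not alive:
            break

print("greedy action per energy level (1 = eat):", np.argmax(q, axis=1))
```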

  9. A generic motif discovery algorithm for sequential data.

    PubMed

    Jensen, Kyle L; Styczynski, Mark P; Rigoutsos, Isidore; Stephanopoulos, Gregory N

    2006-01-01

    Motif discovery in sequential data is a problem of great interest and with many applications. However, previous methods have been unable to combine exhaustive search with complex motif representations and are each typically only applicable to a certain class of problems. Here we present a generic motif discovery algorithm (Gemoda) for sequential data. Gemoda can be applied to any dataset with a sequential character, including both categorical and real-valued data. As we show, Gemoda deterministically discovers motifs that are maximal in composition and length. As well, the algorithm allows any choice of similarity metric for finding motifs. Finally, Gemoda's output motifs are representation-agnostic: they can be represented using regular expressions, position weight matrices, or any number of other models for any type of sequential data. We demonstrate a number of applications of the algorithm, including the discovery of motifs in amino acid sequences, a new solution to the (l,d)-motif problem in DNA sequences, and the discovery of conserved protein substructures. Gemoda is freely available at http://web.mit.edu/bamel/gemoda

  10. Computerized Classification Testing with the Rasch Model

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
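    The record describes SPRT-based computerized classification testing only in outline; a minimal sketch of the core mechanism (Wald's SPRT between two ability values under the Rasch model, stopping as soon as the log-likelihood ratio crosses a boundary) is given below. The ability points, error rates, and simulated items are illustrative assumptions, not the cited procedure.

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta0=-0.5, theta1=0.5,
                  alpha=0.05, beta=0.05):
    """Wald SPRT deciding between ability <= theta0 and ability >= theta1."""
    upper = np.log((1 - beta) / alpha)      # accept theta1 ("master")
    lower = np.log(beta / (1 - alpha))      # accept theta0 ("non-master")
    llr = 0.0
    for i, (x, b) in enumerate(zip(responses, difficulties), start=1):
        p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
        llr += np.log(p1 / p0) if x == 1 else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "master", i
        if llr <= lower:
            return "non-master", i
    return "undecided", len(responses)

# Simulate a test taker with true ability 0.8 answering items of varied difficulty.
rng = np.random.default_rng(2)
b_items = rng.uniform(-1.5, 1.5, 60)
answers = (rng.random(60) < rasch_p(0.8, b_items)).astype(int)
print(sprt_classify(answers, b_items))
```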

  11. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model's performance to human eye movement data and to predictions from saliency and random models, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.

  12. Further reduction of minimal first-met bad markings for the computationally efficient synthesis of a maximally permissive controller

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Chao, Daniel Yuh

    2015-08-01

    To date, research on supervisor design for flexible manufacturing systems has focused on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computational burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs, using a vector-covering approach, so that the integer linear programming problems can be solved efficiently while maintaining maximal permissiveness. This paper improves on previous work and achieves the simplest structure with the minimal number of monitors.

  13. Risk-adjusted sequential probability ratio tests: applications to Bristol, Shipman and adult cardiac surgery.

    PubMed

    Spiegelhalter, David; Grigg, Olivia; Kinsman, Robin; Treasure, Tom

    2003-02-01

    To investigate the use of the risk-adjusted sequential probability ratio test in monitoring the cumulative occurrence of adverse clinical outcomes. Retrospective analysis of three longitudinal datasets. Patients aged 65 years and over under the care of Harold Shipman between 1979 and 1997, patients under 1 year of age undergoing paediatric heart surgery in Bristol Royal Infirmary between 1984 and 1995, and adult patients receiving cardiac surgery from a team of cardiac surgeons in London, UK. Annual and 30-day mortality rates. Using reasonable boundaries, the procedure could have indicated an 'alarm' in Bristol after publication of the 1991 Cardiac Surgical Register, and in 1985 or 1997 for Harold Shipman depending on the data source and the comparator. The cardiac surgeons showed no significant deviation from expected performance. The risk-adjusted sequential probability ratio test is simple to implement, can be applied in a variety of contexts, and might have been useful to detect specific instances of past divergent performance. The use of this and related techniques deserves further attention in the context of prospectively monitoring adverse clinical outcomes.
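    A minimal sketch of a risk-adjusted SPRT of the general kind described here: each patient contributes a log-likelihood-ratio weight computed from their risk-adjusted predicted event probability, testing an odds-ratio alternative against the null of performance as expected. The odds ratio, error rates, and simulated data are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def risk_adjusted_sprt(outcomes, predicted_risk, odds_ratio=2.0,
                       alpha=0.01, beta=0.01):
    """Cumulative risk-adjusted SPRT statistic.

    outcomes       : 1 if the adverse event (e.g., 30-day death) occurred, else 0
    predicted_risk : risk-adjusted probability of the event for each patient
    odds_ratio     : odds ratio under the alternative hypothesis (doubled odds here)
    """
    upper = np.log((1 - beta) / alpha)    # signal boundary (alarm)
    lower = np.log(beta / (1 - alpha))    # acceptable-performance boundary
    llr = 0.0
    for n, (y, p) in enumerate(zip(outcomes, predicted_risk), start=1):
        # Log-likelihood ratio contribution of one patient.
        llr += y * np.log(odds_ratio) - np.log(1 - p + odds_ratio * p)
        if llr >= upper:
            return "alarm", n, llr
        if llr <= lower:
            return "in control", n, llr
    return "continue monitoring", len(outcomes), llr

# Toy data: 500 patients whose true event rate is 1.5x the risk-adjusted prediction.
rng = np.random.default_rng(3)
p_hat = rng.uniform(0.02, 0.2, 500)
events = (rng.random(500) < np.clip(1.5 * p_hat, 0, 1)).astype(int)
print(risk_adjusted_sprt(events, p_hat))
```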

  14. Moving Synergistically Acting Drug Combinations to the Clinic by Comparing Sequential versus Simultaneous Drug Administrations.

    PubMed

    Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P

    2018-03-01

    Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with AKT inhibitor AZD5363 [4-amino-N-[(1S)-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7H-pyrrolo[2,3-d]pyrimidin-4-yl)piperidine-4-carboxamide] and WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1H-pyrazolo[3,4-d]pyrimidin-3(2H)-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess the drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. Copyright © 2018 by The American Society for Pharmacology and Experimental Therapeutics.

  15. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy

    1993-01-01

    Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1,2,...j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.

  16. Kinematics of the field hockey penalty corner push-in.

    PubMed

    Kerr, Rebecca; Ness, Kevin

    2006-01-01

    The aims of the study were to determine those variables that significantly affect push-in execution and thereby formulate coaching recommendations specific to the push-in. Two 50 Hz video cameras recorded transverse and longitudinal views of push-in trials performed by eight experienced and nine inexperienced male push-in performers. Video footage was digitized for data analysis of ball speed, stance width, drag distance, drag time, drag speed, centre of mass displacement, and segment and stick displacements and velocities. Experienced push-in performers demonstrated a significantly greater (p < 0.05) stance width, a significantly greater distance between the ball and the front foot at the start of the push-in and a significantly faster ball speed than inexperienced performers. In addition, the experienced performers showed a significant positive correlation between ball speed and playing experience and tended to adopt a combination of simultaneous and sequential segment rotation to achieve accuracy and fast ball speed. The study yielded the following coaching recommendations for enhanced push-in performance: maximize drag distance by maximizing front foot-ball distance at the start of the push-in; use a combination of simultaneous and sequential segment rotations to optimise both accuracy and ball speed and maximize drag speed.

  17. Thermoelectric properties of an interacting quantum dot based heat engine

    NASA Astrophysics Data System (ADS)

    Erdman, Paolo Andrea; Mazza, Francesco; Bosisio, Riccardo; Benenti, Giuliano; Fazio, Rosario; Taddei, Fabio

    2017-06-01

    We study the thermoelectric properties and heat-to-work conversion performance of an interacting, multilevel quantum dot (QD) weakly coupled to electronic reservoirs. We focus on the sequential tunneling regime. The dynamics of the charge in the QD is studied by means of master equations for the probabilities of occupation. From here we compute the charge and heat currents in the linear response regime. Assuming a generic multiterminal setup, and for low temperatures (quantum limit), we obtain analytical expressions for the transport coefficients which account for the interplay between interactions (charging energy) and level quantization. In the case of systems with two and three terminals we derive formulas for the power factor Q and the figure of merit ZT for a QD-based heat engine, identifying optimal working conditions which maximize output power and efficiency of heat-to-work conversion. Beyond the linear response we concentrate on the two-terminal setup. We first study the thermoelectric nonlinear coefficients, assessing the consequences of large temperature and voltage biases and focusing on the breakdown of the Onsager reciprocal relation between thermopower and Peltier coefficient. We then investigate the conditions which optimize the performance of a heat engine, finding that in the quantum limit output power and efficiency at maximum power can almost be simultaneously maximized by choosing appropriate values of electrochemical potential and bias voltage. Finally, we study how energy level degeneracy can increase the output power.

  18. On the Possibility to Combine the Order Effect with Sequential Reproducibility for Quantum Measurements

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei

    2015-10-01

    In this paper we study whether quantum observables can be used to describe a combination of the order effect with sequential reproducibility for quantum measurements. By the order effect we mean a dependence of probability distributions (of measurement results) on the order of measurements. We consider two types of sequential reproducibility: adjacent reproducibility (A-A) (the standard perfect repeatability) and separated reproducibility (A-B-A). The first is reproducibility with probability 1 of the result of an observable A measured twice, one A measurement after the other. The second, A-B-A, is reproducibility with probability 1 of the result of an A measurement when another quantum observable B is measured between the two A's. Heuristically, it is clear that the second type of reproducibility is complementary to the order effect. We show that, surprisingly, this may not be the case. The order effect can coexist with separated reproducibility as well as adjacent reproducibility for both observables A and B. However, the additional constraint in the form of separated reproducibility of the B-A-B type makes this coexistence impossible. The problem under consideration was motivated by attempts to apply the quantum formalism outside of physics, especially in cognitive psychology and psychophysics. However, it is also important for the foundations of quantum physics as a part of the problem about the structure of sequential quantum measurements.
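    The paper's constructions are not reproduced here; the short numerical sketch below only illustrates the order effect itself and adjacent repeatability for projective qubit measurements: for non-commuting projectors A and B, the probability of the outcome sequence (A+, then B+) differs from (B+, then A+), while measuring A twice in a row reproduces the first result with probability 1. The specific states and angles are arbitrary choices.

```python
import numpy as np

def projector(theta):
    """Rank-1 projector onto a real qubit state parameterized by theta."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

def sequential_prob(rho, first, second):
    """P(first = +, then second = +) under the projection postulate."""
    p1 = np.trace(first @ rho @ first).real
    if p1 == 0:
        return 0.0
    rho_post = first @ rho @ first / p1          # state update after the first outcome
    return p1 * np.trace(second @ rho_post @ second).real

rho = projector(0.3)          # initial pure state
A = projector(0.0)            # "+" projector of observable A
B = projector(np.pi / 3)      # "+" projector of observable B (does not commute with A)

p_ab = sequential_prob(rho, A, B)
p_ba = sequential_prob(rho, B, A)
p_aa = sequential_prob(rho, A, A)
print(f"P(A+, then B+) = {p_ab:.4f}")
print(f"P(B+, then A+) = {p_ba:.4f}   (order effect: differs from the line above)")
print(f"P(A+, then A+) = {p_aa:.4f} = P(A+)  -> adjacent repeatability")
```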

  19. Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex

    PubMed Central

    Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire

    2015-01-01

    Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269

  20. Empirical Identification of Hierarchies.

    ERIC Educational Resources Information Center

    McCormick, Douglas; And Others

    Outlining a cluster procedure which maximizes specific criteria while building scales from binary measures using a sequential, agglomerative, overlapping, non-hierarchic method results in indices giving truer results than exploratory factor analyses or multidimensional scaling. In a series of eleven figures, patterns within cluster histories…

  1. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
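    The report's exact significance method is not reproduced in this record; one standard way to bound an error probability when only a handful of decoding errors are observed in simulation is an exact (Clopper-Pearson) binomial upper confidence limit, sketched below (requires SciPy; the counts are illustrative).

```python
from scipy.stats import beta

def clopper_pearson_upper(errors, trials, confidence=0.95):
    """Exact upper confidence limit on an error probability
    given `errors` observed failures in `trials` independent decodings."""
    if errors >= trials:
        return 1.0
    return beta.ppf(confidence, errors + 1, trials - errors)

# e.g., 2 decoding errors observed in 100,000 simulated frames
print(f"95% upper bound: {clopper_pearson_upper(2, 100_000):.2e}")
```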

  2. Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.

    PubMed

    O'Connor, B P

    1999-11-01

    This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
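    The SAS and SPSS programs themselves are not reproduced here; an equivalent minimal Python sketch of the basic lag-1 statistics they compute (transitional frequencies, expected frequencies, transitional probabilities, and adjusted residuals) is shown below, with a toy code stream as input.

```python
import numpy as np

def lag_sequential(codes, lag=1):
    """Lag-sequential statistics for a stream of categorical codes.

    Returns the state labels, observed transitional frequencies, expected
    frequencies, transitional probabilities, and adjusted residuals (z values).
    """
    states = sorted(set(codes))
    idx = {s: i for i, s in enumerate(states)}
    k = len(states)
    obs = np.zeros((k, k))
    for a, b in zip(codes[:-lag], codes[lag:]):
        obs[idx[a], idx[b]] += 1

    n = obs.sum()
    row, col = obs.sum(axis=1), obs.sum(axis=0)
    expected = np.outer(row, col) / n
    with np.errstate(divide="ignore", invalid="ignore"):
        trans_prob = np.where(row[:, None] > 0, obs / row[:, None], 0.0)
        denom = np.sqrt(expected * (1 - row / n)[:, None] * (1 - col / n)[None, :])
        adj_resid = np.where(denom > 0, (obs - expected) / denom, 0.0)
    return states, obs, expected, trans_prob, adj_resid

stream = list("ABABBCABCABBABCCAB")
states, obs, exp_f, tp, z = lag_sequential(stream)
print(states)
print("transitional probabilities:\n", np.round(tp, 2))
print("adjusted residuals:\n", np.round(z, 2))
```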

  3. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
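    The paper's conditions for optimal sequential measurements are not reproduced here; as the reference point that sequential strategies are compared against, the optimal joint-measurement (Helstrom) success probability is easy to compute, as sketched below for a pair of two-qubit pure states. The example states and priors are arbitrary choices.

```python
import numpy as np

def helstrom_success(rho0, rho1, p0=0.5):
    """Optimal (joint-measurement) probability of correctly identifying
    which of two states was prepared, with prior p0 for rho0."""
    gamma = p0 * rho0 - (1 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1 + trace_norm)

def ket(*amps):
    v = np.array(amps, dtype=complex)
    return v / np.linalg.norm(v)

# Two pure two-qubit states: |00> and the Bell-like state (|00> + |11>)/sqrt(2).
psi0 = ket(1, 0, 0, 0)
psi1 = ket(1, 0, 0, 1)
rho0, rho1 = np.outer(psi0, psi0.conj()), np.outer(psi1, psi1.conj())
print(f"Helstrom success probability: {helstrom_success(rho0, rho1):.4f}")
```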

  4. Mining patterns in persistent surveillance systems with smart query and visual analytics

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.; Shirkhodaie, Amir

    2013-05-01

    In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps analysts take pre-emptive steps to counter an adversary's actions. An interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need for identifying and offsetting these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require a method for their filtration before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from semantically annotated sensor messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method, together with a Gaussian Mixture Model (GMM) fitted via Expectation Maximization (EM), is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. The GMM fitted via EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. In this paper, we present a new visual analytic tool for testing and evaluation of group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
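    The full pipeline is not reproduced here; the alignment step the method relies on, a classic dynamic-time-warping distance between two feature-vector sequences, can be sketched as below. The toy trajectories stand in for activity feature sequences.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(len_a * len_b) dynamic time warping distance
    between two sequences of feature vectors."""
    a = np.asarray(seq_a, dtype=float)
    b = np.asarray(seq_b, dtype=float)
    if a.ndim == 1:
        a = a[:, None]
    if b.ndim == 1:
        b = b[:, None]
    la, lb = len(a), len(b)
    cost = np.full((la + 1, lb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[la, lb]

# Two similar "activity" trajectories sampled at different rates.
t1 = np.linspace(0, 2 * np.pi, 40)
t2 = np.linspace(0, 2 * np.pi, 55)
print(f"DTW distance: {dtw_distance(np.sin(t1), np.sin(t2)):.3f}")
```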

  5. A model for sequential decoding overflow due to a noisy carrier reference. [communication performance prediction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications, adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.

  6. Propagating probability distributions of stand variables using sequential Monte Carlo methods

    Treesearch

    Jeffrey H. Gove

    2009-01-01

    A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
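    The forestry application itself is not reproduced here; a generic sampling-importance-resampling (bootstrap) particle filter for a simple scalar state-space model, showing the predict / weight / resample steps the record refers to, is sketched below. The model and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def sir_filter(observations, n_particles=2000, process_sd=0.3, obs_sd=0.5):
    """Generic SIR (bootstrap) particle filter for
    x_t = 0.9 * x_{t-1} + process noise,  y_t = x_t + observation noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # 1. Predict: propagate particles through the state transition model.
        particles = 0.9 * particles + rng.normal(0.0, process_sd, n_particles)
        # 2. Correct: weight particles by the observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # 3. Resample: draw particles proportionally to their weights.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Simulate a hidden trajectory and noisy observations, then filter.
true_x, xs, ys = 0.0, [], []
for _ in range(50):
    true_x = 0.9 * true_x + rng.normal(0, 0.3)
    xs.append(true_x)
    ys.append(true_x + rng.normal(0, 0.5))
est = sir_filter(ys)
print(f"RMSE of filtered estimates: {np.sqrt(np.mean((est - np.array(xs))**2)):.3f}")
```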

  7. Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior

    PubMed Central

    Borrero, Carrie S.W; Borrero, John C

    2008-01-01

    We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281

  8. Predicted sequence of cortical tau and amyloid-β deposition in Alzheimer disease spectrum.

    PubMed

    Cho, Hanna; Lee, Hye Sun; Choi, Jae Yong; Lee, Jae Hoon; Ryu, Young Hoon; Lee, Myung Sik; Lyoo, Chul Hyoung

    2018-04-17

    We investigated the sequential order between tau and amyloid-β (Aβ) deposition in the Alzheimer disease spectrum using a conditional probability method. Two hundred twenty participants underwent 18F-flortaucipir and 18F-florbetaben positron emission tomography scans and neuropsychological tests. The presence of tau and Aβ in each region and impairment in each cognitive domain were determined by Z-score cutoffs. By comparing pairs of conditional probabilities, the sequential order of tau and Aβ deposition was determined. The probability for the presence of tau in the entorhinal cortex was higher than that of Aβ in all cortical regions, and in the medial temporal cortices, the probability for the presence of tau was higher than that of Aβ. Conversely, in the remaining neocortex above the inferior temporal cortex, the probability for the presence of Aβ was always higher than that of tau. Tau pathology in the entorhinal cortex may appear earlier than neocortical Aβ and may spread in the absence of Aβ within the neighboring medial temporal regions. However, Aβ may be required for massive tau deposition in the distant cortical areas. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  10. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.

  11. Short-Range Temporal Interactions in Sleep; Hippocampal Spike Avalanches Support a Large Milieu of Sequential Activity Including Replay

    PubMed Central

    Mahoney, J. Matthew; Titiz, Ali S.; Hernan, Amanda E.; Scott, Rod C.

    2016-01-01

    Hippocampal neural systems consolidate multiple complex behaviors into memory. However, the temporal structure of neural firing supporting complex memory consolidation is unknown. Replay of hippocampal place cells during sleep supports the view that a simple repetitive behavior modifies sleep firing dynamics, but does not explain how multiple episodes could be integrated into associative networks for recollection during future cognition. Here we decode sequential firing structure within spike avalanches of all pyramidal cells recorded in sleeping rats after running in a circular track. We find that short sequences that combine into multiple long sequences capture the majority of the sequential structure during sleep, including replay of hippocampal place cells. The ensemble, however, is not optimized for maximally producing the behavior-enriched episode. Thus behavioral programming of sequential correlations occurs at the level of short-range interactions, not whole behavioral sequences and these short sequences are assembled into a large and complex milieu that could support complex memory consolidation. PMID:26866597

  12. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  13. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  14. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  15. Dosimetric comparison of standard three-dimensional conformal radiotherapy followed by intensity-modulated radiotherapy boost schedule (sequential IMRT plan) with simultaneous integrated boost-IMRT (SIB IMRT) treatment plan in patients with localized carcinoma prostate.

    PubMed

    Bansal, A; Kapoor, R; Singh, S K; Kumar, N; Oinam, A S; Sharma, S C

    2012-07-01

    DOSIMETRIC AND RADIOBIOLOGICAL COMPARISON OF TWO RADIATION SCHEDULES IN LOCALIZED CARCINOMA PROSTATE: Standard Three-Dimensional Conformal Radiotherapy (3DCRT) followed by Intensity Modulated Radiotherapy (IMRT) boost (sequential-IMRT) with Simultaneous Integrated Boost IMRT (SIB-IMRT). Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum, and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison, an SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction; however, the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose-volume histograms. Also, the Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCP of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel was seen with the sequential-IMRT and SIB-IMRT plans, respectively. For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves a lower NTCP than sequential-IMRT.

  16. Evidence for decreased interaction and improved carotenoid bioavailability by sequential delivery of a supplement.

    PubMed

    Salter-Venzon, Dawna; Kazlova, Valentina; Izzy Ford, Samantha; Intra, Janjira; Klosner, Allison E; Gellenbeck, Kevin W

    2017-05-01

    Despite the notable benefits of carotenoids for human health, the majority of human diets worldwide are repeatedly shown to be inadequate in intake of carotenoid-rich fruits and vegetables, according to current health recommendations. To address this deficit, strategies designed to increase dietary intakes and subsequent plasma levels of carotenoids are warranted. When mixed carotenoids are delivered into the intestinal tract simultaneously, competition occurs for micelle formation and absorption, affecting carotenoid bioavailability. Previously, we tested the in vitro viability of a carotenoid mix designed to deliver individual carotenoids sequentially spaced from one another over the 6 hr transit time of the human upper gastrointestinal system. We hypothesized that temporally and spatially separating the individual carotenoids would reduce competition for micelle formation, improve uptake, and maximize efficacy. Here, we test this hypothesis in a double-blind, repeated-measure, cross-over human study with 12 subjects by comparing the change of plasma carotenoid levels for 8 hr after oral doses of a sequentially spaced carotenoid mix, to a matched mix without sequential spacing. We find the carotenoid change from baseline, measured as area under the curve, is increased following consumption of the sequentially spaced mix compared to concomitant carotenoids delivery. These results demonstrate reduced interaction and regulation between the sequentially spaced carotenoids, suggesting improved bioavailability from a novel sequentially spaced carotenoid mix.

  17. Human Inferences about Sequences: A Minimal Transition Probability Model

    PubMed Central

    2016-01-01

    The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
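    A minimal sketch of the kind of model described (the actual model and its single free parameter are in the paper): track the transition counts of a binary stimulus stream with exponential forgetting, and report the surprise, -log2 of the predicted probability, at each new item. The leak value, pseudo-counts, and toy sequence are illustrative assumptions.

```python
import numpy as np

def leaky_transition_surprise(sequence, leak=0.9, prior=1.0):
    """Track leaky counts of transitions between two stimuli (0/1) and
    return the surprise (-log2 predicted probability) of each observation."""
    counts = np.full((2, 2), prior)          # pseudo-counts: counts[prev, next]
    surprises = []
    prev = sequence[0]
    for x in sequence[1:]:
        p_pred = counts[prev, x] / counts[prev].sum()
        surprises.append(-np.log2(p_pred))
        counts *= leak                        # exponential forgetting of old evidence
        counts[prev, x] += 1.0
        prev = x
    return np.array(surprises)

# Alternating stream with a sudden switch to repetitions: surprise jumps at the switch.
seq = [0, 1] * 20 + [1] * 20
s = leaky_transition_surprise(seq)
print("mean surprise before the switch:", round(s[:39].mean(), 2),
      "| surprise at the switch:", round(s[39], 2))
```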

  18. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.

  19. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.

  20. Exact Tests for the Rasch Model via Sequential Importance Sampling

    ERIC Educational Resources Information Center

    Chen, Yuguo; Small, Dylan

    2005-01-01

    Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…

  1. Learning in Reverse: Eight-Month-Old Infants Track Backward Transitional Probabilities

    ERIC Educational Resources Information Center

    Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.

    2009-01-01

    Numerous recent studies suggest that human learners, including both infants and adults, readily track sequential statistics computed between adjacent elements. One such statistic, transitional probability, is typically calculated as the likelihood that one element predicts another. However, little is known about whether listeners are sensitive to…
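    A minimal sketch of the two statistics contrasted in the record: the forward transitional probability P(next = y | current = x) versus the backward transitional probability P(previous = x | current = y), computed here from a toy syllable stream. The syllables and counts are illustrative, not the study's stimuli.

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward and backward transitional probabilities between adjacent elements."""
    pair_counts = Counter(zip(stream[:-1], stream[1:]))
    first_counts = Counter(stream[:-1])    # denominators for forward TP
    second_counts = Counter(stream[1:])    # denominators for backward TP
    forward = {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}
    backward = {(a, b): c / second_counts[b] for (a, b), c in pair_counts.items()}
    return forward, backward

# Toy "language": the word 'bi-da' always occurs intact; 'ku' precedes several words.
stream = "bi da ku bi da ku ro la bi da ku".split()
fwd, bwd = transitional_probabilities(stream)
print("forward  P(da | bi) =", round(fwd[("bi", "da")], 2))
print("backward P(bi | da) =", round(bwd[("bi", "da")], 2))
print("forward  P(bi | ku) =", round(fwd[("ku", "bi")], 2))
```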

  2. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    PubMed

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, when the null hypothesis is not rejected, it is still important to minimize the sample size. In post-market drug and vaccine safety surveillance, that is not the case. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more indicated for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
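    To make the convex-versus-concave distinction concrete, the sketch below uses the power family alpha(t) = alpha * t**rho: rho > 1 gives a convex spender (most alpha is spent late, as is typical in clinical trials), while rho < 1 gives a concave spender that spends alpha early. The specific rho values and look times are illustrative and are not the paper's recommendation.

```python
import numpy as np

def power_spending(alpha, rho, information_times):
    """Cumulative and incremental Type I error spent at each analysis
    for the power-family spending function alpha * t**rho."""
    cumulative = alpha * np.asarray(information_times) ** rho
    incremental = np.diff(np.concatenate(([0.0], cumulative)))
    return cumulative, incremental

looks = np.linspace(0.2, 1.0, 5)          # 5 equally spaced interim analyses
for rho, label in [(3.0, "convex  (rho = 3.0)"), (0.5, "concave (rho = 0.5)")]:
    cum, inc = power_spending(0.05, rho, looks)
    print(label, "incremental alpha per look:", np.round(inc, 4))
```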

  3. CP function: an alpha spending function based on conditional power.

    PubMed

    Jiang, Zhiwei; Wang, Ling; Li, Chanjuan; Xia, Jielai; Wang, William

    2014-11-20

    Alpha spending functions and stochastic curtailment are two frequently used methods in group sequential design. In the stochastic curtailment approach, the actual type I error probability cannot be well controlled within the specified significance level, but the conditional power (CP) used in stochastic curtailment is easier for clinicians to accept and understand. In this paper, we develop a spending function based on the concept of conditional power, named the CP function, which combines desirable features of alpha spending and stochastic curtailment. Like other two-parameter functions, the CP function is flexible enough to fit the needs of the trial. A simulation study is conducted to explore the choice of CP boundary in the CP function that maximizes the trial power. The CP function is equivalent to, or even better than, the classical Pocock, O'Brien-Fleming, and quadratic spending functions as long as a proper ρ0, the pre-specified CP threshold for efficacy, is given. It also controls the overall type I error rate well and overcomes the disadvantage of stochastic curtailment. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  5. A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.

    PubMed

    Baron, Jonathan; Gürçay, Burcu

    2017-05-01

    The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.

  6. Protein classification using sequential pattern mining.

    PubMed

    Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I

    2006-01-01

    Protein classification in terms of fold recognition can be employed to determine the structural and functional properties of a newly discovered protein. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. One of the most efficient SPM algorithms, cSPADE, is employed for protein primary structure analysis. Then a classifier uses the extracted sequential patterns for classifying proteins of unknown structure in the appropriate fold category. The proposed methodology exhibited an overall accuracy of 36% in a multi-class problem of 17 candidate categories. The classification performance reaches up to 65% when the three most probable protein folds are considered.

  7. An Alternative Approach to the Total Probability Formula. Classroom Notes

    ERIC Educational Resources Information Center

    Wu, Dane W. Wu; Bangerter, Laura M.

    2004-01-01

    Given a set of urns, each filled with a mix of black chips and white chips, what is the probability of drawing a black chip from the last urn after some sequential random shifts of chips among the urns? The Total Probability Formula (TPF) is the common tool to solve such a problem. However, when the number of urns is more than two and the number…
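    A small simulation of the kind of problem posed: one chip is shifted at random from each urn into the next, and the probability of then drawing a black chip from the last urn is estimated by Monte Carlo; for two urns, the estimate can be checked directly against the Total Probability Formula. The urn compositions and shift rule are illustrative assumptions, not the article's exact setup.

```python
import random

def simulate_last_urn_black(urns, trials=100_000, seed=0):
    """Monte Carlo estimate of P(black drawn from the last urn) after one chip
    is shifted at random from each urn into the next, left to right.

    urns: list of [black_count, white_count] pairs.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        state = [list(u) for u in urns]
        for i in range(len(state) - 1):
            black, white = state[i]
            move_black = rng.random() < black / (black + white)
            state[i][0 if move_black else 1] -= 1
            state[i + 1][0 if move_black else 1] += 1
        black, white = state[-1]
        hits += rng.random() < black / (black + white)
    return hits / trials

# Two urns: 3 black + 2 white, then 1 black + 4 white.
est = simulate_last_urn_black([[3, 2], [1, 4]])
# Total Probability Formula for the same two-urn setup:
exact = (3/5) * (2/6) + (2/5) * (1/6)
print(f"simulated {est:.4f}  vs  exact {exact:.4f}")
```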

  8. Statistical Segmentation of Tone Sequences Activates the Left Inferior Frontal Cortex: A Near-Infrared Spectroscopy Study

    ERIC Educational Resources Information Center

    Abla, Dilshat; Okanoya, Kazuo

    2008-01-01

    Word segmentation, that is, discovering the boundaries between words that are embedded in a continuous speech stream, is an important faculty for language learners; humans solve this task partly by calculating transitional probabilities between sounds. Behavioral and ERP studies suggest that detection of sequential probabilities (statistical…

  9. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    PubMed Central

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781

  10. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach

  11. Metropolitan open-space protection with uncertain site availability

    Treesearch

    Robert G. Haight; Stephanie A. Snyder; Charles S. Revelle

    2005-01-01

    Urban planners acquire open space to protect natural areas and provide public access to recreation opportunities. Because of limited budgets and dynamic land markets, acquisitions take place sequentially depending on available funds and sites. To address these planning features, we formulated a two-period site selection model with two objectives: maximize the...

  12. Nanomedicine for Early Disease Detection and Treatment

    DTIC Science & Technology

    2013-09-01

    Award Number: W81XWH-11-1-0442. TITLE: Nanomedicine for early disease ... been developed to report and cure diseases. ESNM is prepared with multiple layers of polyelectrolytes, sequentially assembled on an inert gold ... molecular characteristics of the patient and his/her specific diseased tissues with the treatment. In order to maximize therapeutic effects and ...

  13. Optimization of Multiple Related Negotiation through Multi-Negotiation Network

    NASA Astrophysics Data System (ADS)

    Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi

    In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not execute MRN optimally in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use an MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on the MNN. Secondly, by employing an MNID, an agent's possible decision on each related negotiation is reflected by the value of its expected utility. Lastly, by comparing the expected utilities of all possible policies for conducting MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful-end scenario, and avoid unnecessary losses in an unsuccessful-end scenario.

  14. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

    Power law distributions play an increasingly important role in the study of complex systems. Motivated by the insolvability of complex systems, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, of power form, and of the product of a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. This shows that the maximum entropy principle can fully replace the equal probability hypothesis. Since the power-law distribution and the power-times-exponential distribution, which cannot be derived from the equal probability hypothesis, can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is a more basic principle, one which embodies broader concepts and reveals the laws of motion of objects more fundamentally. At the same time, this principle reveals the intrinsic link between Nature and different objects in human society and the principles they all obey.
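
    For reference, the standard maximum-entropy calculation (without the paper's incomplete-statistics exponents) already shows how the choice of constraints selects these functional forms; the derivation below is the textbook version, not the authors' generalized one.

        \max_{\{p_i\}} \; S = -\sum_i p_i \ln p_i
        \quad \text{subject to} \quad
        \sum_i p_i = 1, \qquad \sum_i p_i E_i = U, \qquad \sum_i p_i \ln E_i = L .

        \frac{\partial}{\partial p_i}\Big[ S - \lambda_0 \sum_j p_j - \beta \sum_j p_j E_j - \alpha \sum_j p_j \ln E_j \Big] = 0
        \;\;\Longrightarrow\;\;
        p_i \propto e^{-\beta E_i - \alpha \ln E_i} = E_i^{-\alpha}\, e^{-\beta E_i} .

    Setting \alpha = 0 recovers the exponential form, \beta = 0 the power law, and keeping both constraints gives the power-times-exponential product named in the abstract.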

  15. Expert system for online surveillance of nuclear reactor coolant pumps

    DOEpatents

    Gross, Kenny C.; Singer, Ralph M.; Humenik, Keith E.

    1993-01-01

    An expert system for online surveillance of nuclear reactor coolant pumps. This system provides a means for early detection of pump or sensor degradation. Degradation is determined through the use of a statistical analysis technique, sequential probability ratio test, applied to information from several sensors which are responsive to differing physical parameters. The results of sequential testing of the data provide the operator with an early warning of possible sensor or pump failure.

  16. The usefulness of administrative databases for identifying disease cohorts is increased with a multivariate model.

    PubMed

    van Walraven, Carl; Austin, Peter C; Manuel, Douglas; Knoll, Greg; Jennings, Allison; Forster, Alan J

    2010-12-01

    Administrative databases commonly use codes to indicate diagnoses. These codes alone are often inadequate to accurately identify patients with particular conditions. In this study, we determined whether we could quantify the probability that a person has a particular disease-in this case renal failure-using other routinely collected information available in an administrative data set. This would allow the accurate identification of a disease cohort in an administrative database. We determined whether patients in a randomly selected 100,000 hospitalizations had kidney disease (defined as two or more sequential serum creatinines or the single admission creatinine indicating a calculated glomerular filtration rate less than 60 mL/min/1.73 m²). The independent association of patient- and hospitalization-level variables with renal failure was measured using a multivariate logistic regression model in a random 50% sample of the patients. The model was validated in the remaining patients. Twenty thousand seven hundred thirteen patients had kidney disease (20.7%). A diagnostic code of kidney disease was strongly associated with kidney disease (relative risk: 34.4), but the accuracy of the code was poor (sensitivity: 37.9%; specificity: 98.9%). Twenty-nine patient- and hospitalization-level variables entered the kidney disease model. This model had excellent discrimination (c-statistic: 90.1%) and accurately predicted the probability of true renal failure. The probability threshold that maximized sensitivity and specificity for the identification of true kidney disease was 21.3% (sensitivity: 80.0%; specificity: 82.2%). Multiple variables available in administrative databases can be combined to quantify the probability that a person has a particular disease. This process permits accurate identification of a disease cohort in an administrative database. These methods may be extended to other diagnoses or procedures and could both facilitate and clarify the use of administrative databases for research and quality improvement. Copyright © 2010 Elsevier Inc. All rights reserved.
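
    A minimal sketch of how such a probability threshold can be chosen on a validation split: scan candidate thresholds on the model's predicted probabilities and keep the one that maximizes the sum of sensitivity and specificity (Youden's index). The use of Youden's index as the selection rule, and all names and numbers below, are assumptions for illustration; the abstract states only that the threshold "maximized sensitivity and specificity".

        import numpy as np

        def best_threshold(y_true, p_pred):
            """Return the probability threshold maximizing sensitivity + specificity
            (Youden's J) on labeled validation data."""
            y_true = np.asarray(y_true, dtype=bool)
            p_pred = np.asarray(p_pred, dtype=float)
            best_t, best_j = 0.5, -np.inf
            for t in np.unique(p_pred):
                pred = p_pred >= t
                sens = np.mean(pred[y_true])       # true positive rate
                spec = np.mean(~pred[~y_true])     # true negative rate
                if sens + spec > best_j:
                    best_t, best_j = t, sens + spec
            return best_t

        # toy usage with simulated labels and model scores
        rng = np.random.default_rng(0)
        y = rng.random(1000) < 0.2
        p = np.clip(0.2 + 0.3 * y + 0.2 * rng.standard_normal(1000), 0.0, 1.0)
        print(best_threshold(y, p))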

  17. Maximizing the Detection Probability of Kilonovae Associated with Gravitational Wave Observations

    NASA Astrophysics Data System (ADS)

    Chan, Man Leong; Hu, Yi-Ming; Messenger, Chris; Hendry, Martin; Heng, Ik Siong

    2017-01-01

    Estimates of the source sky location for gravitational wave signals are likely to span areas of up to hundreds of square degrees or more, making it very challenging for most telescopes to search for counterpart signals in the electromagnetic spectrum. To boost the chance of successfully observing such counterparts, we have developed an algorithm that optimizes the number of observing fields and their corresponding time allocations by maximizing the detection probability. As a proof-of-concept demonstration, we optimize follow-up observations targeting kilonovae using telescopes including the CTIO-Dark Energy Camera, Subaru-HyperSuprimeCam, Pan-STARRS, and the Palomar Transient Factory. We consider three simulated gravitational wave events with 90% credible error regions spanning areas from ∼30 deg² to ∼300 deg². Assuming a source at 200 Mpc, we demonstrate that to obtain a maximum detection probability, there is an optimized number of fields for any particular event that a telescope should observe. To inform future telescope design studies, we present the maximum detection probability and corresponding number of observing fields for a combination of limiting magnitudes and fields of view over a range of parameters. We show that for large gravitational wave error regions, telescope sensitivity rather than field of view is the dominating factor in maximizing the detection probability.
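
    The optimization itself is not spelled out in the abstract, so the sketch below only illustrates the underlying idea with a toy greedy allocation: each candidate field carries some of the sky-map probability, detection within a field is assumed to saturate with exposure time as 1 - exp(-t/tau), and the total observing time is handed out in small steps to whichever field currently offers the largest marginal gain. The saturation model, tau, and all numbers are assumptions, not the authors' method.

        import numpy as np

        def allocate_time(p_field, total_time, tau=600.0, step=60.0):
            """Greedy exposure-time allocation (seconds) across observing fields.
            p_field[i] : sky-map probability that the source lies in field i
            detection  : P(detect | source in field, exposure t) = 1 - exp(-t/tau)  (toy)"""
            p_field = np.asarray(p_field, dtype=float)
            t = np.zeros_like(p_field)
            for _ in range(int(total_time / step)):
                gain = p_field * (np.exp(-t / tau) - np.exp(-(t + step) / tau))
                i = int(np.argmax(gain))           # field with the largest marginal gain
                t[i] += step
            p_detect = float(np.sum(p_field * (1.0 - np.exp(-t / tau))))
            return t, p_detect

        # four fields covering 40%, 30%, 20% and 5% of the credible region, one hour total
        print(allocate_time([0.40, 0.30, 0.20, 0.05], total_time=3600.0))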

  18. 3D hybrid tectono-stochastic modeling of naturally fractured reservoir: Application of finite element method and stochastic simulation technique

    NASA Astrophysics Data System (ADS)

    Gholizadeh Doonechaly, N.; Rahman, S. S.

    2012-05-01

    Simulation of naturally fractured reservoirs offers significant challenges due to the lack of a methodology that can utilize field data. To date several methods have been proposed to characterize naturally fractured reservoirs. Among them is the unfolding/folding method, which offers some degree of accuracy in estimating the probability of the existence of fractures in a reservoir. There are also statistical approaches that integrate all levels of field data to simulate the fracture network. This approach, however, depends on the availability of data sources, such as seismic attributes, core descriptions, and well logs, which are often difficult to obtain field-wide. In this study a hybrid tectono-stochastic simulation is proposed to characterize a naturally fractured reservoir. A finite element based model is used to simulate the tectonic event of folding and unfolding of a geological structure. A nested neuro-stochastic technique is used to develop the inter-relationship between the data, and at the same time it utilizes the sequential Gaussian approach to analyze field data along with fracture probability data. This approach has the ability to overcome the commonly experienced discontinuity of the data in both horizontal and vertical directions. This hybrid technique is used to generate a discrete fracture network of a specific Australian gas reservoir, Palm Valley in the Northern Territory. Results of this study have significant benefit for accurately simulating fluid flow and placing wells for maximal hydrocarbon recovery.

  19. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  20. Sequential experimental design based generalised ANOVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  1. Dosimetric comparison of standard three-dimensional conformal radiotherapy followed by intensity-modulated radiotherapy boost schedule (sequential IMRT plan) with simultaneous integrated boost–IMRT (SIB IMRT) treatment plan in patients with localized carcinoma prostate

    PubMed Central

    Bansal, A.; Kapoor, R.; Singh, S. K.; Kumar, N.; Oinam, A. S.; Sharma, S. C.

    2012-01-01

    Aims: Dosimeteric and radiobiological comparison of two radiation schedules in localized carcinoma prostate: Standard Three-Dimensional Conformal Radiotherapy (3DCRT) followed by Intensity Modulated Radiotherapy (IMRT) boost (sequential-IMRT) with Simultaneous Integrated Boost IMRT (SIB-IMRT). Material and Methods: Thirty patients were enrolled. In all, the target consisted of PTV P + SV (Prostate and seminal vesicles) and PTV LN (lymph nodes) where PTV refers to planning target volume and the critical structures included: bladder, rectum and small bowel. All patients were treated with sequential-IMRT plan, but for dosimetric comparison, SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction, however, the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose–volume histograms. Also, Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. Results: The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCP of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel was seen with sequential-IMRT and SIB-IMRT plans respectively. Conclusions: For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures, therefore should be considered as the strategy for dose escalation. SIB-IMRT achieves lesser NTCP than sequential-IMRT. PMID:23204659

  2. Sequential biases in accumulating evidence

    PubMed Central

    Huggins, Richard; Dogo, Samson Henry

    2015-01-01

    Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562

  3. The Effects of the Previous Outcome on Probabilistic Choice in Rats

    PubMed Central

    Marshall, Andrew T.; Kirkpatrick, Kimberly

    2014-01-01

    This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915

  4. Maximizing carbon storage in the Appalachians: A method for considering the risk of disturbance events

    Treesearch

    Michael R. Vanderberg; Kevin Boston; John Bailey

    2011-01-01

    Accounting for the probability of loss due to disturbance events can influence the prediction of carbon flux over a planning horizon, and can affect the determination of optimal silvicultural regimes to maximize terrestrial carbon storage. A preliminary model that includes forest disturbance-related carbon loss was developed to maximize expected values of carbon stocks...

  5. An exact computational method for performance analysis of sequential test algorithms for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Lacy, Fred; Carriere, Patrick

    2015-05-01

    Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.

  6. The impact of eyewitness identifications from simultaneous and sequential lineups.

    PubMed

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  7. Maximizing the probability of satisfying the clinical goals in radiation therapy treatment planning under setup uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders

    2015-07-15

    Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.

  8. Uncountably many maximizing measures for a dense subset of continuous functions

    NASA Astrophysics Data System (ADS)

    Shinoda, Mao

    2018-05-01

    Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given ‘performance’ function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions there exist uncountably many ergodic maximizing measures with full support and positive entropy.

  9. A probabilistic quantum communication protocol using mixed entangled channel

    NASA Astrophysics Data System (ADS)

    Choudhury, Binayak S.; Dhara, Arpan

    2016-05-01

    Qubits are realized as polarization states of photons or as superpositions of the spin states of electrons. In this paper we propose a scheme to probabilistically teleport an unknown arbitrary two-qubit state using a non-maximally entangled GHZ-like state and a non-maximally entangled Bell state simultaneously as quantum channels. We also discuss the success probability of our scheme. We perform a POVM in the protocol, which is operationally advantageous. In our scheme we show that the non-maximal quantum resources perform better than maximal resources.

  10. Levobupivacaine vs racemic bupivacaine in spinal anesthesia for sequential bilateral total knee arthroplasty: a retrospective cohort study.

    PubMed

    Chen, Chee Kean; Lau, Francis C S; Lee, Woo Guan; Phui, Vui Eng

    2016-09-01

    To compare the anesthetic potency and safety of spinal anesthesia with higher dosages of levobupivacaine and bupivacaine in patients undergoing sequential bilateral total knee arthroplasty (TKA). Retrospective cohort study. Operation theater with postoperative inpatient follow-up. The medical records of 315 patients who underwent sequential bilateral TKA were reviewed. Patients who received intrathecal levobupivacaine 0.5% were compared with patients who received hyperbaric bupivacaine 0.5% with fentanyl 25 μg for spinal anesthesia. The primary outcome was the use of rescue analgesia (systemic opioids, conversion to general anesthesia) during surgery for both groups. Secondary outcomes included adverse effects of local anesthetics (hypotension and bradycardia) during surgery and morbidity related to spinal anesthesia (postoperative nausea, vomiting, and bleeding) during hospital stay. One hundred fifty patients who received intrathecal levobupivacaine 0.5% (group L) were compared with 90 patients given hyperbaric bupivacaine 0.5% with fentanyl 25 μg (group B). The mean volume of levobupivacaine administered was 5.8 mL (range, 5.0-6.0 mL), and that of bupivacaine was 3.8 mL (range, 3.5-4.0 mL). Both groups achieved a similar maximal sensory level of block (T6). The time to maximal height of sensory block was significantly shorter in group B than group L, 18.2 ± 4.5 vs 23.9 ± 3.8 minutes (P < .001). The time to motor block of Bromage 3 was also shorter in group B (8.7 ± 4.1 minutes) than group L (16.0 ± 4.5 minutes) (P < .001). Patients in group B required more anesthetic supplement than group L (P < .001). Hypotension and postoperative bleeding were significantly less common in group L than group B. Levobupivacaine at a higher dosage provided a longer duration of spinal anesthesia with a better safety profile in sequential bilateral TKA. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Designing Robust and Resilient Tactical MANETs

    DTIC Science & Technology

    2014-09-25

    Bounds on the Throughput Efficiency of Greedy Maximal Scheduling in Wireless Networks, IEEE/ACM Transactions on Networking, (06 2011): 0. doi: N... Wireless Sensor Networks and Effects of Long Range Dependant Data, Special IWSM Issue of Sequential Analysis, (11 2012): 0. doi: A. D. Dominguez... Bushnell, R. Poovendran. A Convex Optimization Approach for Clone Detection in Wireless Sensor Networks, Pervasive and Mobile Computing, (01 2012

  12. Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir

    PubMed Central

    Montaner, Luis J.

    2017-01-01

    Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. PMID:28520969

  13. Maximizing the Spread of Influence via Generalized Degree Discount.

    PubMed

    Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun

    2016-01-01

    It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all the nodes equally without any differences. To consider a general situation in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as the probability of not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. The spreaders are then selected sequentially according to their generalized discounted degree in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those found by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods.
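
    The abstract does not give the exact update rule, so the sketch below is only one plausible reading of it: every node keeps the probability that it has not yet been influenced by any already-selected seed, and its generalized discounted degree is the expected number of additional nodes it would influence, weighted by those probabilities. The edge influence probabilities, the update formula, and the helper names are assumptions for illustration.

        def generalized_degree_discount(adj_p, k):
            """Select k spreaders from a weighted graph.
            adj_p[u] = {v: p_uv} gives the influence probability of each edge."""
            nodes = list(adj_p)
            not_influenced = {v: 1.0 for v in nodes}   # P(v not yet influenced by the seeds)
            seeds = []
            for _ in range(k):
                def gdd(v):
                    # expected extra influence: v itself (if still uninfluenced)
                    # plus its still-uninfluenced neighbours outside the seed set
                    spread = sum(p * not_influenced[u]
                                 for u, p in adj_p[v].items() if u not in seeds)
                    return not_influenced[v] * (1.0 + spread)
                best = max((v for v in nodes if v not in seeds), key=gdd)
                seeds.append(best)
                for u, p in adj_p[best].items():       # discount the new seed's neighbours
                    not_influenced[u] *= (1.0 - p)
            return seeds

        # toy 4-node graph with edge influence probabilities
        g = {0: {1: 0.5, 2: 0.5}, 1: {0: 0.5, 3: 0.2},
             2: {0: 0.5, 3: 0.2}, 3: {1: 0.2, 2: 0.2}}
        print(generalized_degree_discount(g, k=2))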

  14. Maximizing the Spread of Influence via Generalized Degree Discount

    PubMed Central

    Wang, Xiaojie; Zhang, Xue; Zhao, Chengli; Yi, Dongyun

    2016-01-01

    It is a crucial and fundamental issue to identify a small subset of influential spreaders that can control the spreading process in networks. In previous studies, a degree-based heuristic called DegreeDiscount has been shown to effectively identify multiple influential spreaders and has served as a benchmark method. However, the basic assumption of DegreeDiscount is not adequate, because it treats all the nodes equally without any differences. To consider a general situation in real-world networks, a novel heuristic method named GeneralizedDegreeDiscount is proposed in this paper as an effective extension of the original method. In our method, the status of a node is defined as the probability of not being influenced by any of its neighbors, and an index, the generalized discounted degree of a node, is presented to measure the expected number of nodes it can influence. The spreaders are then selected sequentially according to their generalized discounted degree in the current network. Empirical experiments are conducted on four real networks, and the results show that the spreaders identified by our approach are more influential than those found by several benchmark methods. Finally, we analyze the relationship between our method and three common degree-based methods. PMID:27732681

  15. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

    A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neumann-Pearson detection.

  16. Some sequential, distribution-free pattern classification procedures with applications

    NASA Technical Reports Server (NTRS)

    Poage, J. L.

    1971-01-01

    Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.

  17. A Bayesian sequential processor approach to spectroscopic portal system decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sale, K; Candy, J; Breitfeller, E

    The development of faster more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.

  18. ANALYSES OF RESPONSE–STIMULUS SEQUENCES IN DESCRIPTIVE OBSERVATIONS

    PubMed Central

    Samaha, Andrew L; Vollmer, Timothy R; Borrero, Carrie; Sloman, Kimberly; Pipkin, Claire St. Peter; Bourret, Jason

    2009-01-01

    Descriptive observations were conducted to record problem behavior displayed by participants and to record antecedents and consequences delivered by caregivers. Next, functional analyses were conducted to identify reinforcers for problem behavior. Then, using data from the descriptive observations, lag-sequential analyses were conducted to examine changes in the probability of environmental events across time in relation to occurrences of problem behavior. The results of the lag-sequential analyses were interpreted in light of the results of functional analyses. Results suggested that events identified as reinforcers in a functional analysis followed behavior in idiosyncratic ways: after a range of delays and frequencies. Thus, it is possible that naturally occurring reinforcement contingencies are arranged in ways different from those typically evaluated in applied research. Further, these complex response–stimulus relations can be represented by lag-sequential analyses. However, limitations to the lag-sequential analysis are evident. PMID:19949537

  19. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.

  20. [Pay attention to the application of the international intraocular retinoblastoma classification and sequential multiple modality treatment].

    PubMed

    Fan, X Q

    2017-08-11

    Retinoblastoma (RB) is the most common intraocular malignancy in childhood. It may seriously affect vision and even threaten life. The early diagnosis rate of RB in China remains low, and the majority of patients present at a late phase, with high rates of enucleation and mortality. The International Intraocular Retinoblastoma Classification and the TNM staging system guide therapeutic choices and serve as bases for prognosis evaluation. Within the sequential multiple modality treatment framework, chemotherapy combined with local therapy is the mainstream approach to RB and may maximize eye salvage and even vision retention. New therapeutic techniques, including supra-selective ophthalmic artery interventional chemotherapy and intravitreal chemotherapy, can further improve the efficacy of treatment, especially the eye salvage rate. The overall level of RB treatment should be improved by promoting the international classification, new therapeutic techniques, and the sequential multiple modality treatment. (Chin J Ophthalmol, 2017, 53: 561-565).

  1. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    NASA Astrophysics Data System (ADS)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  2. Implicit Learning of Predictive Relationships in Three-element Visual Sequences by Young and Old Adults

    PubMed Central

    Howard, James H.; Howard, Darlene V.; Dennis, Nancy A.; Kelly, Andrew J.

    2008-01-01

    Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here we report four experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT people respond only to the last target event in a series of discrete, three-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results revealed that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the second cue alone. We conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing. PMID:18763897

  3. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera: Noctuidae).

    PubMed

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
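
    With those parameters, the decision lines of a Wald binomial sequential sampling plan can be written down directly. The sketch below assumes that the lower and upper boundaries quoted above (0.05 and 0.15) play the role of the two hypothesized infestation proportions p0 and p1 with alpha = beta = 0.1; that mapping is an assumption for illustration, not a restatement of the authors' software.

        import math

        def wald_stop_lines(p0=0.05, p1=0.15, alpha=0.1, beta=0.1, n_max=50):
            """Upper ('treat') and lower ('do not treat') stop lines for the cumulative
            number of infested plants after n plants have been inspected."""
            k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))
            s = math.log((1 - p0) / (1 - p1)) / k        # common slope of both lines
            h1 = math.log((1 - beta) / alpha) / k        # upper intercept
            h0 = math.log(beta / (1 - alpha)) / k        # lower intercept (negative)
            return [(n, h0 + s * n, h1 + s * n) for n in range(1, n_max + 1)]

        # sampling continues while the infested count stays strictly between the lines
        for n, low, high in wald_stop_lines()[:5]:
            print(f"n={n:2d}  stop-low<{low:5.2f}  stop-high>{high:5.2f}")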

  4. Reserve design to maximize species persistence

    Treesearch

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  5. State-Dependent Risk Preferences in Evolutionary Games

    NASA Astrophysics Data System (ADS)

    Roos, Patrick; Nau, Dana

    There is much empirical evidence that human decision-making under risk does not correspond to the decision-theoretic notion of "rational" decision making, namely making choices that maximize expected value. An open question is how such behavior could have arisen evolutionarily. We believe that the answer to this question lies, at least in part, in the interplay between risk-taking and sequentiality of choice in evolutionary environments.

  6. On the recognition of complex structures: Computer software using artificial intelligence applied to pattern recognition

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.

    1974-01-01

    An approach to simultaneous interpretation of objects in complex structures so as to maximize a combined utility function is presented. Results of the application of a computer software system to assign meaning to regions in a segmented image based on the principles described in this paper and on a special interactive sequential classification learning system, which is referenced, are demonstrated.

  7. Localisation in a Growth Model with Interaction

    NASA Astrophysics Data System (ADS)

    Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.

    2018-05-01

    This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.

  8. Localisation in a Growth Model with Interaction

    NASA Astrophysics Data System (ADS)

    Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.

    2018-06-01

    This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.

  9. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali

    1997-01-01

    Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that bit error probability performance (BER) of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.

  10. Diagnostic causal reasoning with verbal information.

    PubMed

    Meder, Björn; Mayrhofer, Ralf

    2017-08-01

    In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people's inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., "X occasionally causes A"). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes' prior probabilities and the effects' likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework. Copyright © 2017 Elsevier Inc. All rights reserved.
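
    A minimal sketch of the sequential diagnostic computation described above: starting from prior probabilities over the candidate causes, each observed effect multiplies in its likelihood and the posterior is renormalized, and an optional exponent on later likelihoods illustrates the kind of temporal weighting (recency) the authors model. The numbers, which in the study would come from the verbal terms' numerical equivalents, are placeholders.

        import numpy as np

        def sequential_diagnosis(prior, likelihoods, recency=1.0):
            """prior[c]          : prior probability of cause c
            likelihoods[t][c]    : P(effect observed at step t | cause c)
            recency              : >1 up-weights later evidence, <1 down-weights it."""
            post = np.asarray(prior, dtype=float)
            for t, like in enumerate(likelihoods):
                w = recency ** t                      # temporal weighting of evidence
                post = post * np.asarray(like, dtype=float) ** w
                post = post / post.sum()              # renormalize after each datum
            return post

        # two causes, three observed effects (placeholder numeric equivalents of verbal terms)
        prior = [0.5, 0.5]
        evidence = [[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]]
        print(sequential_diagnosis(prior, evidence))               # equal weighting
        print(sequential_diagnosis(prior, evidence, recency=1.5))  # recency-weighted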

  11. Introducing a Method for Calculating the Allocation of Attention in a Cognitive “Two-Armed Bandit” Procedure: Probability Matching Gives Way to Maximizing

    PubMed Central

    Heyman, Gene M.; Grisanzio, Katherine A.; Liang, Victor

    2016-01-01

    We tested whether principles that describe the allocation of overt behavior, as in choice experiments, also describe the allocation of cognition, as in attention experiments. Our procedure is a cognitive version of the “two-armed bandit choice procedure.” The two-armed bandit procedure has been of interest to psychologists and economists because it tends to support patterns of responding that are suboptimal. Each of two alternatives provides rewards according to fixed probabilities. The optimal solution is to choose the alternative with the higher probability of reward on each trial. However, subjects often allocate responses so that the probability of a response approximates its probability of reward. Although it is this result which has attracted most interest, probability matching is not always observed. As a function of monetary incentives, practice, and individual differences, subjects tend to deviate from probability matching toward exclusive preference, as predicted by maximizing. In our version of the two-armed bandit procedure, the monitor briefly displayed two small, adjacent stimuli that predicted correct responses according to fixed probabilities, as in a two-armed bandit procedure. We show that in this setting, a simple linear equation describes the relationship between attention and correct responses, and that the equation’s solution is the allocation of attention between the two stimuli. The calculations showed that attention allocation varied as a function of the degree to which the stimuli predicted correct responses. Linear regression revealed a strong correlation (r = 0.99) between the predictiveness of a stimulus and the probability of attending to it. Nevertheless there were deviations from probability matching, and although small, they were systematic and statistically significant. As in choice studies, attention allocation deviated toward maximizing as a function of practice, feedback, and incentives. Our approach also predicts the frequency of correct guesses and the relationship between attention allocation and response latencies. The results were consistent with these two predictions, the assumptions of the equations used to calculate attention allocation, and recent studies which show that predictiveness and reward are important determinants of attention. PMID:27014109
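
    The abstract does not reproduce the equation itself, so the snippet below shows only one plausible linear form consistent with the description: if attending to the more predictive stimulus yields a correct response with probability p1 and attending to the other stimulus yields one with probability p2, the observed proportion correct c = a*p1 + (1 - a)*p2 is linear in the allocation a and can be solved for directly. Both the functional form and the numbers are assumptions for illustration.

        def attention_allocation(c, p1, p2):
            """Solve c = a*p1 + (1 - a)*p2 for the proportion of trials a on which
            attention is allocated to the more predictive stimulus."""
            a = (c - p2) / (p1 - p2)
            return min(1.0, max(0.0, a))      # clip to a valid proportion

        # stimuli predict correct responses with probabilities .8 and .6,
        # and the observer is correct on 75% of trials
        print(attention_allocation(c=0.75, p1=0.8, p2=0.6))   # -> 0.75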

  12. Solid State Television Camera (CID)

    NASA Technical Reports Server (NTRS)

    Steele, D. W.; Green, W. T.

    1976-01-01

    The design, development and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low light level performance, high S/N ratio, antiblooming, geometric distortion, sequential scanning and AGC.

  13. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Technical Reports Server (NTRS)

    Graham, Olin L.

    1987-01-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are hence sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sample, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.

  14. Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir.

    PubMed

    Riley, James L; Montaner, Luis J

    2017-03-15

    Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  15. A Brief Review of Effective Teaching Practices That Maximize Student Engagement

    ERIC Educational Resources Information Center

    Harbour, Kristin E.; Evanovich, Lauren L.; Sweigart, Chris A.; Hughes, Lindsay E.

    2015-01-01

    What teachers do and how students perform intersect, making teachers a critical factor for determining student success. When teachers use effective practices, they maximize the probability that students will be actively engaged in instruction. Student engagement is one of the most well-established predictors of achievement; when students are more…

  16. Analysis of SET pulses propagation probabilities in sequential circuits

    NASA Astrophysics Data System (ADS)

    Cai, Shuo; Yu, Fei; Yang, Yiqun

    2018-05-01

    As the feature size of CMOS transistors scales down, single event transient (SET) has been an important consideration in designing logic circuits. Many researches have been done in analyzing the impact of SET. However, it is difficult to consider numerous factors. We present a new approach for analyzing the SET pulses propagation probabilities (SPPs). It considers all masking effects and uses SET pulses propagation probabilities matrices (SPPMs) to represent the SPPs in current cycle. Based on the matrix union operations, the SPPs in consecutive cycles can be calculated. Experimental results show that our approach is practicable and efficient.

  17. Optimal temperature ladders in replica exchange simulations

    NASA Astrophysics Data System (ADS)

    Denschlag, Robert; Lingenheil, Martin; Tavan, Paul

    2009-04-01

    In replica exchange simulations, a temperature ladder with N rungs spans a given temperature interval. Considering systems with heat capacities independent of the temperature, here we address the question of how large N should be chosen for an optimally fast diffusion of the replicas through the temperature space. Using a simple example we show that choosing average acceptance probabilities of about 45% and computing N accordingly maximizes the round trip rates r across the given temperature range. This result differs from previous analyses which suggested smaller average acceptance probabilities of about 23%. We show that the latter choice maximizes the ratio r/N instead of r.
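
    A hedged sketch of the kind of calculation behind these numbers: for a system whose heat capacity C is independent of temperature (with k_B = 1), the energy at temperature T is approximately Gaussian with mean proportional to C*T and variance C*T², so the average neighbour-swap acceptance of a geometric temperature ladder can be estimated by Monte Carlo and the rung count N scanned until the acceptance sits near the ~45% target. The Gaussian-energy approximation and the specific numbers are illustrative assumptions, not the paper's derivation.

        import numpy as np

        def mean_acceptance(C, t_low, t_high, n_rungs, samples=20000, seed=0):
            """Average neighbour-swap acceptance for a geometric ladder of n_rungs
            temperatures, assuming Gaussian energies with mean C*T and variance C*T**2."""
            rng = np.random.default_rng(seed)
            temps = np.geomspace(t_low, t_high, n_rungs)
            accs = []
            for T1, T2 in zip(temps[:-1], temps[1:]):
                e1 = rng.normal(C * T1, np.sqrt(C) * T1, samples)
                e2 = rng.normal(C * T2, np.sqrt(C) * T2, samples)
                # Metropolis swap criterion between neighbouring replicas
                accs.append(np.minimum(1.0, np.exp((1 / T1 - 1 / T2) * (e1 - e2))).mean())
            return float(np.mean(accs))

        # scan ladder sizes for, e.g., C = 50 between temperatures 300 and 600
        for n in range(4, 16):
            print(n, round(mean_acceptance(50.0, 300.0, 600.0, n), 3))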

  18. Maximizing Information Diffusion in the Cyber-physical Integrated Network †

    PubMed Central

    Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan

    2015-01-01

    Nowadays, our living environment is embedded with smart objects, such as smart sensors, smart watches and smart phones. Their abundant abilities of sensing, communication and computation integrate cyberspace and physical space, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks. PMID:26569254

  19. A Randomised Trial of empiric 14-day Triple, five-day Concomitant, and ten-day Sequential Therapies for Helicobacter pylori in Seven Latin American Sites

    PubMed Central

    Greenberg, E. Robert; Anderson, Garnet L.; Morgan, Douglas R.; Torres, Javier; Chey, William D.; Bravo, Luis Eduardo; Dominguez, Ricardo L.; Ferreccio, Catterina; Herrero, Rolando; Lazcano-Ponce, Eduardo C.; Meza-Montenegro, Mercedes María; Peña, Rodolfo; Peña, Edgar M.; Salazar-Martínez, Eduardo; Correa, Pelayo; Martínez, María Elena; Valdivieso, Manuel; Goodman, Gary E.; Crowley, John J.; Baker, Laurence H.

    2011-01-01

    Summary Background Evidence from Europe, Asia, and North America suggests that standard three-drug regimens of a proton pump inhibitor plus amoxicillin and clarithromycin are significantly less effective for eradicating Helicobacter pylori (H. pylori) infection than five-day concomitant and ten-day sequential four-drug regimens that include a nitroimidazole. These four-drug regimens also entail fewer antibiotic doses and thus may be suitable for eradication programs in low-resource settings. Studies from Latin America, where the burden of H. pylori-associated diseases is high, are limited, however. Methods We randomised 1463 men and women ages 21–65 selected from general populations in Chile, Colombia, Costa Rica, Honduras, Nicaragua, and Mexico (two sites) who tested positive for H. pylori by a urea breath test (UBT) to: 14 days of lansoprazole, amoxicillin, and clarithromycin (standard therapy); five days of lansoprazole, amoxicillin, clarithromycin, and metronidazole (concomitant therapy); or five days of lansoprazole and amoxicillin followed by five days of lansoprazole, clarithromycin, and metronidazole (sequential therapy). Eradication was assessed by UBT six–eight weeks after randomisation. Findings In intention-to-treat analyses, the probability of eradication with standard therapy was 82·2%, which was 8·6% higher (95% adjusted CI: 2·6%, 14·5%) than with concomitant therapy (73·6%) and 5·6% higher (95% adjusted CI: −0·04%, 11·6%) than with sequential therapy (76·5%). In analyses limited to the 1314 participants who adhered to their assigned therapy, the probabilities of eradication were 87·1%, 78·7%, and 81·1% with standard, concomitant, and sequential therapies, respectively. Neither four-drug regimen was significantly better than standard triple therapy in any of the seven sites. Interpretation Standard 14-day triple-drug therapy is preferable to five-day concomitant or ten-day sequential four-drug regimens as empiric therapy for H. pylori among diverse Latin American populations. Funding Bill & Melinda Gates Foundation and US National Institutes of Health. PMID:21777974

  20. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGES

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  1. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    PubMed

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
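
    To make the augment-until-maximal idea above concrete, here is a deliberately naive, serial Python sketch (not the paper's parallel algorithm): it starts from a spanning forest, which is trivially chordal, and keeps adding edges of the original graph that preserve chordality until no further edge can be added. It assumes the networkx package is available; the graph size and density in the example are arbitrary.

```python
import networkx as nx

def maximal_chordal_subgraph(g: nx.Graph) -> nx.Graph:
    """Greedily augment a spanning forest into a maximal chordal subgraph.

    Serial illustration only: a spanning forest has no cycles and is
    therefore chordal; edges of g are then added whenever chordality is
    preserved, repeating passes until no further edge can be added.
    """
    sub = nx.Graph()
    sub.add_nodes_from(g.nodes)
    sub.add_edges_from(nx.minimum_spanning_edges(g, data=False))  # spanning forest
    remaining = set(g.edges) - set(sub.edges)
    changed = True
    while changed:                       # repeat passes until a fixed point
        changed = False
        for u, v in list(remaining):
            sub.add_edge(u, v)
            if nx.is_chordal(sub):
                remaining.discard((u, v))
                changed = True
            else:
                sub.remove_edge(u, v)
    return sub

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(30, 0.2, seed=1)
    h = maximal_chordal_subgraph(g)
    print(nx.is_chordal(h), g.number_of_edges(), h.number_of_edges())
```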

  2. Evaluating specificity of sequential extraction for chemical forms of lead in artificially-contaminated and field-contaminated soils.

    PubMed

    Tai, Yiping; McBride, Murray B; Li, Zhian

    2013-03-30

    In the present study, we evaluated a commonly employed modified Bureau Communautaire de Référence (BCR test) 3-step sequential extraction procedure for its ability to distinguish forms of solid-phase Pb in soils with different sources and histories of contamination. When the modified BCR test was applied to mineral soils spiked with three forms of Pb (pyromorphite, hydrocerussite and nitrate salt), the added Pb was highly susceptible to dissolution in the operationally-defined "reducible" or "oxide" fraction regardless of form. When three different materials (mineral soil, organic soil and goethite) were spiked with soluble Pb nitrate, the BCR sequential extraction profiles revealed that soil organic matter was capable of retaining Pb in more stable and acid-resistant forms than silicate clay minerals or goethite. However, the BCR sequential extraction for field-collected soils with known and different sources of Pb contamination was not sufficiently discriminatory in the dissolution of soil Pb phases to allow soil Pb forms to be "fingerprinted" by this method. It is concluded that standard sequential extraction procedures are probably not very useful in predicting lability and bioavailability of Pb in contaminated soils. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. [Sequential sampling plans to Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling is characterized by the use of samples of variable size, and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio', at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 samples for stands with low and high infestation, respectively.

  4. Condition-dependent mate choice: A stochastic dynamic programming approach.

    PubMed

    Frame, Alicia M; Mills, Alex F

    2014-09-01

    We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
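
    The optimal policy described above is a quality threshold obtained by dynamic programming. The Python sketch below computes such thresholds by backward induction for a heavily simplified version of the problem (fixed female condition, i.i.d. Uniform(0, 1) male quality, a constant search cost, and a finite horizon); all numbers are illustrative, and the paper's full condition-dependent model is richer.

```python
import numpy as np

def mate_choice_thresholds(horizon, search_cost, n_grid=10_001):
    """Backward induction for a simplified sequential mate-choice problem.

    Assumptions (illustrative only): male quality q ~ Uniform(0, 1), i.i.d.
    each period; mating now yields reward q and stops the search; continuing
    costs `search_cost` and moves to the next period.  The optimal rule is a
    period-dependent quality threshold equal to continuation value minus cost.
    """
    q = np.linspace(0.0, 1.0, n_grid)
    continuation = 0.0                  # V_T: value after the last period
    thresholds = []
    for _ in range(horizon):            # iterate t = T-1, ..., 0
        threshold = continuation - search_cost     # mate iff q >= V_{t+1} - c
        value_now = np.maximum(q, threshold)        # V_t(q) = max(q, V_{t+1} - c)
        thresholds.append(threshold)
        continuation = value_now.mean()             # V_t = E_q[V_t(q)]
    return list(reversed(thresholds))   # thresholds[t] for periods t = 0..T-1

# Thresholds decline as the horizon approaches: the female becomes less choosy.
print(np.round(mate_choice_thresholds(horizon=5, search_cost=0.05), 3))
```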

  5. [Significance of heterogeneity in endothelium-dependent vasodilatation occurrence in healthy individuals with or without coronary risk factors].

    PubMed

    Polovina, Marija; Potpara, Tatjana; Giga, Vojislav; Ostojić, Miodrag

    2009-10-01

    Brachial artery flow-mediated dilation (FMD) is extensively used for non-invasive assessment of endothelial function. Traditionally, FMD is calculated as a percent change of arterial diameter from the baseline value at an arbitrary time point after cuff deflation (usually 60 seconds). Considerable individual differences in brachial artery temporal response to hyperemic stimulus have been observed, potentially influenced by the presence of atherosclerotic risk factors (RF). The importance of such differences for the evaluation of endothelial function has not been well established. The aim of the study was to determine the time course of maximal brachial artery endothelium-dependent dilation in healthy adults with and without RF, to explore the correlation of RF with brachial artery temporal response and to evaluate the importance of individual differences in temporal response for the assessment of endothelial function. A total of 115 healthy volunteers were included in the study. Out of them, 58 had no RF (26 men, mean age 44 +/-14 years) and 57 had at least one RF (29 men, mean age 45 +/-14 years). High-resolution color Doppler vascular ultrasound was used for brachial artery imaging. To determine the maximal arterial diameter after cuff deflation and the time-point of maximal vasodilation, off-line sequential measurements were performed every 10 seconds from 0 to 240 seconds after cuff release. The true maximal FMD value was calculated as a percent change of the true maximal diameter from the baseline, and compared with the FMD value calculated assuming that every participant reached maximal dilation at 60 seconds post cuff deflation (FMD60). Correlation of different RF with brachial artery temporal response was assessed. Maximal brachial artery endothelium-dependent vasodilation occurred from 30-120 seconds after cuff release, and the mean time of endothelium-dependent dilation was 68 +/-20 seconds. Individuals without RF had faster endothelium-dependent dilation (mean time 62 +/-17 seconds) and a shorter time-span (30 to 100 seconds) than participants with RF (mean time 75 +/-21 seconds, time-span 40 to 120 seconds) (p < 0.001). The time when the maximal endothelium-dependent dilation occurred was independently associated with age, serum lipid fractions (total cholesterol, LDL and HDL cholesterol), smoking, physical activity and C-reactive protein. The true maximal FMD value in the whole group (6.7 +/-3.0%) was significantly higher (p < 0.001) than FMD60 (5.2 +/-3.5%). The same results were demonstrated for individuals with RF (4.9 +/- 1.7% vs 3.1 +/- 2.3%, p < 0.001) and without RF (8.4 +/- 2.9% vs 7.2 +/- 3.2%, p < 0.05). The temporal response of endothelium-dependent dilation is individually heterogeneous and influenced by the presence of coronary RF. When calculated according to the commonly used approach, i.e. 60 seconds after cuff deflation, FMD is significantly lower than the true maximal FMD. The routinely used measurement time-points for FMD assessment may not be adequate for the detection of true peak vasodilation in individual persons. More precise evaluation of endothelial function can be achieved with sequential measurement of arterial diameter after the hyperemic stimulus.

  6. Sequential state discrimination and requirement of quantum dissonance

    NASA Astrophysics Data System (ADS)

    Pang, Chao-Qian; Zhang, Fu-Lin; Xu, Li-Fang; Liang, Mai-Lin; Chen, Jing-Ling

    2013-11-01

    We study the procedure for sequential unambiguous state discrimination. A qubit is prepared in one of two possible states and measured by two observers, Bob and Charlie, sequentially. A necessary condition for the state to be unambiguously discriminated by Charlie is the absence of entanglement between the principal qubit, prepared by Alice, and Bob's auxiliary system. In general, the procedure by which both Bob and Charlie conclusively discriminate between the two nonorthogonal states relies on the availability of quantum discord, which is precisely the quantum dissonance when entanglement is absent. In Bob's measurement, the left discord is positively correlated with the information extracted by Bob, and the right discord enhances the information left to Charlie. When their product achieves its maximum, the probability for both Bob and Charlie to identify the state achieves its optimal value.

  7. Quantum Tasks with Non-maximally Quantum Channels via Positive Operator-Valued Measurement

    NASA Astrophysics Data System (ADS)

    Peng, Jia-Yin; Luo, Ming-Xing; Mo, Zhi-Wen

    2013-01-01

    By using a proper positive operator-valued measure (POVM), we present two new schemes for probabilistic transmission with non-maximally entangled four-particle cluster states. In the first scheme, we demonstrate that two non-maximally entangled four-particle cluster states can be used to probabilistically share an unknown three-particle GHZ-type state with either distant agent. In the second protocol, we demonstrate that a non-maximally entangled four-particle cluster state can be used to teleport an arbitrary unknown multi-particle state in a probabilistic manner with appropriate unitary operations and POVM. Moreover, the total success probabilities of these two schemes are also worked out.

  8. Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths.

    PubMed

    Liu, Zhicheng; Wang, Yang; Dontcheva, Mira; Hoffman, Matthew; Walker, Seth; Wilson, Alan

    2017-01-01

    Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.

  9. On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover to find a solution with high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending sequence by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population but probabilities equal to zero to individuals with lower fitness, assuming that the probability distribution to choose individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized also to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
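
    A minimal Python sketch of the selection rule described above: each parent is drawn uniformly at random from the k fittest individuals, and individuals with lower fitness get zero selection probability. The cutoff k, the toy fitness function and the population are illustrative assumptions, not values from the paper.

```python
import random

def rectangular_selection(population, fitness, k, rng=random):
    """Select one parent uniformly at random from the k fittest individuals.

    Implements the 'rectangular' selection distribution described above:
    equal probability for the top-k individuals (sorted by fitness, best
    first), zero probability for everyone else.  k is a tuning parameter.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    return rng.choice(ranked[:k])

# Toy usage: maximize fitness(x) = -(x - 3)^2 over integer genomes.
pop = list(range(-10, 11))
parent_a = rectangular_selection(pop, lambda x: -(x - 3) ** 2, k=5)
parent_b = rectangular_selection(pop, lambda x: -(x - 3) ** 2, k=5)
print(parent_a, parent_b)
```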

  10. Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Soyer, Emre

    2011-01-01

    Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…

  11. The Effects of Heuristics and Apophenia on Probabilistic Choice.

    PubMed

    Ellerby, Zack W; Tunney, Richard J

    2017-01-01

    Given a repeated choice between two or more options with independent and identically distributed reward probabilities, overall pay-offs can be maximized by the exclusive selection of the option with the greatest likelihood of reward. The tendency to match response proportions to reward contingencies is suboptimal. Nevertheless, this behaviour is well documented. A number of explanatory accounts have been proposed for probability matching. These include failed pattern matching, driven by apophenia, and a heuristic-driven response that can be overruled with sufficient deliberation. We report two experiments that were designed to test the relative effects on choice behaviour of both an intuitive versus strategic approach to the task and belief that there was a predictable pattern in the reward sequence, through a combination of both direct experimental manipulation and post-experimental self-report. Mediation analysis was used to model the pathways of effects. Neither of two attempted experimental manipulations of apophenia, nor self-reported levels of apophenia, had a significant effect on proportions of maximizing choices. However, the use of strategy over intuition proved a consistent predictor of maximizing, across all experimental conditions. A parallel analysis was conducted to assess the effect of controlling for individual variance in perceptions of reward contingencies. Although this analysis suggested that apophenia did increase probability matching in the standard task preparation, this effect was found to result from an unforeseen relationship between self-reported apophenia and perceived reward probabilities. A Win-Stay Lose-Shift (WSLS ) analysis indicated no reliable relationship between WSLS and either intuition or strategy use.
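
    As a small numeric companion to the account above, the Python sketch below contrasts the expected proportion of rewarded choices under maximizing and under matching, and computes the proportion of choices consistent with win-stay/lose-shift from a choice/outcome sequence; the exact analyses reported in the study may differ.

```python
import numpy as np

def expected_accuracy(p_reward: float) -> dict:
    """Expected proportion of rewarded choices under two strategies.

    With one option rewarded with probability p (> 0.5) and the other with
    1 - p: always choosing the better option ('maximizing') is rewarded on
    a fraction p of trials, while choosing options in proportion to their
    reward rates ('matching') is rewarded on p^2 + (1 - p)^2 of trials.
    """
    return {"maximizing": p_reward,
            "matching": p_reward ** 2 + (1 - p_reward) ** 2}

def wsls_proportion(choices, rewards) -> float:
    """Fraction of trials (after the first) consistent with win-stay/lose-shift."""
    choices = np.asarray(choices)
    rewards = np.asarray(rewards, dtype=bool)
    stay = choices[1:] == choices[:-1]
    consistent = np.where(rewards[:-1], stay, ~stay)
    return consistent.mean()

print(expected_accuracy(0.7))                 # maximizing 0.7 vs matching 0.58
print(wsls_proportion([0, 0, 1, 1, 0], [1, 0, 1, 1, 0]))   # 0.75
```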

  12. One-way quantum computing in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.

    2018-03-01

    We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.

  13. Teleporting an unknown quantum state with unit fidelity and unit probability via a non-maximally entangled channel and an auxiliary system

    NASA Astrophysics Data System (ADS)

    Rashvand, Taghi

    2016-11-01

    We present a new scheme for quantum teleportation in which one can teleport an unknown state via a non-maximally entangled channel with certainty, using an auxiliary system. In this scheme, depending on the state of the auxiliary system, one can find a class of orthogonal vector sets serving as bases such that, by performing a von Neumann measurement in any element of this class, Alice can teleport an unknown state with unit fidelity and unit probability. A comparison of our scheme with some previous schemes is given, and we will see that our scheme has advantages that the others do not.

  14. Extinction times of epidemic outbreaks in networks.

    PubMed

    Holme, Petter

    2013-01-01

    In the Susceptible-Infectious-Recovered (SIR) model of disease spreading, the time to extinction of an epidemic is maximal at an intermediate value of the per-contact transmission probability. Highly contagious infections burn out fast in the population, while infections that are not contagious enough die out before they spread to a large fraction of people. We characterize how the maximal extinction time in SIR simulations on networks depends on the network structure. For example, we find that the average distance within isolated components, weighted by component size, is a good predictor of the maximal time to extinction. Furthermore, the transmission probability giving the longest outbreaks is larger than, but otherwise seemingly independent of, the epidemic threshold.
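
    The qualitative effect described above can be reproduced with a toy discrete-time SIR simulation on a random graph, sketched below in Python (networkx assumed available). The paper's simulations are more careful; here recovery simply occurs one step after infection, and the graph and transmission probabilities are arbitrary.

```python
import random
import networkx as nx

def sir_extinction_time(g, beta, seed=None, rng=random):
    """Discrete-time SIR on a network: return the number of steps until no
    node is infectious.  Each infectious node transmits independently to each
    susceptible neighbour with probability `beta` per step, then recovers."""
    if seed is None:
        seed = next(iter(g.nodes))
    infectious = {seed}
    recovered = set()
    steps = 0
    while infectious:
        newly = set()
        for u in infectious:
            for v in g.neighbors(u):
                if v not in infectious and v not in recovered and rng.random() < beta:
                    newly.add(v)
        recovered |= infectious
        infectious = newly - recovered
        steps += 1
    return steps

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(500, 0.01, seed=2)
    for beta in (0.05, 0.15, 0.5):   # low, intermediate, high transmission
        times = [sir_extinction_time(g, beta) for _ in range(200)]
        print(f"beta={beta:.2f}  mean extinction time={sum(times) / len(times):.1f}")
```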

  15. The utility of Bayesian predictive probabilities for interim monitoring of clinical trials

    PubMed Central

    Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn

    2014-01-01

    Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
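
    For the binary-outcome, single-arm case, the predictive probability described above has a closed beta-binomial form: average, over the predictive distribution of the remaining outcomes, an indicator of whether the final analysis would declare success. The Python sketch below (using scipy) illustrates this; the prior, the null response rate, the posterior threshold and the interim counts are all illustrative assumptions, not values from the paper.

```python
from scipy.stats import beta, betabinom

def predictive_probability(successes, n_observed, n_max,
                           p_null=0.5, posterior_threshold=0.975,
                           prior_a=1.0, prior_b=1.0):
    """Predictive probability of trial success at an interim analysis.

    Single-arm binary-outcome sketch: with a Beta(prior_a, prior_b) prior and
    `successes` out of `n_observed` responses so far, the remaining
    m = n_max - n_observed outcomes follow a beta-binomial predictive
    distribution.  The trial is declared successful at the final analysis if
    the posterior probability that the response rate exceeds `p_null` is
    above `posterior_threshold`.
    """
    a, b = prior_a + successes, prior_b + n_observed - successes
    m = n_max - n_observed
    prob = 0.0
    for future in range(m + 1):
        a_final = a + future
        b_final = b + (m - future)
        # Pr(p > p_null | final data) under the Beta posterior
        if beta.sf(p_null, a_final, b_final) > posterior_threshold:
            prob += betabinom.pmf(future, m, a, b)
    return prob

# Interim look: 12 responders out of 20, planned maximum of 50 patients.
print(round(predictive_probability(12, 20, 50), 3))
```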

  16. Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation.

    PubMed

    Kong, Danyang; Asplund, Christopher L; Ling, Aiqing; Chee, Michael W L

    2015-08-01

    Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of temporal distribution on foreperiod and sequential effects. Sleep laboratory. There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Despite behavioral lapses and slower response times, sleep deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. © 2015 Associated Professional Sleep Societies, LLC.

  17. Perceptual salience affects the contents of working memory during free-recollection of objects from natural scenes

    PubMed Central

    Pedale, Tiziana; Santangelo, Valerio

    2015-01-01

    One of the most important issues in the study of cognition is to understand which are the factors determining internal representation of the external world. Previous literature has started to highlight the impact of low-level sensory features (indexed by saliency-maps) in driving attention selection, hence increasing the probability for objects presented in complex and natural scenes to be successfully encoded into working memory (WM) and then correctly remembered. Here we asked whether the probability of retrieving high-saliency objects modulates the overall contents of WM, by decreasing the probability of retrieving other, lower-saliency objects. We presented pictures of natural scenes for 4 s. After a retention period of 8 s, we asked participants to verbally report as many objects/details as possible of the previous scenes. We then computed how many times the objects located at either the peak of maximal or minimal saliency in the scene (as indexed by a saliency-map; Itti et al., 1998) were recollected by participants. Results showed that maximal-saliency objects were recollected more often and earlier in the stream of successfully reported items than minimal-saliency objects. This indicates that bottom-up sensory salience increases the recollection probability and facilitates the access to memory representation at retrieval, respectively. Moreover, recollection of the maximal- (but not the minimal-) saliency objects predicted the overall amount of successfully recollected objects: The higher the probability of having successfully reported the most-salient object in the scene, the lower the amount of recollected objects. These findings highlight that bottom-up sensory saliency modulates the current contents of WM during recollection of objects from natural scenes, most likely by reducing available resources to encode and then retrieve other (lower saliency) objects. PMID:25741266

  18. Heat accumulation during sequential cortical bone drilling.

    PubMed

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased from <0.5 °C to nearly 13 °C. The difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  19. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, the quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function will appear with higher probability. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).

  20. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.

  1. Diagnostic value of tendon thickness and structure in the sonographic diagnosis of supraspinatus tendinopathy: room for a two-step approach.

    PubMed

    Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues

    2014-06-01

    The aim of our study was to systematically compare different methodologies to establish an evidence-based approach based on tendon thickness and structure for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for either maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and using different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess sensitivity, specificity, and accuracy of different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P<.05). When used as an isolated criterion, MSTT>6.0mm provided best results for accuracy (93.7%) when compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT>6.0mm or ATS as diagnostic criteria at no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1s to assess diagnostic criteria and reach the diagnosis (vs. 43.3s for sequential testing first evaluating tendon thickness and 47.4s for parallel testing). We found that using either MSTT>6.0mm or ATS as diagnostic criteria for both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. Among these strategies, a two-step sequential approach first assessing tendon structure was advantageous because it required a lower number of criteria to be assessed and demanded less time to assess diagnostic criteria and reach the diagnosis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
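
    As a rough Python illustration of how the 'either criterion positive' rule above is scored, and of why the two-step sequential strategy assesses fewer criteria than parallel testing, the sketch below computes sensitivity, specificity and accuracy of the combined rule and counts criterion assessments under both strategies. The per-criterion detection rates are made-up numbers; only the group sizes (164 patients, 42 controls) echo the record above.

```python
import numpy as np

def either_positive_yield(test1, test2, disease):
    """Sensitivity, specificity and accuracy of the combined 'either test
    positive' rule, plus the number of criteria assessed under parallel
    testing (both always) and sequential testing (second test only when the
    first is negative).  Inputs are boolean arrays; synthetic data below."""
    positive = test1 | test2
    tp = np.sum(positive & disease)
    tn = np.sum(~positive & ~disease)
    metrics = (tp / disease.sum(), tn / (~disease).sum(), (tp + tn) / disease.size)
    n_parallel = 2 * disease.size
    n_sequential = disease.size + np.sum(~test1)   # test2 needed only if test1 negative
    return metrics, n_parallel, n_sequential

rng = np.random.default_rng(3)
disease = np.r_[np.ones(164, dtype=bool), np.zeros(42, dtype=bool)]  # study group sizes
structure_abnormal = np.where(disease, rng.random(206) < 0.90, rng.random(206) < 0.05)
thickness_positive = np.where(disease, rng.random(206) < 0.90, rng.random(206) < 0.10)
print(either_positive_yield(structure_abnormal, thickness_positive, disease))
```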

  2. Sequential dynamics in visual short-term memory.

    PubMed

    Kool, Wouter; Conway, Andrew R A; Turk-Browne, Nicholas B

    2014-10-01

    Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects.
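
    The mixture components named above (storage, guessing, swapping) are commonly fitted with a von Mises plus uniform mixture that includes responses to non-targets. The Python sketch below writes the negative log-likelihood of that generic swap model (using scipy); it is offered as an illustration of the model family, not as the exact specification or parameterization used in the paper, and in practice it would be minimized with an optimizer such as scipy.optimize.minimize.

```python
import numpy as np
from scipy.stats import vonmises

def swap_model_nll(params, responses, targets, nontargets):
    """Negative log-likelihood of a three-component mixture model for
    continuous-report data: target recall + swaps to non-targets + guessing.

    params = (p_target, p_swap, kappa); the guessing rate is
    1 - p_target - p_swap.  `responses` and `targets` are arrays of angles
    in radians; `nontargets` is an (n_trials, n_nontargets) array.
    """
    p_t, p_s, kappa = params
    p_g = 1.0 - p_t - p_s
    if min(p_t, p_s, p_g) < 0 or kappa <= 0:
        return np.inf
    target_like = vonmises.pdf(responses - targets, kappa)
    swap_like = vonmises.pdf(responses[:, None] - nontargets, kappa).mean(axis=1)
    guess_like = 1.0 / (2.0 * np.pi)
    like = p_t * target_like + p_s * swap_like + p_g * guess_like
    return -np.sum(np.log(like))

# Tiny synthetic check: three trials, two non-targets per trial.
resp = np.array([0.1, -0.2, 2.9])
targ = np.array([0.0, 0.0, 0.0])
nont = np.array([[1.5, -1.5], [1.0, 3.0], [3.0, -2.0]])
print(swap_model_nll((0.7, 0.2, 8.0), resp, targ, nont))
```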

  3. The Sequential Probability Ratio Test: An efficient alternative to exact binomial testing for Clean Water Act 303(d) evaluation.

    PubMed

    Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry

    2017-05-01

    The United States's Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to have comparable Type I and Type II error rates as the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as SPRT to current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
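
    A minimal Python sketch of Wald's SPRT for the binary 'sample exceeds the water-quality threshold' outcome described above: the log-likelihood ratio is updated as samples arrive and compared with two stopping boundaries set by the desired error rates. The exceedance rates and error rates below are illustrative, not California's regulatory values.

```python
from math import log

def sprt_binomial(p0, p1, alpha, beta):
    """Wald's Sequential Probability Ratio Test for binary exceedance data.

    H0: exceedance probability = p0  vs  H1: exceedance probability = p1
    (with p1 > p0).  Returns a function mapping the running totals
    (n samples, exceedances) to a decision or 'continue sampling'.
    """
    log_a = log((1 - beta) / alpha)          # upper boundary: accept H1
    log_b = log(beta / (1 - alpha))          # lower boundary: accept H0
    lr_hit = log(p1 / p0)                    # log-LR contribution of an exceedance
    lr_miss = log((1 - p1) / (1 - p0))       # ... of a non-exceedance

    def decide(n, exceedances):
        llr = exceedances * lr_hit + (n - exceedances) * lr_miss
        if llr >= log_a:
            return "accept H1 (impaired)"
        if llr <= log_b:
            return "accept H0 (not impaired)"
        return "continue sampling"

    return decide

decide = sprt_binomial(p0=0.10, p1=0.25, alpha=0.05, beta=0.05)
for n, x in [(5, 0), (10, 1), (10, 5), (20, 2)]:
    print(n, x, decide(n, x))
```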

  4. Sequential dynamics in visual short-term memory

    PubMed Central

    Conway, Andrew R. A.; Turk-Browne, Nicholas B.

    2014-01-01

    Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects. PMID:25228092

  5. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on a sequential hypotheses testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.

  6. Diel periodicity of pheromone release by females of Planococcus citri and Planococcus ficus and the temporal flight activity of their conspecific males

    NASA Astrophysics Data System (ADS)

    Levi-Zada, Anat; Fefer, Daniela; David, Maayan; Eliyahu, Miriam; Franco, José Carlos; Protasov, Alex; Dunkelblum, Ezra; Mendel, Zvi

    2014-08-01

    The diel periodicity of sex pheromone release was monitored in two mealybug species, Planococcus citri and Planococcus ficus (Hemiptera; Pseudococcidae), using sequential SPME/GCMS analysis. A maximal release of 2 ng/h pheromone by 9-12-day-old P. citri females occurred 1-2 h before the beginning of photophase. The highest release of pheromone by P. ficus females was 1-2 ng/2 h for 10-20-day-old females, approximately 2 h after the beginning of photophase. Mating resulted in termination of the pheromone release in both mealybug species. The temporal flight activity of the males was monitored in rearing chambers using pheromone-baited delta traps. Males of both P. citri and P. ficus displayed the same flight pattern and began flying at 06:00 hours when the light was turned on, reaching a peak during the first and second hour of the photophase. Our results suggest that other biparental mealybug species also display diel periodicities of maximal pheromone release and response. Direct evaluation of the diel periodicity of pheromone release by the automatic sequential analysis is convenient and will be very helpful in optimizing the airborne collection and identification of other unknown mealybug pheromones and in studying the calling behavior of females. Considering this behavior pattern may help to develop more effective pheromone-based management strategies against mealybugs.

  7. Sequential Design of Experiments to Maximize Learning from Carbon Capture Pilot Plant Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soepyan, Frits B.; Morgan, Joshua C.; Omell, Benjamin P.

    Pilot plant test campaigns can be expensive and time-consuming. Therefore, it is of interest to maximize the amount of learning and the efficiency of the test campaign given the limited number of experiments that can be conducted. This work investigates the use of sequential design of experiments (SDOE) to overcome these challenges by demonstrating its usefulness for a recent solvent-based CO2 capture plant test campaign. Unlike traditional design of experiments methods, SDOE regularly uses information from ongoing experiments to determine the optimum locations in the design space for subsequent runs within the same experiment. However, there are challenges that need to be addressed, including reducing the high computational burden to efficiently update the model, and the need to incorporate the methodology into a computational tool. We address these challenges by applying SDOE in combination with a software tool, the Framework for Optimization, Quantification of Uncertainty and Surrogates (FOQUS) (Miller et al., 2014a, 2016, 2017). The results of applying SDOE on a pilot plant test campaign for CO2 capture suggest that relative to traditional design of experiments methods, SDOE can more effectively reduce the uncertainty of the model, thus decreasing technical risk. Future work includes integrating SDOE into FOQUS and using SDOE to support additional large-scale pilot plant test campaigns.

  8. The impacts of the quantum-dot confining potential on the spin-orbit effect.

    PubMed

    Li, Rui; Liu, Zhi-Hai; Wu, Yidong; Liu, C S

    2018-05-09

    For a nanowire quantum dot with the confining potential modeled by both the infinite and the finite square wells, we obtain exactly the energy spectrum and the wave functions in the strong spin-orbit coupling regime. We find that regardless of how small the well height is, there are at least two bound states in the finite square well: one has the σ x [Formula: see text] = -1 symmetry and the other has the σ x [Formula: see text] = 1 symmetry. When the well height is slowly tuned from large to small, the position of the maximal probability density of the first excited state moves from the center to x ≠ 0, while the position of the maximal probability density of the ground state is always at the center. A strong enhancement of the spin-orbit effect is demonstrated by tuning the well height. In particular, there exists a critical height [Formula: see text], at which the spin-orbit effect is enhanced to maximal.

  9. Mining sequential patterns for protein fold recognition.

    PubMed

    Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I

    2008-02-01

    Protein data contain discriminative patterns that can be used in many beneficial applications if they are defined correctly. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to the determination of the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequence. A classifier uses the extracted sequential patterns to classify proteins in the appropriate fold category. For training and evaluating the proposed method we used the protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.

  10. Parallelization of sequential Gaussian, indicator and direct simulation algorithms

    NASA Astrophysics Data System (ADS)

    Nunes, Ruben; Almeida, José A.

    2010-08-01

    Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amount of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.

  11. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
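
    The telescoping identity above can be illustrated without any PDE machinery. In the Python toy below, a truncated Taylor series stands in for a level-l numerical solve: the MLMC estimate is the coarse-level Monte Carlo estimate plus corrections E[P_l - P_{l-1}] computed with the same random draws at both levels, using fewer samples at finer levels. Everything here (the integrand, the level definition, the sample counts) is an illustrative assumption; the SMC extension discussed in the record is not shown.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def approx(x, level):
    """Level-l approximation of exp(x): Taylor series truncated after
    2**level + 1 terms.  Stands in for a numerical solve whose bias shrinks
    as the level increases (a toy surrogate for a PDE discretisation)."""
    return sum(x ** k / math.factorial(k) for k in range(2 ** level + 1))

def mlmc_estimate(levels, samples_per_level):
    """Multilevel Monte Carlo estimate of E[exp(X)] with X ~ N(0, 1):
    a coarse-level estimate plus telescoping corrections, using the *same*
    draws for the fine and coarse approximations within each correction."""
    x0 = rng.standard_normal(samples_per_level[0])
    estimate = approx(x0, 0).mean()                      # level-0 term
    for level, n in zip(range(1, levels + 1), samples_per_level[1:]):
        x = rng.standard_normal(n)
        estimate += (approx(x, level) - approx(x, level - 1)).mean()
    return estimate

# Correction variances decay with level, so fewer samples are used there.
print(mlmc_estimate(levels=4, samples_per_level=[200_000, 50_000, 10_000, 2_000, 500]),
      "reference:", math.exp(0.5))
```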

  12. Cell wall invertase as a regulator in determining sequential development of endosperm and embryo through glucose signaling early in seed development.

    PubMed

    Wang, Lu; Liao, Shengjin; Ruan, Yong-Ling

    2013-01-01

    Seed development depends on coordination among embryo, endosperm and seed coat. Endosperm undergoes nuclear division soon after fertilization, whereas embryo remains quiescent for a while. Such a developmental sequence is of great importance for proper seed development. However, the underlying mechanism remains unclear. Recent results on the cellular domain- and stage-specific expression of invertase genes in cotton and Arabidopsis revealed that cell wall invertase may positively and specifically regulate nuclear division of endosperm after fertilization, thereby playing a role in determining the sequential development of endosperm and embryo, probably through glucose signaling.

  13. The Relationship between the Emotional Intelligence of Secondary Public School Principals and School Performance

    ERIC Educational Resources Information Center

    Ashworth, Stephanie R.

    2013-01-01

    The study examined the relationship between secondary public school principals' emotional intelligence and school performance. The correlational study employed an explanatory sequential mixed methods model. The non-probability sample consisted of 105 secondary public school principals in Texas. The emotional intelligence characteristics of the…

  14. Mutual Information Item Selection in Adaptive Classification Testing

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2007-01-01

    A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered with using other local-…

  15. How to improve an un-alterable model forecast? A sequential data assimilation based error updating approach

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.

    2012-12-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and significantly influences the operation of hydropower reservoirs. Improving hourly reservoir inflow forecasts over a 24-hour lead-time is considered with the day-ahead (Elspot) market of the Nordic exchange in perspective. The procedure presented comprises an error model added on top of an un-alterable, constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and adopted to contain minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a Sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach contain suitable information for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the current procedure for improving the accuracy of inflow forecasts at lead-times up to 24 hours, and its reliability in different seasons of the year, will be illustrated and discussed thoroughly.
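
    The update step can be pictured with a bootstrap particle filter that tracks an additive correction term on top of the fixed conceptual-model output: predict the correction with a random walk, weight particles by the likelihood of the new observation, resample, and add the weighted-mean correction to the raw forecast. The Python sketch below is only a schematic of that idea; the record's error model (identified with EUREQA) and its SMC implementation differ, and the synthetic data are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter_bias(forecasts, observations, n_particles=1000,
                         process_std=0.5, obs_std=1.0):
    """Bootstrap particle filter tracking an additive error term on top of a
    fixed ('un-alterable') model forecast.

    State: bias_t, evolving as a random walk.  Observation model:
    y_t = forecast_t + bias_t + noise.  The corrected value at each step is
    forecast_t plus the weighted mean of the particles (a filtered estimate,
    shown here only to illustrate the recursive assimilation of residuals).
    """
    particles = np.zeros(n_particles)
    corrected = []
    for f, y in zip(forecasts, observations):
        particles += rng.normal(0.0, process_std, n_particles)   # predict
        weights = np.exp(-0.5 * ((y - (f + particles)) / obs_std) ** 2)
        weights /= weights.sum()
        corrected.append(f + np.sum(weights * particles))        # analysis
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]                               # resample
    return np.array(corrected)

# Synthetic check: the true flow is the raw forecast plus a slowly varying bias.
t = np.arange(100)
raw = 50 + 10 * np.sin(t / 10)
truth = raw + 5 + 0.05 * t
obs = truth + rng.normal(0, 1.0, t.size)
corr = particle_filter_bias(raw, obs)
print("RMSE raw:", np.sqrt(np.mean((raw - truth) ** 2)).round(2),
      " corrected:", np.sqrt(np.mean((corr - truth) ** 2)).round(2))
```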

  16. Enhanced energy recovery from cassava ethanol wastewater through sequential dark hydrogen, photo hydrogen and methane fermentation combined with ammonium removal.

    PubMed

    Lin, Richen; Cheng, Jun; Yang, Zongbo; Ding, Lingkan; Zhang, Jiabei; Zhou, Junhu; Cen, Kefa

    2016-08-01

    Cassava ethanol wastewater (CEW) was subjected to sequential dark H2, photo H2 and CH4 fermentation to maximize H2 production and energy yield. A relatively low H2 yield of 23.6 mL/g soluble chemical oxygen demand (CODs) was obtained in dark fermentation. To eliminate the inhibition of excessive NH4(+) on the subsequent photo fermentation, zeolite was used to remove NH4(+) from the residual dark-fermentation solution (86.5% removal efficiency). The treated solution from 5 g CODs/L of CEW achieved the highest photo H2 yield of 369.7 mL/g CODs, while the solution from 20 g CODs/L gave the lowest yield of 259.6 mL/g CODs. This can be explained by the fact that the photo H2 yield was correlated with the soluble metabolic product (SMP) yield in dark fermentation, and the specific SMP yield decreased from 38.0 to 18.1 mM/g CODs. The total energy yield significantly increased to 8.39 kJ/g CODs by combining methanogenesis with a CH4 yield of 117.9 mL/g CODs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Sequential Design of Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela

    2017-06-30

    A sequential design of experiments strategy is being developed and implemented that allows for adaptive learning based on incoming results as the experiment is being run. The plan is to incorporate these strategies for the NCCC and TCM experimental campaigns to be run in the coming months. This strategy for experimentation has the advantage of allowing new data collected during the experiment to inform future experimental runs based on their projected utility for a particular goal. For example, the current effort for the MEA capture system at NCCC plans to focus on maximally improving the quality of prediction of CO2 capture efficiency, as measured by the width of the confidence interval for the underlying response surface, which is modeled as a function of 1) Flue Gas Flowrate [1000-3000] kg/hr; 2) CO2 weight fraction [0.125-0.175]; 3) Lean solvent loading [0.1-0.3]; and 4) Lean solvent flowrate [3000-12000] kg/hr.

  18. Sequential geophysical and flow inversion to characterize fracture networks in subsurface systems

    DOE PAGES

    Mudunuru, Maruti Kumar; Karra, Satish; Makedonska, Nataliia; ...

    2017-09-05

    Subsurface applications, including geothermal, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Fractures form the main pathways for flow in these systems, and locating these fractures is critical for predicting flow. However, fracture characterization is a highly uncertain process, and data from multiple sources, such as flow and geophysical data, are needed to reduce this uncertainty. We present a nonintrusive, sequential inversion framework for integrating data from geophysical and flow sources to constrain fracture networks in the subsurface. In this framework, we first estimate bounds on the statistics for the fracture orientations using microseismic data. These bounds are estimated through a combination of a focal mechanism (physics-based approach) and clustering analysis (statistical approach) of seismic data. Then, the fracture lengths are constrained using flow data. In conclusion, the efficacy of this inversion is demonstrated through a representative example.

  19. Re-animation of muscle flaps for improved function in dynamic myoplasty.

    PubMed

    Stremel, R W; Zonnevijlle, E D

    2001-01-01

    The authors report on a series of experiments designed to produce a skeletal muscle contraction functional for dynamic myoplasties. Conventional stimulation techniques recruit all or most of the muscle fibers simultaneously and with maximal strength. This approach has limitations in free dynamic muscle flap transfers that require the muscle to contract immediately after transfer and before re-innervation. Sequential stimulation of segments of the transferred muscle provides a means of producing non-fatiguing contractions of the muscle in the presence or absence of innervation. The muscles studied were the canine gracilis, and all experiments were acute studies in anesthetized animals. Comparison of conventional and sequential segmental neuromuscular stimulation revealed an increase in muscle fatigue resistance and muscle blood flow with the new approach. This approach offers the opportunity for development of physiologically animated tissue and broadening the abilities of reconstructive surgeons in the repair of functional defects. Copyright 2001 Wiley-Liss, Inc.

  20. Sequential geophysical and flow inversion to characterize fracture networks in subsurface systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Makedonska, Nataliia

    Subsurface applications, including geothermal, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Fractures form the main pathways for flow in these systems, and locating these fractures is critical for predicting flow. However, fracture characterization is a highly uncertain process, and data from multiple sources, such as flow and geophysical data, are needed to reduce this uncertainty. We present a nonintrusive, sequential inversion framework for integrating data from geophysical and flow sources to constrain fracture networks in the subsurface. In this framework, we first estimate bounds on the statistics for the fracture orientations using microseismic data. These bounds are estimated through a combination of a focal mechanism (physics-based approach) and clustering analysis (statistical approach) of seismic data. Then, the fracture lengths are constrained using flow data. In conclusion, the efficacy of this inversion is demonstrated through a representative example.

  1. Optimal mode transformations for linear-optical cluster-state generation

    DOE PAGES

    Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...

    2015-06-15

    In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate with a success rate of 1/9 per operation. The question of optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient and show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselected entanglement of n photonic qubits or m Bell pairs into a cluster is (1/2)^(n-1) and (1/4)^(m-1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.

  2. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  3. Little Bayesians or Little Einsteins? Probability and Explanatory Virtue in Children's Inferences

    ERIC Educational Resources Information Center

    Johnston, Angie M.; Johnson, Samuel G. B.; Koven, Marissa L.; Keil, Frank C.

    2017-01-01

    Like scientists, children seek ways to explain causal systems in the world. But are children scientists in the strict Bayesian tradition of maximizing posterior probability? Or do they attend to other explanatory considerations, as laypeople and scientists--such as Einstein--do? Four experiments support the latter possibility. In particular, we…

  4. Decision analysis for conservation breeding: Maximizing production for reintroduction of whooping cranes

    USGS Publications Warehouse

    Smith, Des H.V.; Converse, Sarah J.; Gibson, Keith; Moehrenschlager, Axel; Link, William A.; Olsen, Glenn H.; Maguire, Kelly

    2011-01-01

    Captive breeding is key to management of severely endangered species, but maximizing captive production can be challenging because of poor knowledge of species breeding biology and the complexity of evaluating different management options. In the face of uncertainty and complexity, decision-analytic approaches can be used to identify optimal management options for maximizing captive production. Building decision-analytic models requires iterations of model conception, data analysis, model building and evaluation, identification of remaining uncertainty, further research and monitoring to reduce uncertainty, and integration of new data into the model. We initiated such a process to maximize captive production of the whooping crane (Grus americana), the world's most endangered crane, which is managed through captive breeding and reintroduction. We collected 15 years of captive breeding data from 3 institutions and used Bayesian analysis and model selection to identify predictors of whooping crane hatching success. The strongest predictor, and that with clear management relevance, was incubation environment. The incubation period of whooping crane eggs is split across two environments: crane nests and artificial incubators. Although artificial incubators are useful for allowing breeding pairs to produce multiple clutches, our results indicate that crane incubation is most effective at promoting hatching success. Hatching probability increased the longer an egg spent in a crane nest, from 40% hatching probability for eggs receiving 1 day of crane incubation to 95% for those receiving 30 days (time incubated in each environment varied independently of total incubation period). Because birds will lay fewer eggs when they are incubating longer, a tradeoff exists between the number of clutches produced and egg hatching probability. We developed a decision-analytic model that estimated 16 to be the optimal number of days of crane incubation needed to maximize the number of offspring produced. These results show that using decision-analytic tools to account for uncertainty in captive breeding can improve the rate at which such programs contribute to wildlife reintroductions. 
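
    The clutch-count versus hatching-probability tradeoff described above can be illustrated with a toy optimization; the functional forms and parameter values below are invented for illustration and are not the fitted Bayesian model from the study.

```python
import numpy as np

# Hypothetical hatching probability: rises from ~0.40 at 1 day to ~0.95 at 30 days
def hatch_prob(days):
    return 0.40 + (0.95 - 0.40) * (days - 1) / 29.0

# Hypothetical clutch production: longer crane incubation means fewer clutches laid
def expected_clutches(days):
    return 3.0 - 0.06 * days

def expected_offspring(days, eggs_per_clutch=2):
    return expected_clutches(days) * eggs_per_clutch * hatch_prob(days)

days = np.arange(1, 31)
best = days[np.argmax(expected_offspring(days))]
print("optimal crane-incubation days (toy model):", best)
```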

  5. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.

  6. Optimization of laminated stacking sequence for buckling load maximization by genetic algorithm

    NASA Technical Reports Server (NTRS)

    Le Riche, Rodolphe; Haftka, Raphael T.

    1992-01-01

    The use of a genetic algorithm to optimize the stacking sequence of a composite laminate for buckling load maximization is studied. Various genetic parameters including the population size, the probability of mutation, and the probability of crossover are optimized by numerical experiments. A new genetic operator - permutation - is proposed and shown to be effective in reducing the cost of the genetic search. Results are obtained for a graphite-epoxy plate, first when only the buckling load is considered, and then when constraints on ply contiguity and strain failure are added. The influence on the genetic search of the penalty parameter enforcing the contiguity constraint is studied. The advantage of the genetic algorithm in producing several near-optimal designs is discussed.
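
    A compact sketch of a genetic search over stacking sequences with an added permutation (ply-swap) operator, as described above. The fitness function here is a crude stand-in, not the laminate buckling analysis used in the paper, and all parameters are illustrative.

```python
import random

ANGLES = [0, 45, -45, 90]          # candidate ply orientations
N_PLIES = 16                        # plies in the half-laminate (symmetric layup assumed)

def fitness(layup):
    """Stand-in objective: reward 45/-45 plies near the outside and penalize
    more than 4 contiguous identical plies (a crude proxy, not a buckling analysis)."""
    score = sum((N_PLIES - i) for i, a in enumerate(layup) if abs(a) == 45)
    run = 1
    for prev, cur in zip(layup, layup[1:]):
        run = run + 1 if cur == prev else 1
        if run > 4:
            score -= 10
    return score

def crossover(p1, p2):
    cut = random.randrange(1, N_PLIES)
    return p1[:cut] + p2[cut:]

def mutate(layup, p=0.05):
    return [random.choice(ANGLES) if random.random() < p else a for a in layup]

def permute(layup, p=0.5):
    """Permutation operator: swap two ply positions, preserving ply counts."""
    layup = layup[:]
    if random.random() < p:
        i, j = random.sample(range(N_PLIES), 2)
        layup[i], layup[j] = layup[j], layup[i]
    return layup

population = [[random.choice(ANGLES) for _ in range(N_PLIES)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                         # truncation selection
    children = [permute(mutate(crossover(*random.sample(parents, 2))))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best layup:", best, "fitness:", fitness(best))
```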

  7. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope.

    PubMed

    Sulai, Yusufu N; Dubra, Alfredo

    2014-09-01

    The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channel in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 in root-mean-squared (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth.

  8. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  9. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  10. Improved minimum cost and maximum power two stage genome-wide association study designs.

    PubMed

    Stanhope, Stephen A; Skol, Andrew D

    2012-01-01

    In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.

  11. Quantum teleportation via quantum channels with non-maximal Schmidt rank

    NASA Astrophysics Data System (ADS)

    Solís-Prosser, M. A.; Jiménez, O.; Neves, L.; Delgado, A.

    2013-03-01

    We study the problem of teleporting unknown pure states of a single qudit via a pure quantum channel with non-maximal Schmidt rank. We relate this process to the discrimination of linearly dependent symmetric states with the help of the maximum-confidence discrimination strategy. We show that with a certain probability, it is possible to teleport with a fidelity larger than the fidelity of optimal deterministic teleportation.

  12. Dosimetric effects of patient rotational setup errors on prostate IMRT treatments

    NASA Astrophysics Data System (ADS)

    Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.

    2006-10-01

    The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
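
    EUD values like those discussed above are commonly summarized with the generalized EUD, gEUD = (Σ_i v_i D_i^a)^(1/a), where v_i are fractional volumes and D_i are dose-bin values; the sketch below uses that standard formula with illustrative DVH numbers, not patient data from the study.

```python
import numpy as np

def generalized_eud(doses_gy, volume_fractions, a):
    """Generalized EUD (Niemierko): (sum_i v_i * D_i**a)**(1/a).
    a < 0 suits targets (penalizes cold spots); large a > 1 suits serial organs."""
    d = np.asarray(doses_gy, dtype=float)
    v = np.asarray(volume_fractions, dtype=float)
    v = v / v.sum()                      # normalize the differential DVH
    return (np.sum(v * d ** a)) ** (1.0 / a)

# Illustrative differential DVH bins (not patient data)
doses = [70.0, 72.0, 74.0, 76.0]
vols  = [0.10, 0.30, 0.40, 0.20]
print("target gEUD (a=-10):", round(generalized_eud(doses, vols, a=-10), 2))
print("serial-organ gEUD (a=8):", round(generalized_eud(doses, vols, a=8), 2))
```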

  13. Fusion of Scores in a Detection Context Based on Alpha Integration.

    PubMed

    Soriano, Antonio; Vergara, Luis; Ahmed, Bouziane; Salazar, Addisson

    2015-09-01

    We present a new method for fusing scores corresponding to different detectors (two-hypotheses case). It is based on alpha integration, which we have adapted to the detection context. Three optimization methods are presented: least mean square error, maximization of the area under the ROC curve, and minimization of the probability of error. Gradient algorithms are proposed for the three methods. Different experiments with simulated and real data are included. Simulated data consider the two-detector case to illustrate the factors influencing alpha integration and demonstrate the improvements obtained by score fusion with respect to individual detector performance. Two real data cases have been considered. In the first, multimodal biometric data have been processed. This case is representative of scenarios in which the probability of detection is to be maximized for a given probability of false alarm. The second case is the automatic analysis of electroencephalogram and electrocardiogram records with the aim of reproducing the medical expert detections of arousal during sleeping. This case is representative of scenarios in which probability of error is to be minimized. The general superior performance of alpha integration verifies the interest of optimizing the fusing parameters.

  14. Factors, Practices, and Policies Influencing Students' Upward Transfer to Baccalaureate-Degree Programs and Institutions: A Mixed Methods Analysis

    ERIC Educational Resources Information Center

    LaSota, Robin Rae

    2013-01-01

    My dissertation utilizes an explanatory, sequential mixed-methods research design to assess factors influencing community college students' transfer probability to baccalaureate-granting institutions and to present promising practices in colleges and states directed at improving upward transfer, particularly for low-income and first-generation…

  15. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  16. Upregulation of transmitter release probability improves a conversion of synaptic analogue signals into neuronal digital spikes

    PubMed Central

    2012-01-01

    Action potentials at the neurons and graded signals at the synapses are primary codes in the brain. In terms of their functional interaction, the studies were focused on the influence of presynaptic spike patterns on synaptic activities. How the synapse dynamics quantitatively regulates the encoding of postsynaptic digital spikes remains unclear. We investigated this question at unitary glutamatergic synapses on cortical GABAergic neurons, especially the quantitative influences of release probability on synapse dynamics and neuronal encoding. Glutamate release probability and synaptic strength are proportionally upregulated by presynaptic sequential spikes. The upregulation of release probability and the efficiency of probability-driven synaptic facilitation are strengthened by elevating presynaptic spike frequency and Ca2+. The upregulation of release probability improves spike capacity and timing precision at postsynaptic neuron. These results suggest that the upregulation of presynaptic glutamate release facilitates a conversion of synaptic analogue signals into digital spikes in postsynaptic neurons, i.e., a functional compatibility between presynaptic and postsynaptic partners. PMID:22852823

  17. Models based on value and probability in health improve shared decision making.

    PubMed

    Ortendahl, Monica

    2008-10-01

    Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated quantities in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, as usually required in clinical work, gives the model known as subjective expected utility. Estimated values and probabilities are involved sequentially at every step of the decision-making process. Introducing decision-analytic modelling gives a more complete picture of the variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
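
    A subjective-expected-utility choice reduces to weighting each outcome's utility by its estimated probability and picking the option with the largest sum; the probabilities and utilities below are invented for illustration, not clinical values.

```python
def subjective_expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one treatment option."""
    return sum(p * u for p, u in outcomes)

# Hypothetical options: (estimated probability of outcome, estimated utility of outcome)
options = {
    "treat":      [(0.70, 0.9), (0.30, 0.2)],   # e.g. cure vs. side effects
    "watch_wait": [(0.40, 0.9), (0.60, 0.5)],
}
best = max(options, key=lambda k: subjective_expected_utility(options[k]))
print({k: round(subjective_expected_utility(v), 2) for k, v in options.items()}, "->", best)
```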

  18. Language experience changes subsequent learning

    PubMed Central

    Onnis, Luca; Thiessen, Erik

    2013-01-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510
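
    A small sketch of how the forward and backward transitional probabilities mentioned above can be estimated from a symbol stream; the toy sequence is illustrative, not the experimental stimuli.

```python
from collections import Counter

def transition_probs(sequence):
    """Estimate forward P(y | x) and backward P(x | y) transitional
    probabilities between adjacent elements of a symbol sequence."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    second_counts = Counter(sequence[1:])
    forward = {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}
    backward = {(x, y): c / second_counts[y] for (x, y), c in pair_counts.items()}
    return forward, backward

# Toy stream of syllables; forward and backward statistics can support different parses
stream = list("ABABCBABCAB")
fwd, bwd = transition_probs(stream)
print("P(B|A) =", round(fwd[("A", "B")], 2), " P(A|B) =", round(bwd[("A", "B")], 2))
```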

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shusharina, N; Khan, F; Sharp, G

    Purpose: To determine the dose level and timing of the boost in locally advanced lung cancer patients with confirmed tumor recurrence by comparing the impact of different dose escalation strategies on the therapeutic ratio. Methods: We selected eighteen patients with advanced NSCLC and confirmed recurrence. For each patient, a base IMRT plan to 60 Gy prescribed to the PTV was created. We then compared three dose escalation strategies: a uniform escalation to the original PTV, and an escalation to a PET-defined target planned either sequentially or concurrently. The PET-defined targets were delineated as biologically-weighted regions on a pre-treatment 18F-FDG PET. The maximal achievable dose, without violating the OAR constraints, was identified for each boosting method. The EUD for the target, spinal cord, combined lung, and esophagus was compared for each plan. Results: The average prescribed dose was 70.4±13.9 Gy for the uniform boost, 88.5±15.9 Gy for the sequential boost and 89.1±16.5 Gy for the concurrent boost. The size of the boost planning volume was 12.8% (range: 1.4 – 27.9%) of the PTV. The most prescription-limiting dose constraint was the V70 of the esophagus. The EUD within the target increased by 10.6 Gy for the uniform boost, by 31.4 Gy for the sequential boost and by 38.2 Gy for the concurrent boost. The EUD for OARs increased by the following amounts: spinal cord, 3.1 Gy for uniform boost, 2.8 Gy for sequential boost, 5.8 Gy for concurrent boost; combined lung, 1.6 Gy for uniform, 1.1 Gy for sequential, 2.8 Gy for concurrent; esophagus, 4.2 Gy for uniform, 1.3 Gy for sequential, 5.6 Gy for concurrent. Conclusion: Dose escalation to a biologically-weighted gross tumor volume defined on a pre-treatment 18F-FDG PET may improve the therapeutic ratio without breaching predefined OAR constraints. The sequential boost provides better sparing of OARs than the concurrent boost.

  20. Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.

    PubMed

    Vilhelmsen, Troels N; Ferré, Ty P A

    2018-05-01

    Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in the data used to derive model structure and to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given common financial restrictions, it is critical that we identify data with maximal information content with respect to the forecasts of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to an optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, the information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be done in a manner that suggests specific measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets. © 2017, National Ground Water Association.

  1. Localized sequence-specific release of a chemopreventive agent and an anticancer drug in a time-controllable manner to enhance therapeutic efficacy.

    PubMed

    Pan, Wen-Yu; Lin, Kun-Ju; Huang, Chieh-Cheng; Chiang, Wei-Lun; Lin, Yu-Jung; Lin, Wei-Chih; Chuang, Er-Yuan; Chang, Yen; Sung, Hsing-Wen

    2016-09-01

    Combination chemotherapy with multiple drugs commonly requires several injections on various schedules, and the probability that the drug molecules reach the diseased tissues at the proper time and effective therapeutic concentrations is very low. This work elucidates an injectable co-delivery system that is based on cationic liposomes that are adsorbed on anionic hollow microspheres (Lipos-HMs) via electrostatic interaction, from which the localized sequence-specific release of a chemopreventive agent (1,25(OH)2D3) and an anticancer drug (doxorubicin; DOX) can be thermally driven in a time-controllable manner by an externally applied high-frequency magnetic field (HFMF). Lipos-HMs can greatly promote the accumulation of reactive oxygen species (ROS) in tumor cells by reducing their cytoplasmic expression of an antioxidant enzyme (superoxide dismutase) by 1,25(OH)2D3, increasing the susceptibility of cancer cells to the cytotoxic action of DOX. In nude mice that bear xenograft tumors, treatment with Lipos-HMs under exposure to HFMF effectively inhibits tumor growth and is the most effective therapeutic intervention among all the investigated. These empirical results demonstrate that the synergistic anticancer effects of sequential release of 1,25(OH)2D3 and DOX from the Lipos-HMs may have potential for maximizing DOX cytotoxicity, supporting more effective cancer treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Sequential measurement of conjugate variables as an alternative quantum state tomography.

    PubMed

    Di Lorenzo, Antonio

    2013-01-04

    It is shown how it is possible to reconstruct the initial state of a one-dimensional system by sequentially measuring two conjugate variables. The procedure relies on the quasicharacteristic function, the Fourier transform of the Wigner quasiprobability. The proper characteristic function obtained by Fourier transforming the experimentally accessible joint probability of observing "position" then "momentum" (or vice versa) can be expressed as a product of the quasicharacteristic functions of the two detectors and the unknown quasicharacteristic function of the quantum system. This allows state reconstruction through the sequence (1) data collection, (2) Fourier transform, (3) algebraic operation, and (4) inverse Fourier transform. The strength of the measurement should be intermediate for the procedure to work.

  3. Melioration as rational choice: sequential decision making in uncertain environments.

    PubMed

    Sims, Chris R; Neth, Hansjörg; Jacobs, Robert A; Gray, Wayne D

    2013-01-01

    Melioration-defined as choosing a lesser, local gain over a greater longer term gain-is a behavioral tendency that people and pigeons share. As such, the empirical occurrence of meliorating behavior has frequently been interpreted as evidence that the mechanisms of human choice violate the norms of economic rationality. In some environments, the relationship between actions and outcomes is known. In this case, the rationality of choice behavior can be evaluated in terms of how successfully it maximizes utility given knowledge of the environmental contingencies. In most complex environments, however, the relationship between actions and future outcomes is uncertain and must be learned from experience. When the difficulty of this learning challenge is taken into account, it is not evident that melioration represents suboptimal choice behavior. In the present article, we examine human performance in a sequential decision-making experiment that is known to induce meliorating behavior. In keeping with previous results using this paradigm, we find that the majority of participants in the experiment fail to adopt the optimal decision strategy and instead demonstrate a significant bias toward melioration. To explore the origins of this behavior, we develop a rational analysis (Anderson, 1990) of the learning problem facing individuals in uncertain decision environments. Our analysis demonstrates that an unbiased learner would adopt melioration as the optimal response strategy for maximizing long-term gain. We suggest that many documented cases of melioration can be reinterpreted not as irrational choice but rather as globally optimal choice under uncertainty.

  4. Cardiorespiratory deconditioning with static and dynamic leg exercise during bed rest

    NASA Technical Reports Server (NTRS)

    Stremel, R. W.; Convertino, V. A.; Bernauer, E. M.; Greenleaf, J. E.

    1976-01-01

    Results are presented for an experimental study designed to compare the effects of heavy static and dynamic exercise training during 14 days of bed rest on the cardiorespiratory responses to submaximal and maximal exercise performed by seven healthy men aged 19-22 yr. The parameters measured were submaximal and maximal oxygen uptake, minute ventilation, heart rate, and plasma volume. The results indicate that exercise alone during bed rest reduces but does not eliminate the reduction in maximal oxygen uptake. An additional positive hydrostatic effect is therefore necessary to restore maximal oxygen uptake to ambulatory control levels. The greater protective effect of static exercise on maximal oxygen uptake is probably due to a greater hydrostatic component from the isometric muscular contraction. Neither the static nor the dynamic exercise training regimes are found to minimize the changes in all the variables studied, thereby suggesting a combination of static and dynamic exercises.

  5. Risk-sensitive reinforcement learning.

    PubMed

    Shen, Yun; Tobia, Michael J; Sommer, Tobias; Obermayer, Klaus

    2014-07-01

    We derive a family of risk-sensitive reinforcement learning methods for agents, who face sequential decision-making tasks in uncertain environments. By applying a utility function to the temporal difference (TD) error, nonlinear transformations are effectively applied not only to the received rewards but also to the true transition probabilities of the underlying Markov decision process. When appropriate utility functions are chosen, the agents' behaviors express key features of human behavior as predicted by prospect theory (Kahneman & Tversky, 1979 ), for example, different risk preferences for gains and losses, as well as the shape of subjective probability curves. We derive a risk-sensitive Q-learning algorithm, which is necessary for modeling human behavior when transition probabilities are unknown, and prove its convergence. As a proof of principle for the applicability of the new framework, we apply it to quantify human behavior in a sequential investment task. We find that the risk-sensitive variant provides a significantly better fit to the behavioral data and that it leads to an interpretation of the subject's responses that is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals shows a significant correlation of the risk-sensitive TD error with BOLD signal change in the ventral striatum. In addition we find a significant correlation of the risk-sensitive Q-values with neural activity in the striatum, cingulate cortex, and insula that is not present if standard Q-values are used.
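
    A minimal sketch of the idea of applying a utility function to the temporal-difference error; the toy environment, the particular utility shape, and all parameters are invented for illustration and are not the task or algorithm specification from the paper.

```python
import random

def utility(td_error, risk_param=0.9):
    """Asymmetric utility applied to the TD error: negative errors (losses)
    are weighted more heavily than positive ones (one simple choice of shape)."""
    return risk_param * td_error if td_error >= 0 else (2.0 - risk_param) * td_error

def risk_sensitive_q_learning(n_states=5, n_actions=2, episodes=2000,
                              alpha=0.1, gamma=0.95, eps=0.1):
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:                      # terminal state at the end
            a = random.randrange(n_actions) if random.random() < eps \
                else max(range(n_actions), key=lambda x: Q[s][x])
            # Toy dynamics: action 1 is a risky step (sometimes reset), action 0 is safe
            if a == 1 and random.random() < 0.3:
                s_next, r = 0, -1.0
            else:
                s_next, r = s + 1, (1.0 if a == 1 else 0.5)
            td = r + gamma * max(Q[s_next]) - Q[s][a]
            Q[s][a] += alpha * utility(td)            # risk-sensitive update
            s = s_next
    return Q

print(risk_sensitive_q_learning()[0])   # learned action values in the start state
```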

  6. Nonadditive entropies yield probability distributions with biases not warranted by the data.

    PubMed

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A

    2013-11-01

    Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.

  7. Comparison between variable and fixed dwell-time PN acquisition algorithms. [for synchronization in pseudonoise spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1981-01-01

    Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
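
    A generic sketch of the sequential probability ratio test underlying such a variable dwell-time scheme: the log-likelihood ratio of the observed detector outputs is accumulated until it crosses one of Wald's two thresholds. The Gaussian signal/noise model and all numbers are illustrative assumptions, not the system analyzed above.

```python
import math
import random

def sprt(sample_stream, logpdf_h0, logpdf_h1, alpha=0.01, beta=0.01, max_n=10_000):
    """Wald's sequential probability ratio test.  Returns ('H0'|'H1'|'undecided', n)."""
    upper = math.log((1 - beta) / alpha)      # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))      # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(sample_stream, start=1):
        llr += logpdf_h1(x) - logpdf_h0(x)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
        if n >= max_n:
            break
    return "undecided", n

# Toy acquisition-style test: is the filtered power drawn from the "correct code
# phase" distribution (H1, mean 1.5) or from noise only (H0, mean 1.0)?
def gauss_logpdf(mean, sigma=0.5):
    return lambda x: -0.5 * ((x - mean) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

samples = (random.gauss(1.5, 0.5) for _ in range(10_000))   # ground truth: H1
print(sprt(samples, gauss_logpdf(1.0), gauss_logpdf(1.5)))
```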

  8. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
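
    A one-dimensional random sequential adsorption sketch in the spirit of point (ii) above; the diffusion-driven coupling of point (i) is reduced here to a single sticking probability, which is a simplification rather than the paper's multiscale scheme.

```python
import random

def rsa_1d(line_length=100.0, segment=1.0, stick_prob=0.8, attempts=200_000):
    """Random sequential adsorption of unit segments on a line.
    Each attempt picks a random position; it is rejected if it overlaps an
    already-adsorbed segment, and otherwise accepted with probability
    `stick_prob` (a crude stand-in for the surface reaction kinetics)."""
    adsorbed = []
    for _ in range(attempts):
        x = random.uniform(0.0, line_length - segment)
        if any(x < y + segment and y < x + segment for y in adsorbed):
            continue                         # geometric rejection (excluded volume)
        if random.random() < stick_prob:
            adsorbed.append(x)               # chemical acceptance
    return len(adsorbed) * segment / line_length

print("coverage:", round(rsa_1d(), 3))       # 1D RSA jams near coverage ~0.7476
```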

  9. Sequential bearings-only-tracking initiation with particle filtering method.

    PubMed

    Liu, Bin; Hao, Chengpeng

    2013-01-01

    The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation.
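
    A stripped-down bootstrap particle filter for bearings-only tracking of a single target, without the clutter and track-initiation logic that the paper addresses; the motion model, noise levels, and prior are assumptions, and with a single static sensor the range component is only weakly observable.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_bot(bearings, sensor_xy=(0.0, 0.0), n_particles=2000,
                        dt=1.0, q=0.05, bearing_std=0.02):
    """Bootstrap particle filter for bearings-only tracking of one target.
    State per particle: [x, y, vx, vy]; measurement: bearing from the sensor."""
    # Diffuse initialization around a rough prior (assumed, not estimated from data)
    parts = rng.normal([50.0, 50.0, -1.0, 0.5], [20.0, 20.0, 1.0, 1.0],
                       size=(n_particles, 4))
    estimates = []
    for z in bearings:
        # Propagate: constant-velocity motion plus process noise
        parts[:, 0] += dt * parts[:, 2]
        parts[:, 1] += dt * parts[:, 3]
        parts[:, 2:] += rng.normal(0.0, q, size=(n_particles, 2))
        # Weight by the bearing likelihood
        pred = np.arctan2(parts[:, 1] - sensor_xy[1], parts[:, 0] - sensor_xy[0])
        err = np.angle(np.exp(1j * (z - pred)))          # wrap to [-pi, pi]
        w = np.exp(-0.5 * (err / bearing_std) ** 2)
        w /= w.sum()
        estimates.append(w @ parts)                      # weighted mean state
        # Resample (multinomial)
        idx = rng.choice(n_particles, size=n_particles, p=w)
        parts = parts[idx]
    return np.array(estimates)

# Synthetic bearings from a target moving in a straight line
truth = np.array([[60.0 - t, 40.0 + 0.5 * t] for t in range(30)])
obs = np.arctan2(truth[:, 1], truth[:, 0]) + rng.normal(0, 0.02, size=30)
est = particle_filter_bot(obs)
print("final position estimate:", est[-1, :2], "truth:", truth[-1])
```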

  10. Competitive Facility Location with Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2009-10-01

    This paper proposes a new location problem for competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and for finding its solution, three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying level maximizing problem. After showing that an optimal solution to each of these can be found by solving 0-1 programming problems, a solution method is proposed by improving the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of the facility location problems.

  11. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope

    PubMed Central

    Sulai, Yusufu N.; Dubra, Alfredo

    2014-01-01

    The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channel in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 in root-mean-squared (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth. PMID:25401020

  12. On the lower bound of monitor solutions of maximally permissive supervisors for a subclass α-S3PR of flexible manufacturing systems

    NASA Astrophysics Data System (ADS)

    Chao, Daniel Yuh

    2015-01-01

    Recently, a novel and computationally efficient method - based on a vector covering approach - to design optimal control places, and an iteration approach that computes the reachability graph to obtain a maximally permissive liveness-enforcing supervisor for FMS (flexible manufacturing systems), have been reported. However, the relationship between the structure of the net and the minimal number of monitors required has remained unclear. This paper develops a theory showing that the minimal number of monitors required cannot be less than the number of basic siphons in α-S3PR (systems of simple sequential processes with resources). This confirms that two of the three systems controlled by Chen et al. have a minimal monitor configuration, since they belong to α-S3PR and the number of monitors in each example equals the number of basic siphons.

  13. Maximizing a Probability: A Student Workshop on an Application of Continuous Distributions

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2010-01-01

    For many students meeting, say, the gamma distribution for the first time, it may well turn out to be a rather fruitless encounter unless they are immediately able to see an application of this probability model to some real-life situation. With this in mind, we pose here an appealing problem that can be used as the basis for a workshop activity…

  14. Cluster State Quantum Computing

    DTIC Science & Technology

    2012-12-01

    probability that the desired target gate ATar has been faithfully implemented on the computational modes given a successful measurement of the ancilla... modes (Eq. 3), since Tr(ATar† ATar) = 2Mc for a properly normalized target gate. As we are interested... optimization method we have developed maximizes the success probability S for a given target transformation ATar, for given ancilla resources, and for a

  15. Cluster State Quantum Computation

    DTIC Science & Technology

    2014-02-01

    information of relevance to the transformation. We define the fidelity as the probability that the desired target gate ATar has been faithfully... implemented on the computational modes given a successful measurement of the ancilla modes (Eq. 3), since Tr(ATar† ATar) = 2Mc for a properly normalized... photonic gates. The optimization method we have developed maximizes the success probability S for a given target transformation ATar, for given

  16. Asking better questions: How presentation formats influence information search.

    PubMed

    Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D

    2017-08-01

    While the influence of presentation formats has been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
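
    The accuracy-maximizing query can be identified by computing, for each query, the probability of classifying correctly when the larger posterior is chosen after observing the outcome; the environment probabilities below are illustrative, not those used in the experiments.

```python
def expected_accuracy(prior, likelihoods):
    """Probability of classifying correctly after one binary query,
    assuming the category with the larger posterior is chosen.
    prior:       (P(c1), P(c2))
    likelihoods: (P(outcome=positive | c1), P(outcome=positive | c2))
    """
    accuracy = 0.0
    for outcome in (True, False):
        joint = [p * (l if outcome else 1.0 - l) for p, l in zip(prior, likelihoods)]
        accuracy += max(joint)          # P(outcome) * max_c P(c | outcome)
    return accuracy

prior = (0.7, 0.3)                      # illustrative environment
query_A = (0.9, 0.3)                    # P(positive outcome | category) for each query
query_B = (0.6, 0.1)
print("A:", expected_accuracy(prior, query_A), "B:", expected_accuracy(prior, query_B))
```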

  17. Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation

    PubMed Central

    Kong, Danyang; Asplund, Christopher L.; Ling, Aiqing; Chee, Michael W.L.

    2015-01-01

    Study Objectives: Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Design: Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of temporal distribution on foreperiod and sequential effects. Setting: Sleep laboratory. Participants: There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Measurements and Results: Despite behavioral lapses and slower response times, sleep deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. Conclusions: The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. Citation: Kong D, Asplund CL, Ling A, Chee MWL. Increased automaticity and altered temporal preparation following sleep deprivation. SLEEP 2015;38(8):1219–1227. PMID:25845689

  18. Sequential Analysis of Mastery Behavior in 6- and 12-Month-Old Infants.

    ERIC Educational Resources Information Center

    MacTurk, Robert H.; And Others

    1987-01-01

    Sequences of mastery behavior were analyzed in a sample of 67 infants 6 to 12 months old. Authors computed (a) frequencies of six categories of mastery behavior, transitional probabilities, and z scores for each behavior change, and (b) transitions from a mastery behavior to positive affect. Changes in frequencies and similarity in organization…

  19. Comparing and Combining Dichotomous and Polytomous Items with SPRT Procedure in Computerized Classification Testing.

    ERIC Educational Resources Information Center

    Lau, C. Allen; Wang, Tianyou

    The purposes of this study were to: (1) extend the sequential probability ratio testing (SPRT) procedure to polytomous item response theory (IRT) models in computerized classification testing (CCT); (2) compare polytomous items with dichotomous items using the SPRT procedure for their accuracy and efficiency; (3) study a direct approach in…

  20. Importance and Effectiveness of Student Health Services at a South Texas University

    ERIC Educational Resources Information Center

    McCaig, Marilyn M.

    2013-01-01

    The study examined the health needs of students at a south Texas university and documented the utility of the student health center. The descriptive study employed a mixed methods explanatory sequential design (ESD). The non-probability sample consisted of 140 students who utilized the university's health center during the period of March 23-30,…

  1. EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.

    ERIC Educational Resources Information Center

    Frick, Theodore W.; And Others

    Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…

  2. Inertial navigation sensor integrated obstacle detection system

    NASA Technical Reports Server (NTRS)

    Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)

    1992-01-01

    A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor. The active detection typically utilizes a laser. Passive sensor suite includes binocular stereo, motion stereo and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery, for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.

  3. Chaotic dynamics in nonlinear duopoly Stackelberg game with heterogeneous players

    NASA Astrophysics Data System (ADS)

    Xiao, Yue; Peng, Yu; Lu, Qian; Wu, Xue

    2018-02-01

    In this paper, a nonlinear duopoly Stackelberg game of competition on output is considered. Taking into account the difference between planned and actual products, the two heterogeneous players always adopt the strategies that improve their benefits most. In general, the status of the two firms is unequal. As the firms take strategies sequentially and produce simultaneously, complex behaviors are brought about. Numerical simulation presents period-doubling bifurcation, the maximal Lyapunov exponent, and chaos. Moreover, an appropriate method of chaos control is applied and the fractal dimension is analyzed as well.

  4. Mammographic x-ray unit kilovoltage test tool based on k-edge absorption effect.

    PubMed

    Napolitano, Mary E; Trueblood, Jon H; Hertel, Nolan E; David, George

    2002-09-01

    A simple tool to determine the peak kilovoltage (kVp) of a mammographic x-ray unit has been designed. Tool design is based on comparing the effect of k-edge discontinuity of the attenuation coefficient for a series of element filters. Compatibility with the mammography accreditation phantom (MAP) to obtain a single quality control film is a second design objective. When the attenuation of a series of sequential elements is studied simultaneously, differences in the absorption characteristics due to the k-edge discontinuities are more evident. Specifically, when the incident photon energy is higher than the k-edge energy of a number of the elements and lower than the remainder, an inflection may be seen in the resulting attenuation data. The maximum energy of the incident photon spectra may be determined based on this inflection point for a series of element filters. Monte Carlo photon transport analysis was used to estimate the photon transmission probabilities for each of the sequential k-edge filter elements. The photon transmission corresponds directly to optical density recorded on mammographic x-ray film. To observe the inflection, the element filters chosen must have k-edge energies that span a range greater than the expected range of the end point energies to be determined. For the design, incident x-ray spectra ranging from 25 to 40 kVp were assumed to be from a molybdenum target. Over this range, the k-edge energy changes by approximately 1.5 keV between sequential elements. For this design 21 elements spanning an energy range from 20 to 50 keV were chosen. Optimum filter element thicknesses were calculated to maximize attenuation differences at the k-edge while maintaining optical densities between 0.10 and 3.00. Calculated relative transmission data show that the kVp could be determined to within +/-1 kV. To obtain experimental data, a phantom was constructed containing 21 different elements placed in an acrylic holder. MAP images were used to determine appropriate exposure techniques for a series of end point energies from 25 to 35 kVp. The average difference between the kVp determination and the calibrated dial setting was 0.8 and 1.0 kV for a Senographe 600 T and a Senographe DMR, respectively. Since the k-edge absorption energies of the filter materials are well known, independent calibration or a series of calibration curves is not required.

  5. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
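
    A minimal sketch of the weak-correlation case described above is given below: the memory function F(r) of an additive binary Markov chain is approximated from the two-point correlation function and used to build the conditional probability of the next symbol. The test sequence, chain order, and the strictly pairwise (first-order) truncation are simplifying assumptions, not the paper's full decomposition into higher-order monomials.

      # Minimal sketch, assuming a binary additive Markov chain and weak correlations:
      # approximate the memory function F(r) by K(r)/Var and use it to estimate the
      # conditional probability of the next symbol given the last L symbols.
      import numpy as np

      rng = np.random.default_rng(0)
      n, persist = 20000, 0.6                          # weakly persistent binary test sequence
      seq = np.empty(n)
      seq[0] = 1.0
      for i in range(1, n):
          seq[i] = seq[i - 1] if rng.random() < persist else 1.0 - seq[i - 1]

      L = 5
      mean, var = seq.mean(), seq.var()
      K = np.array([np.mean((seq[:-r] - mean) * (seq[r:] - mean)) for r in range(1, L + 1)])
      F = K / var                                      # weak-correlation approximation to F(r)

      def cond_prob_one(history):
          """P(next symbol = 1 | last L symbols), additive first-order approximation."""
          h = np.asarray(history[-L:], dtype=float)    # ordered oldest to newest
          return float(np.clip(mean + np.sum(F[::-1] * (h - mean)), 0.0, 1.0))

      print(cond_prob_one([0, 0, 1, 1, 1]))            # ~0.62 for this persistent sequence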

  6. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  7. Partial knowledge, entropy, and estimation

    PubMed Central

    MacQueen, James; Marschak, Jacob

    1975-01-01

    In a growing body of literature, available partial knowledge is used to estimate the prior probability distribution p≡(p1,...,pn) by maximizing entropy H(p)≡-Σpi log pi, subject to constraints on p which express that partial knowledge. The method has been applied to distributions of income, of traffic, of stock-price changes, and of types of brand-article purchases. We shall respond to two justifications given for the method: (α) It is “conservative,” and therefore good, to maximize “uncertainty,” as (uniquely) represented by the entropy parameter. (β) One should apply the mathematics of statistical thermodynamics, which implies that the most probable distribution has highest entropy. Reason (α) is rejected. Reason (β) is valid when “complete ignorance” is defined in a particular way and both the constraint and the estimator's loss function are of certain kinds. PMID:16578733
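
    The constrained entropy maximization described above can be written down directly as a small optimization problem. The sketch below, using scipy, recovers the maximum-entropy distribution of a six-valued variable subject to a toy mean constraint; the constraint and the support are assumed for illustration and are not one of the applications cited in the abstract.

      # Minimal sketch: estimate a discrete distribution p by maximizing entropy
      # H(p) = -sum p_i log p_i subject to partial knowledge, here an assumed toy
      # constraint that the mean of an integer-valued variable equals 3.5.
      import numpy as np
      from scipy.optimize import minimize

      values = np.arange(1, 7)                       # support of the variable
      target_mean = 3.5

      def neg_entropy(p):
          p = np.clip(p, 1e-12, 1.0)
          return np.sum(p * np.log(p))               # -H(p); minimizing this maximizes H

      constraints = [
          {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
          {"type": "eq", "fun": lambda p: np.sum(p * values) - target_mean},
      ]
      res = minimize(neg_entropy, x0=np.full(6, 1 / 6), bounds=[(0, 1)] * 6,
                     constraints=constraints)
      print(np.round(res.x, 4))                      # ~uniform, since 3.5 is the uniform mean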

  8. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    NASA Astrophysics Data System (ADS)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.

  9. Sequential ranging integration times in the presence of CW interference in the ranging channel

    NASA Technical Reports Server (NTRS)

    Mathur, Ashok; Nguyen, Tien

    1986-01-01

    The Deep Space Network (DSN), managed by the Jet Propulsion Laboratory for NASA, is used primarily for communication with interplanetary spacecraft. The high sensitivity required to achieve planetary communications makes the DSN very susceptible to radio-frequency interference (RFI). In this paper, an analytical model is presented of the performance degradation of the DSN sequential ranging subsystem in the presence of downlink CW interference in the ranging channel. A trade-off between the ranging component integration times and the ranging signal-to-noise ratio to achieve a desired level of range measurement accuracy and the probability of error in the code components is also presented. Numerical results presented illustrate the required trade-offs under various interference conditions.

  10. Combined Parameter and State Estimation Problem in a Complex Domain: RF Hyperthermia Treatment Using Nanoparticles

    NASA Astrophysics Data System (ADS)

    Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.

    2016-09-01

    Particle filter methods have been widely used to solve inverse problems by sequential Bayesian inference in dynamic models, simultaneously estimating time-evolving state variables and fixed model parameters. These methods approximate the sequence of probability distributions of interest with a large set of random samples, in the presence of uncertainties in the model, the measurements, and the parameters. The main focus of this paper is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. The domain contains different tissues, such as muscle, pancreas, lungs and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate excellent agreement between the estimated and exact values.
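
    A minimal sketch of combined state and parameter estimation with a bootstrap particle filter is given below, on a toy scalar model rather than the bioheat/hyperthermia model of the paper; the model equations, noise levels, and the artificial parameter jitter are illustrative assumptions.

      # Minimal sketch: bootstrap particle filter estimating a time-evolving state x_k
      # and a fixed parameter a jointly, on a toy model x_k = a*x_{k-1} + noise,
      # y_k = x_k + noise; the parameter is appended to each particle's state.
      import numpy as np

      rng = np.random.default_rng(1)
      a_true, n_steps, n_particles = 0.8, 100, 2000

      # Simulate synthetic observations.
      x, ys = 0.0, []
      for _ in range(n_steps):
          x = a_true * x + rng.normal(0, 0.3)
          ys.append(x + rng.normal(0, 0.5))

      # Particles: columns [x, a]; the fixed parameter gets a small artificial jitter.
      particles = np.column_stack([rng.normal(0, 1, n_particles),
                                   rng.uniform(0, 1, n_particles)])
      for y in ys:
          particles[:, 1] += rng.normal(0, 0.01, n_particles)          # parameter jitter
          particles[:, 0] = (particles[:, 1] * particles[:, 0]
                             + rng.normal(0, 0.3, n_particles))        # propagate state
          w = np.exp(-0.5 * ((y - particles[:, 0]) / 0.5) ** 2)        # likelihood weights
          w /= w.sum()
          idx = rng.choice(n_particles, n_particles, p=w)              # resample
          particles = particles[idx]

      print("estimated a ~", particles[:, 1].mean(), "(true 0.8)")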

  11. Controllable uncertain opinion diffusion under confidence bound and unpredicted diffusion probability

    NASA Astrophysics Data System (ADS)

    Yan, Fuhan; Li, Zhaofeng; Jiang, Yichuan

    2016-05-01

    The issues of modeling and analyzing diffusion in social networks have been extensively studied in the last few decades. Recently, many studies have focused on uncertain diffusion processes. The uncertainty of the diffusion process means that the diffusion probability is unpredictable because of complex factors. For instance, the variety of individuals' opinions is an important factor that can cause uncertainty in the diffusion probability. In detail, the difference between opinions can influence the diffusion probability, and the evolution of opinions then causes uncertainty in the diffusion probability. It is known that controlling the diffusion process is important in the context of viral marketing and political propaganda. However, previous methods are hardly feasible for controlling the uncertain diffusion process of individual opinions. In this paper, we present a suitable strategy to control this diffusion process based on approximate estimation of the uncertain factors. We formulate a model in which the diffusion probability is influenced by the distance between opinions, and briefly discuss the properties of the diffusion model. Then, we present an optimization problem in the context of voting to show how to control this uncertain diffusion process. In detail, it is assumed that each individual can choose one of two candidates or abstain based on his/her opinion. We then present a strategy for setting suitable initiators and their opinions so that the advantage of one candidate is maximized at the end of the diffusion. The results show that traditional influence maximization algorithms are not applicable to this problem, and that our algorithm achieves the expected performance.

  12. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  13. How Do High School Students Solve Probability Problems? A Mixed Methods Study on Probabilistic Reasoning

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick

    2018-01-01

    When studying a complex research phenomenon, a mixed methods design allows researchers to answer a broader set of research questions and to tap into different aspects of this phenomenon, compared to a monomethod design. This paper reports on how a sequential equal status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…

  14. Is a Basketball Free-Throw Sequence Nonrandom? A Group Exercise for Undergraduate Statistics Students

    ERIC Educational Resources Information Center

    Adolph, Stephen C.

    2007-01-01

    I describe a group exercise that I give to my undergraduate biostatistics class. The exercise involves analyzing a series of 200 consecutive basketball free-throw attempts to determine whether there is any evidence for sequential dependence in the probability of making a free-throw. The students are given the exercise before they have learned the…

  15. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, in particular the estimation of the unknown model parameters needed to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the sought parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing the atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by the distributed sensor network of the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the model parameters best fitted to the observable data must be found.
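
    The simulate-compare-accept idea underlying ABC can be shown with a minimal rejection sampler on a toy problem (inferring a Gaussian mean). The sketch below omits the sequential population refinements of S-ABC and the dispersion model itself; the prior, summary statistic, and tolerance are assumed for illustration.

      # Minimal ABC rejection sketch on a toy problem (Gaussian mean inference); the
      # observed data stand in for sensor concentrations, and all settings are assumed.
      import numpy as np

      rng = np.random.default_rng(2)
      observed = rng.normal(3.0, 1.0, 50)            # stand-in for observed concentrations

      def simulate(theta):
          return rng.normal(theta, 1.0, observed.size)

      def distance(sim):
          return abs(sim.mean() - observed.mean())   # summary-statistic distance

      accepted = []
      while len(accepted) < 1000:
          theta = rng.uniform(-10, 10)               # draw a candidate from the prior
          if distance(simulate(theta)) < 0.1:        # keep it if it reproduces the data
              accepted.append(theta)

      accepted = np.array(accepted)
      print("posterior mean ~", accepted.mean(), "+/-", accepted.std())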

  16. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
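
    A minimal simulation of one member of this family of rules, a biased-coin up-and-down design targeting the 1/3 toxicity quantile, is sketched below; the dose-toxicity probabilities are hypothetical and the maximum likelihood estimation step discussed in the abstract is omitted.

      # Minimal sketch of a biased-coin random walk dose-allocation rule in the spirit
      # of the family described above (assumed dose-toxicity curve, target quantile 1/3;
      # not the paper's exact rule or its maximum likelihood estimator).
      import numpy as np

      rng = np.random.default_rng(3)
      tox_prob = np.array([0.05, 0.15, 0.30, 0.50, 0.70])   # hypothetical dose-toxicity curve
      target = 1 / 3
      escalate_prob = target / (1 - target)                  # biased coin for non-toxic outcomes

      level, assignments = 0, []
      for _ in range(200):                                   # 200 sequentially enrolled patients
          assignments.append(level)
          toxic = rng.random() < tox_prob[level]
          if toxic:
              level = max(level - 1, 0)                      # step down after a toxicity
          elif rng.random() < escalate_prob:
              level = min(level + 1, len(tox_prob) - 1)      # step up with biased-coin probability
      # Assignment frequencies cluster around the dose whose toxicity is near the target.
      print(np.bincount(assignments, minlength=len(tox_prob)) / len(assignments))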

  17. Language experience changes subsequent learning.

    PubMed

    Onnis, Luca; Thiessen, Erik

    2013-02-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Absolute continuity for operator valued completely positive maps on C∗-algebras

    NASA Astrophysics Data System (ADS)

    Gheondea, Aurelian; Kavruk, Ali Şamil

    2009-02-01

    Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.

  19. Sequential PET/CT with [18F]-FDG Predicts Pathological Tumor Response to Preoperative Short Course Radiotherapy with Delayed Surgery in Patients with Locally Advanced Rectal Cancer Using Logistic Regression Analysis

    PubMed Central

    Pecori, Biagio; Lastoria, Secondo; Caracò, Corradina; Celentani, Marco; Tatangelo, Fabiana; Avallone, Antonio; Rega, Daniela; De Palma, Giampaolo; Mormile, Maria; Budillon, Alfredo; Muto, Paolo; Bianco, Francesco; Aloj, Luigi; Petrillo, Antonella; Delrio, Paolo

    2017-01-01

    Previous studies indicate that FDG PET/CT may predict pathological response in patients undergoing neoadjuvant chemo-radiotherapy for locally advanced rectal cancer (LARC). The aim of the current study is to evaluate whether pathological response can be similarly predicted in LARC patients after short course radiation therapy alone. Methods: Thirty-three patients with cT2-3, N0-2, M0 rectal adenocarcinoma treated with hypofractionated short course neoadjuvant RT (5x5 Gy) with delayed surgery (SCRTDS) were prospectively studied. All patients underwent 3 PET/CT studies at baseline, 10 days from RT end (early), and 53 days from RT end (delayed). Maximal standardized uptake value (SUVmax), mean standardized uptake value (SUVmean) and total lesion glycolysis (TLG) of the primary tumor were measured and recorded at each PET/CT study. We use logistic regression analysis to aggregate different measures of metabolic response to predict the pathological response in the course of SCRTDS. Results: We provide straightforward formulas to classify response and estimate the probability of being a major responder (TRG1-2) or a complete responder (TRG1) for each individual. The formulas are based on the level of TLG at the early PET and on the overall proportional reduction of TLG between the baseline and delayed PET studies. Conclusions: This study demonstrates that in the course of SCRTDS it is possible to estimate the probabilities of pathological tumor responses on the basis of PET/CT with FDG. Our formulas make it possible to assess the risks associated with LARC borne by a patient in the course of SCRTDS. These risk assessments can be balanced against other health risks associated with further treatments and can therefore be used to make informed therapy adjustments during SCRTDS. PMID:28060889
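
    The type of model described, a logistic regression mapping the early TLG level and the overall proportional TLG reduction to a response probability, can be sketched as follows. The data and coefficients below are synthetic placeholders, not the published formulas.

      # Minimal sketch: logistic regression from two metabolic-response predictors to a
      # probability of major pathological response. Data and the assumed "true"
      # relationship are synthetic, for illustration only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      n = 200
      early_tlg = rng.lognormal(3.0, 0.6, n)                  # TLG at the early PET study
      tlg_reduction = rng.uniform(0.0, 1.0, n)                # proportional baseline-to-delayed drop
      logit = -1.0 - 0.02 * early_tlg + 4.0 * tlg_reduction   # assumed synthetic relationship
      responder = rng.random(n) < 1 / (1 + np.exp(-logit))    # major responder labels

      X = np.column_stack([early_tlg, tlg_reduction])
      model = LogisticRegression().fit(X, responder)
      print(model.predict_proba([[20.0, 0.8]])[:, 1])         # probability of major response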

  20. Sequential PET/CT with [18F]-FDG Predicts Pathological Tumor Response to Preoperative Short Course Radiotherapy with Delayed Surgery in Patients with Locally Advanced Rectal Cancer Using Logistic Regression Analysis.

    PubMed

    Pecori, Biagio; Lastoria, Secondo; Caracò, Corradina; Celentani, Marco; Tatangelo, Fabiana; Avallone, Antonio; Rega, Daniela; De Palma, Giampaolo; Mormile, Maria; Budillon, Alfredo; Muto, Paolo; Bianco, Francesco; Aloj, Luigi; Petrillo, Antonella; Delrio, Paolo

    2017-01-01

    Previous studies indicate that FDG PET/CT may predict pathological response in patients undergoing neoadjuvant chemo-radiotherapy for locally advanced rectal cancer (LARC). The aim of the current study is to evaluate whether pathological response can be similarly predicted in LARC patients after short course radiation therapy alone. Thirty-three patients with cT2-3, N0-2, M0 rectal adenocarcinoma treated with hypofractionated short course neoadjuvant RT (5x5 Gy) with delayed surgery (SCRTDS) were prospectively studied. All patients underwent 3 PET/CT studies at baseline, 10 days from RT end (early), and 53 days from RT end (delayed). Maximal standardized uptake value (SUVmax), mean standardized uptake value (SUVmean) and total lesion glycolysis (TLG) of the primary tumor were measured and recorded at each PET/CT study. We use logistic regression analysis to aggregate different measures of metabolic response to predict the pathological response in the course of SCRTDS. We provide straightforward formulas to classify response and estimate the probability of being a major responder (TRG1-2) or a complete responder (TRG1) for each individual. The formulas are based on the level of TLG at the early PET and on the overall proportional reduction of TLG between the baseline and delayed PET studies. This study demonstrates that in the course of SCRTDS it is possible to estimate the probabilities of pathological tumor responses on the basis of PET/CT with FDG. Our formulas make it possible to assess the risks associated with LARC borne by a patient in the course of SCRTDS. These risk assessments can be balanced against other health risks associated with further treatments and can therefore be used to make informed therapy adjustments during SCRTDS.

  1. Statistical Metamodeling and Sequential Design of Computer Experiments to Model Glyco-Altered Gating of Sodium Channels in Cardiac Myocytes.

    PubMed

    Du, Dongping; Yang, Hui; Ednie, Andrew R; Bennett, Eric S

    2016-09-01

    Glycan structures account for up to 35% of the mass of cardiac sodium (Nav) channels. To question whether and how reduced sialylation affects Nav activity and cardiac electrical signaling, we conducted a series of in vitro experiments on ventricular apex myocytes under two different glycosylation conditions, reduced protein sialylation (ST3Gal4(-/-)) and full glycosylation (control). Although aberrant electrical signaling is observed under reduced sialylation, realizing a better understanding of the mechanistic details of pathological variations in INa and AP is difficult without performing in silico studies. However, computer models of Nav channels and cardiac myocytes involve greater levels of complexity, e.g., a high-dimensional parameter space and nonlinear, nonconvex equations. Traditional linear and nonlinear optimization methods have encountered many difficulties in model calibration. This paper presents a new statistical metamodeling approach for efficient computer experiments and optimization of Nav models. First, we utilize a fractional factorial design to identify control variables from the large set of model parameters, thereby reducing the dimensionality of the parametric space. Further, we develop a Gaussian process model as a surrogate of the expensive and time-consuming computer model and then identify the next best design point that yields the maximal probability of improvement. This process iterates until convergence, and the performance is evaluated and validated with real-world experimental data. Experimental results show the proposed algorithm achieves superior performance in modeling the kinetics of Nav channels under a variety of glycosylation conditions. As a result, in silico models provide a better understanding of glyco-altered mechanistic details in the state transitions and distributions of Nav channels. Notably, ST3Gal4(-/-) myocytes are shown to have higher probabilities accumulated in intermediate inactivation during repolarization and yield a shorter refractory period than WTs. The proposed statistical design of computer experiments is generally extensible to many other disciplines that involve large-scale and computationally expensive models.
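
    The sequential design loop described above, a Gaussian process surrogate queried through a probability-of-improvement criterion, can be sketched on a one-dimensional toy objective as follows; the objective function, kernel, and search grid are assumptions standing in for the Nav channel model.

      # Minimal sketch: fit a Gaussian process surrogate to an expensive objective, then
      # pick the next design point that maximizes the probability of improvement.
      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_model(x):                    # stand-in for a costly simulation
          return np.sin(3 * x) + 0.5 * x

      rng = np.random.default_rng(5)
      X = rng.uniform(0, 3, 5).reshape(-1, 1)    # initial design points
      y = expensive_model(X).ravel()
      grid = np.linspace(0, 3, 300).reshape(-1, 1)

      for _ in range(10):                        # sequential design iterations
          gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)
          mu, sigma = gp.predict(grid, return_std=True)
          best = y.min()                         # minimizing the calibration objective
          pi = norm.cdf((best - mu) / np.maximum(sigma, 1e-9))   # probability of improvement
          x_next = grid[np.argmax(pi)].reshape(1, -1)
          X = np.vstack([X, x_next])
          y = np.append(y, expensive_model(x_next).ravel())

      print("best design point:", X[np.argmin(y)], "objective:", y.min())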

  2. A matter of tradeoffs: reintroduction as a multiple objective decision

    USGS Publications Warehouse

    Converse, Sarah J.; Moore, Clinton T.; Folk, Martin J.; Runge, Michael C.

    2013-01-01

    Decision making in guidance of reintroduction efforts is made challenging by the substantial scientific uncertainty typically involved. However, a less recognized challenge is that the management objectives are often numerous and complex. Decision makers managing reintroduction efforts are often concerned with more than just how to maximize the probability of reintroduction success from a population perspective. Decision makers are also weighing other concerns such as budget limitations, public support and/or opposition, impacts on the ecosystem, and the need to consider not just a single reintroduction effort, but conservation of the entire species. Multiple objective decision analysis is a powerful tool for formal analysis of such complex decisions. We demonstrate the use of multiple objective decision analysis in the case of the Florida non-migratory whooping crane reintroduction effort. In this case, the State of Florida was considering whether to resume releases of captive-reared crane chicks into the non-migratory whooping crane population in that state. Management objectives under consideration included maximizing the probability of successful population establishment, minimizing costs, maximizing public relations benefits, maximizing the number of birds available for alternative reintroduction efforts, and maximizing learning about the demographic patterns of reintroduced whooping cranes. The State of Florida engaged in a collaborative process with their management partners, first, to evaluate and characterize important uncertainties about system behavior, and next, to formally evaluate the tradeoffs between objectives using the Simple Multi-Attribute Rating Technique (SMART). The recommendation resulting from this process, to continue releases of cranes at a moderate intensity, was adopted by the State of Florida in late 2008. Although continued releases did not receive support from the International Whooping Crane Recovery Team, this approach does provide a template for the formal, transparent consideration of multiple, potentially competing, objectives in reintroduction decision making.

  3. Determining animal drug combinations based on efficacy and safety.

    PubMed

    Kratzer, D D; Geng, S

    1986-08-01

    A procedure for deriving drug combinations for animal health is used to derive an optimal combination of 200 mg of novobiocin and 650,000 IU of penicillin for nonlactating cow mastitis treatment. The procedure starts with an estimated second order polynomial response surface equation. That surface is translated into a probability surface with contours called isoprobs. The isoprobs show drug amounts that have an equal probability of producing maximal efficacy. Safety factors are incorporated into the probability surface via a noncentrality parameter that causes the isoprobs to expand as safety decreases, resulting in lower amounts of drug being used.

  4. Exploiting vibrational resonance in weak-signal detection

    NASA Astrophysics Data System (ADS)

    Ren, Yuhao; Pan, Yan; Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek

    2017-08-01

    In this paper, we investigate the first exploitation of the vibrational resonance (VR) effect to detect weak signals in the presence of strong background noise. By injecting a series of sinusoidal interference signals of the same amplitude but with different frequencies into a generalized correlation detector, we show that the detection probability can be maximized at an appropriate interference amplitude. Based on a dual-Dirac probability density model, we compare the VR method with the stochastic resonance approach via adding dichotomous noise. The compared results indicate that the VR method can achieve a higher detection probability for a wider variety of noise distributions.

  5. Exploiting vibrational resonance in weak-signal detection.

    PubMed

    Ren, Yuhao; Pan, Yan; Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek

    2017-08-01

    In this paper, we investigate the first exploitation of the vibrational resonance (VR) effect to detect weak signals in the presence of strong background noise. By injecting a series of sinusoidal interference signals of the same amplitude but with different frequencies into a generalized correlation detector, we show that the detection probability can be maximized at an appropriate interference amplitude. Based on a dual-Dirac probability density model, we compare the VR method with the stochastic resonance approach via adding dichotomous noise. The compared results indicate that the VR method can achieve a higher detection probability for a wider variety of noise distributions.

  6. Optimized nested Markov chain Monte Carlo sampling: theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.

  7. The influences of delay time on the stability of a market model with stochastic volatility

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-02-01

    The effects of the delay time on the stability of a market model are investigated, using a modified Heston model with a cubic nonlinearity and cross-correlated noise sources. The results indicate that: (i) there is an optimal delay time τo which maximally enhances the stability of the stock price under strong demand elasticity of stock price, and maximally reduces the stability of the stock price under weak demand elasticity of stock price; (ii) the cross correlation coefficient of the noises and the delay time play opposite roles in the stability for delay times <τo and the same role for delay times >τo. Moreover, the probability density function of the escape time of stock price returns, the probability density function of the returns, and the correlation function of the returns are compared with those reported in other studies.

  8. On-line Flagging of Anomalies and Adaptive Sequential Hypothesis Testing for Fine-feature Characterization of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Payne, T.; Kinateder, K.; Dao, P.; Beecher, E.; Boone, D.; Elliott, B.

    The objective of on-line flagging in this paper is to perform interactive assessment of geosynchronous satellite anomalies, such as cross-tagging of satellites in a cluster, solar panel offset changes, etc. This assessment will utilize a Bayesian belief propagation procedure and will include automated updating of baseline signature data for the satellite, while accounting for seasonal changes. Its purpose is to enable an ongoing, automated assessment of satellite behavior through its life cycle using the photometry data collected during the synoptic search performed by a ground or space-based sensor as part of its metrics mission. Changes in the satellite features will be reported along with the probabilities of Type I and Type II errors. The objective of adaptive sequential hypothesis testing in this paper is to define future sensor tasking for the purpose of characterizing fine features of the satellite. The tasking will be designed to maximize new information with the fewest photometry data points to be collected during the synoptic search by a ground or space-based sensor. Its calculation is based on information entropy techniques. The tasking is defined by considering a sequence of hypotheses regarding the fine features of the satellite. The optimal observation conditions are then ordered so as to maximize new information about a chosen fine feature. The combined objective of on-line flagging and adaptive sequential hypothesis testing is to progressively discover new information about the features of a geosynchronous satellite by leveraging the regular but sparse cadence of data collection during the synoptic search performed by a ground or space-based sensor.

    Automated Algorithm to Detect Changes in Geostationary Satellite's Configuration and Cross-Tagging (Phan Dao, Air Force Research Laboratory/RVB). By characterizing geostationary satellites based on photometry and color photometry, analysts can evaluate satellite operational status and affirm its true identity. The process of ingesting photometry data and deriving satellite physical characteristics can be directed by analysts in a batch mode, meaning using a batch of recent data, or by automated algorithms in an on-line mode in which the assessment is updated with each new data point. Tools used for detecting changes to a satellite's status or identity, whether operated with a human in the loop or as automated algorithms, are generally not built to detect with minimum latency and traceable confidence intervals. To alleviate those deficiencies, we investigate the use of Hidden Markov Models (HMM), in a Bayesian Network framework, to infer the hidden state (changed or unchanged) of a three-axis stabilized geostationary satellite using broadband and color photometry. Unlike frequentist statistics, which exploit only the stationary statistics of the observables in the database, the HMM also exploits the temporal pattern of the observables. The algorithm also operates in a "learning" mode to gradually evolve the HMM and accommodate natural changes, such as those due to the seasonal dependence of a GEO satellite's light curve. Our technique is designed to operate with missing color data. The version that ingests both panchromatic and color data can accommodate gaps in color photometry data. That attribute is important because, while color indices, e.g. Johnson R and B, enhance the belief (probability) of a hidden state, in real-world situations flux data are collected sporadically in untasked collects, and color data are limited and sometimes absent. Fluxes are measured with experimental error whose effect on the algorithm will be studied. Photometry data in the AFRL's Geo Color Photometry Catalog and Geo Observations with Latitudinal Diversity Simultaneously (GOLDS) data sets are used to simulate a wide variety of operational changes and identity cross-tags. The algorithm is tested against simulated sequences of observed magnitudes, mimicking the cadence of untasked SSN and other ground sensors, occasional operational changes, and the possible occurrence of cross-tags of in-cluster satellites. We would like to show that the on-line algorithm can detect change, sometimes right after the first post-change data point is analyzed, for zero latency. We also want to show the unsupervised "learning" capability that allows the HMM to evolve with time without user assistance. For example, the users are not required to "label" the true state of the data points.

  9. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
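
    ABSIS itself sets its biases adaptively from look-ahead path enumeration; the sketch below shows only the underlying importance-sampling idea for a rare event in a simple birth-death jump chain, with a hand-picked bias factor and illustrative rates rather than the adaptive, state-specific biases of ABSIS.

      # Illustrative importance-sampling sketch (not ABSIS): estimate the probability
      # that a birth-death population reaches a threshold before extinction, by biasing
      # birth reactions and reweighting each trajectory with its likelihood ratio.
      import numpy as np

      rng = np.random.default_rng(6)
      birth, death, bias = 0.1, 1.0, 10.0          # true per-capita rates; hand-picked birth bias
      threshold, n_traj, horizon = 8, 5000, 200    # rare event: population reaches 8 before 0

      weights = []
      for _ in range(n_traj):
          x, log_w = 1, 0.0
          for _ in range(horizon):
              rates_true = np.array([birth * x, death * x])
              rates_bias = np.array([bias * birth * x, death * x])
              reaction = rng.choice(2, p=rates_bias / rates_bias.sum())
              # Accumulate the log likelihood ratio of the true jump chain to the biased one.
              log_w += (np.log(rates_true[reaction] / rates_bias[reaction])
                        + np.log(rates_bias.sum() / rates_true.sum()))
              x += 1 if reaction == 0 else -1
              if x >= threshold:
                  weights.append(np.exp(log_w))
                  break
              if x == 0:
                  weights.append(0.0)
                  break
          else:
              weights.append(0.0)

      print("rare-event probability estimate ~", np.mean(weights))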

  10. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  11. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  12. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.

  13. NASA DOE POD NDE Capabilities Data Book

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate inspection system, personnel, and protocol demonstrating 0.90 POD with 95% confidence at critical flaw sizes, a90/95. The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.

  14. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.

  15. Cortico-Cortical, Cortico-Striatal, and Cortico-Thalamic White Matter Fiber Tracts Generated in the Macaque Brain via Dynamic Programming

    PubMed Central

    Lal, Rakesh M.; An, Michael; Poynton, Clare B.; Li, Muwei; Jiang, Hangyi; Oishi, Kenichi; Selemon, Lynn D.; Mori, Susumu; Miller, Michael I.

    2013-01-01

    Probabilistic methods have the potential to generate multiple and complex white matter fiber tracts in diffusion tensor imaging (DTI). Here, a method based on dynamic programming (DP) is introduced to reconstruct fiber pathways whose complex anatomical structures cannot be resolved at the resolution of standard DTI data. DP is based on optimizing a sequentially additive cost function derived from a Gaussian diffusion model whose covariance is defined by the diffusion tensor. DP is used to determine the optimal path between initial and terminal nodes by efficiently searching over all paths connecting the nodes and choosing the path for which the total probability is maximized. An ex vivo high-resolution scan of a macaque hemi-brain is used to demonstrate the advantages and limitations of DP. DP can generate fiber bundles between distant cortical areas (superior longitudinal fasciculi, arcuate fasciculus, uncinate fasciculus, and fronto-occipital fasciculus), neighboring cortical areas (dorsal and ventral banks of the principal sulcus), as well as cortical projections to the hippocampal formation (cingulum bundle), neostriatum (motor cortical projections to the putamen), thalamus (subcortical bundle), and hippocampal formation projections to the mammillary bodies via the fornix. Validation is established either by comparison with in vivo intracellular transport of horseradish peroxidase in another macaque monkey or by comparison with atlases. DP is able to generate known pathways, including crossing and kissing tracts. Thus, DP has the potential to enhance neuroimaging studies of cortical connectivity. PMID:23879573

  16. CROSS: A GDSS for the Evaluation and Prioritization of Engineering Support Requests and Advanced Technology Projects at NASA

    NASA Technical Reports Server (NTRS)

    Tavana, Madjid; Lee, Seunghee

    1996-01-01

    Objective evaluation and prioritization of engineering support requests (ESRs) is a difficult task at the Kennedy Space Center (KSC) Shuttle Project Engineering Office. The difficulty arises from the complexities inherent in the evaluation process and the lack of structured information. The purpose of this project is to implement the consensus ranking organizational support system (CROSS), a multiple criteria decision support system (DSS) developed at KSC that captures the decision maker's beliefs through a series of sequential, rational, and analytical processes. CROSS utilizes the analytic hierarchy process (AHP), subjective probabilities, the entropy concept, and the maximize agreement heuristic (MAH) to enhance the decision maker's intuition in evaluating ESRs. Some of the preliminary goals of the project are to: (1) revisit the structure of the ground systems working team (GSWT) steering committee, (2) develop a template for ESR originators to provide more complete and consistent information to the GSWT steering committee members to eliminate the need for a facilitator, (3) develop an objective and structured process for the initial screening of ESRs, (4) provide extensive training of the stakeholders and the GSWT steering committee to eliminate the need for a facilitator, (5) automate the process as much as possible, (6) create an environment to compile project success factor data on ESRs and move towards a disciplined system that could be used to address supportability threshold issues at KSC, and (7) investigate the possibility of an organization-wide implementation of CROSS.

  17. Near Real-Time Surveillance for Influenza Vaccine Safety: Proof-of-Concept in the Vaccine Safety Datalink Project

    PubMed Central

    Greene, Sharon K.; Kulldorff, Martin; Lewis, Edwin M.; Li, Rong; Yin, Ruihua; Weintraub, Eric S.; Fireman, Bruce H.; Lieu, Tracy A.; Nordin, James D.; Glanz, Jason M.; Baxter, Roger; Jacobsen, Steven J.; Broder, Karen R.; Lee, Grace M.

    2010-01-01

    The emergence of pandemic H1N1 influenza in 2009 has prompted public health responses, including production and licensure of new influenza A (H1N1) 2009 monovalent vaccines. Safety monitoring is a critical component of vaccination programs. As proof-of-concept, the authors mimicked near real-time prospective surveillance for prespecified neurologic and allergic adverse events among enrollees in 8 medical care organizations (the Vaccine Safety Datalink Project) who received seasonal trivalent inactivated influenza vaccine during the 2005/06–2007/08 influenza seasons. In self-controlled case series analysis, the risk of adverse events in a prespecified exposure period following vaccination was compared with the risk in 1 control period for the same individual either before or after vaccination. In difference-in-difference analysis, the relative risk in exposed versus control periods each season was compared with the relative risk in previous seasons since 2000/01. The authors used Poisson-based analysis to compare the risk of Guillain-Barré syndrome following vaccination in each season with that in previous seasons. Maximized sequential probability ratio tests were used to adjust for repeated analyses on weekly data. With administration of 1,195,552 doses to children under age 18 years and 4,773,956 doses to adults, no elevated risk of adverse events was identified. Near real-time surveillance for selected adverse events can be implemented prospectively to rapidly assess seasonal and pandemic influenza vaccine safety. PMID:19965887

  18. Ultrasensitive surveillance of sensors and processes

    DOEpatents

    Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.

    2001-01-01

    A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
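
    The first branch described in the patent, a sequential probability ratio test on a single data source, can be sketched as a Gaussian mean-shift test with Wald's approximate thresholds, as below; the shift size, noise level, and error rates are assumed for illustration and are not taken from the patent.

      # Minimal SPRT sketch for a single data source: test H0 (sensor mean = 0) against
      # H1 (mean shifted to m1) on Gaussian readings, with Wald's approximate thresholds.
      # The noise level, shift size, and error rates below are assumed for illustration.
      import math
      import random

      m1, sigma = 1.0, 1.0                 # hypothesized shift and known noise std
      alpha, beta = 0.01, 0.01             # target false-alarm and missed-detection rates
      upper = math.log((1 - beta) / alpha) # accept H1 (alarm) above this
      lower = math.log(beta / (1 - alpha)) # accept H0 (normal) below this

      def sprt(readings):
          llr = 0.0
          for n, x in enumerate(readings, start=1):
              # Log-likelihood ratio increment for N(m1, sigma^2) versus N(0, sigma^2).
              llr += (m1 * x - 0.5 * m1 ** 2) / sigma ** 2
              if llr >= upper:
                  return "alarm", n
              if llr <= lower:
                  return "normal", n
          return "undecided", len(readings)

      random.seed(0)
      healthy = [random.gauss(0.0, sigma) for _ in range(200)]
      degraded = [random.gauss(1.0, sigma) for _ in range(200)]
      print(sprt(healthy), sprt(degraded))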

  19. Ultrasensitive surveillance of sensors and processes

    DOEpatents

    Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.

    1999-01-01

    A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.

  20. Hybrid and concatenated coding applications.

    NASA Technical Reports Server (NTRS)

    Hofman, L. B.; Odenwalder, J. P.

    1972-01-01

    Results of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications. It is shown that with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint length 8 rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performances with more complex Viterbi or sequential decoder systems.

  1. Convolutional coding at 50 Mbps for the Shuttle Ku-band return link

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Huth, G. K.

    1976-01-01

    Error correcting coding is required for the 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs which were considered when selecting the multiplexed Viterbi decoder approach for this application.

  2. Turning EGFR mutation-positive non-small-cell lung cancer into a chronic disease: optimal sequential therapy with EGFR tyrosine kinase inhibitors

    PubMed Central

    Hirsh, Vera

    2018-01-01

    Four epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKIs), erlotinib, gefitinib, afatinib and osimertinib, are currently available for the management of EGFR mutation-positive non-small-cell lung cancer (NSCLC), with others in development. Although tumors are exquisitely sensitive to these agents, acquired resistance is inevitable. Furthermore, emerging data indicate that first- (erlotinib and gefitinib), second- (afatinib) and third-generation (osimertinib) EGFR TKIs differ in terms of efficacy and tolerability profiles. Therefore, there is a strong imperative to optimize the sequence of TKIs in order to maximize their clinical benefit. Osimertinib has demonstrated striking efficacy as a second-line treatment option in patients with T790M-positive tumors, and also confers efficacy and tolerability advantages over first-generation TKIs in the first-line setting. However, while accrual of T790M is the most predominant mechanism of resistance to erlotinib, gefitinib and afatinib, resistance mechanisms to osimertinib have not been clearly elucidated, meaning that possible therapy options after osimertinib failure are not clear. At present, few data comparing sequential regimens in patients with EGFR mutation-positive NSCLC are available and prospective clinical trials are required. This article reviews the similarities and differences between EGFR TKIs, and discusses key considerations when assessing optimal sequential therapy with these agents for the treatment of EGFR mutation-positive NSCLC. PMID:29383041

  3. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in the clinical diagnosis of various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region because of slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
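
    For orientation, the core of (unweighted) Wiener estimation is a linear map learned from training pairs of parameters and measurements. The sketch below, in Python with synthetic training data and a made-up forward model, shows only this basic step; the sequential weighting scheme described in the record is not reproduced.

        import numpy as np

        # Basic Wiener estimation: learn a linear map from measurements y to
        # parameters x via W = Cov(x, y) Cov(y)^-1, then apply it to new data.
        # Training pairs and the forward model A are synthetic assumptions.
        rng = np.random.default_rng(0)
        n_train, n_params, n_channels = 200, 4, 6
        X = rng.uniform(0.0, 1.0, size=(n_train, n_params))   # e.g. CtHb, StO2, alpha, beta
        A = rng.normal(size=(n_params, n_channels))            # stand-in forward model
        Y = X @ A + 0.01 * rng.normal(size=(n_train, n_channels))

        W = (X.T @ Y / n_train) @ np.linalg.inv(Y.T @ Y / n_train)

        y_new = X[0] @ A                  # a new (noise-free) measurement
        x_hat = W @ y_new                 # estimated tissue parameters
        print(np.round(x_hat, 2), np.round(X[0], 2))

    A weighted variant would scale the color channels before forming the covariances, so that channels expected to be reliable for a given parameter dominate that parameter's estimate.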

  4. The Impact of Optional Flexible Year Program on Texas Assessment of Knowledge and Skills Test Scores of Fifth Grade Students

    ERIC Educational Resources Information Center

    Longbotham, Pamela J.

    2012-01-01

    The study examined the impact of participation in an optional flexible year program (OFYP) on academic achievement. The ex post facto study employed an explanatory sequential mixed methods design. The non-probability sample consisted of 163 fifth grade students in an OFYP district and 137 5th graders in a 180-day instructional year school…

  5. The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems

    NASA Astrophysics Data System (ADS)

    Granmo, Ole-Christoffer

    The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
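
    A minimal Bayesian treatment of this problem keeps a Beta posterior over each arm's reward probability and pulls the arm whose posterior sample is largest, which is the essence of Thompson-sampling-style bandit play. The Python sketch below is a generic illustration with assumed reward probabilities, not the specific automaton evaluated in the record.

        import random

        # Beta-posterior bandit sketch: draw one sample from each arm's
        # posterior and pull the arm with the larger draw. The true reward
        # probabilities are illustrative assumptions.
        true_p = [0.4, 0.6]
        wins, losses = [0, 0], [0, 0]

        for _ in range(10_000):
            draws = [random.betavariate(wins[a] + 1, losses[a] + 1) for a in (0, 1)]
            arm = 0 if draws[0] > draws[1] else 1
            if random.random() < true_p[arm]:
                wins[arm] += 1
            else:
                losses[arm] += 1

        print("pulls per arm:", [wins[a] + losses[a] for a in (0, 1)])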

  6. Mass test of AdvanSiD model ASD-NUV3S-P SiliconPMs for the Pixel Timing Counter of the MEG II experiment

    NASA Astrophysics Data System (ADS)

    Rossella, M.; Bariani, S.; Barnaba, O.; Cattaneo, P. W.; Cervi, T.; Menegolli, A.; Nardò, R.; Prata, M. C.; Romano, E.; Scagliotti, C.; Simonetta, M.; Vercellati, F.

    2017-02-01

    The MEG II Timing Counter will measure the positron time of arrival with a resolution of 30 ps relying on two arrays of scintillator pixels read out by 6144 Silicon Photomultipliers (SiPMs) from AdvanSiD. They must be characterized, measuring their breakdown voltage, to assure that the gains of the SiPMs of each pixel are as uniform as possible, to maximize the pixel resolution. To do this an automatic test system that can measure sequentially the parameters of 32 devices has been developed.

  7. Sequential quantum secret sharing in a noisy environment aided with weak measurements

    NASA Astrophysics Data System (ADS)

    Ray, Maharshi; Chatterjee, Sourav; Chakrabarty, Indranil

    2016-05-01

    In this work we give an (n,n)-threshold protocol for sequential secret sharing of quantum information for the first time. By sequential secret sharing we refer to a situation where the dealer does not hold all the secrets at the beginning of the protocol; if the dealer wishes to share secrets in subsequent phases, she/he can do so with the help of our protocol. We first present our protocol for three parties and later generalize it to the situation with more (n > 3) parties. Interestingly, we show that our protocol for sequential secret sharing requires fewer quantum as well as classical resources than repeatedly using existing protocols. Further, in a more realistic situation, we consider the sharing of qubits through two kinds of noisy channels, namely the phase damping channel (PDC) and the amplitude damping channel (ADC). When we carry out sequential secret sharing in the presence of noise, we observe that the fidelity of secret sharing at the kth iteration is independent of the effect of noise at the (k - 1)th iteration. In the case of the ADC, we have seen that the average fidelity of secret sharing drops to ½, which is equivalent to a random guess of the quantum secret. Interestingly, we find that by applying weak measurements one can enhance the average fidelity. This increase in the average fidelity is achieved with a certain trade-off against the success probability of the weak measurements.

  8. EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.

    PubMed

    Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah

    2017-12-01

    Developing subject-specific classifiers that recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class, and we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy of 83.4% and an average decision time of 2.77 s for the proposed method, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves the classification accuracy and decision speed compared with other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
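
    The underlying two-threshold SPRT can be sketched in a few lines: accumulate the log-likelihood ratio of each new observation under the two hypotheses and stop when it crosses either boundary. The Python sketch below uses Wald's classical boundaries and a Gaussian evidence model as assumptions; it does not reproduce the balanced-threshold calibration described in the record.

        import math
        import random

        # Two-threshold SPRT sketch: accumulate the log-likelihood ratio (LLR)
        # of sequential observations under H1 vs H0 and stop at a boundary.
        # Error targets, the Gaussian evidence model and its parameters are
        # illustrative assumptions.
        alpha, beta = 0.05, 0.05
        upper = math.log((1 - beta) / alpha)      # crossing -> accept H1
        lower = math.log(beta / (1 - alpha))      # crossing -> accept H0
        mu0, mu1, sigma = 0.0, 1.0, 1.0

        def llr_increment(x):
            return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

        llr, n = 0.0, 0
        while lower < llr < upper:
            x = random.gauss(mu1, sigma)          # simulate data generated under H1
            llr += llr_increment(x)
            n += 1

        print("decision:", "H1" if llr >= upper else "H0", "after", n, "samples")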

  9. Impact of a Sequential Intervention on Albumin Utilization in Critical Care.

    PubMed

    Lyu, Peter F; Hockenberry, Jason M; Gaydos, Laura M; Howard, David H; Buchman, Timothy G; Murphy, David J

    2016-07-01

    Literature generally finds no advantage in mortality risk for albumin over cheaper alternatives in many settings, and few studies have combined financial and nonfinancial strategies to reduce albumin overuse. We evaluated the effect of a sequential multifaceted intervention on decreasing albumin use in the ICU and explored the effects of the different strategies. Prospective pre-post cohort study. Eight ICUs at two hospitals in an academic healthcare system. Adult patients admitted to study ICUs from September 2011 to August 2014 (n = 22,004). Over 2 years, providers in study ICUs participated in an intervention to reduce albumin use involving monthly feedback and explicit financial incentives in the first year and internal guidelines and order process changes in the second year. Outcomes measured were albumin orders per ICU admission, direct albumin costs, and mortality. Mean (SD) utilization decreased 37%, from 2.7 orders (6.8) per admission during the baseline to 1.7 orders (4.6) during the intervention (p < 0.001). Regression analysis revealed that the intervention was independently associated with 0.9 fewer orders per admission, a 42% relative decrease. This adjusted effect consisted of an 18% reduction in the probability of using any albumin (p < 0.001) and a 29% reduction in the number of orders per admission among patients receiving any (p < 0.001). Secondary analysis revealed that probability reductions were concurrent with internal guidelines and order process modification, while reductions in quantity occurred largely during the financial incentives and feedback period. Estimated cost savings totaled $2.5M during the 2-year intervention. There was no significant difference in ICU or hospital mortality between baseline and intervention. A sequential intervention achieved significant reductions in ICU albumin use and cost savings without changes in patient outcomes, supporting the combination of financial and nonfinancial strategies to align providers with evidence-based practices.

  10. The PMHT: solutions for some of its problems

    NASA Astrophysics Data System (ADS)

    Wieneke, Monika; Koch, Wolfgang

    2007-09-01

    Tracking multiple targets in a cluttered environment is a challenging task. Probabilistic Multiple Hypothesis Tracking (PMHT) is an efficient approach for dealing with it. Essentially, PMHT is based on the method of Expectation-Maximization for handling association conflicts. Linearity in the number of targets and measurements is the main motivation for further developing and extending this methodology. Unfortunately, compared with the Probabilistic Data Association Filter (PDAF), PMHT has not yet shown its superiority in terms of track-lost statistics. Furthermore, the problem of track extraction and deletion is apparently not yet satisfactorily solved within this framework. Four properties of PMHT are responsible for its problems in track maintenance: Non-Adaptivity, Hospitality, Narcissism and Local Maxima. 1, 2 In this work we present a solution for each of them and derive an improved PMHT by integrating the solutions into the PMHT formalism. The new PMHT is evaluated by Monte-Carlo simulations. A sequential likelihood-ratio (LR) test for track extraction has been developed and already integrated into the framework of traditional Bayesian Multiple Hypothesis Tracking. 3 As a multi-scan approach, the PMHT methodology also has the potential for track extraction. In this paper an analogous integration of a sequential LR test into the PMHT framework is proposed. We present an LR formula for track extraction and deletion using the PMHT update formulae. As PMHT provides all required ingredients for a sequential LR calculation, the LR is a by-product of the PMHT iteration process. The resulting update formula for the sequential LR test therefore enables the development of Track-Before-Detect algorithms for PMHT. The approach is illustrated by a simple example.

  11. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    PubMed

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

    Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons.

  12. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based on the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to consider heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
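
    The E-step/M-step interplay described here (posterior class probabilities weighting both the class priors and the parameter updates) can be illustrated with a much simpler scalar Gaussian mixture. The Python sketch below is that analogue with synthetic data, not the complex-Wishart model used for PolSAR covariance matrices.

        import numpy as np

        # EM for a two-component 1-D Gaussian mixture: responsibilities
        # (posterior class probabilities) weight the prior and parameter
        # updates. Data and initial values are synthetic assumptions.
        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1.5, 200)])

        pi = np.array([0.5, 0.5])
        mu = np.array([-1.0, 1.0])
        var = np.array([1.0, 1.0])

        for _ in range(50):
            # E-step: posterior probability of each class for each sample
            dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            resp = pi * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: responsibility-weighted updates of priors and parameters
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

        print("weights", np.round(pi, 2), "means", np.round(mu, 2))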

  13. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method of moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of the maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.

  14. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction of multidetector row CT images. Adjacent CT images often depict similar anatomical structures; therefore, images with weak artefacts were reconstructed using projection data generated from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, with the projection data generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization (MLEM) algorithm was applied. Next, the ordered subset-expectation maximization (OSEM) algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. OSEM and the small region of interest reduced the processing duration without apparent detriment, and the general-purpose graphics processing unit achieved high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction; the alternative algorithms were effective, and both software and hardware tools (OSEM, a small region of interest and a general-purpose graphics processing unit) achieved fast artefact correction.
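
    As a reminder of the basic update these algorithms share, the Python sketch below implements the classical MLEM multiplicative step for a toy linear projection model; the system matrix and measurements are synthetic stand-ins rather than CT geometry, and an ordered-subset variant would simply apply the same update to row subsets in turn.

        import numpy as np

        # MLEM sketch for y ~ Poisson(A x): multiplicative update
        #   x <- x * (A^T (y / (A x))) / (A^T 1)
        # The system matrix, true image and counts are synthetic assumptions.
        rng = np.random.default_rng(2)
        A = rng.uniform(0.0, 1.0, size=(40, 16))        # toy projection matrix
        x_true = rng.uniform(0.5, 2.0, size=16)
        y = rng.poisson(A @ x_true).astype(float)        # noisy projections

        x = np.ones(16)                                  # non-negative start
        sens = A.sum(axis=0)                             # sensitivity term A^T 1
        for _ in range(100):
            x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

        print(np.round(x, 2))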

  15. Efficient and faithful remote preparation of arbitrary three- and four-particle W-class entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu

    2015-06-01

    We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's location with a certain probability. Compared with previously existing schemes, the success probability of the current schemes is greatly increased. Moreover, the required classical communication cost is calculated. Further, several properties of the presented schemes, including the success probability and reducibility, are discussed. Remarkably, the proposed schemes can be achieved faithfully with unit total success probability when the employed channels are reduced to maximally entangled ones.

  16. Maximal violation of the Clauser-Horne-Shimony-Holt inequality for two qutrits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Li-Bin; Zhao, Xian-Geng; Chen, Jing-Ling

    2003-08-01

    The Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequality (in terms of correlation functions) of two qutrits is studied in detail by employing tritter measurements. A uniform formula for the maximum value of this inequality for tritter measurements is obtained. Based on this formula, we show that nonmaximally entangled states violate the BCHSH inequality more strongly than the maximally entangled one. This result is consistent with what was obtained by Acin et al. [Phys. Rev. A 65, 052325 (2002)] using the Bell-Clauser-Horne inequality (in terms of probabilities)

  17. Dysfunction of bulbar central pattern generator in ALS patients with dysphagia during sequential deglutition.

    PubMed

    Aydogdu, Ibrahim; Tanriverdi, Zeynep; Ertekin, Cumhur

    2011-06-01

    The aim of this study was to investigate a probable dysfunction of the central pattern generator (CPG) in dysphagic patients with ALS. We investigated 58 patients with ALS, 23 patients with PD, and 33 normal subjects. The laryngeal movements and the EMG of the submental muscles were recorded during sequential water swallowing (SWS) of 100 ml of water. The coordination of SWS and respiration was also studied in some normal cases and ALS patients. Normal subjects could complete the SWS optimally within 10 s using 7 swallows, while in dysphagic ALS patients the total duration and the number of swallows were significantly increased. The novel finding was that the regularity and rhythmicity of the swallowing pattern during SWS was disorganized into an irregular and arrhythmic pattern in 43% of the ALS patients. The duration and speed of swallowing were the most sensitive parameters for the disturbed oropharyngeal motility during SWS. The corticobulbar control of swallowing is insufficient in ALS, and the swallowing CPG cannot work well enough to produce segmental muscle activation and sequential swallowing. CPG dysfunction can result in irregular and arrhythmic sequential swallowing in ALS patients with bulbar plus pseudobulbar types. The arrhythmic SWS pattern can be considered a kind of CPG dysfunction in human ALS cases with dysphagia. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Dry minor mergers and size evolution of high-z compact massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Oogi, Taira; Habe, Asao

    2012-09-01

    Recent observations show evidence that high-z (z ~ 2 - 3) early-type galaxies (ETGs) are much more compact than those with comparable mass at z ~ 0. The dry merger scenario is one of the most probable explanations for such size evolution. However, previous studies based on this scenario have not succeeded in explaining the properties of high-z compact massive ETGs and of local ETGs consistently. We investigate the effects of sequential, multiple dry minor (stellar mass ratio M2/M1 < 1/4) mergers on the size evolution of compact massive ETGs. We perform N-body simulations of the sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. We show that sequential minor mergers of compact satellite galaxies are the most efficient at increasing the size and decreasing the velocity dispersion of compact massive ETGs. The change in stellar size and density of the merger remnant is consistent with recent observations. Furthermore, we construct the merger histories of candidate high-z compact massive ETGs using the Millennium Simulation Database and estimate the size growth of the galaxies by dry minor mergers. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained in the case of the sequential minor mergers in our simulations.

  19. Delayed Expression of Circulating TGF-β1 and BMP-2 Levels in Human Nonunion Long Bone Fracture Healing.

    PubMed

    Hara, Yoshiaki; Ghazizadeh, Mohammad; Shimizu, Hajime; Matsumoto, Hisashi; Saito, Nobuyuki; Yagi, Takanori; Mashiko, Kazuki; Mashiko, Kunihiro; Kawai, Makoto; Yokota, Hiroyuki

    2017-01-01

    The healing process of a bone fracture proceeds through a well-controlled, multistage sequential order beginning immediately after the injury. However, complications leading to nonunion exist, creating serious problems and costs for patients. Transforming growth factor-beta 1 (TGF-β1) and bone morphogenetic protein 2 (BMP-2) are two major growth factors involved in human bone fracture healing, promoting various stages of bone ossification. In this study, we aimed to determine the role of these factors during the fracture healing of human long bones and assess their impact on the nonunion condition. We performed a comprehensive analysis of plasma TGF-β1 and BMP-2 levels in blood samples from 10 patients with proven nonunion and 10 matched patients with normal union following a predetermined time schedule. The concentrations of TGF-β1 and BMP-2 were measured at each time point using a solid-phase ELISA. TGF-β1 and BMP-2 levels were detectable in all patients. Across all patients, a maximal peak for TGF-β1 was found at 3 weeks. In the normal union group, TGF-β1 showed a maximal peak at 2 weeks, while the nonunion group had a delayed maximal peak at 3 weeks. Plasma levels of BMP-2 for all patients and for the normal union group reached a maximal peak at 1 week, but the nonunion group showed a delayed maximal peak at 2 weeks. In general, plasma TGF-β1 and BMP-2 levels were not significantly different between the normal union and nonunion groups. The expression of TGF-β1 and BMP-2 appeared to be delayed in nonunion patients, which could play an important role in developing an early marker of fracture union status and facilitate improved patient management.

  20. Sequential Revision of Belief, Trust Type, and the Order Effect.

    PubMed

    Entin, Elliot E; Serfaty, Daniel

    2017-05-01

    Objective To investigate how people's sequential adjustments to their position are affected by the source of the information. Background There is an extensive body of research on how the order in which new information is received affects people's final views and decisions, as well as research on how they adjust their views in light of new information. Method Seventy college-aged students, 60% of whom were women, completed one of eight randomly distributed booklets corresponding to the eight between-subjects treatment conditions created by crossing the two levels of information source with the four levels of order condition. Based on the information provided, participants estimated the probability of an attack, the dependent measure. Results Confirming information from an expert intelligence officer increased the attack probability from the initial position significantly more than confirming information from a longtime friend. Conversely, disconfirming information from a longtime friend decreased the attack probability significantly more than the same information from an intelligence officer. Conclusion Confirming and disconfirming evidence were weighted differently depending on the information source, either an expert or a close friend. The difference appears to be due to the existence of two kinds of trust: cognitive-based trust accorded to an expert and affective-based trust accorded to a close friend. Application Purveyors of information need to understand that it is not only the content of a message that counts; other forces are at work, such as the order in which information is received and the characteristics of the information source.

  1. Phenomenology of maximal and near-maximal lepton mixing

    NASA Astrophysics Data System (ADS)

    Gonzalez-Garcia, M. C.; Peña-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 - 2 sin²θ_ex and quantify the present experimental status for |ε| < 0.3. We show that both probabilities and observables depend on ε quadratically when effects are due to vacuum oscillations, and they depend on ε linearly if matter effects dominate. The most important information on νe mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV², the full interval |ε| < 0.3 is allowed within ~4σ (99.995% CL). We suggest ways to measure ε in future experiments. The observable that is most sensitive to ε is the rate [NC]/[CC] in combination with the day-night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is Δε ~ 0.07. We also discuss the effects of maximal and near-maximal νe mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.

  2. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  3. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially when mobility (i.e., locomotion control), routing (i.e., communications) and sensing are unique characteristics of mobile robots for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from stations to stations in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming negligible amount of energy for mobility cost. For the second problem, the problem is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel procedure of modified sequential convex approximation that has fast convergence speed. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purpose. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve the optimal moving trajectories for robotic nodes and optimal network links, which are not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications. 
We investigate the joint design of mobility, data routing, and encoding power to help improving the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.

  4. Rank-k Maximal Statistics for Divergence and Probability of Misclassification

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.

    1972-01-01

    A technique is developed for selecting from n-channel multispectral data some k combinations of the n-channels upon which to base a given classification technique so that some measure of the loss of the ability to distinguish between classes, using the compressed k-dimensional data, is minimized. Information loss in compressing the n-channel data to k channels is taken to be the difference in the average interclass divergences (or probability of misclassification) in n-space and in k-space.

  5. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.

  6. Health Monitoring of a Satellite System

    NASA Technical Reports Server (NTRS)

    Chen, Robert H.; Ng, Hok K.; Speyer, Jason L.; Guntur, Lokeshkumar S.; Carpenter, Russell

    2004-01-01

    A health monitoring system based on analytical redundancy is developed for satellites on elliptical orbits. First, the dynamics of the satellite, including orbital mechanics and attitude dynamics, is modelled as a periodic system. Then, periodic fault detection filters are designed to detect and identify the satellite's actuator and sensor faults. In addition, parity equations are constructed using the algebraic redundant relationship among the actuators and sensors. Furthermore, a residual processor is designed to generate the probability of each of the actuator and sensor faults by using a sequential probability test. Finally, the health monitoring system, consisting of periodic fault detection filters, parity equations and a residual processor, is evaluated in simulation in the presence of disturbances and uncertainty.

  7. Correlations between prefrontal neurons form a small-world network that optimizes the generation of multineuron sequences of activity

    PubMed Central

    Luongo, Francisco J.; Zimmerman, Chris A.; Horn, Meryl E.

    2016-01-01

    Sequential patterns of prefrontal activity are believed to mediate important behaviors, e.g., working memory, but it remains unclear exactly how they are generated. In accordance with previous studies of cortical circuits, we found that prefrontal microcircuits in young adult mice spontaneously generate many more stereotyped sequences of activity than expected by chance. However, the key question of whether these sequences depend on a specific functional organization within the cortical microcircuit, or emerge simply as a by-product of random interactions between neurons, remains unanswered. We observed that correlations between prefrontal neurons do follow a specific functional organization—they have a small-world topology. However, until now it has not been possible to directly link small-world topologies to specific circuit functions, e.g., sequence generation. Therefore, we developed a novel analysis to address this issue. Specifically, we constructed surrogate data sets that have identical levels of network activity at every point in time but nevertheless represent various network topologies. We call this method shuffling activity to rearrange correlations (SHARC). We found that only surrogate data sets based on the actual small-world functional organization of prefrontal microcircuits were able to reproduce the levels of sequences observed in actual data. As expected, small-world data sets contained many more sequences than surrogate data sets with randomly arranged correlations. Surprisingly, small-world data sets also outperformed data sets in which correlations were maximally clustered. Thus the small-world functional organization of cortical microcircuits, which effectively balances the random and maximally clustered regimes, is optimal for producing stereotyped sequential patterns of activity. PMID:26888108

  8. A Comparative Study of Frequent and Maximal Periodic Pattern Mining Algorithms in Spatiotemporal Databases

    NASA Astrophysics Data System (ADS)

    Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.

    2017-08-01

    Detecting regular and efficient cyclic models is a demanding activity for data analysts because of the unstructured, dynamic and enormous raw information produced from the web. Many existing approaches generate large numbers of candidate patterns when applied to huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed with respect to scalability and performance. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), finds frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA grows patterns from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because fewer levels of database projection are needed compared with existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions (segment, sequence and individual symbols) to store and manage transaction data horizontally, and exploits a partition-and-conquer method to find maximal patterns using symbolic notations. With this algorithm, cyclic models can be mined in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations, recording time-series instances dynamically in terms of character, series and section approaches, respectively. Evaluating the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and real datasets remains an open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining and earthquake prediction applications. Extensive experimental results illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in terms of efficiency and scalability.

  9. Production of DagA and ethanol by sequential utilization of sugars in a mixed-sugar medium simulating microalgal hydrolysate.

    PubMed

    Park, Juyi; Hong, Soon-Kwang; Chang, Yong Keun

    2015-09-01

    A novel two-step fermentation process using a mixed-sugar medium mimicking microalgal hydrolysate has been proposed to avoid glucose repression and thus maximize substrate utilization efficiency. When DagA, a β-agarase, was produced in one step in the mixed-sugar medium using a recombinant Streptomyces lividans, glucose was found to have negative effects on the consumption of the other sugars and on DagA biosynthesis, causing low substrate utilization efficiency and low DagA productivity. To overcome these difficulties, a new strategy of sequential substrate utilization was developed. In the first step, glucose was consumed by Saccharomyces cerevisiae together with galactose and mannose, producing ethanol, after which DagA was produced from the remaining sugars, xylose, rhamnose and ribose. Fucose was not consumed. By adopting this two-step process, the overall substrate utilization efficiency was increased approximately 3-fold with a nearly 2-fold improvement in DagA production, along with the additional benefit of ethanol production. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Development of a Newcastle disease virus vector expressing a foreign gene through an internal ribosomal entry site provides direct proof for a sequential transcription mechanism.

    PubMed

    Zhang, Zhenyu; Zhao, Wei; Li, Deshan; Yang, Jinlong; Zsak, Laszlo; Yu, Qingzhong

    2015-08-01

    In the present study, we developed a novel approach for foreign gene expression by Newcastle disease virus (NDV) from a second ORF through an internal ribosomal entry site (IRES). Six NDV LaSota strain-based recombinant viruses vectoring the IRES and a red fluorescence protein (RFP) gene behind the nucleocapsid (NP), phosphoprotein (P), matrix (M), fusion (F), haemagglutinin-neuraminidase (HN) or large polymerase (L) gene ORF were generated using reverse genetics technology. The insertion of the second ORF slightly attenuated virus pathogenicity but did not affect the ability of the virus to grow. Quantitative measurements of RFP expression in virus-infected DF-1 cells revealed that the abundance of viral mRNAs and the red fluorescence intensity were positively correlated with the gene order of NDV, 3'-NP-P-M-F-HN-L-5', providing direct proof of the sequential transcription mechanism of NDV. These results suggest that the level of foreign gene expression could be regulated by selecting the second ORF insertion site to maximize the efficacy of vaccine and gene therapy.

  11. Enhancing battery efficiency for pervasive health-monitoring systems based on electronic textiles.

    PubMed

    Zheng, Nenggan; Wu, Zhaohui; Lin, Man; Yang, Laurence Tianruo

    2010-03-01

    Electronic textiles are regarded as one of the most important computation platforms for future computer-assisted health-monitoring applications. In these novel systems, multiple batteries are used to prolong the operational lifetime, which is a significant metric for system usability. However, due to the nonlinear features of batteries, computing systems with multiple batteries cannot achieve the same battery efficiency as those powered by a monolithic battery of equal capacity. In this paper, we propose an algorithm aiming to maximize battery efficiency globally for computer-assisted health-care systems with multiple batteries. Based on an accurate analytical battery model, the concept of weighted battery fatigue degree is introduced and a novel battery-scheduling algorithm called predicted weighted fatigue degree least first (PWFDLF) is developed. We also discuss two alternatives examined while developing PWFDLF: a weighted round-robin (WRR) policy and a greedy algorithm achieving the highest local battery efficiency, which reduces to the sequential discharging policy. Evaluation results show that PWFDLF achieves a considerable improvement in battery efficiency under various battery configurations and current profiles compared with conventional sequential and WRR discharging policies.

  12. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    NASA Astrophysics Data System (ADS)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    Designing electrical machinery with high efficiency is very important from the viewpoint of saving energy. Therefore, topology optimization (TO) is sometimes used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a design with a much higher degree of structural freedom, it has the potential to yield novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. A magnetic shielding model, in which there are many local minima, is first employed as a benchmark for evaluating the performance of several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.

  13. Sequential CFAR detectors using a dead-zone limiter

    NASA Astrophysics Data System (ADS)

    Tantaratana, Sawasd

    1990-09-01

    The performance of some proposed sequential constant-false-alarm-rate (CFAR) detectors is evaluated. The observations are passed through a dead-zone limiter, whose output is -1, 0, or +1 depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the limiter outputs; equivalently, the test operates on the reduced set of observations whose absolute value exceeds c, with the statistic being the sum of their signs. Both constant and linear decision boundaries are considered. Numerical results show a significant reduction in the average number of observations needed to achieve the same false-alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
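
    The mechanics of such a detector are easy to sketch: quantize each observation with the dead-zone limiter, accumulate the outputs, and stop at a boundary. The Python sketch below uses constant boundaries and an assumed Gaussian observation model; the dead-zone width, thresholds and signal level are illustrative, not values from the record.

        import random

        # Dead-zone-limiter sequential detector sketch: map each observation
        # to -1 / 0 / +1 and stop when the running sum crosses a constant
        # boundary. All numeric values are illustrative assumptions.
        c = 0.5                      # dead-zone half-width
        upper, lower = 8, -8         # constant decision boundaries

        def limiter(x):
            if x > c:
                return 1
            if x < -c:
                return -1
            return 0

        s, n = 0, 0
        while lower < s < upper:
            x = random.gauss(0.3, 1.0)   # positive mean models "target present"
            s += limiter(x)
            n += 1

        print("target present" if s >= upper else "noise only", "after", n, "samples")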

  14. Structured filtering

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Wiebe, Nathan

    2017-08-01

    A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from their inability to robustly deal with experiments in which different mechanisms yield the same results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles comprising the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence-based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and to a form of low-circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or fail completely.

  15. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
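
    The appeal of the third "keep sampling" group can be seen in a small sketch: run an SPRT with a limited sample budget and report whichever of the three outcomes occurs first. The boundaries, the Bernoulli transmission model and the budget in the Python code below are illustrative assumptions, not the simulation settings of the record.

        import math
        import random

        # Three-outcome SPRT sketch: a marker is declared associated or not
        # associated only if the log-likelihood ratio crosses a boundary
        # within the sample budget; otherwise it stays in a third,
        # "keep sampling" group. All parameters are illustrative assumptions.
        alpha, beta = 0.05, 0.05
        upper = math.log((1 - beta) / alpha)
        lower = math.log(beta / (1 - alpha))
        p0, p1 = 0.5, 0.65            # transmission probability under H0 / H1
        budget = 100                  # limited number of informative observations

        def classify(p_true):
            llr = 0.0
            for _ in range(budget):
                x = 1 if random.random() < p_true else 0
                llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
                if llr >= upper:
                    return "associated"
                if llr <= lower:
                    return "not associated"
            return "keep sampling"

        print(classify(0.66), classify(0.50))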

  16. Gleason-Busch theorem for sequential measurements

    NASA Astrophysics Data System (ADS)

    Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah

    2017-12-01

    Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.

  17. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE PAGES

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    2017-01-09

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations with respect to probability laws associated to a discretization, for instance, in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than for independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.

  18. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations with respect to probability laws associated to a discretization, for instance, in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than for independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.

  19. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  20. Human skeletal muscle mitochondrial capacity.

    PubMed

    Rasmussen, U F; Rasmussen, H N

    2000-04-01

    Under aerobic work, the oxygen consumption and the major ATP production occur in the mitochondria, and it is therefore a relevant question whether the in vivo rates can be accounted for by mitochondrial capacities measured in vitro. Mitochondria were isolated from human quadriceps muscle biopsies in yields of approximately 45%. The tissue content of total creatine, mitochondrial protein and different cytochromes was estimated. A number of activities were measured in functional assays of the mitochondria: pyruvate, ketoglutarate, glutamate and succinate dehydrogenases, palmitoyl-carnitine respiration, cytochrome oxidase, the respiratory chain and the ATP synthesis. The activities involved in carbohydrate oxidation could account for in vivo oxygen uptakes of 15-16 mmol O₂ min⁻¹ kg⁻¹, slightly above the value measured at maximal work rates in the knee-extensor model of Saltin and co-workers, i.e. without limitation from the cardiac output. This probably indicates that the maximal oxygen consumption of the muscle is limited by the mitochondrial capacities. The in vitro activities of fatty acid oxidation corresponded to only 39% of those of carbohydrate oxidation. The maximal rate of free energy production from aerobic metabolism of glycogen was calculated from the mitochondrial activities and estimates of the ΔG of ATP hydrolysis and the efficiency of the actin-myosin reaction. The resulting value was 20 W kg⁻¹, or approximately 70% of the maximal in vivo work rates, of which 10-20% are probably sustained by anaerobic ATP production. The lack of aerobic in vitro ATP synthesis might reflect termination of some critical interplay between cytoplasm and mitochondria.

  1. A Looping-Based Model for Quenching Repression

    PubMed Central

    Pollak, Yaroslav; Goldberg, Sarah; Amit, Roee

    2017-01-01

    We model the regulatory role of proteins bound to looped DNA using a simulation in which dsDNA is represented as a self-avoiding chain, and proteins as spherical protrusions. We simulate long self-avoiding chains using a sequential importance sampling Monte-Carlo algorithm, and compute the probabilities for chain looping with and without a protrusion. We find that a protrusion near one of the chain’s termini reduces the probability of looping, even for chains much longer than the protrusion–chain-terminus distance. This effect increases with protrusion size, and decreases with protrusion-terminus distance. The reduced probability of looping can be explained via an eclipse-like model, which provides a novel inhibitory mechanism. We test the eclipse model on two possible transcription-factor occupancy states of the D. melanogaster eve 3/7 enhancer, and show that it provides a possible explanation for the experimentally-observed eve stripe 3 and 7 expression patterns. PMID:28085884

  2. Dizocilpine (MK-801) impairs learning in the active place avoidance task but has no effect on the performance during task/context alternation.

    PubMed

    Vojtechova, Iveta; Petrasek, Tomas; Hatalova, Hana; Pistikova, Adela; Vales, Karel; Stuchlik, Ales

    2016-05-15

    The prevention of engram interference, pattern separation, flexibility, cognitive coordination and spatial navigation are usually studied separately at the behavioral level. Impairment in executive functions is often observed in patients suffering from schizophrenia. We have designed a protocol for assessing these functions all together as behavioral separation. This protocol is based on alternated or sequential training in two tasks testing different hippocampal functions (the Morris water maze and active place avoidance), and alternated or sequential training in two similar environments of the active place avoidance task. In Experiment 1, we tested, in adult rats, whether the performance in two different spatial tasks was affected by their order in sequential learning, or by their day-to-day alternation. In Experiment 2, rats learned to solve the active place avoidance task in two environments either alternately or sequentially. We found that rats are able to acquire both tasks and to discriminate both similar contexts without obvious problems regardless of the order or the alternation. We used two groups of rats, controls and a rat model of psychosis induced by a subchronic intraperitoneal application of 0.08 mg/kg of dizocilpine (MK-801), a non-competitive antagonist of NMDA receptors. Dizocilpine had no selective effect on parallel/sequential learning of tasks/contexts. However, it caused hyperlocomotion and a significant deficit in learning in the active place avoidance task regardless of the task alternation. Cognitive coordination tested by this task is probably more sensitive to dizocilpine than spatial orientation because no hyperactivity or learning impairment was observed in the Morris water maze. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Sequential stages and distribution patterns of aging-related tau astrogliopathy (ARTAG) in the human brain.

    PubMed

    Kovacs, Gabor G; Xie, Sharon X; Robinson, John L; Lee, Edward B; Smith, Douglas H; Schuck, Theresa; Lee, Virginia M-Y; Trojanowski, John Q

    2018-06-11

    Aging-related tau astrogliopathy (ARTAG) describes tau pathology in astrocytes in different locations and anatomical regions. In the present study we addressed the question of whether sequential distribution patterns can be recognized for ARTAG or astroglial tau pathologies in both primary FTLD-tauopathies and non-FTLD-tauopathy cases. By evaluating 687 postmortem brains with diverse disorders we identified ARTAG in 455. We evaluated frequencies and hierarchical clustering of anatomical involvement and used conditional probability and logistic regression to model the sequential distribution of ARTAG and astroglial tau pathologies across different brain regions. For subpial and white matter ARTAG we recognize three and two patterns, respectively, each with three stages initiated or ending in the amygdala. Subependymal ARTAG does not show a clear sequential pattern. For grey matter (GM) ARTAG we recognize four stages including a striatal pathway of spreading towards the cortex and/or amygdala, and the brainstem, and an amygdala pathway, which precedes the involvement of the striatum and/or cortex and proceeds towards the brainstem. GM ARTAG and astrocytic plaque pathology in corticobasal degeneration follows a predominantly frontal-parietal cortical to temporal-occipital cortical, to subcortical, to brainstem pathway (four stages). GM ARTAG and tufted astrocyte pathology in progressive supranuclear palsy shows a striatum to frontal-parietal cortical to temporal to occipital, to amygdala, and to brainstem sequence (four stages). In Pick's disease cases with astroglial tau pathology an overlapping pattern with PSP can be appreciated. We conclude that tau-astrogliopathy type-specific sequential patterns cannot be simplified as neuron-based staging systems. The proposed cytopathological and hierarchical stages provide a conceptual approach to identify the initial steps of the pathogenesis of tau pathologies in ARTAG and primary FTLD-tauopathies.

  4. Probability Learning: Changes in Behavior Across Time and Development

    PubMed Central

    Plate, Rista C.; Fulvio, Jacqueline M.; Shutts, Kristin; Green, C. Shawn; Pollak, Seth D.

    2017-01-01

    Individuals track probabilities, such as associations between events in their environments, but less is known about the degree to which experience—within a learning session and over development—influences people’s use of incoming probabilistic information to guide behavior in real time. In two experiments, children (4–11 years) and adults searched for rewards hidden in locations with predetermined probabilities. In Experiment 1, children (n = 42) and adults (n = 32) changed strategies to maximize reward receipt over time. However, adults demonstrated greater strategy change efficiency. Making the predetermined probabilities more difficult to learn (Experiment 2) delayed effective strategy change for children (n = 39) and adults (n = 33). Taken together, these data characterize how children and adults alike react flexibly and change behavior according to incoming information. PMID:28121026

  5. Medical Problem-Solving: A Critique of the Literature.

    ERIC Educational Resources Information Center

    McGuire, Christine H.

    1985-01-01

    Prescriptive, decision-analysis of medical problem-solving has been based on decision theory that involves calculation and manipulation of complex probability and utility values to arrive at optimal decisions that will maximize patient benefits. The studies offer a methodology for improving clinical judgment. (Author/MLW)

  6. Tug-Of-War Model for Two-Bandit Problem

    NASA Astrophysics Data System (ADS)

    Kim, Song-Ju; Aono, Masashi; Hara, Masahiko

    The amoeba of the true slime mold Physarum polycephalum shows high computational capabilities. In so-called amoeba-based computing, some computing tasks, including combinatorial optimization, are performed by the amoeba instead of a digital computer. We expect that there must be problems that living organisms are good at solving. The “multi-armed bandit problem” would be one such problem. Consider a number of slot machines. Each machine has an arm which gives a player a reward with a certain probability when pulled. The problem is to determine the optimal strategy for maximizing the total reward after a certain number of trials. To maximize the total reward, it is necessary to judge correctly and quickly which machine has the highest reward probability. Therefore, the player should explore many machines to gather knowledge about which machine is the best, but should not fail to exploit the reward from the known best machine. We consider that living organisms follow some efficient method to solve this problem.
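
    As a minimal sketch of the exploration-exploitation tradeoff described above (an epsilon-greedy player on a hypothetical two-armed bandit, not the tug-of-war model itself):

      import random

      def play_two_armed_bandit(p_arms=(0.3, 0.6), n_trials=1000, epsilon=0.1, seed=1):
          # Mostly exploit the arm with the best observed mean reward,
          # but explore a random arm with probability epsilon.
          random.seed(seed)
          pulls, rewards, total = [0, 0], [0, 0], 0
          for _ in range(n_trials):
              if random.random() < epsilon or 0 in pulls:
                  arm = random.randrange(2)                                         # explore
              else:
                  arm = 0 if rewards[0] / pulls[0] >= rewards[1] / pulls[1] else 1  # exploit
              reward = 1 if random.random() < p_arms[arm] else 0
              pulls[arm] += 1
              rewards[arm] += reward
              total += reward
          return total, pulls

      print(play_two_armed_bandit())   # most pulls should end up on the 0.6 arm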

  7. Discovery of a Sweet Spot on the Foot with a Smart Wearable Soccer Boot Sensor That Maximizes the Chances of Scoring a Curved Kick in Soccer.

    PubMed

    Fuss, Franz Konstantin; Düking, Peter; Weizman, Yehuda

    2018-01-01

    This paper provides the evidence of a sweet spot on the boot/foot as well as the method for detecting it with a wearable pressure sensitive device. This study confirmed the hypothesized existence of sweet and dead spots on a soccer boot or foot when kicking a ball. For a stationary curved kick, kicking the ball at the sweet spot maximized the probability of scoring a goal (58-86%), whereas having the impact point at the dead zone minimized the probability (11-22%). The sweet spot was found based on hypothesized favorable parameter ranges (center of pressure in x/y-directions and/or peak impact force) and the dead zone based on hypothesized unfavorable parameter ranges. The sweet spot was rather concentrated, independent of which parameter combination was used (two- or three-parameter combination), whereas the dead zone, located 21 mm from the sweet spot, was more widespread.

  8. Infomax Strategies for an Optimal Balance Between Exploration and Exploitation

    NASA Astrophysics Data System (ADS)

    Reddy, Gautam; Celani, Antonio; Vergassola, Massimo

    2016-06-01

    Proper balance between exploitation and exploration is what makes good decisions that achieve high reward, like payoff or evolutionary fitness. The Infomax principle postulates that maximization of information directs the function of diverse systems, from living systems to artificial neural networks. While specific applications turn out to be successful, the validity of information as a proxy for reward remains unclear. Here, we consider the multi-armed bandit decision problem, which features arms (slot-machines) of unknown probabilities of success and a player trying to maximize cumulative payoff by choosing the sequence of arms to play. We show that an Infomax strategy (Info-p) which optimally gathers information on the highest probability of success among the arms, saturates known optimal bounds and compares favorably to existing policies. Conversely, gathering information on the identity of the best arm in the bandit leads to a strategy that is vastly suboptimal in terms of payoff. The nature of the quantity selected for Infomax acquisition is then crucial for effective tradeoffs between exploration and exploitation.

  9. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
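
    A toy numerical sketch of the hybrid idea, assuming scipy is available: average the classical power of a one-sided z-test over a hypothetical company prior on the treatment effect, and search for the per-arm sample size that maximizes expected profit. The prior, cost, and revenue figures are invented, and the satisficing and portfolio extensions are not modeled.

      import numpy as np
      from scipy.stats import norm

      def expected_profit(n, prior_mean=0.3, prior_sd=0.15, sigma=1.0, alpha=0.025,
                          revenue=500e6, cost_per_patient=20e3, n_draws=20_000, seed=0):
          # Draw the true effect from the prior, compute per-draw power of a
          # two-arm z-test with n patients per arm, and average the resulting profit.
          rng = np.random.default_rng(seed)
          effects = rng.normal(prior_mean, prior_sd, n_draws)
          power = norm.cdf(effects * np.sqrt(n / 2) / sigma - norm.ppf(1 - alpha))
          return float(power.mean() * revenue - cost_per_patient * 2 * n)

      best_n = max(range(50, 2001, 50), key=expected_profit)
      print(best_n, round(expected_profit(best_n) / 1e6, 1), "M expected profit")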

  10. A probable probability distribution of a series nonequilibrium states in a simple system out of equilibrium

    NASA Astrophysics Data System (ADS)

    Gao, Haixia; Li, Ting; Xiao, Changming

    2016-05-01

    When a simple system is in a nonequilibrium state, it will shift toward its equilibrium state, and in this process it passes through a series of nonequilibrium states. With the assistance of Bayesian statistics and the hyperensemble, a probable probability distribution of these nonequilibrium states can be determined by maximizing the hyperensemble entropy. It is known that the equilibrium state has the largest probability, and the farther a nonequilibrium state is from the equilibrium one, the smaller its probability; the same conclusion can also be obtained in the multi-state space. Furthermore, if the probability stands for the relative time the corresponding nonequilibrium state can persist, then the velocity with which a nonequilibrium state returns to equilibrium can also be determined through the reciprocal of the derivative of this probability. This tells us that the farther the state is from equilibrium, the faster the returning velocity; if the system is near its equilibrium state, the velocity becomes smaller and smaller, finally tending to 0 when the system reaches the equilibrium state.

  11. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  12. Decay modes of the Hoyle state in 12C

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Bonasera, A.; Huang, M.; Zhang, S.

    2018-04-01

    Recent experimental results give an upper limit of less than 0.043% (95% C.L.) on the direct decay of the Hoyle state into 3α with respect to the sequential decay into 8Be + α. We performed one- and two-dimensional tunneling calculations to estimate this ratio and found it to be more than one order of magnitude smaller than the experimental limit, depending on the range of the nuclear force. This is within the capabilities of high-statistics experiments. Our results can also be tested by measuring the decay modes of high excitation energy states of 12C, where the ratio of direct to sequential decay might reach 10% at E*(12C) = 10.3 MeV. The link between a Bose-Einstein condensate (BEC) and the direct decay of the Hoyle state is also addressed. We discuss a hypothetical 'Efimov state' at E*(12C) = 7.458 MeV, which would mainly decay sequentially into 3α of equal energies: a counterintuitive result of tunneling. Such a state, if it exists, is at least 8 orders of magnitude less probable than the Hoyle state, thus below the sensitivity of recent and past experiments.

  13. Resource-efficient generation of linear cluster states by linear optics with postselection

    DOE PAGES

    Uskov, D. B.; Alsing, P. M.; Fanto, M. L.; ...

    2015-01-30

    Here we report on theoretical research in photonic cluster-state computing. Finding optimal schemes of generating non-classical photonic states is of critical importance for this field as physically implementable photon-photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne, and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from the optimal one. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to 1/2^(n-1); an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of 1/4^(m-1).
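
    A small arithmetic comparison of the two routes quoted above, namely chaining n-1 postselected CZ gates (success 1/9 each) versus the optimized fusion bound of 1/2^(n-1):

      def cz_chain_success(n):
          # n qubits fused by n-1 independent postselected CZ gates, each succeeding with prob. 1/9
          return (1 / 9) ** (n - 1)

      def optimized_fusion_success(n):
          # maximal success probability reported above for fusing n dual-rail qubits: 1/2^(n-1)
          return (1 / 2) ** (n - 1)

      for n in (2, 4, 6, 8):
          print(n, cz_chain_success(n), optimized_fusion_success(n))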

  14. Plasma GH responses to GHRH, arginine, L-dopa, pyridostigmine, sequential administrations of GHRH and combined administration of PD and GHRH in Turner's syndrome.

    PubMed

    Hanew, K; Tanaka, A; Utsumi, A

    1998-02-01

    To investigate GH secretory capacities in patients with Turner's syndrome, GHRH, arginine, L-dopa and pyridostigmine (PD) were administered singly and GHRH was administered sequentially for 3 days. In addition, plasma GH and TSH responses to GHRH and TRH after pretreatment with PD were analyzed to investigate whether the hypothalamic cholinergic somatostatinergic system functioned normally. The maximal GH responses to GHRH, L-dopa and PD were significantly smaller in Turner's syndrome (no.=14) than in normal short children (NSC, no.=14). However, there was no difference in plasma GH responses to arginine between the two groups. In ten patients with Turner's syndrome, the plasma GH response to GHRH did not improve even after the sequential 3-day administrations. Although plasma GH and TSH responses to GHRH and TRH were significantly enhanced by the pretreatment of PD in NSC (no.=12), these responses were not enhanced in Turner's syndrome. Plasma GH response to GHRH in Turner's syndrome with normal body fat was still significantly lower than in NSC. It is therefore concluded that somatotroph sensitivity to GHRH is decreased in Turner's syndrome and that this may be due to the primary defects of the somatotrophs rather than to the increased body fat. In addition, the network of cholinergic-somatostatinergic systems seemed to be impaired in these patients, while the activity of hypothalamic somatostatin neurons was thought to be maintained.

  15. Transient Cognitive Dynamics, Metastability, and Decision Making

    PubMed Central

    Rabinovich, Mikhail I.; Huerta, Ramón; Varona, Pablo; Afraimovich, Valentin S.

    2008-01-01

    The idea that cognitive activity can be understood using nonlinear dynamics has been intensively discussed at length for the last 15 years. One of the popular points of view is that metastable states play a key role in the execution of cognitive functions. Experimental and modeling studies suggest that most of these functions are the result of transient activity of large-scale brain networks in the presence of noise. Such transients may consist of a sequential switching between different metastable cognitive states. The main problem faced when using dynamical theory to describe transient cognitive processes is the fundamental contradiction between reproducibility and flexibility of transient behavior. In this paper, we propose a theoretical description of transient cognitive dynamics based on the interaction of functionally dependent metastable cognitive states. The mathematical image of such transient activity is a stable heteroclinic channel, i.e., a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and unstable separatrices that connect their surroundings. We suggest a basic mathematical model, a strongly dissipative dynamical system, and formulate the conditions for the robustness and reproducibility of cognitive transients that satisfy the competing requirements for stability and flexibility. Based on this approach, we describe here an effective solution for the problem of sequential decision making, represented as a fixed time game: a player takes sequential actions in a changing noisy environment so as to maximize a cumulative reward. As we predict and verify in computer simulations, noise plays an important role in optimizing the gain. PMID:18452000

  16. Design and protocol of a randomized multiple behavior change trial: Make Better Choices 2 (MBC2).

    PubMed

    Pellegrini, Christine A; Steglitz, Jeremy; Johnston, Winter; Warnick, Jennifer; Adams, Tiara; McFadden, H G; Siddique, Juned; Hedeker, Donald; Spring, Bonnie

    2015-03-01

    Suboptimal diet and inactive lifestyle are among the most prevalent preventable causes of premature death. Interventions that target multiple behaviors are potentially efficient; however, the optimal way to initiate and maintain multiple health behavior changes is unknown. The Make Better Choices 2 (MBC2) trial aims to examine whether sustained healthful diet and activity change are best achieved by targeting diet and activity behaviors simultaneously or sequentially. Study design: approximately 250 inactive adults with a poor-quality diet will be randomized to 3 conditions examining the best way to prescribe healthy diet and activity change. The 3 intervention conditions prescribe: 1) an increase in fruit and vegetable consumption (F/V+), decrease in sedentary leisure screen time (Sed-), and increase in physical activity (PA+) simultaneously (Simultaneous); 2) F/V+ and Sed- first, and then sequentially add PA+ (Sequential); or 3) Stress Management Control that addresses stress, relaxation, and sleep. All participants will receive a smartphone application to self-monitor behaviors and regular coaching calls to help facilitate behavior change during the 9-month intervention. Healthy lifestyle change in fruit/vegetable and saturated fat intakes, sedentary leisure screen time, and physical activity will be assessed at 3, 6, and 9 months. MBC2 is a randomized m-Health intervention examining methods to maximize initiation and maintenance of multiple healthful behavior changes. Results from this trial will provide insight about an optimal technology-supported approach to promote improvement in diet and physical activity. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sun-spot process are demonstrated.

  18. Entanglement Concentration for Arbitrary Four-Photon Cluster State Assisted with Single Photons

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng-Yang; Cai, Chun; Liu, Jiong; Zhou, Lan; Sheng, Yu-Bo

    2016-02-01

    We present an entanglement concentration protocol (ECP) to concentrate an arbitrary four-photon less-entangled cluster state into a maximally entangled cluster state. Different from other ECPs for cluster states, we exploit only single photons as the auxiliary, which makes this protocol feasible and economical. In our ECP, the concentrated maximally entangled state can be retained for further application and the discarded state can be reused for a higher success probability. This ECP works with the help of cross-Kerr nonlinearity and conventional photon detectors, and may be useful in future one-way quantum computation.

  19. Cost-Effective Strategies for Rural Community Outreach, Hawaii, 2010–2011

    PubMed Central

    Barbato, Anna; Holuby, R. Scott; Ciarleglio, Anita E.; Taniguchi, Ronald

    2014-01-01

    Three strategies designed to maximize attendance at educational sessions on chronic disease medication safety in older adults in rural areas were implemented sequentially and compared for cost-effectiveness: 1) existing community groups and events, 2) formal advertisement, and 3) employer-based outreach. Cost-effectiveness was measured by comparing overall cost per attendee recruited and number of attendees per event. The overall cost per attendee was substantially higher for the formal advertising strategy, which produced the lowest number of attendees per event. Leveraging existing community events and employers in rural areas was more cost-effective than formal advertisement for recruiting rural community members. PMID:25496555

  20. Cost-effective strategies for rural community outreach, Hawaii, 2010-2011.

    PubMed

    Pellegrin, Karen L; Barbato, Anna; Holuby, R Scott; Ciarleglio, Anita E; Taniguchi, Ronald

    2014-12-11

    Three strategies designed to maximize attendance at educational sessions on chronic disease medication safety in older adults in rural areas were implemented sequentially and compared for cost-effectiveness: 1) existing community groups and events, 2) formal advertisement, and 3) employer-based outreach. Cost-effectiveness was measured by comparing overall cost per attendee recruited and number of attendees per event. The overall cost per attendee was substantially higher for the formal advertising strategy, which produced the lowest number of attendees per event. Leveraging existing community events and employers in rural areas was more cost-effective than formal advertisement for recruiting rural community members.

  1. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. 
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
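
    A minimal sketch of the particle-based idea, assuming a far simpler model than the dissertation's: a bootstrap particle filter tracks a hidden, steadily growing "damage" state that is observed only at sparse, noisy inspections. The growth rates, noise levels, and inspection stream below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(2)

      def particle_filter(observations, n_particles=5_000,
                          growth_mean=0.05, growth_sd=0.02, obs_sd=0.2):
          particles = np.full(n_particles, 1.0)     # initial hidden state, e.g. log crack size
          estimates = []
          for obs in observations:
              # propagate every particle through the assumed growth model (one flight/period)
              particles = particles + rng.normal(growth_mean, growth_sd, n_particles)
              if obs is not None:                   # an inspection happened this period
                  log_w = -0.5 * ((obs - particles) / obs_sd) ** 2   # Gaussian measurement likelihood
                  w = np.exp(log_w - log_w.max())
                  w /= w.sum()
                  idx = rng.choice(n_particles, n_particles, p=w)    # resample to avoid degeneracy
                  particles = particles[idx]
              estimates.append(float(particles.mean()))
          return estimates

      inspections = [None, None, 1.18, None, None, None, 1.32, None]   # sparse noisy measurements
      print([round(e, 3) for e in particle_filter(inspections)])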

  2. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.
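
    For orientation, the textbook finite-support version of the problem (not the Feynman-diagram expansion of the paper): maximizing entropy subject to a mean constraint yields a Gibbs-type distribution with p_i proportional to exp(lambda * x_i), where the multiplier is chosen numerically to satisfy the constraint. A minimal sketch, assuming scipy is available:

      import numpy as np
      from scipy.optimize import brentq

      def maxent_with_mean(values, target_mean):
          # Maximum-entropy distribution on a finite set subject to a mean constraint
          values = np.asarray(values, dtype=float)

          def mean_minus_target(lam):
              w = np.exp(lam * values)
              return float((values * w).sum() / w.sum()) - target_mean

          lam = brentq(mean_minus_target, -50, 50)   # solve for the Lagrange multiplier
          w = np.exp(lam * values)
          return w / w.sum()

      # Classic example: a die whose average roll is constrained to equal 4.5
      p = maxent_with_mean([1, 2, 3, 4, 5, 6], 4.5)
      print(np.round(p, 4), round(float((np.arange(1, 7) * p).sum()), 3))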

  3. LEVEL AND EXTENT OF MERCURY CONTAMINATION IN OREGON, USA, LOTIC FISH

    EPA Science Inventory

    Because of growing concern with widespread mercury contamination of fish tissue, we sampled 154 streams and rivers throughout Oregon using a probability design. To maximize the sample size we took samples of small and large fish, where possible, from wadeable streams and boatable...

  4. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lin; Dai, Zhenxue; Gong, Huili

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.

  5. Which patients do I treat? An experimental study with economists and physicians

    PubMed Central

    2012-01-01

    This experiment investigates decisions made by prospective economists and physicians in an allocation problem which can be framed either medically or neutrally. The potential recipients differ with respect to their minimum needs as well as to how much they benefit from a treatment. We classify the allocators as either 'selfish', 'Rawlsian', or 'maximizing the number of recipients'. Economists tend to maximize their own payoff, whereas the physicians' choices are more in line with maximizing the number of recipients and with Rawlsianism. Regarding the framing, we observe that professional norms surface more clearly in familiar settings. Finally, we scrutinize how the probability of being served and the allocated quantity depend on a recipient's characteristics as well as on the allocator type. JEL Classification: A13, I19, C91, C72 PMID:22827912

  6. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
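
    As a minimal one-dimensional illustration of sequential simulation from a facies transition-probability model (the matrix and facies names below are hypothetical; the study itself works with lag-dependent, three-dimensional transition models):

      import numpy as np

      rng = np.random.default_rng(3)

      facies = ["gravel", "sand", "clay"]
      # Hypothetical cell-to-cell transition-probability matrix (rows: current facies)
      T = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.5, 0.3],
                    [0.1, 0.3, 0.6]])

      def simulate_column(n_cells, start=0):
          # Draw a vertical column of facies sequentially from the Markov transition model
          column = [start]
          for _ in range(n_cells - 1):
              column.append(rng.choice(3, p=T[column[-1]]))
          return [facies[i] for i in column]

      col = simulate_column(30)
      print(col)
      # Long-run proportions approximate the volumetric proportions implied by T
      print({f: col.count(f) / len(col) for f in facies})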

  7. Situation models and memory: the effects of temporal and causal information on recall sequence.

    PubMed

    Brownstein, Aaron L; Read, Stephen J

    2007-10-01

    Participants watched an episode of the television show Cheers on video and then reported free recall. Recall sequence followed the sequence of events in the story; if one concept was observed immediately after another, it was recalled immediately after it. We also made a causal network of the show's story and found that recall sequence followed causal links; effects were recalled immediately after their causes. Recall sequence was more likely to follow causal links than temporal sequence, and most likely to follow causal links that were temporally sequential. Results were similar at 10-minute and 1-week delayed recall. This is the most direct and detailed evidence reported on sequential effects in recall. The causal network also predicted probability of recall; concepts with more links and concepts on the main causal chain were most likely to be recalled. This extends the causal network model to more complex materials than previous research.

  8. Children's sequential information search is sensitive to environmental probabilities.

    PubMed

    Nelson, Jonathan D; Divjak, Bojana; Gudmundsdottir, Gudny; Martignon, Laura F; Meder, Björn

    2014-01-01

    We investigated 4th-grade children's search strategies on sequential search tasks in which the goal is to identify an unknown target object by asking yes-no questions about its features. We used exhaustive search to identify the most efficient question strategies and evaluated the usefulness of children's questions accordingly. Results show that children have good intuitions regarding questions' usefulness and search adaptively, relative to the statistical structure of the task environment. Search was especially efficient in a task environment that was representative of real-world experiences. This suggests that children may use their knowledge of real-world environmental statistics to guide their search behavior. We also compared different related search tasks. We found positive transfer effects from first doing a number search task on a later person search task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
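
    For orientation, the quantity that (generalized) subset simulation estimates efficiently is a probabilistic constraint of the form P[g(X) < 0]. A crude Monte Carlo sketch with a hypothetical limit-state function is shown below; subset simulation becomes necessary when this probability is too small for plain sampling.

      import numpy as np

      rng = np.random.default_rng(4)

      def limit_state(x1, x2):
          # Hypothetical limit-state function g; failure is the event g < 0
          return x1 ** 2 + 8.0 * x2 - 25.0

      def failure_probability(n_samples=1_000_000, mu=(5.0, 1.0), sd=(0.6, 0.4)):
          # Crude Monte Carlo estimate of the probabilistic constraint P[g(X) < 0]
          x1 = rng.normal(mu[0], sd[0], n_samples)
          x2 = rng.normal(mu[1], sd[1], n_samples)
          return float(np.mean(limit_state(x1, x2) < 0.0))

      print(failure_probability())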

  10. Quantification, Prediction, and the Online Impact of Sentence Truth-Value: Evidence From Event-Related Potentials

    PubMed Central

    2015-01-01

    Do negative quantifiers like “few” reduce people’s ability to rapidly evaluate incoming language with respect to world knowledge? Previous research has addressed this question by examining whether online measures of quantifier comprehension match the “final” interpretation reflected in verification judgments. However, these studies confounded quantifier valence with its impact on the unfolding expectations for upcoming words, yielding mixed results. In the current event-related potentials study, participants read negative and positive quantifier sentences matched on cloze probability and on truth-value (e.g., “Most/Few gardeners plant their flowers during the spring/winter for best results”). Regardless of whether participants explicitly verified the sentences or not, true-positive quantifier sentences elicited reduced N400s compared with false-positive quantifier sentences, reflecting the facilitated semantic retrieval of words that render a sentence true. No such facilitation was seen in negative quantifier sentences. However, mixed-effects model analyses (with cloze value and truth-value as continuous predictors) revealed that decreasing cloze values were associated with an interaction pattern between truth-value and quantifier, whereas increasing cloze values were associated with more similar truth-value effects regardless of quantifier. Quantifier sentences are thus understood neither always in 2 sequential stages, nor always in a partial-incremental fashion, nor always in a maximally incremental fashion. Instead, and in accordance with prediction-based views of sentence comprehension, quantifier sentence comprehension depends on incorporation of quantifier meaning into an online, knowledge-based prediction for upcoming words. Fully incremental quantifier interpretation occurs when quantifiers are incorporated into sufficiently strong online predictions for upcoming words. PMID:26375784

  11. The effect of isolated labrum resection on shoulder stability.

    PubMed

    Pouliart, Nicole; Gagey, Olivier

    2006-03-01

    The present study was initiated to determine whether glenohumeral instability and dislocation can result from isolated lesions of the glenoid labrum in an arthroscopic cadaver model. Adjacent combinations of four zones of the labrum (superior, anterosuperior, anteroinferior and inferior) were sequentially removed with a motorised shaver, taking great care to leave the capsule intact in 24 cadaver shoulders. Stability was tested before and after inserting the scope and after each resection step. Inferior stability was examined by performing an inferior drawer test. Anterior stability was evaluated with an anteroposterior drawer test in 0 degrees of abduction and with a load-and-shift test in external rotation and 90 degrees abduction. Labral resection of all four zones maximally resulted in a grade 1 inferior instability (<10 mm inferior translation). When two adjacent labral zones were resected, a grade 2 anterior drawer (>10 mm anterior but no medial translation) was seen in 17% of the specimens. This was seen in one more specimen after the addition of a third zone. There were no differences in the stability of the load-and-shift test after any amount of labral resection. Total labral debridement increased inferior and anterior translation, but did not allow the humeral head to dislocate. The degree of stability in the cocked-arm position, which is the most prone to dislocation, is not altered. In patients, isolated labral tears, that is, without evidence of capsuloligamentous damage, can probably be safely debrided without risking glenohumeral instability to the point of dislocation. Nevertheless, anterior translation may significantly increase when two or more zones are resected.

  12. Detection of nuclear resonance signals: modification of the receiver operating characteristics using feedback.

    PubMed

    Blauch, A J; Schiano, J L; Ginsberg, M D

    2000-06-01

    The performance of a nuclear resonance detection system can be quantified using binary detection theory. Within this framework, signal averaging increases the probability of a correct detection and decreases the probability of a false alarm by reducing the variance of the noise in the average signal. In conjunction with signal averaging, we propose another method based on feedback control concepts that further improves detection performance. By maximizing the nuclear resonance signal amplitude, feedback raises the probability of correct detection. Furthermore, information generated by the feedback algorithm can be used to reduce the probability of false alarm. We discuss the advantages afforded by feedback that cannot be obtained using signal averaging. As an example, we show how this method is applicable to the detection of explosives using nuclear quadrupole resonance. Copyright 2000 Academic Press.
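
    A minimal sketch of the signal-averaging part of the argument, assuming scipy is available: averaging N acquisitions shrinks the noise standard deviation by sqrt(N), which raises the detection probability at a fixed false-alarm rate. The feedback-based amplitude maximization proposed in the paper is not modeled here, and all parameters are hypothetical.

      import numpy as np
      from scipy.stats import norm

      def detection_performance(single_shot_snr, n_averages, threshold_sigma):
          # Detection of a known-amplitude signal in Gaussian noise after averaging;
          # the threshold is expressed in units of the averaged noise sigma.
          effective_snr = single_shot_snr * np.sqrt(n_averages)
          p_false_alarm = 1.0 - norm.cdf(threshold_sigma)
          p_detection = 1.0 - norm.cdf(threshold_sigma - effective_snr)
          return p_false_alarm, p_detection

      for n in (1, 4, 16, 64):
          pfa, pd = detection_performance(single_shot_snr=0.5, n_averages=n, threshold_sigma=3.0)
          print(n, round(pfa, 5), round(pd, 3))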

  13. Risk Assessment of Pollution Emergencies in Water Source Areas of the Hanjiang-to-Weihe River Diversion Project

    NASA Astrophysics Data System (ADS)

    Liu, Luyao; Feng, Minquan

    2018-03-01

    [Objective] This study quantitatively evaluated risk probabilities of sudden water pollution accidents under the influence of risk sources, thus providing an important guarantee for risk source identification during water diversion from the Hanjiang River to the Weihe River. [Methods] The research used Bayesian networks to represent the correlation between accidental risk sources. It also adopted the sequential Monte Carlo algorithm to combine water quality simulation with state simulation of risk sources, thereby determining standard-exceeding probabilities of sudden water pollution accidents. [Results] When the upstream inflow was 138.15 m3/s and the average accident duration was 48 h, the probabilities were 0.0416 and 0.0056, respectively. When the upstream inflow was 55.29 m3/s and the average accident duration was 48 h, the probabilities were 0.0225 and 0.0028, respectively. [Conclusions] The research conducted a risk assessment on sudden water pollution accidents, thereby providing an important guarantee for the smooth implementation, operation, and water quality of the Hanjiang-to-Weihe River Diversion Project.

  14. Age, period, and cohort analysis of regular dental care behavior and edentulism: A marginal approach

    PubMed Central

    2011-01-01

    Background: To analyze the regular dental care behavior and prevalence of edentulism in adult Danes, reported in sequential cross-sectional oral health surveys by the application of a marginal approach to consider the possible clustering effect of birth cohorts. Methods: Data from four sequential cross-sectional surveys of non-institutionalized Danes conducted from 1975-2005 comprising 4330 respondents aged 15+ years in 9 birth cohorts were analyzed. The key study variables were seeking dental care on an annual basis (ADC) and edentulism. For the analysis of ADC, survey year, age, gender, socio-economic status (SES) group, denture-wearing, and school dental care (SDC) during childhood were considered. For the analysis of edentulism, only respondents aged 35+ years were included. Survey year, age, gender, SES group, ADC, and SDC during childhood were considered as the independent factors. To take into account the clustering effect of birth cohorts, marginal logistic regressions with an independent correlation structure in generalized estimating equations (GEE) were carried out, with PROC GENMOD in SAS software. Results: The overall proportion of people seeking ADC increased from 58.8% in 1975 to 86.7% in 2005, while for respondents aged 35 years or older, the overall prevalence of edentulism (35+ years) decreased from 36.4% in 1975 to 5.0% in 2005. Females, respondents in the higher SES group, in more recent survey years, with no denture, and receiving SDC in all grades during childhood were associated with higher probability of seeking ADC regularly (P < 0.05). The interaction of SDC and age (P < 0.0001) was significant. The probabilities of seeking ADC were even higher among subjects with SDC in all grades and aged 45 years or older. Females, older age group, respondents in earlier survey years, not seeking ADC, lower SES group, and not receiving SDC in all grades were associated with higher probability of being edentulous (P < 0.05). Conclusions: With the use of GEE, the potential clustering effect of birth cohorts in sequential cross-sectional oral health survey data could be appropriately considered. The success of Danish dental health policy was demonstrated by a continued increase of regular dental visiting habits and tooth retention in adults because school dental care was provided to Danes in their childhood. PMID:21410991
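
    A minimal sketch of a marginal logistic regression fitted by GEE with an independence working correlation, clustered on birth cohort, on synthetic stand-in data (using Python's statsmodels instead of PROC GENMOD; the variable names and effect sizes are hypothetical):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      n = 2000
      df = pd.DataFrame({
          "cohort": rng.integers(0, 9, n),        # birth cohort used as the cluster variable
          "female": rng.integers(0, 2, n),
          "high_ses": rng.integers(0, 2, n),
      })
      logit = -0.5 + 0.4 * df["female"] + 0.6 * df["high_ses"] + 0.05 * df["cohort"]
      df["adc"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # annual dental care (0/1)

      # Marginal logistic regression, independence working correlation, clustered on cohort
      model = smf.gee("adc ~ female + high_ses", "cohort", data=df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Independence())
      print(model.fit().summary())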

  15. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and don't fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
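
    A minimal sketch of the kind of conjugate count model described above, assuming scipy is available: a Gamma-Poisson model for one node's (or edge's) counts whose negative-binomial posterior predictive flags outlying new counts. The prior and the count stream are hypothetical.

      from scipy.stats import nbinom

      def predictive_tail_prob(new_count, prior_shape, prior_rate, past_counts):
          # Gamma(shape, rate) prior on the Poisson rate; the posterior after past_counts is
          # Gamma(shape + sum, rate + n), and the posterior predictive for the next count is
          # negative binomial with n = shape and p = rate / (rate + 1).
          shape = prior_shape + sum(past_counts)
          rate = prior_rate + len(past_counts)
          return float(nbinom.sf(new_count - 1, shape, rate / (rate + 1.0)))   # P(count >= new_count)

      history = [3, 5, 2, 4, 3, 6, 4, 5]            # hypothetical hourly counts on one edge
      for candidate in (7, 12, 25):
          # a tiny upper-tail probability marks the new count as potentially anomalous
          print(candidate, predictive_tail_prob(candidate, 1.0, 1.0, history))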

  16. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method called square-root cubature Kalman smoother (SCKS) for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.

  17. Ultrathin Coating of Confined Pt Nanocatalysts by Atomic Layer Deposition for Enhanced Catalytic Performance in Hydrogenation Reactions.

    PubMed

    Wang, Meihua; Gao, Zhe; Zhang, Bin; Yang, Huimin; Qiao, Yan; Chen, Shuai; Ge, Huibin; Zhang, Jiankang; Qin, Yong

    2016-06-13

    Metal-support interfaces play a prominent role in heterogeneous catalysis. However, tailoring the metal-support interfaces to realize full utilization remains a major challenge. In this work, we propose a graceful strategy to maximize the metal-oxide interfaces by coating confined nanoparticles with an ultrathin oxide layer. This is achieved by sequential deposition of ultrathin Al2O3 coats, Pt, and a thick Al2O3 layer on carbon nanocoil templates by atomic layer deposition (ALD), followed by removal of the templates. Compared with the Pt catalysts confined in Al2O3 nanotubes without the ultrathin coats, the ultrathin coated samples have larger Pt-Al2O3 interfaces. The maximized interfaces significantly improve the activity and the protecting Al2O3 nanotubes retain the stability for hydrogenation reactions of 4-nitrophenol. We believe that applying ALD ultrathin coats on confined catalysts is a promising way to achieve enhanced performance for other catalysts. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
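
    For orientation, the core multiplicative update shared by ML-EM and OS-EM (OS-EM simply cycles the same update over ordered subsets of the projections). The tiny synthetic system below is illustrative only and is unrelated to the dental CT data of the study.

      import numpy as np

      def mlem(A, y, n_iters=50):
          # ML-EM update: x <- x / (A^T 1) * A^T( y / (A x) ), which preserves non-negativity
          x = np.ones(A.shape[1])
          sensitivity = A.T @ np.ones(A.shape[0])
          for _ in range(n_iters):
              ratio = y / np.clip(A @ x, 1e-12, None)
              x *= (A.T @ ratio) / np.clip(sensitivity, 1e-12, None)
          return x

      rng = np.random.default_rng(6)
      A = rng.random((6, 4))                      # 6 "detector bins", 4 "pixels"
      x_true = np.array([1.0, 0.2, 3.0, 0.5])
      y = rng.poisson(A @ x_true * 50) / 50.0     # noisy projections
      print(np.round(mlem(A, y), 3), x_true)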

  19. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    PubMed

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.

  20. Making Better Use of Bandwidth: Data Compression and Network Management Technologies

    DTIC Science & Technology

    2005-01-01

    ...data, the compression would not be a success. A key feature of the Lempel-Ziv family of algorithms is that the... citeseer.nj.nec.com/yu02motion.html. Ziv, J., and A. Lempel, "A Universal Algorithm for Sequential Data Compression," IEEE Transactions on Information Theory, Vol. 23, 1977, pp. 337–342. ...probability models – Lempel-Ziv – Prediction by partial matching. The central component of a lossless compression algorithm
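
    Since the fragment cites the Lempel-Ziv family of dictionary coders, a minimal LZ78-style encoder is sketched below to show the core idea of building a phrase dictionary on the fly; it is a generic illustration, not the specific scheme evaluated in the report.

    def lz78_encode(data: str):
        """Return a list of (dictionary index, next char) pairs (LZ78-style)."""
        dictionary, phrase, out = {}, "", []
        for ch in data:
            if phrase + ch in dictionary:
                phrase += ch                      # extend the current match
            else:
                out.append((dictionary.get(phrase, 0), ch))
                dictionary[phrase + ch] = len(dictionary) + 1
                phrase = ""
        if phrase:
            out.append((dictionary[phrase], ""))
        return out

    print(lz78_encode("abababababa"))  # repetitive data compresses to a few pairs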

  1. Attractors in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Alexandre A. P.

    2017-10-01

    In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).

  2. Attractors in complex networks.

    PubMed

    Rodrigues, Alexandre A P

    2017-10-01

    In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).

  3. External versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.

    DTIC Science & Technology

    1983-06-01

    ...tions. Linda is a teacher in elementary school. Linda works in a bookstore and takes Yoga classes. Linda is active in the feminist movement. (F) Linda... sophisticated group consisted of PhD students in the decision science program of the Stanford Business School, all with several advanced courses in... mind by seemingly inconsequential cues. There is a contrast worthy of note between the effectiveness of extensional cues in the health-survey

  4. Photocatalytic Conversion of Nitrobenzene to Aniline through Sequential Proton-Coupled One-Electron Transfers from a Cadmium Sulfide Quantum Dot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Stephen C.; Bettis Homan, Stephanie; Weiss, Emily A.

    2016-01-28

    This paper describes the use of cadmium sulfide quantum dots (CdS QDs) as visible-light photocatalysts for the reduction of nitrobenzene to aniline through six sequential photoinduced, proton-coupled electron transfers. At pH 3.6–4.3, the internal quantum yield of photons-to-reducing electrons is 37.1% over 54 h of illumination, with no apparent decrease in catalyst activity. Monitoring of the QD exciton by transient absorption reveals that, for each step in the catalytic cycle, the sacrificial reductant, 3-mercaptopropionic acid, scavenges the excitonic hole in ~5 ps to form QD•–; electron transfer to nitrobenzene or the intermediates nitrosobenzene and phenylhydroxylamine then occurs on the nanosecond time scale. The rate constants for the single-electron transfer reactions are correlated with the driving forces for the corresponding proton-coupled electron transfers. This result suggests, but does not prove, that electron transfer, not proton transfer, is rate-limiting for these reactions. Nuclear magnetic resonance analysis of the QD–molecule systems shows that the photoproduct aniline, left unprotonated, serves as a poison for the QD catalyst by adsorbing to its surface. Performing the reaction at an acidic pH not only encourages aniline to desorb but also increases the probability of protonated intermediates; the latter effect probably ensures that recruitment of protons is not rate-limiting.

  5. Multitarget tracking in cluttered environment for a multistatic passive radar system under the DAB/DVB network

    NASA Astrophysics Data System (ADS)

    Shi, Yi Fang; Park, Seung Hyo; Song, Taek Lyul

    2017-12-01

    Target tracking using multistatic passive radar in a digital audio/video broadcast (DAB/DVB) network with illuminators of opportunity faces two main challenges. The first is that, in addition to the conventional association ambiguity between measurements and targets, the measurement-to-illuminator association ambiguity must be resolved, which introduces a significantly complex three-dimensional (3-D) data association problem among targets, measurements, and illuminators; this arises because all illuminators transmit signals at the same carrier frequency, so signals transmitted by different illuminators but reflected via the same target become indistinguishable. The second is that only bistatic range and range-rate measurements are available, while angle information is unavailable or of very poor quality. In this paper, the authors propose a new target tracking algorithm formulated directly in 3-D Cartesian coordinates with the capability of track management using the probability of target existence as a track quality measure. The proposed algorithm, termed sequential processing-joint integrated probabilistic data association (SP-JIPDA), applies a modified sequential processing technique to resolve the additional association ambiguity between measurements and illuminators. The SP-JIPDA algorithm sequentially operates the JIPDA tracker to update each track for each illuminator with all the measurements in the common measurement set at each time. For a fair comparison, the existing modified joint probabilistic data association (MJPDA) algorithm, which addresses the 3-D data association problem via "supertargets" using gate grouping and provides tracks directly in 3-D Cartesian coordinates, is enhanced by incorporating the probability of target existence as an effective track quality measure for track management. Both algorithms handle nonlinear observations using extended Kalman filtering. A simulation study verifies the superiority of the proposed SP-JIPDA algorithm over the MJIPDA in this multistatic passive radar system.

  6. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
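
    A minimal Euler-Maruyama simulation of a noisy leaky integrate-and-fire neuron with a spike-triggered adaptation current, in the spirit of the model analyzed here, is sketched below; all parameter values (threshold, reset, adaptation jump and time scale, noise intensity) are illustrative assumptions rather than those of the paper.

    import numpy as np

    def adaptive_lif(T=10.0, dt=1e-4, mu=1.5, D=0.01, tau_a=0.5, delta_a=0.3):
        """Simulate dv/dt = mu - v - a + noise; a decays with tau_a and jumps by delta_a at each spike."""
        n = int(T / dt)
        v, a = 0.0, 0.0
        vs, spikes = np.empty(n), []
        for i in range(n):
            v += dt * (mu - v - a) + np.sqrt(2 * D * dt) * np.random.randn()
            a += dt * (-a / tau_a)
            if v >= 1.0:                 # threshold crossing
                spikes.append(i * dt)
                v = 0.0                  # reset
                a += delta_a             # spike-triggered adaptation
            vs[i] = v
        return vs, spikes

    vs, spikes = adaptive_lif()
    print(f"{len(spikes)} spikes, membrane potential variance {vs.var():.3f}")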

  7. Opioid receptor mediated anticonvulsant effect of pentazocine.

    PubMed

    Khanna, N; Khosla, R; Kohli, J

    1998-01-01

    Intraperitoneal (i.p.) administration of (+/-) pentazocine (10, 30 and 50 mg/kg), a sigma opioid agonist, resulted in a dose-dependent anticonvulsant action against maximal electroshock seizures in mice. This anticonvulsant effect of pentazocine was not antagonized by either dose of naloxone (1 and 10 mg/kg), suggesting that its anticonvulsant action is probably mediated by sigma opiate binding sites. Its anticonvulsant effect was potentiated by both of the anticonvulsant drugs tested, viz. diazepam and diphenylhydantoin. Morphine, a mu opioid agonist, on the other hand, failed to protect the animals against maximal electroshock seizures when given in doses of 10-40 mg/kg body wt.

  8. Two-step entanglement concentration for arbitrary electronic cluster state

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng-Yang; Liu, Jiong; Zhou, Lan; Sheng, Yu-Bo

    2013-12-01

    We present an efficient protocol for concentrating an arbitrary four-electron less-entangled cluster state into a maximally entangled cluster state. As a two-step entanglement concentration protocol (ECP), it requires only one pair of less-entangled cluster states, which makes this ECP more economical. With the help of electronic polarization beam splitters (PBS) and charge detection, the whole concentration process is essentially a quantum nondemolition (QND) measurement. Therefore, the concentrated maximally entangled state can be retained for further application. Moreover, the discarded terms of some traditional ECPs can be reused to obtain a high success probability. The protocol is feasible and useful in current one-way quantum computation.

  9. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
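
    To make the idea of an unconditional test concrete, the sketch below computes a p-value for McNemar's statistic maximized over the nuisance discordance probability on a grid; it illustrates only the maximization ("M") step for matched binary pairs and is a simplification of, not a reproduction of, the E+M procedure described here.

    import numpy as np
    from scipy.stats import binom

    def mcnemar_stat(b, c):
        """Signed McNemar statistic; 0 when there are no discordant pairs."""
        d = b + c
        return 0.0 if d == 0 else (b - c) / np.sqrt(d)

    def maximized_pvalue(b_obs, c_obs, n, grid=np.linspace(0.001, 0.999, 199)):
        """Unconditional p-value for H0: p_b = p_c, maximized over the
        nuisance discordance probability psi (a simplified 'M' step only)."""
        t_obs = abs(mcnemar_stat(b_obs, c_obs))
        p_max = 0.0
        for psi in grid:
            p = 0.0
            for d in range(n + 1):                   # number of discordant pairs
                pd = binom.pmf(d, n, psi)
                if pd < 1e-12:
                    continue
                bs = np.arange(d + 1)
                ts = np.abs((2 * bs - d) / np.sqrt(max(d, 1)))
                p += pd * binom.pmf(bs, d, 0.5)[ts >= t_obs - 1e-9].sum()
            p_max = max(p_max, p)
        return p_max

    print(maximized_pvalue(b_obs=8, c_obs=2, n=40))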

  10. Optimum Sensors Integration for Multi-Sensor Multi-Target Environment for Ballistic Missile Defense Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Barhen, Jacob; Glover, Charles Wayne

    2012-01-01

    Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.

  11. Multi-arm group sequential designs with a simultaneous stopping rule.

    PubMed

    Urach, S; Posch, M

    2016-12-30

    Multi-arm group sequential clinical trials are efficient designs for comparing multiple treatments to a control. They allow testing for treatment effects already at interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: we consider simultaneous stopping, where the whole trial is stopped as soon as the null hypothesis of no treatment effect can be rejected for any of the arms, and separate stopping, where recruitment is stopped only to arms for which a significant treatment effect could be demonstrated, while the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries to maximize the power or average sample number, and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
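
    The sketch below shows how a constant (Pocock-type) group sequential boundary can be calibrated by Monte Carlo so that the maximum of the interim Z-statistics exceeds the boundary with probability alpha under the null; it illustrates boundary calibration generically for equally spaced looks and does not implement the improved simultaneous-stopping boundaries derived in the paper.

    import numpy as np

    def pocock_constant(K=3, alpha=0.025, n_sim=200_000, seed=0):
        """Monte Carlo calibration of a constant boundary c such that
        P(max_k Z_k >= c) = alpha under H0, for K equally spaced looks."""
        rng = np.random.default_rng(seed)
        # Z_k are standardized partial sums of the same data stream under H0.
        increments = rng.standard_normal((n_sim, K))
        partial = np.cumsum(increments, axis=1)
        info = np.sqrt(np.arange(1, K + 1))
        z_max = (partial / info).max(axis=1)
        return np.quantile(z_max, 1 - alpha)

    print(pocock_constant())   # roughly 2.29 for K=3, one-sided alpha=0.025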

  12. Organic nanoparticle systems for spatiotemporal control of multimodal chemotherapy

    PubMed Central

    Meng, Fanfei; Han, Ning; Yeo, Yoon

    2017-01-01

    Introduction: Chemotherapeutic drugs are used in combination to target multiple mechanisms involved in cancer cell survival and proliferation. Carriers are developed to deliver drug combinations to common target tissues in optimal ratios and desirable sequences. Nanoparticles (NP) have been a popular choice for this purpose due to their ability to increase the circulation half-life and tumor accumulation of a drug. Areas covered: We review organic NP carriers based on polymers, proteins, peptides, and lipids for simultaneous delivery of multiple anticancer drugs, drug/sensitizer combinations, drug/photodynamic- or photothermal-therapy combinations, and drug/gene therapeutics, with examples from the past three years. Sequential delivery of drug combinations, based on either sequential administration or built-in release control, is introduced with an emphasis on the mechanistic understanding of such control. Expert opinion: Recent studies demonstrate how a drug carrier can contribute to co-localizing drug combinations in optimal ratios and dosing sequences to maximize the synergistic effects. We identify several areas for improvement in future research, including the choice of drug combinations, circulation stability of carriers, spatiotemporal control of drug release, and the evaluation and clinical translation of combination delivery. PMID:27476442

  13. A sequential mechanism for clathrin cage disassembly by 70-kDa heat-shock cognate protein (Hsc70) and auxilin

    PubMed Central

    Rothnie, Alice; Clarke, Anthony R.; Kuzmic, Petr; Cameron, Angus; Smith, Corinne J.

    2011-01-01

    An essential stage in endocytic coated vesicle recycling is the dissociation of clathrin from the vesicle coat by the molecular chaperone, 70-kDa heat-shock cognate protein (Hsc70), and the J-domain-containing protein, auxilin, in an ATP-dependent process. We present a detailed mechanistic analysis of clathrin disassembly catalyzed by Hsc70 and auxilin, using loss of perpendicular light scattering to monitor the process. We report that a single auxilin per clathrin triskelion is required for maximal rate of disassembly, that ATP is hydrolyzed at the same rate that disassembly occurs, and that three ATP molecules are hydrolyzed per clathrin triskelion released. Stopped-flow measurements revealed a lag phase in which the scattering intensity increased owing to association of Hsc70 with clathrin cages followed by serial rounds of ATP hydrolysis prior to triskelion removal. Global fit of stopped-flow data to several physically plausible mechanisms showed the best fit to a model in which sequential hydrolysis of three separate ATP molecules is required for the eventual release of a triskelion from the clathrin–auxilin cage. PMID:21482805

  14. Problematizing the concept of the "borderline" group in performance assessments.

    PubMed

    Homer, Matt; Pell, Godfrey; Fuller, Richard

    2017-05-01

    Many standard setting procedures focus on the performance of the "borderline" group, defined through expert judgments by assessors. In performance assessments such as Objective Structured Clinical Examinations (OSCEs), these judgments usually apply at the station level. Using largely descriptive approaches, we analyze the assessment profile of OSCE candidates at the end of a five year undergraduate medical degree program to investigate the consistency of the borderline group across stations. We look specifically at those candidates who are borderline in individual stations, and in the overall assessment. While the borderline group can be clearly defined at the individual station level, our key finding is that the membership of this group varies considerably across stations. These findings pose challenges for some standard setting methods, particularly the borderline group and objective borderline methods. They also suggest that institutions should ensure appropriate conjunctive rules to limit compensation in performance between stations to maximize "diagnostic accuracy". In addition, this work highlights a key benefit of sequential testing formats in OSCEs. In comparison with a traditional, single-test format, sequential models allow assessment of "borderline" candidates across a wider range of content areas with concomitant improvements in pass/fail decision-making.
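
    For reference, the borderline-group standard setting method discussed here reduces, at a single station, to averaging the checklist scores of the candidates the examiners judged borderline; a minimal sketch with made-up scores and judgments follows.

    import statistics

    def borderline_group_cut_score(scores, judgments):
        """Borderline-group standard setting: the cut score is the mean checklist
        score of candidates judged 'borderline' on that station."""
        borderline = [s for s, j in zip(scores, judgments) if j == "borderline"]
        return statistics.mean(borderline)

    scores    = [14, 22, 17, 25, 19, 16, 28, 18]
    judgments = ["fail", "pass", "borderline", "pass",
                 "borderline", "fail", "pass", "borderline"]
    print(borderline_group_cut_score(scores, judgments))   # 18.0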

  15. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: a fermentor, a cell-retention system (tangential microfiltration), and a vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one, the process was represented by a deterministic model with kinetic parameters determined experimentally; in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The two strategies produced very similar solutions, but the strategy using the deterministic model suffered from problems such as lack of convergence and high computational time. The optimization strategy based on the statistical model, which proved robust and fast, is therefore more suitable for the flash fermentation process and is recommended for real-time applications coupling optimization and control.
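
    A minimal sketch of posing such a problem as nonlinear programming and solving it with a sequential quadratic programming routine (SciPy's SLSQP) is given below; the productivity and conversion functions are toy stand-ins for the fermentation model, and the bounds and conversion target are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-ins: decision variables x = (dilution rate, recycle ratio).
    def productivity(x):
        d, r = x
        return d * (10.0 - 4.0 * d) * (1.0 + 0.5 * r)     # butanol productivity (toy)

    def conversion(x):
        d, r = x
        return 0.98 - 0.3 * d + 0.1 * r                   # substrate conversion (toy)

    res = minimize(
        lambda x: -productivity(x),                       # maximize by minimizing the negative
        x0=[0.5, 0.2],
        method="SLSQP",                                   # sequential quadratic programming
        bounds=[(0.05, 2.0), (0.0, 1.0)],
        constraints=[{"type": "ineq", "fun": lambda x: conversion(x) - 0.90}],
    )
    print(res.x, -res.fun)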

  16. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    PubMed

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards, under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  17. Ingestion of High Molecular Weight Carbohydrate Enhances Subsequent Repeated Maximal Power: A Randomized Controlled Trial

    PubMed Central

    Oliver, Jonathan M.; Almada, Anthony L.; Van Eck, Leighsa E.; Shah, Meena; Mitchell, Joel B.; Jones, Margaret T.; Jagim, Andrew R.; Rowlands, David S.

    2016-01-01

    Athletes in sports demanding repeat maximal work outputs frequently train concurrently utilizing sequential bouts of intense endurance and resistance training sessions. On a daily basis, maximal work within subsequent bouts may be limited by muscle glycogen availability. Recently, the ingestion of a unique high molecular weight (HMW) carbohydrate was found to increase glycogen re-synthesis rate and enhance work output during subsequent endurance exercise, relative to low molecular weight (LMW) carbohydrate ingestion. The effect of the HMW carbohydrate, however, on the performance of intense resistance exercise following prolonged-intense endurance training is unknown. Sixteen resistance trained men (23±3 years; 176.7±9.8 cm; 88.2±8.6 kg) participated in a double-blind, placebo-controlled, randomized 3-way crossover design comprising a muscle-glycogen depleting cycling exercise followed by ingestion of placebo (PLA), or 1.2 g•kg•bw-1 of LMW or HMW carbohydrate solution (10%) with blood sampling for 2-h post-ingestion. Thereafter, participants performed 5 sets of 10 maximal explosive repetitions of back squat (75% of 1RM). Compared to PLA, ingestion of HMW (4.9%, 90%CI 3.8%, 5.9%) and LMW (1.9%, 90%CI 0.8%, 3.0%) carbohydrate solutions substantially increased power output during resistance exercise, with the 3.1% (90% CI 4.3, 2.0%) almost certain additional gain in power after HMW-LMW ingestion attributed to higher movement velocity after force kinematic analysis (HMW-LMW 2.5%, 90%CI 1.4, 3.7%). Both carbohydrate solutions increased post-exercise plasma glucose, glucoregulatory and gut hormones compared to PLA, but differences between carbohydrates were unclear; thus, the underlying mechanism remains to be elucidated. Ingestion of a HMW carbohydrate following prolonged intense endurance exercise provides superior benefits to movement velocity and power output during subsequent repeated maximal explosive resistance exercise. This study was registered with clinicaltrials.gov (NCT02778373). PMID:27636206

  18. Ingestion of High Molecular Weight Carbohydrate Enhances Subsequent Repeated Maximal Power: A Randomized Controlled Trial.

    PubMed

    Oliver, Jonathan M; Almada, Anthony L; Van Eck, Leighsa E; Shah, Meena; Mitchell, Joel B; Jones, Margaret T; Jagim, Andrew R; Rowlands, David S

    2016-01-01

    Athletes in sports demanding repeat maximal work outputs frequently train concurrently utilizing sequential bouts of intense endurance and resistance training sessions. On a daily basis, maximal work within subsequent bouts may be limited by muscle glycogen availability. Recently, the ingestion of a unique high molecular weight (HMW) carbohydrate was found to increase glycogen re-synthesis rate and enhance work output during subsequent endurance exercise, relative to low molecular weight (LMW) carbohydrate ingestion. The effect of the HMW carbohydrate, however, on the performance of intense resistance exercise following prolonged-intense endurance training is unknown. Sixteen resistance trained men (23±3 years; 176.7±9.8 cm; 88.2±8.6 kg) participated in a double-blind, placebo-controlled, randomized 3-way crossover design comprising a muscle-glycogen depleting cycling exercise followed by ingestion of placebo (PLA), or 1.2 g•kg•bw-1 of LMW or HMW carbohydrate solution (10%) with blood sampling for 2-h post-ingestion. Thereafter, participants performed 5 sets of 10 maximal explosive repetitions of back squat (75% of 1RM). Compared to PLA, ingestion of HMW (4.9%, 90%CI 3.8%, 5.9%) and LMW (1.9%, 90%CI 0.8%, 3.0%) carbohydrate solutions substantially increased power output during resistance exercise, with the 3.1% (90% CI 4.3, 2.0%) almost certain additional gain in power after HMW-LMW ingestion attributed to higher movement velocity after force kinematic analysis (HMW-LMW 2.5%, 90%CI 1.4, 3.7%). Both carbohydrate solutions increased post-exercise plasma glucose, glucoregulatory and gut hormones compared to PLA, but differences between carbohydrates were unclear; thus, the underlying mechanism remains to be elucidated. Ingestion of a HMW carbohydrate following prolonged intense endurance exercise provides superior benefits to movement velocity and power output during subsequent repeated maximal explosive resistance exercise. This study was registered with clinicaltrials.gov (NCT02778373).

  19. Winning in sequential Parrondo games by players with short-term memory

    NASA Astrophysics Data System (ADS)

    Cheung, K. W.; Ma, H. F.; Wu, D.; Lui, G. C.; Szeto, K. Y.

    2016-05-01

    The original Parrondo game, denoted as AB3, contains two independent games: A and B. The winning or losing of games A and B is defined by the change of one unit of capital. Game A is a losing game if played continuously, with winning probability p = 0.5 − ε, where ε = 0.003. Game B is also losing and has two coins: a good coin with winning probability p_g = 0.75 − ε is used if the player's capital is not divisible by 3, otherwise a bad coin with winning probability p_b = 0.1 − ε is used. The Parrondo paradox refers to the situation where the mixture of games A and B in a sequence leads to winning in the long run. The paradox can be resolved using Markov chain analysis. We extend this setting of the Parrondo game to involve players with one-step memory. The player can win by switching his choice of A or B game in a Parrondo game sequence. If the player knows the identity of the game he plays and the state of his capital, then the player can win maximally. On the other hand, if the player does not know the nature of the game, then he is playing a (C, D) game, where either (C = A, D = B) or (C = B, D = A). For a player with one-step memory playing the AB3 game, he can achieve the highest expected gain with switching probability equal to 3/4 in the (C, D) game sequence. This result has been found first numerically and then proven analytically. Generalization to an AB mod(M) Parrondo game for other integers M has been made for the general domain of parameters p_b < p_A < p_g. We find that for odd M the Parrondo effect does exist. However, for even M, there is no Parrondo effect for two cases: the initial game is A and the initial capital is even, or the initial game is B and the initial capital is odd. There is still a possibility of the Parrondo effect for the other two cases when M is even: the initial game is A and the initial capital is odd, or the initial game is B and the initial capital is even. These observations from numerical experiments can be understood as the factorization of the Markov chains into two distinct cycles. Discussion of these effects on games is also made in the context of feedback control of the Brownian ratchet.
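
    A short simulation of the AB3 game with the stated parameters illustrates the paradox: games A and B each lose on their own, while a mixed sequence can win. The periodic AABB schedule and the random mixture shown are standard illustrative strategies; the one-step-memory switching strategy analyzed in the paper is not implemented here.

    import random

    EPS = 0.003

    def play_A():
        return 1 if random.random() < 0.5 - EPS else -1

    def play_B(capital):
        p = (0.75 - EPS) if capital % 3 != 0 else (0.1 - EPS)
        return 1 if random.random() < p else -1

    def average_gain(strategy, n_games=100_000, seed=1):
        """strategy(t, capital) -> 'A' or 'B'; returns the mean capital gain per game."""
        random.seed(seed)
        capital = 0
        for t in range(n_games):
            capital += play_A() if strategy(t, capital) == "A" else play_B(capital)
        return capital / n_games

    print("A only :", average_gain(lambda t, c: "A"))                  # losing
    print("B only :", average_gain(lambda t, c: "B"))                  # losing
    print("random :", average_gain(lambda t, c: random.choice("AB")))  # typically winning
    print("AABB.. :", average_gain(lambda t, c: "AB"[(t // 2) % 2]))   # typically winning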

  20. Sequential versus simultaneous use of chemotherapy and gonadotropin-releasing hormone agonist (GnRHa) among estrogen receptor (ER)-positive premenopausal breast cancer patients: effects on ovarian function, disease-free survival, and overall survival.

    PubMed

    Zhang, Ying; Ji, Yajie; Li, Jianwei; Lei, Li; Wu, Siyu; Zuo, Wenjia; Jia, Xiaoqing; Wang, Yujie; Mo, Miao; Zhang, Na; Shen, Zhenzhou; Wu, Jiong; Shao, Zhimin; Liu, Guangyu

    2018-04-01

    To investigate ovarian function and therapeutic efficacy among estrogen receptor (ER)-positive, premenopausal breast cancer patients treated with gonadotropin-releasing hormone agonist (GnRHa) and chemotherapy simultaneously or sequentially. This study was a phase 3, open-label, parallel, randomized controlled trial (NCT01712893). Two hundred sixteen premenopausal patients (under 45 years) diagnosed with invasive ER-positive breast cancer were enrolled from July 2009 to May 2013 and randomized at a 1:1 ratio to receive (neo)adjuvant chemotherapy combined with sequential or simultaneous GnRHa treatment. All patients were advised to receive GnRHa for at least 2 years. The primary outcome was the incidence of early menopause, defined as amenorrhea lasting longer than 12 months after the last chemotherapy or GnRHa dose, with postmenopausal or unknown follicle-stimulating hormone and estradiol levels. The menstrual resumption period and survivals were the secondary endpoints. The median follow-up time was 56.9 months (IQR 49.5-72.4 months). One hundred and eight patients were enrolled in each group. Among them, 92 and 78 patients had complete primary endpoint data in the sequential and simultaneous groups, respectively. The rates of early menopause were 22.8% (21/92) in the sequential group and 23.1% (18/78) in the simultaneous group [simultaneous vs. sequential: OR 1.01 (95% CI 0.50-2.08); p = 0.969; age-adjusted OR 1.13; (95% CI 0.54-2.37); p = 0.737]. The median menstruation resumption period was 12.0 (95% CI 9.3-14.7) months and 10.3 (95% CI 8.2-12.4) months for the sequential and simultaneous groups, respectively [HR 0.83 (95% CI 0.59-1.16); p = 0.274; age-adjusted HR 0.90 (95%CI 0.64-1.27); p = 0.567]. No significant differences were evident for disease-free survival (p = 0.290) or overall survival (p = 0.514) between the two groups. For ER-positive premenopausal patients, the sequential use of GnRHa and chemotherapy showed ovarian preservation and survival outcomes that were no worse than simultaneous use. The application of GnRHa can probably be delayed until menstruation resumption after chemotherapy.

  1. Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals

    PubMed Central

    Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E

    2018-01-01

    Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrive. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587

  2. Sequential Monte Carlo tracking of the marginal artery by multiple cue fusion and random forest regression.

    PubMed

    Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M

    2015-01-01

    Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%, p<0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% versus 67.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.

  3. Performance Analysis of Ranging Techniques for the KPLO Mission

    NASA Astrophysics Data System (ADS)

    Park, Sungjoon; Moon, Sangman

    2018-03-01

    In this study, the performance of ranging techniques for the Korea Pathfinder Lunar Orbiter (KPLO) space communication system is investigated. KPLO is the first lunar mission of Korea, and pseudo-noise (PN) ranging will be used to support the mission along with sequential ranging. We compared the performance of both ranging techniques using the criteria of accuracy, acquisition probability, and measurement time. First, we investigated the end-to-end accuracy error of a ranging technique incorporating all sources of errors such as from ground stations and the spacecraft communication system. This study demonstrates that increasing the clock frequency of the ranging system is not required when the dominant factor of accuracy error is independent of the thermal noise of the ranging technique being used in the system. Based on the understanding of ranging accuracy, the measurement time of PN and sequential ranging are further investigated and compared, while both techniques satisfied the accuracy and acquisition requirements. We demonstrated that PN ranging performed better than sequential ranging in the signal-to-noise ratio (SNR) regime where KPLO will be operating, and we found that the T2B (weighted-voting balanced Tausworthe, voting v = 2) code is the best choice among the PN codes available for the KPLO mission.

  4. An Iodine Fluorescence Quenching Clock Reaction

    NASA Astrophysics Data System (ADS)

    Weinberg, Richard B.

    2007-05-01

    A fluorescent clock reaction is described that is based on the principles of the Landolt iodine reaction but uses the potent fluorescence quenching properties of triiodide to abruptly extinguish the ultraviolet fluorescence of optical brighteners present in liquid laundry detergents. The reaction uses easily obtained household products. One variation illustrates the sequential steps and mechanisms of the reaction; other variations maximize the dramatic impact of the demonstration; and a variation that uses liquid detergent in the Briggs Rauscher reaction yields a striking oscillating luminescence. The iodine fluorescence quenching clock reaction can be used in the classroom to explore not only the principles of redox chemistry and reaction kinetics, but also the photophysics of fluorescent pH probes and optical quenching.

  5. Special topical approach to the treatment of acne. Suppression of sweating with aluminum chloride in an anhydrous formulation.

    PubMed

    Hurley, H J; Shelley, W B

    1978-12-01

    A new topical approach to acne treatment--the use of aluminum chloride hexahydrate in anhydrous ethanol (ACAE)--was studied in 141 patients. Using sequential treatment schedules, paired comparison techniques, and various concentrations of ACAE, we established maximal efficacy with minimal local irritation for the 6.25% strength solution. Clinical efficacy and lack of toxicity of this formulation were confirmed by the additional clinical study of 65 patients. The antiperspirant and antibacterial actions of 6.25% ACAE solution were then verified on acne skin areas. It is postulated that the clinical improvement in acne that follows the topical use of ACAE results from one or both of these actions.

  6. Probability Learning: Changes in Behavior Across Time and Development.

    PubMed

    Plate, Rista C; Fulvio, Jacqueline M; Shutts, Kristin; Green, C Shawn; Pollak, Seth D

    2018-01-01

    Individuals track probabilities, such as associations between events in their environments, but less is known about the degree to which experience-within a learning session and over development-influences people's use of incoming probabilistic information to guide behavior in real time. In two experiments, children (4-11 years) and adults searched for rewards hidden in locations with predetermined probabilities. In Experiment 1, children (n = 42) and adults (n = 32) changed strategies to maximize reward receipt over time. However, adults demonstrated greater strategy change efficiency. Making the predetermined probabilities more difficult to learn (Experiment 2) delayed effective strategy change for children (n = 39) and adults (n = 33). Taken together, these data characterize how children and adults alike react flexibly and change behavior according to incoming information. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
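
    The normative contrast behind such tasks can be stated in a few lines: if one location pays off with probability p, always choosing it (maximizing) succeeds with probability p, whereas allocating choices in proportion to the payoff probabilities (matching) succeeds with probability p² + (1 − p)². The sketch below is a generic illustration of that comparison, not the study's analysis code.

    def expected_accuracy(p, strategy):
        """Long-run probability of finding the reward when one location pays off
        with probability p and the other with probability 1 - p."""
        if strategy == "maximize":            # always choose the more likely location
            return max(p, 1 - p)
        if strategy == "match":               # choose each location as often as it pays off
            return p * p + (1 - p) * (1 - p)
        raise ValueError(strategy)

    for p in (0.6, 0.7, 0.8):
        print(p, expected_accuracy(p, "match"), expected_accuracy(p, "maximize"))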

  7. Density profiles of the exclusive queuing process

    NASA Astrophysics Data System (ADS)

    Arita, Chikashi; Schadschneider, Andreas

    2012-12-01

    The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.

  8. Time-course variation of statistics embedded in music: Corpus study on implicit learning and knowledge.

    PubMed

    Daikoku, Tatsuya

    2018-01-01

    Learning and knowledge of transitional probability in sequences like music, called statistical learning and knowledge, are considered implicit processes that occur without intention to learn and without awareness of what one knows. This implicit statistical knowledge can alternatively be expressed via an abstract medium such as musical melody, which suggests this knowledge is reflected in melodies written by a composer. This study investigates how statistics in music vary over a composer's lifetime. Transitional probabilities of highest-pitch sequences in Ludwig van Beethoven's piano sonatas were calculated based on different hierarchical Markov models. Each interval pattern was ordered based on the sonata opus number. The transitional probabilities of sequential patterns that are musically universal gradually decreased, suggesting that time-course variations of statistics in music reflect time-course variations of a composer's statistical knowledge. This study sheds new light on novel methodologies that may be able to evaluate the time-course variation of a composer's implicit knowledge using musical scores.
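
    Estimating such transitional probabilities amounts to counting context-to-next-event transitions in a sequence; a minimal sketch for first- or higher-order Markov models follows, with a made-up interval sequence standing in for the extracted highest-pitch patterns.

    from collections import Counter, defaultdict

    def transition_probabilities(sequence, order=1):
        """First- or higher-order Markov transition probabilities of a sequence."""
        counts = defaultdict(Counter)
        for i in range(len(sequence) - order):
            context = tuple(sequence[i:i + order])
            counts[context][sequence[i + order]] += 1
        return {ctx: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
                for ctx, nxts in counts.items()}

    # Toy highest-pitch interval pattern (semitones); illustrative only.
    melody = [0, 2, 4, 5, 4, 2, 0, 2, 4, 4, 5, 7]
    print(transition_probabilities(melody, order=1))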

  9. Generation of intervention strategy for a genetic regulatory network represented by a family of Markov Chains.

    PubMed

    Berlow, Noah; Pal, Ranadip

    2011-01-01

    Genetic Regulatory Networks (GRNs) are frequently modeled as Markov Chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov Chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov Chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and by utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
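
    The basic object being manipulated, the stationary distribution of a network's Markov chain and its change under an intervention that alters one row of the transition matrix, can be illustrated in a few lines. The sketch below simply recomputes the stationary distribution rather than using the efficient rank-one update techniques referred to in the abstract, and the toy transition matrix is an assumption.

    import numpy as np

    def stationary_distribution(P):
        """Stationary distribution pi of a row-stochastic matrix P (pi P = pi)."""
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmax(np.real(vals))])
        return pi / pi.sum()

    # Toy 3-state network model; an intervention replaces the transitions out of state 0,
    # i.e. a rank-one change to the transition matrix.
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])
    P_intervened = P.copy()
    P_intervened[0] = [0.1, 0.2, 0.7]

    print(stationary_distribution(P))
    print(stationary_distribution(P_intervened))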

  10. Active controls technology to maximize structural efficiency

    NASA Technical Reports Server (NTRS)

    Hoy, J. M.; Arnold, J. M.

    1978-01-01

    The implication of the dependence on active controls technology during the design phase of transport structures is considered. Critical loading conditions are discussed along with probable ways of alleviating these loads. Why fatigue requirements may be critical and can only be partially alleviated is explained. The significance of certain flutter suppression system criteria is examined.

  11. A Probability Based Framework for Testing the Missing Data Mechanism

    ERIC Educational Resources Information Center

    Lin, Johnny Cheng-Han

    2013-01-01

    Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…

  12. Grammars Leak: Modeling How Phonotactic Generalizations Interact within the Grammar

    ERIC Educational Resources Information Center

    Martin, Andrew

    2011-01-01

    I present evidence from Navajo and English that weaker, gradient versions of morpheme-internal phonotactic constraints, such as the ban on geminate consonants in English, hold even across prosodic word boundaries. I argue that these lexical biases are the result of a MAXIMUM ENTROPY phonotactic learning algorithm that maximizes the probability of…

  13. Layered motion segmentation and depth ordering by tracking edges.

    PubMed

    Smith, Paul; Drummond, Tom; Cipolla, Roberto

    2004-04-01

    This paper presents a new Bayesian framework for motion segmentation--dividing a frame from an image sequence into layers representing different moving objects--by tracking edges between frames. Edges are found using the Canny edge detector, and the Expectation-Maximization algorithm is then used to fit motion models to these edges and also to calculate the probabilities of the edges obeying each motion model. The edges are also used to segment the image into regions of similar color. The most likely labeling for these regions is then calculated by using the edge probabilities, in association with a Markov Random Field-style prior. The identification of the relative depth ordering of the different motion layers is also determined, as an integral part of the process. An efficient implementation of this framework is presented for segmenting two motions (foreground and background) using two frames. It is then demonstrated how, by tracking the edges into further frames, the probabilities may be accumulated to provide an even more accurate and robust estimate, and segment an entire sequence. Further extensions are then presented to address the segmentation of more than two motions. Here, a hierarchical method of initializing the Expectation-Maximization algorithm is described, and it is demonstrated that the Minimum Description Length principle may be used to automatically select the best number of motion layers. The results from over 30 sequences (demonstrating both two and three motions) are presented and discussed.

  14. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.

  15. A Novel Wireless Power Transfer-Based Weighed Clustering Cooperative Spectrum Sensing Method for Cognitive Sensor Networks.

    PubMed

    Liu, Xin

    2015-10-30

    In a cognitive sensor network (CSN), the wastage of sensing time and energy is a challenge to cooperative spectrum sensing, when the number of cooperative cognitive nodes (CNs) becomes very large. In this paper, a novel wireless power transfer (WPT)-based weighed clustering cooperative spectrum sensing model is proposed, which divides all the CNs into several clusters, and then selects the most favorable CNs as the cluster heads and allows the common CNs to transfer the received radio frequency (RF) energy of the primary node (PN) to the cluster heads, in order to supply the electrical energy needed for sensing and cooperation. A joint resource optimization is formulated to maximize the spectrum access probability of the CSN, through jointly allocating sensing time and clustering number. According to the resource optimization results, a clustering algorithm is proposed. The simulation results have shown that compared to the traditional model, the cluster heads of the proposed model can achieve more transmission power and there exists optimal sensing time and clustering number to maximize the spectrum access probability.

  16. Inconvenient Truth or Convenient Fiction? Probable Maximum Precipitation and Nonstationarity

    NASA Astrophysics Data System (ADS)

    Nielsen-Gammon, J. W.

    2017-12-01

    According to the inconvenient truth that Probable Maximum Precipitation (PMP) represents a non-deterministic, statistically very rare event, future changes in PMP involve a complex interplay between future frequencies of storm type, storm morphology, and environmental characteristics, many of which are poorly constrained by global climate models. On the other hand, according to the convenient fiction that PMP represents an estimate of the maximum possible precipitation that can occur at a given location, as determined by storm maximization and transposition, the primary climatic driver of PMP change is simply a change in maximum moisture availability. Increases in boundary-layer and total-column moisture have been observed globally, are anticipated from basic physical principles, and are robustly projected to continue by global climate models. Thus, using the same techniques that are used within the PMP storm maximization process itself, future PMP values may be projected. The resulting PMP trend projections are qualitatively consistent with observed trends of extreme rainfall within Texas, suggesting that in this part of the world the inconvenient truth is congruent with the convenient fiction.

  17. Clinical and dosimetric factors of radiation-induced esophageal injury: radiation-induced esophageal toxicity.

    PubMed

    Qiao, Wen-Bo; Zhao, Yan-Hui; Zhao, Yan-Bin; Wang, Rui-Zhi

    2005-05-07

    To analyze the clinical and dosimetric predictive factors for radiation-induced esophageal injury in patients with non-small-cell lung cancer (NSCLC) during three-dimensional conformal radiotherapy (3D-CRT). We retrospectively analyzed 208 consecutive patients (146 men and 62 women) with NSCLC treated with 3D-CRT. The median age of the patients was 64 years (range 35-87 years). The clinical and treatment parameters including gender, age, performance status, sequential chemotherapy, concurrent chemotherapy, presence of carinal or subcarinal lymph nodes, pretreatment weight loss, mean dose to the entire esophagus, maximal point dose to the esophagus, and percentage of volume of esophagus receiving >55 Gy were studied. Clinical and dosimetric factors for radiation-induced acute and late grade 3-5 esophageal injury were analyzed according to Radiation Therapy Oncology Group (RTOG) criteria. Twenty-five (12%) of the two hundred and eight patients developed acute or late grade 3-5 esophageal injury. Among them, nine patients had both acute and late grade 3-5 esophageal injury, and two died of late esophageal perforation. Concurrent chemotherapy and a maximal point dose to the esophagus ≥60 Gy were significantly associated with the risk of grade 3-5 esophageal injury. Fifty-four (26%) of the two hundred and eight patients received concurrent chemotherapy. Among them, 25 (46%) developed grade 3-5 esophageal injury (P = 0.0001 < 0.01). However, no grade 3-5 esophageal injury occurred in patients who received a maximal point dose to the esophagus <60 Gy (P = 0.0001 < 0.01). Concurrent chemotherapy and a maximal esophageal point dose ≥60 Gy are significantly associated with the risk of grade 3-5 esophageal injury in patients with NSCLC treated with 3D-CRT.

  18. Scaling in tournaments

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Redner, S.; Vazquez, F.

    2007-02-01

    We study a stochastic process that mimics single-game elimination tournaments. In our model, the outcome of each match is stochastic: the weaker player wins with upset probability q ≤ 1/2, and the stronger player wins with probability 1 − q. The loser is eliminated. Extremal statistics of the initial distribution of player strengths governs the tournament outcome. For a uniform initial distribution of strengths, the rank of the winner, x*, decays algebraically with the number of players, N, as x* ~ N^(−β). Different decay exponents are found analytically for sequential dynamics, β_seq = 1 − 2q, and parallel dynamics, β_par = 1 + ln(1 − q)/ln 2. The distribution of player strengths becomes self-similar in the long-time limit with an algebraic tail. Our theory successfully describes statistics of the US college basketball national championship tournament.
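
    A short simulation of a single-elimination bracket with upset probability q (played round by round, i.e. the parallel dynamics) shows how the typical rank of the winner grows with q; the field size and number of replicates are illustrative assumptions.

    import random

    def tournament_winner_rank(n_players, q, seed=None):
        """Single-elimination tournament with upset probability q; players are
        ranked 1 (strongest) to n_players; returns the rank of the champion."""
        rng = random.Random(seed)
        players = list(range(1, n_players + 1))
        rng.shuffle(players)                       # random initial bracket
        while len(players) > 1:
            nxt = []
            for a, b in zip(players[::2], players[1::2]):
                strong, weak = (a, b) if a < b else (b, a)
                nxt.append(weak if rng.random() < q else strong)
            players = nxt
        return players[0]

    ranks = [tournament_winner_rank(256, q=0.3) for _ in range(2000)]
    print(sum(ranks) / len(ranks))    # the typical winner rank grows with q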

  19. More heads choose better than one: Group decision making can eliminate probability matching.

    PubMed

    Schulze, Christin; Newell, Ben R

    2016-06-01

    Probability matching is a robust and common failure to adhere to normative predictions in sequential decision making. We show that this choice anomaly is nearly eradicated by gathering individual decision makers into small groups and asking the groups to decide. The group choice advantage emerged both when participants generated responses for an entire sequence of choices without outcome feedback (Exp. 1a) and when participants made trial-by-trial predictions with outcome feedback after each decision (Exp. 1b). We show that the dramatic improvement observed in group settings stands in stark contrast to a complete lack of effective solitary deliberation. These findings suggest a crucial role of group discussion in alleviating the impact of hasty intuitive responses in tasks better suited to careful deliberation.

  20. Optimal minimal measurements of mixed states

    NASA Astrophysics Data System (ADS)

    Vidal, G.; Latorre, J. I.; Pascual, P.; Tarrach, R.

    1999-07-01

    The optimal and minimal measuring strategy is obtained for a two-state system prepared in a mixed state with a probability given by any isotropic a priori distribution. We explicitly construct the specific optimal and minimal generalized measurements, which turn out to be independent of the a priori probability distribution, obtaining the best guesses for the unknown state as well as a closed expression for the maximal mean-average fidelity. We do this for up to three copies of the unknown state in a way that leads to the generalization to any number of copies, which we then present and prove.

  1. Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process

    PubMed Central

    Yuen, Kam Chuen; Shen, Ying

    2015-01-01

    We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655

  2. Wildlife tradeoffs based on landscape models of habitat

    USGS Publications Warehouse

    Loehle, C.; Mitchell, M.S.

    2000-01-01

    It is becoming increasingly clear that the spatial structure of landscapes affects the habitat choices and abundance of wildlife. In contrast to wildlife management based on preservation of critical habitat features such as nest sites on a beach or mast trees, it has not been obvious how to incorporate spatial structure into management plans. We present techniques to accomplish this goal. We used multiscale logistic regression models developed previously for neotropical migrant bird species habitat use in South Carolina (USA) as a basis for these techniques. Based on these models we used a spatial optimization technique to generate optimal maps (probability of occurrence, P = 1.0) for each of seven species. To emulate management of a forest for maximum species diversity, we defined the objective function of the algorithm as the sum of probabilities over the seven species, resulting in a complex map that allowed all seven species to coexist. The map that allowed for coexistence is not obvious, must be computed algorithmically, and would be difficult to realize using rules of thumb for habitat management. To assess how management of a forest for a single species of interest might affect other species, we analyzed tradeoffs by gradually increasing the weighting on a single species in the objective function over a series of simulations. We found that as habitat was increasingly modified to favor that species, the probability of presence for two of the other species was driven to zero. This shows that whereas it is not possible to simultaneously maximize the likelihood of presence for multiple species with divergent habitat preferences, compromise solutions are possible at less than maximal likelihood in many cases. Our approach suggests that efficiency of habitat management for species diversity can by maximized for even small landscapes by incorporating spatial context. The methods we present are suitable for wildlife management, endangered species conservation, and nature reserve design.

  3. Maintaining homeostasis by decision-making.

    PubMed

    Korn, Christoph W; Bach, Dominik R

    2015-05-01

    Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that--in both the foraging and the casino frames--participants' choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization.
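    A minimal sketch of the survival-probability logic described here: a discrete-time random walk over energy with an absorbing starvation boundary, evaluated by Monte Carlo. The environments and payoffs are hypothetical, not the task parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def forage(p_gain, gain, loss=1, start=10, steps=30, n_sim=100_000):
    """Monte Carlo endpoint distribution for one foraging environment.

    Each step the forager loses `loss` energy and gains `gain` with probability
    `p_gain`; reaching zero energy counts as starvation (an absorbing state).
    Returns (P(starvation), mean endpoint energy of survivors).
    """
    energy = np.full(n_sim, float(start))
    alive = np.ones(n_sim, dtype=bool)
    for _ in range(steps):
        gains = (rng.random(n_sim) < p_gain) * gain
        energy[alive] += gains[alive] - loss
        alive &= energy > 0
    return 1.0 - alive.mean(), energy[alive].mean()

# Illustrative comparison: easy/low-calorie versus hard/high-calorie environment.
for name, p, g in [("easy, low-calorie", 0.8, 1.5), ("hard, high-calorie", 0.4, 3.5)]:
    p_starve, mean_end = forage(p, g)
    print(f"{name}: P(starvation) = {p_starve:.3f}, mean endpoint = {mean_end:.2f}")
```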

  4. Maintaining Homeostasis by Decision-Making

    PubMed Central

    Korn, Christoph W.; Bach, Dominik R.

    2015-01-01

    Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that—in both the foraging and the casino frames—participants’ choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization. PMID:26024504

  5. Small-Scale Spatio-Temporal Distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) Using Probability Kriging.

    PubMed

    Wang, S Q; Zhang, H Y; Li, Z L

    2016-10-01

    Understanding the spatio-temporal distribution of pests in orchards can provide important information that could be used to design monitoring schemes and establish better means for pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated by using probability kriging. Adults of B. minax were captured in two successive occurrences in a small-scale citrus orchard by using food bait traps, which were placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the boundary between the orchard and the woods. The sequential probability-kriged maps showed that adults were located in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots, and their distance to the forest edge, should be considered to enhance control of B. minax in small-scale orchards.
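    As a sketch of the geostatistical preprocessing behind probability (indicator) kriging, the snippet below applies an indicator transform to hypothetical trap catches and computes an empirical semivariogram with the classical Matheron estimator; fitting a variogram model and solving the kriging system are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical trap coordinates (m) and weekly catch counts; the indicator
# transform marks traps whose catch exceeds a threshold of interest.
coords = rng.uniform(0, 300, size=(40, 2))
catch = rng.poisson(4, size=40)
indicator = (catch > 5).astype(float)

def empirical_semivariogram(coords, values, bins):
    """Classical (Matheron) semivariogram estimator, averaged within lag bins."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)
    lags, sqdiff = d[iu], 0.5 * (values[iu[0]] - values[iu[1]]) ** 2
    centers, gamma = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (lags >= lo) & (lags < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(sqdiff[mask].mean())
    return np.array(centers), np.array(gamma)

centers, gamma = empirical_semivariogram(coords, indicator, np.arange(0, 200, 25))
for h, g in zip(centers, gamma):
    print(f"lag {h:5.1f} m: gamma = {g:.3f}")
```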

  6. Optimization of Second Fault Detection Thresholds to Maximize Mission POS

    NASA Technical Reports Server (NTRS)

    Anzalone, Evan

    2018-01-01

    In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regard to Navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first failure conditions and still maintain adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events in terms of its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst-case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope, causing loss of mission or, worse, loss of crew. The criteria for operation under second faults allow for a larger set of achievable missions in terms of potential fault conditions, due to the INS operating at the edge of its capability. As this performance is defined and controlled at the vehicle level, it allows for the use of system-level margins to increase the probability of mission success on the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize the probability of mission success and to reduce the probability of false positives (defined as cases in which the INS would report a second fault condition, resulting in loss of mission, even though the vehicle would still meet insertion requirements within system-level margins). This paper will describe an optimization approach using Genetic Algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach and a performance assessment of the results will be presented to demonstrate the applicability of this process to second fault detection to maximize mission probability of success.
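    The threshold-tuning idea can be sketched with a toy genetic algorithm: a population of per-phase threshold vectors is evolved against a fitness that trades detection probability against false positives. The fitness function below is a hypothetical stand-in for the Monte Carlo vehicle and navigation simulation the paper describes, and the number of phases and parameter ranges are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N_PHASES = 5   # hypothetical number of flight phases, each with its own threshold

def mission_fitness(thresholds):
    """Hypothetical stand-in for a vehicle simulation.

    Returns P(correctly flag a real second fault) minus a penalty for
    P(false positive); a real analysis would replace this with Monte Carlo
    runs of the navigation and abort models along the ascent trajectory.
    """
    p_detect = np.mean(1.0 - np.exp(-2.0 / thresholds))   # looser thresholds miss faults
    p_false = np.mean(np.exp(-thresholds))                 # tighter thresholds false-alarm
    return p_detect - 3.0 * p_false

def genetic_search(pop_size=40, generations=60, sigma=0.1):
    pop = rng.uniform(0.5, 5.0, size=(pop_size, N_PHASES))
    for _ in range(generations):
        fitness = np.array([mission_fitness(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]      # truncation selection
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = np.clip(children + rng.normal(0, sigma, children.shape), 0.1, 10.0)
        pop = np.vstack([parents, children])
    return pop[np.argmax([mission_fitness(ind) for ind in pop])]

print("tuned thresholds per phase:", np.round(genetic_search(), 2))
```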

  7. Global quantitative indices reflecting provider process-of-care: data-base derivation.

    PubMed

    Moran, John L; Solomon, Patricia J

    2010-04-19

    Controversy has attended the relationship between risk-adjusted mortality and process-of-care. There would be advantage in the establishment, at the data-base level, of global quantitative indices subsuming the diversity of process-of-care. A retrospective, cohort study of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 1993-2003, at the level of geographic and ICU-level descriptors (n = 35), for both hospital survivors and non-survivors. Process-of-care indices were established by analysis of: (i) the smoothed time-hazard curve of individual patient discharge and determined by pharmaco-kinetic methods as area under the hazard-curve (AUC), reflecting the integrated experience of the discharge process, and time-to-peak-hazard (TMAX, in days), reflecting the time to maximum rate of hospital discharge; and (ii) individual patient ability to optimize output (as length-of-stay) for recorded data-base physiological inputs; estimated as a technical production-efficiency (TE, scaled [0,(maximum)1]), via the econometric technique of stochastic frontier analysis. For each descriptor, multivariate correlation-relationships between indices and summed mortality probability were determined. The data-set consisted of 223129 patients from 99 ICUs with mean (SD) age and APACHE III score of 59.2(18.9) years and 52.7(30.6) respectively; 41.7% were female and 45.7% were mechanically ventilated within the first 24 hours post-admission. For survivors, AUC was maximal in rural and for-profit ICUs, whereas TMAX (>or= 7.8 days) and TE (>or= 0.74) were maximal in tertiary-ICUs. For non-survivors, AUC was maximal in tertiary-ICUs, but TMAX (>or= 4.2 days) and TE (>or= 0.69) were maximal in for-profit ICUs. Across descriptors, significant differences in indices were demonstrated (analysis-of-variance, P

  8. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    NASA Astrophysics Data System (ADS)

    Al-Mudhafar, W. J.

    2013-12-01

    Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution in order to build an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation has been carried out through multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, the Robust Sequential Imputation Algorithm has been used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. The observation is then added to the complete data matrix and the algorithm continues with the next observation with missing values. MLR has been chosen to estimate the maximum likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. MLR is used to predict the probabilities of the different possible facies given each independent variable by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables using a dot product. A beta distribution of facies has been considered as prior knowledge, and the resulting predicted probability (posterior) has been estimated from MLR based on Bayes' theorem, which relates the predicted probability (posterior) to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap is carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and this can be repeated N times to produce N bootstrap datasets; the model is re-fit accordingly to decrease the squared difference between the estimated and observed categorical variables (facies), thereby decreasing the degree of uncertainty.
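    A minimal sketch of the workflow (multinomial logistic regression for facies probabilities plus a bootstrap estimate of extra-sample prediction error), using scikit-learn on synthetic stand-in data rather than the South Rumaila well logs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)

# Hypothetical stand-ins for the well-log predictors and core-derived facies labels.
n = 300
X = rng.normal(size=(n, 5))                 # e.g. GR, RHOB, SW, Vsh, PHI
facies = rng.integers(0, 3, size=n)         # three illustrative facies classes

model = LogisticRegression(max_iter=1000)   # multinomial logit for >2 classes
model.fit(X, facies)
probs = model.predict_proba(X)              # per-sample facies probabilities

# Bootstrap estimate of out-of-sample (extra-sample) prediction error.
n_boot, errors = 200, []
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)                    # resample with replacement
    oob = np.setdiff1d(np.arange(n), idx)               # held-out (out-of-bag) samples
    if oob.size == 0:
        continue
    boot_model = LogisticRegression(max_iter=1000).fit(X[idx], facies[idx])
    errors.append(1.0 - accuracy_score(facies[oob], boot_model.predict(X[oob])))
print(f"bootstrap (out-of-bag) misclassification rate: {np.mean(errors):.3f}")
```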

  9. Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy.

    PubMed

    Sil Kar, Sudeshna; Maity, Santi P

    2016-09-01

    Extraction of blood vessels in retinal images plays a significant role in screening for different ophthalmologic diseases. However, accurate extraction of the entire vasculature, and of individual vessel types, from noisy images with a poorly illuminated background is a complicated task. To this end, an integrated system design platform is suggested in this work for vessel extraction using a sequential bandpass filter followed by fuzzy conditional entropy maximization on the matched filter response. First, noise is eliminated from the image under consideration through curvelet-based denoising. To include the fine details and the relatively less thick vessel structures, the image is passed through a bank of sequential bandpass filters optimized for contrast enhancement. Fuzzy conditional entropy on the matched filter response is then maximized to find a set of multiple optimal thresholds to extract the different types of vessel silhouettes from the background. A Differential Evolution algorithm is used to determine the optimal gain of the bandpass filter and the combination of the fuzzy parameters. Using the multiple thresholds, the retinal image is classified into thick, medium, and thin vessels, including neovascularization. Performance evaluated on different publicly available retinal image databases shows that the proposed method is very efficient in identifying the diverse types of vessels. The proposed method is also efficient in extracting the abnormal and thin blood vessels in pathological retinal images. The average values of true positive rate, false positive rate, and accuracy offered by the method are 76.32%, 1.99%, and 96.28%, respectively, for the DRIVE database and 72.82%, 2.6%, and 96.16%, respectively, for the STARE database. Simulation results demonstrate that the proposed method outperforms existing methods in detecting the various types of vessels and the neovascularization structures. The combination of curvelet transform and tunable bandpass filter is found to be very effective for edge enhancement, whereas fuzzy conditional entropy efficiently distinguishes vessels of different widths. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Maximizing RNA folding rates: a balancing act.

    PubMed Central

    Thirumalai, D; Woodson, S A

    2000-01-01

    Large ribozymes typically require very long times to refold into their active conformation in vitro, because the RNA is easily trapped in metastable misfolded structures. Theoretical models show that the probability of misfolding is reduced when local and long-range interactions in the RNA are balanced. Using the folding kinetics of the Tetrahymena ribozyme as an example, we propose that folding rates are maximized when the free energies of forming independent domains are similar to each other. A prediction is that the folding pathway of the ribozyme can be reversed by inverting the relative stability of the tertiary domains. This result suggests strategies for optimizing ribozyme sequences for therapeutics and structural studies. PMID:10864039

  11. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. ?? 2008 ASCE.

  12. GENERAL A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    NASA Astrophysics Data System (ADS)

    Gerd, Niestegge

    2010-12-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.

  13. Goodness of fit of probability distributions for sightings as species approach extinction.

    PubMed

    Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael

    2009-04-01

    Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
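    A small sketch of a PPCC-style goodness-of-fit check, assuming hypothetical sighting years and using SciPy's probability plots; the r value returned by `probplot` is the probability plot correlation coefficient for the candidate distribution (only location-scale families without shape parameters are compared here).

```python
import numpy as np
from scipy import stats

# Hypothetical sighting years for one population (the real data are the North
# American and Hawaiian bird sighting records described in the study).
sightings = np.array([1901, 1905, 1911, 1918, 1922, 1930, 1933, 1941, 1948, 1952])

def ppcc(data, dist_name):
    """Probability plot correlation coefficient for a candidate distribution."""
    (osm, osr), (slope, intercept, r) = stats.probplot(data, dist=dist_name)
    return r

for name in ["uniform", "expon"]:
    print(f"{name:8s} PPCC = {ppcc(sightings, name):.4f}")
```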

  14. The Neural Basis of Risky Choice with Affective Outcomes

    PubMed Central

    Suter, Renata S.; Pachur, Thorsten; Hertwig, Ralph; Endestad, Tor; Biele, Guido

    2015-01-01

    Both normative and many descriptive theories of decision making under risk are based on the notion that outcomes are weighted by their probability, with subsequent maximization of the (subjective) expected outcome. Numerous investigations from psychology, economics, and neuroscience have produced evidence consistent with this notion. However, this research has typically investigated choices involving relatively affect-poor, monetary outcomes. We compared choice in relatively affect-poor, monetary lottery problems with choice in relatively affect-rich medical decision problems. Computational modeling of behavioral data and model-based neuroimaging analyses provide converging evidence for substantial differences in the respective decision mechanisms. Relative to affect-poor choices, affect-rich choices yielded a more strongly curved probability weighting function of cumulative prospect theory, thus signaling that the psychological impact of probabilities is strongly diminished for affect-rich outcomes. Examining task-dependent brain activation, we identified a region-by-condition interaction indicating qualitative differences of activation between affect-rich and affect-poor choices. Moreover, brain activation in regions that were more active during affect-poor choices (e.g., the supramarginal gyrus) correlated with individual trial-by-trial decision weights, indicating that these regions reflect processing of probabilities. Formal reverse inference Neurosynth meta-analyses suggested that whereas affect-poor choices seem to be based on brain mechanisms for calculative processes, affect-rich choices are driven by the representation of outcomes’ emotional value and autobiographical memories associated with them. These results provide evidence that the traditional notion of expectation maximization may not apply in the context of outcomes laden with affective responses, and that understanding the brain mechanisms of decision making requires the domain of the decision to be taken into account. PMID:25830918
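    For reference, a common one-parameter form of the cumulative prospect theory probability weighting function (Tversky and Kahneman, 1992) is shown below; the specific parametrization fitted in this study may differ, but "more strongly curved" corresponds to a smaller curvature parameter γ, i.e. diminished sensitivity to probabilities.

```latex
% Illustrative one-parameter CPT probability weighting function:
% smaller \gamma means stronger curvature (less sensitivity to p).
w(p) \;=\; \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}},
\qquad 0 < \gamma \le 1 .
```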

  15. The neural basis of risky choice with affective outcomes.

    PubMed

    Suter, Renata S; Pachur, Thorsten; Hertwig, Ralph; Endestad, Tor; Biele, Guido

    2015-01-01

    Both normative and many descriptive theories of decision making under risk are based on the notion that outcomes are weighted by their probability, with subsequent maximization of the (subjective) expected outcome. Numerous investigations from psychology, economics, and neuroscience have produced evidence consistent with this notion. However, this research has typically investigated choices involving relatively affect-poor, monetary outcomes. We compared choice in relatively affect-poor, monetary lottery problems with choice in relatively affect-rich medical decision problems. Computational modeling of behavioral data and model-based neuroimaging analyses provide converging evidence for substantial differences in the respective decision mechanisms. Relative to affect-poor choices, affect-rich choices yielded a more strongly curved probability weighting function of cumulative prospect theory, thus signaling that the psychological impact of probabilities is strongly diminished for affect-rich outcomes. Examining task-dependent brain activation, we identified a region-by-condition interaction indicating qualitative differences of activation between affect-rich and affect-poor choices. Moreover, brain activation in regions that were more active during affect-poor choices (e.g., the supramarginal gyrus) correlated with individual trial-by-trial decision weights, indicating that these regions reflect processing of probabilities. Formal reverse inference Neurosynth meta-analyses suggested that whereas affect-poor choices seem to be based on brain mechanisms for calculative processes, affect-rich choices are driven by the representation of outcomes' emotional value and autobiographical memories associated with them. These results provide evidence that the traditional notion of expectation maximization may not apply in the context of outcomes laden with affective responses, and that understanding the brain mechanisms of decision making requires the domain of the decision to be taken into account.

  16. Presenting evidence and summary measures to best inform societal decisions when comparing multiple strategies.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2011-07-01

    Multiple strategy comparisons in health technology assessment (HTA) are becoming increasingly important, with multiple alternative therapeutic actions, combinations of therapies and diagnostic and genetic testing alternatives. Comparison under uncertainty of incremental cost, effects and cost effectiveness across more than two strategies is conceptually and practically very different from that for two strategies, where all evidence can be summarized in a single bivariate distribution on the incremental cost-effectiveness plane. Alternative methods for comparing multiple strategies in HTA have been developed in (i) presenting cost and effects on the cost-disutility plane and (ii) summarizing evidence with multiple strategy cost-effectiveness acceptability (CEA) and expected net loss (ENL) curves and frontiers. However, critical questions remain for the analyst and decision maker of how these techniques can be best employed across multiple strategies to (i) inform clinical and cost inference in presenting evidence, and (ii) summarize evidence of cost effectiveness to inform societal reimbursement decisions where preferences may be risk neutral or somewhat risk averse under the Arrow-Lind theorem. We critically consider how evidence across multiple strategies can be best presented and summarized to inform inference and societal reimbursement decisions, given currently available methods. In the process, we make a number of important original findings. First, in presenting evidence for multiple strategies, the joint distribution of costs and effects on the cost-disutility plane with associated flexible comparators varying across replicates for cost and effect axes ensure full cost and effect inference. Such inference is usually confounded on the cost-effectiveness plane with comparison relative to a fixed origin and axes. Second, in summarizing evidence for risk-neutral societal decision making, ENL curves and frontiers are shown to have advantages over the CEA frontier in directly presenting differences in expected net benefit (ENB). The CEA frontier, while identifying strategies that maximize ENB, only presents their probability of maximizing net benefit (NB) and, hence, fails to explain why strategies maximize ENB at any given threshold value. Third, in summarizing evidence for somewhat risk-averse societal decision making, trade-offs between the strategy maximizing ENB and other potentially optimal strategies with higher probability of maximizing NB should be presented over discrete threshold values where they arise. However, the probabilities informing these trade-offs and associated discrete threshold value regions should be derived from bilateral CEA curves to prevent confounding by other strategies inherent in multiple strategy CEA curves. Based on these findings, a series of recommendations are made for best presenting and summarizing cost-effectiveness evidence for reimbursement decisions when comparing multiple strategies, which are contrasted with advice for comparing two strategies. Implications for joint research and reimbursement decisions are also discussed.
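    A small numerical sketch of the summary measures under discussion, computed from hypothetical posterior draws of cost and effect for three strategies: the cost-effectiveness acceptability probability is the share of draws in which each strategy maximizes net benefit, while the expected net loss (ENL) is the mean shortfall from the per-draw best strategy. The ENL-minimizing strategy is the one maximizing expected net benefit and, as the abstract notes, need not be the strategy with the highest acceptability probability.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical posterior draws of incremental cost and effect (QALYs) for three strategies.
n_draws = 5000
names = ["A", "B", "C"]
costs = np.column_stack([rng.normal(m, 300, n_draws) for m in (1000, 2500, 4000)])
effects = np.column_stack([rng.normal(m, 0.05, n_draws) for m in (0.10, 0.16, 0.18)])

for lam in (20_000, 50_000, 80_000):               # willingness-to-pay thresholds
    nb = lam * effects - costs                      # net monetary benefit, draws x strategies
    best = nb.argmax(axis=1)
    ceac = np.bincount(best, minlength=3) / n_draws            # P(strategy maximizes NB)
    enl = (nb.max(axis=1, keepdims=True) - nb).mean(axis=0)    # expected net loss
    print(f"lambda={lam:>6}: CEAC={np.round(ceac, 2)}, "
          f"ENL={np.round(enl)}, ENL-optimal={names[int(enl.argmin())]}")
```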

  17. Planning, Execution, and Assessment of Effects-Based Operations (EBO)

    DTIC Science & Technology

    2006-05-01

    time of execution that would maximize the likelihood of achieving a desired effect. GMU (George Mason University) has developed a methodology named ECAD-EA (Effective Course of Action Determination-Evolutionary Algorithm) for the planning, execution, and assessment of Effects-Based Operations (EBO).

  18. Identifying Student Resources in Reasoning about Entropy and the Approach to Thermal Equilibrium

    ERIC Educational Resources Information Center

    Loverude, Michael

    2015-01-01

    As part of an ongoing project to examine student learning in upper-division courses in thermal and statistical physics, we have examined student reasoning about entropy and the second law of thermodynamics. We have examined reasoning in terms of heat transfer, entropy maximization, and statistical treatments of multiplicity and probability. In…

  19. Locally Bayesian Learning with Applications to Retrospective Revaluation and Highlighting

    ERIC Educational Resources Information Center

    Kruschke, John K.

    2006-01-01

    A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to back-propagate the target data to interior modules, such that an interior component's target is the input to the next component that maximizes the probability of the next component's target. Each layer…

  20. Nonlocality without inequality for almost all two-qubit entangled states based on Cabello's nonlocality argument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunkri, Samir; Choudhary, Sujit K.; Ahanj, Ali

    2006-02-15

    Here we deal with a nonlocality argument proposed by Cabello, which is more general than Hardy's nonlocality argument, although maximally entangled states still do not respond to it. However, for most of the other entangled states, the maximum probability of success of this argument is greater than that of Hardy's argument.

  1. Assessing risk to birds from industrial wind energy development via paired resource selection models

    Treesearch

    Tricia A. Miller; Robert P. Brooks; Michael Lanzone; David Brandes; Jeff Cooper; Kieran O' malley; Charles Maisonneuve; Junior Tremblay; Adam Duerr; Todd Katzner

    2014-01-01

    When wildlife habitat overlaps with industrial development, animals may be harmed. Because wildlife and people select resources to maximize biological fitness and economic return, respectively, we estimated risk, the probability of eagles encountering and being affected by turbines, by overlaying models of resource selection for each entity. This conceptual framework...

  2. Multiple Ordinal Regression by Maximizing the Sum of Margins

    PubMed Central

    Hamsici, Onur C.; Martinez, Aleix M.

    2016-01-01

    Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms must learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a Support Vector Machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases correctly separating the ordered classes. Current algorithms are either required to solve a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed margin strategy) between a set of hyperplanes, which biases the solution to the closest margin. Another drawback of these strategies is that they are limited to ordering the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every consecutive class with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a Sequential Minimal Optimization procedure. We demonstrate the accuracy of our solutions in several datasets. In addition, we provide a key application of our algorithms in estimating human subjects' ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary one typically employed in the literature. PMID:26529784

  3. Bayesian randomized clinical trials: From fixed to adaptive design.

    PubMed

    Yin, Guosheng; Lam, Chi Kin; Shi, Haolun

    2017-08-01

    Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have dominated phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complement to, rather than a competitor of, the frequentist methods. For the fixed Bayesian design, hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which lead to a more comprehensive utilization of information from both historical and longitudinal data. From fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
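    As a minimal sketch of posterior-probability trial monitoring (a fixed-design flavour of the Bayesian machinery described, not the backward-induction decision-theoretic design), assume a single-arm binary endpoint with a conjugate Beta prior and illustrative interim data:

```python
from scipy import stats

# Illustrative interim monitoring with a Beta(0.5, 0.5) prior;
# p0 is the historical control response rate the new treatment must beat.
p0, a0, b0 = 0.30, 0.5, 0.5
interim_data = [(10, 4), (20, 9), (30, 15)]        # (patients, responders) at each look

for n, x in interim_data:
    post = stats.beta(a0 + x, b0 + n - x)          # conjugate posterior for the response rate
    prob_better = 1.0 - post.cdf(p0)               # Pr(rate > p0 | data so far)
    print(f"n={n:2d}, responders={x:2d}: Pr(rate > {p0}) = {prob_better:.3f}")
```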

  4. Fault detection on a sewer network by a combination of a Kalman filter and a binary sequential probability ratio test

    NASA Astrophysics Data System (ADS)

    Piatyszek, E.; Voignier, P.; Graillot, D.

    2000-05-01

    One of the aims of sewer networks is the protection of population against floods and the reduction of pollution rejected to the receiving water during rainy events. To meet these goals, managers have to equip the sewer networks with and to set up real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or sensor fault (deteriorating the process view and disturbing the local automatism) makes the sewer network supervision delicate. In order to ensure an adequate flow management during rainy events it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. This method consists in comparing the sensor response with a forecast of this response. This forecast is provided by a model and more precisely by a state estimator: a Kalman filter. This Kalman filter provides not only a flow estimate but also an entity called 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with the binary sequential probability ratio test of Wald. Moreover, by crossing available information on several nodes of the network, a diagnosis of the detected anomalies is carried out. This method provided encouraging results during the analysis of several rains, on the sewer network of Seine-Saint-Denis County, France.
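    A compact sketch of the detection scheme described, combining a scalar random-walk Kalman filter with a Wald binary SPRT applied to the normalized innovations. The simulated signal, noise levels, fault size, and the restart-on-accept convention are illustrative assumptions, not the Seine-Saint-Denis implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

# --- Simulated flow signal with an additive sensor fault appearing at t = 60 ---
n, q, r = 120, 0.01, 0.5
true_flow = np.cumsum(rng.normal(0, np.sqrt(q), n)) + 5.0
meas = true_flow + rng.normal(0, np.sqrt(r), n)
meas[60:] += 3.0                                   # sensor bias (the fault)

# --- Scalar random-walk Kalman filter, collecting normalized innovations ---
x, p, innovations = meas[0], 1.0, []
for z in meas:
    p += q                                         # predict
    nu, s = z - x, p + r                           # innovation and its variance
    innovations.append(nu / np.sqrt(s))
    k = p / s                                      # Kalman gain
    x += k * nu                                    # update
    p *= (1 - k)

# --- Wald binary SPRT on normalized innovations: H0 mean 0 vs H1 mean mu1 ---
alpha, beta, mu1 = 0.01, 0.05, 1.0
upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
llr, detect_at = 0.0, None
for t, e in enumerate(innovations):
    llr += mu1 * e - mu1**2 / 2                    # Gaussian log-likelihood ratio increment
    if llr >= upper:
        detect_at = t
        break
    if llr <= lower:
        llr = 0.0                                  # accept H0 and restart the test
print("fault injected at step 60, declared at step:", detect_at)
```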

  5. Detecting Signals of Disproportionate Reporting from Singapore's Spontaneous Adverse Event Reporting System: An Application of the Sequential Probability Ratio Test.

    PubMed

    Chan, Cheng Leng; Rudrappa, Sowmya; Ang, Pei San; Li, Shu Chuen; Evans, Stephen J W

    2017-08-01

    The ability to detect safety concerns from spontaneous adverse drug reaction reports in a timely and efficient manner remains important in public health. This paper explores the behaviour of the Sequential Probability Ratio Test (SPRT) and ability to detect signals of disproportionate reporting (SDRs) in the Singapore context. We used SPRT with a combination of two hypothesised relative risks (hRRs) of 2 and 4.1 to detect signals of both common and rare adverse events in our small database. We compared SPRT with other methods in terms of number of signals detected and whether labelled adverse drug reactions were detected or the reaction terms were considered serious. The other methods used were reporting odds ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN) and Gamma Poisson Shrinker (GPS). The SPRT produced 2187 signals in common with all methods, 268 unique signals, and 70 signals in common with at least one other method, and did not produce signals in 178 cases where two other methods detected them, and there were 403 signals unique to one of the other methods. In terms of sensitivity, ROR performed better than other methods, but the SPRT method found more new signals. The performances of the methods were similar for negative predictive value and specificity. Using a combination of hRRs for SPRT could be a useful screening tool for regulatory agencies, and more detailed investigation of the medical utility of the system is merited.

  6. When is Pharmacogenetic Testing for Antidepressant Response Ready for the Clinic? A Cost-effectiveness Analysis Based on Data from the STAR*D Study

    PubMed Central

    Perlis, Roy H.; Patrick, Amanda; Smoller, Jordan W.; Wang, Philip S.

    2009-01-01

    The potential of personalized medicine to transform the treatment of mood disorders has been widely touted in psychiatry, but has not been quantified. We estimated the costs and benefits of a putative pharmacogenetic test for antidepressant response in the treatment of major depressive disorder (MDD) from the societal perspective. Specifically, we performed cost-effectiveness analyses using state-transition probability models incorporating probabilities from the multicenter STAR*D effectiveness study of MDD. Costs and quality-adjusted life years were compared for sequential antidepressant trials, with or without guidance from a pharmacogenetic test for differential response to selective serotonin reuptake inhibitors (SSRIs). Likely SSRI responders received an SSRI, while likely nonresponders received the norepinephrine/dopamine reuptake inhibitor bupropion. For a 40-year-old with major depressive disorder, applying the pharmacogenetic test and using the non-SSRI bupropion for those at higher risk for nonresponse cost $93,520 per additional quality-adjusted life-year (QALY) compared with treating all patients with an SSRI first and switching sequentially in the case of nonremission. Cost/QALY dropped below $50,000 for tests with remission rate ratios as low as 1.5, corresponding to odds ratios ~1.8–2.0. Tests for differential antidepressant response could thus become cost-effective under certain circumstances. These circumstances, particularly availability of alternative treatment strategies and test effect sizes, can be estimated and should be considered before these tests are broadly applied in clinical settings. PMID:19494805
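    The state-transition (Markov cohort) logic behind such an analysis can be sketched as follows; the transition probabilities, costs, utilities, and test effect are illustrative placeholders rather than STAR*D-derived values.

```python
import numpy as np

# Minimal three-state Markov cohort model (remission / depressed / absorbing)
# comparing a "test-guided" strategy against "SSRI-first"; all inputs are made up.
def run_strategy(p_remit, cycles=8, cost_per_cycle=(150.0, 400.0, 0.0),
                 utility=(0.85, 0.60, 0.0), test_cost=0.0, discount=0.03):
    dist = np.array([0.0, 1.0, 0.0])               # everyone starts depressed
    T = np.array([[0.95, 0.03, 0.02],              # from remission
                  [p_remit, 0.93 - p_remit, 0.07], # from depressed
                  [0.0, 0.0, 1.0]])                # absorbing state
    cost, qaly = test_cost, 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t            # per-cycle discount factor
        cost += d * dist @ np.array(cost_per_cycle)
        qaly += d * (dist @ np.array(utility)) * 0.25   # 3-month cycles -> years
        dist = dist @ T
    return cost, qaly

c0, q0 = run_strategy(p_remit=0.30)                       # SSRI-first
c1, q1 = run_strategy(p_remit=0.36, test_cost=500.0)      # test-guided
print(f"incremental cost per QALY (illustrative): ${(c1 - c0) / (q1 - q0):,.0f}")
```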

  7. Studies on Hydrogen Production by Photosynthetic Bacteria after Anaerobic Fermentation of Starch by a Hyperthermophile, Pyrococcus furiosus

    NASA Astrophysics Data System (ADS)

    Sugitate, Toshihiro; Fukatsu, Makoto; Ishimi, Katsuhiro; Kohno, Hideki; Wakayama, Tatsuki; Nakamura, Yoshihiro; Miyake, Jun; Asada, Yasuo

    In order to establish the sequential hydrogen production from waste starch using a hyperthermophile, Pyrococcus furiosus, and a photosynthetic bacterium, basic studies were done. P. furiosus produced hydrogen and acetate by anaerobic fermentation at 90°C. A photosynthetic bacterium, Rhodobacter sphaeroides RV, was able to produce hydrogen from acetate under anaerobic and light conditions at 30°C. However, Rb. sphaeroides RV was not able to produce hydrogen from acetate in the presence of sodium chloride that was essential for the growth and hydrogen production of P. furiosus although it produced hydrogen from lactate at a reduced rate with 1% sodium chloride. A newly isolated strain, CST-8, from natural environment was, however, able to produce hydrogen from acetate, especially with 3 mM L-alanine and in the presence of 1% sodium chloride. The sequential hydrogen production with P. furiosus and salt-tolerant photosynthetic bacteria could be probable at least in the laboratory experiment scale.

  8. Sequential megafaunal collapse in the North Pacific Ocean: An ongoing legacy of industrial whaling?

    USGS Publications Warehouse

    Springer, A.M.; Estes, J.A.; Van Vliet, Gus B.; Williams, T.M.; Doak, D.F.; Danner, E.M.; Forney, K.A.; Pfister, B.

    2003-01-01

    Populations of seals, sea lions, and sea otters have sequentially collapsed over large areas of the northern North Pacific Ocean and southern Bering Sea during the last several decades. A bottom-up nutritional limitation mechanism induced by physical oceanographic change or competition with fisheries was long thought to be largely responsible for these declines. The current weight of evidence is more consistent with top-down forcing. Increased predation by killer whales probably drove the sea otter collapse and may have been responsible for the earlier pinniped declines as well. We propose that decimation of the great whales by post-World War II industrial whaling caused the great whales' foremost natural predators, killer whales, to begin feeding more intensively on the smaller marine mammals, thus "fishing-down" this element of the marine food web. The timing of these events, information on the abundance, diet, and foraging behavior of both predators and prey, and feasibility analyses based on demographic and energetic modeling are all consistent with this hypothesis.

  9. Sequential megafaunal collapse in the North Pacific Ocean: An ongoing legacy of industrial whaling?

    PubMed Central

    Springer, A. M.; Estes, J. A.; van Vliet, G. B.; Williams, T. M.; Doak, D. F.; Danner, E. M.; Forney, K. A.; Pfister, B.

    2003-01-01

    Populations of seals, sea lions, and sea otters have sequentially collapsed over large areas of the northern North Pacific Ocean and southern Bering Sea during the last several decades. A bottom-up nutritional limitation mechanism induced by physical oceanographic change or competition with fisheries was long thought to be largely responsible for these declines. The current weight of evidence is more consistent with top-down forcing. Increased predation by killer whales probably drove the sea otter collapse and may have been responsible for the earlier pinniped declines as well. We propose that decimation of the great whales by post-World War II industrial whaling caused the great whales' foremost natural predators, killer whales, to begin feeding more intensively on the smaller marine mammals, thus “fishing-down” this element of the marine food web. The timing of these events, information on the abundance, diet, and foraging behavior of both predators and prey, and feasibility analyses based on demographic and energetic modeling are all consistent with this hypothesis. PMID:14526101

  10. Footprints of electron correlation in strong-field double ionization of Kr close to the sequential-ionization regime

    NASA Astrophysics Data System (ADS)

    Li, Xiaokai; Wang, Chuncheng; Yuan, Zongqiang; Ye, Difa; Ma, Pan; Hu, Wenhui; Luo, Sizuo; Fu, Libin; Ding, Dajun

    2017-09-01

    By combining kinematically complete measurements and a semiclassical Monte Carlo simulation we study the correlated-electron dynamics in the strong-field double ionization of Kr. Interestingly, we find that, as we step into the sequential-ionization regime, there are still signatures of correlation in the two-electron joint momentum spectrum and, more intriguingly, the scaling law of the high-energy tail is completely different from early predictions on the low-Z atom (He). These experimental observations are well reproduced by our generalized semiclassical model adapting a Green-Sellin-Zachor potential. It is revealed that the competition between the screening effect of inner-shell electrons and the Coulomb focusing of nuclei leads to a non-inverse-square central force, which twists the returned electron trajectory at the vicinity of the parent core and thus significantly increases the probability of hard recollisions between two electrons. Our results might have promising applications ranging from accurately retrieving atomic structures to simulating celestial phenomena in the laboratory.

  11. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    NASA Astrophysics Data System (ADS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K.

    2012-02-01

    Finite automata in which each transition is augmented with an output label in addition to the familiar input label are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is carried out using techniques such as pair-database creation, normalization (with maximum-likelihood normalization), and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically computationally costly, and even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were carried out with the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. In another experiment, the precision parameter was varied; in this case, we obtained smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, we found that speedup increases considerably when more threads are used; however, it converges for 16 or more threads.

  12. Leveraging Hypoxia-Activated Prodrugs to Prevent Drug Resistance in Solid Tumors.

    PubMed

    Lindsay, Danika; Garvey, Colleen M; Mumenthaler, Shannon M; Foo, Jasmine

    2016-08-01

    Experimental studies have shown that one key factor in driving the emergence of drug resistance in solid tumors is tumor hypoxia, which leads to the formation of localized environmental niches where drug-resistant cell populations can evolve and survive. Hypoxia-activated prodrugs (HAPs) are compounds designed to penetrate to hypoxic regions of a tumor and release cytotoxic or cytostatic agents; several of these HAPs are currently in clinical trial. However, preliminary results have not shown a survival benefit in several of these trials. We hypothesize that the efficacy of treatments involving these prodrugs depends heavily on identifying the correct treatment schedule, and that mathematical modeling can be used to help design potential therapeutic strategies combining HAPs with standard therapies to achieve long-term tumor control or eradication. We develop this framework in the specific context of EGFR-driven non-small cell lung cancer, which is commonly treated with the tyrosine kinase inhibitor erlotinib. We develop a stochastic mathematical model, parametrized using clinical and experimental data, to explore a spectrum of treatment regimens combining a HAP, evofosfamide, with erlotinib. We design combination toxicity constraint models and optimize treatment strategies over the space of tolerated schedules to identify specific combination schedules that lead to optimal tumor control. We find that (i) combining these therapies delays resistance longer than any monotherapy schedule with either evofosfamide or erlotinib alone, (ii) sequentially alternating single doses of each drug leads to minimal tumor burden and maximal reduction in probability of developing resistance, and (iii) strategies minimizing the length of time after an evofosfamide dose and before erlotinib confer further benefits in reduction of tumor burden. These results provide insights into how hypoxia-activated prodrugs may be used to enhance therapeutic effectiveness in the clinic.

  13. Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation

    NASA Astrophysics Data System (ADS)

    Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro

    2018-02-01

    We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.
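    For orientation, the basic quantity underlying this formulation is the relative entropy (Kullback-Leibler divergence) between two outcome distributions. In rough, illustrative notation (not the paper's), the entropic divergence aggregates such terms between each target observable's distribution and the corresponding margin of the approximate joint measurement, and the uncertainty bound is its worst-case (over states) minimal value.

```latex
% Relative entropy between outcome distributions p and q on a finite outcome set:
S(p \,\|\, q) \;=\; \sum_{x} p(x)\,\log\frac{p(x)}{q(x)} .

% Schematic structure of the bound: p^A_\rho, p^B_\rho are the target observables'
% distributions in state \rho, and m^1_\rho, m^2_\rho the margins of the approximate
% joint measurement M (notation illustrative only).
c(A,B) \;=\; \min_{M}\,\max_{\rho}\,
\Big[\, S\big(p^{A}_{\rho}\,\|\,m^{1}_{\rho}\big) + S\big(p^{B}_{\rho}\,\|\,m^{2}_{\rho}\big) \Big] .
```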

  14. Post licensure surveillance of influenza vaccines in the Vaccine Safety Datalink in the 2013-2014 and 2014-2015 seasons.

    PubMed

    Li, Rongxia; Stewart, Brock; McNeil, Michael M; Duffy, Jonathan; Nelson, Jennifer; Kawai, Alison Tse; Baxter, Roger; Belongia, Edward A; Weintraub, Eric

    2016-08-01

    Year-to-year changes in influenza vaccine antigenic components, as well as vaccine administration patterns, may pose new risks of adverse events following immunization (AEs). To evaluate the safety of influenza vaccines annually administered to people aged ≥ 6 months, we conducted weekly post-licensure surveillance for seven pre-specified adverse events following receipt of influenza vaccines during the 2013-2014 and 2014-2015 seasons in the Vaccine Safety Datalink (VSD). We used both a historically controlled cohort design with the Poisson-based maximized sequential probability ratio test (maxSPRT) and a self-controlled risk interval (SCRI) design with the binomial-based maxSPRT. For each adverse event outcome, we defined the risk interval on the basis of biologic plausibility and prior literature. For the historical cohort design, numbers of expected adverse events were calculated from the prior seven seasons, adjusted for age and site. For the SCRI design, a comparison window was defined either before vaccination or after vaccination, depending on each specific outcome. An elevated risk of febrile seizures 0-1 days following trivalent inactivated influenza vaccine (IIV3) was identified in children aged 6-23 months during the 2014-2015 season using the SCRI design. We found the relative risk (RR) of febrile seizures following concomitant administration of IIV3 and PCV13 was 5.3, with a 95% CI of 1.87-14.75. Without concomitant PCV13 administration, the estimated risk decreased and was no longer statistically significant (RR: 1.4; CI: 0.54-3.61). No increased risks, other than for febrile seizures, were identified in influenza vaccine safety surveillance during the 2013-2014 and 2014-2015 seasons in the VSD. Copyright © 2016 John Wiley & Sons, Ltd.
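    A sketch of the Poisson-based maxSPRT statistic used in this kind of surveillance (Kulldorff's formulation), applied to hypothetical weekly counts; in practice the critical value is chosen to hold the overall type I error at the desired level over the planned surveillance length, so the 3.0 below is just a placeholder.

```python
import numpy as np

def poisson_maxsprt_llr(observed, expected):
    """Log-likelihood ratio of the Poisson-based maxSPRT at one look.

    `observed` is the cumulative adverse-event count in the risk window and
    `expected` the cumulative expected count from the historical comparison;
    the LLR is zero whenever observed <= expected (relative-risk estimate <= 1).
    """
    c, u = float(observed), float(expected)
    if c <= u or c == 0:
        return 0.0
    return u - c + c * np.log(c / u)

critical_value = 3.0                                  # placeholder, not a calibrated bound
weekly = [(2, 1.1), (5, 2.3), (9, 3.4), (15, 4.8)]    # cumulative (observed, expected)
for week, (c, u) in enumerate(weekly, start=1):
    llr = poisson_maxsprt_llr(c, u)
    flag = "SIGNAL" if llr >= critical_value else ""
    print(f"week {week}: LLR = {llr:.2f} {flag}")
```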

  15. Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part I: main content.

    PubMed

    Orellana, Liliana; Rotnitzky, Andrea; Robins, James M

    2010-01-01

    Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.

  16. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    PubMed Central

    Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected values of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA exhibit a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability in the examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ. PMID:26029092
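
    To make the two information types concrete, the short sketch below computes them for a choice between a sure amount and a two-outcome gamble, assuming the losing outcome pays nothing; the function and variable names are illustrative and the study's exact operationalization may differ.

        def choice_information(certain_amount, win_amount, win_prob):
            # Maximizing information: ratio of the gamble's expected value to the sure amount.
            # Satisficing information: the probability of winning the gamble.
            gamble_ev = win_prob * win_amount  # losing outcome assumed to pay 0
            return gamble_ev / certain_amount, win_prob

        print(choice_information(certain_amount=10.0, win_amount=25.0, win_prob=0.5))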

  17. Contingency Space Analysis: An Alternative Method for Identifying Contingent Relations from Observational Data

    PubMed Central

    Martens, Brian K; DiGennaro, Florence D; Reed, Derek D; Szczech, Frances M; Rosenthal, Blair D

    2008-01-01

    Descriptive assessment methods have been used in applied settings to identify consequences for problem behavior, thereby aiding in the design of effective treatment programs. Consensus has not been reached, however, regarding the types of data or analytic strategies that are most useful for describing behavior–consequence relations. One promising approach involves the analysis of conditional probabilities from sequential recordings of behavior and events that follow its occurrence. In this paper we review several strategies for identifying contingent relations from conditional probabilities, and propose an alternative strategy known as a contingency space analysis (CSA). Step-by-step procedures for conducting and interpreting a CSA using sample data are presented, followed by discussion of the potential use of a CSA for conducting descriptive assessments, informing intervention design, and evaluating changes in reinforcement contingencies following treatment. PMID:18468280
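
    As a rough sketch of the quantities a contingency space analysis plots, the code below estimates P(consequence | behavior) and P(consequence | no behavior) from a series of observation intervals; the data format and names are hypothetical, and the paper's step-by-step procedure should be consulted for the actual method.

        def contingency_space_point(intervals):
            # intervals: list of (behavior_occurred, consequence_followed) booleans,
            # one pair per observation interval.
            with_behavior = [followed for occurred, followed in intervals if occurred]
            without_behavior = [followed for occurred, followed in intervals if not occurred]
            p_given_behavior = sum(with_behavior) / len(with_behavior) if with_behavior else float("nan")
            p_given_none = sum(without_behavior) / len(without_behavior) if without_behavior else float("nan")
            # Points far from the diagonal (equal probabilities) suggest a contingent relation.
            return p_given_behavior, p_given_none

        sample = [(True, True), (True, True), (True, False), (False, False), (False, True)]
        print(contingency_space_point(sample))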

  18. Measurement Model Nonlinearity in Estimation of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Majji, Manoranjan; Junkins, J. L.; Turner, J. D.

    2012-06-01

    The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered in the studies of the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.
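
    For reference, the exact transformation that such an analysis rests on is the standard change-of-variables formula for probability densities: for a smooth, invertible mapping y = g(x),

        p_Y(y) = p_X\big(g^{-1}(y)\big)\,\big|\det J_{g^{-1}}(y)\big|,

    where J_{g^{-1}} is the Jacobian of the inverse mapping. The practical point is that a density that looks strongly non-Gaussian in one coordinate system (for example, Cartesian position and velocity) may be much simpler in another (for example, orbital elements), which is why the choice of coordinates matters for sequential estimation.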

  19. Extended target recognition in cognitive radar networks.

    PubMed

    Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin

    2010-01-01

    We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar under amplitude fluctuation conditions. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.

  20. Critical role of bevacizumab scheduling in combination with pre-surgical chemo-radiotherapy in MRI-defined high-risk locally advanced rectal cancer: Results of the BRANCH trial.

    PubMed

    Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo

    2015-10-06

    We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during radiotherapy. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and either on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycles of chemotherapy (sequential schedule B). The primary end point was the pathological complete tumor regression (TRG1) rate. Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 of 16 patients) was statistically inconsistent with the hypothesized activity level (30%) being tested. Conversely, the end point was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥ 3 toxicity with both schedules, but it was less pronounced with the sequential than the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up, the probabilities of PFS and OS were 80% (95% CI, 66%-89%) and 85% (95% CI, 69%-93%), respectively, for the sequential schedule. These results highlight the relevance of bevacizumab scheduling in optimizing its combination with preoperative chemo-radiotherapy in the management of LARC.

  1. Under the hood of statistical learning: A statistical MMN reflects the magnitude of transitional probabilities in auditory sequences.

    PubMed

    Koelsch, Stefan; Busch, Tobias; Jentschke, Sebastian; Rohrmeier, Martin

    2016-02-02

    Within the framework of statistical learning, many behavioural studies investigated the processing of unpredicted events. However, surprisingly few neurophysiological studies are available on this topic, and no statistical learning experiment has investigated electroencephalographic (EEG) correlates of processing events with different transition probabilities. We carried out an EEG study with a novel variant of the established statistical learning paradigm. Timbres were presented in isochronous sequences of triplets. The first two sounds of all triplets were equiprobable, while the third sound occurred with either low (10%), intermediate (30%), or high (60%) probability. Thus, the occurrence probability of the third item of each triplet (given the first two items) was varied. Compared to high-probability triplet endings, endings with low and intermediate probability elicited an early anterior negativity that had an onset around 100 ms and was maximal at around 180 ms. This effect was larger for events with low than for events with intermediate probability. Our results reveal that, when predictions are based on statistical learning, events that do not match a prediction evoke an early anterior negativity, with the amplitude of this mismatch response being inversely related to the probability of such events. Thus, we report a statistical mismatch negativity (sMMN) that reflects statistical learning of transitional probability distributions that go beyond auditory sensory memory capabilities.

  2. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data in the Bohai Sea are hindcast and sampled for a case study. Four distributions, namely the Gumbel, lognormal, Weibull, and Pearson Type III distributions, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
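
    One building block of this construction, sketched under the assumption of a symmetric (exchangeable) dependence structure, is the trivariate Gumbel-Hougaard copula evaluated at the marginal CDF values of the three variables; the theta value below is a placeholder, and in practice theta and the Pearson Type III marginal parameters would be fitted to the hindcast data.

        import math

        def gumbel_hougaard_copula3(u1, u2, u3, theta):
            # Trivariate Gumbel-Hougaard (Archimedean) copula, theta >= 1:
            # C(u1, u2, u3) = exp(-[(-ln u1)^theta + (-ln u2)^theta + (-ln u3)^theta]^(1/theta))
            s = sum((-math.log(u)) ** theta for u in (u1, u2, u3))
            return math.exp(-s ** (1.0 / theta))

        # Joint non-exceedance probability for hypothetical marginal CDF values of
        # wave height, wind speed, and current velocity at some design level.
        print(gumbel_hougaard_copula3(0.99, 0.98, 0.95, theta=2.0))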

  3. Care delivery considerations for widespread and equitable implementation of inherited cancer predisposition testing

    PubMed Central

    Cragun, Deborah; Kinney, Anita Y; Pal, Tuya

    2017-01-01

    Introduction: Advances in DNA sequencing through next-generation sequencing (NGS), together with several practice-changing events, have led to shifting paradigms for inherited cancer predisposition testing. These changes necessitated a means by which to maximize health benefits without unnecessarily inflating healthcare costs and exacerbating health disparities. Areas covered: NGS-based tests encompass multi-gene panel tests, whole exome sequencing, and whole genome sequencing, all of which test for multiple genes simultaneously, in contrast to prior practice in which testing was performed sequentially for one or two genes. Taking an ecological approach, this article synthesizes the current literature to consider the broad impact of these advances at the individual patient, interpersonal, organizational, community, and policy levels. Furthermore, the authors describe how multi-level factors that affect genetic testing and follow-up care reveal great potential to widen existing health disparities if these issues are not addressed. Expert commentary: As we consider ways to maximize patient benefit from testing in a cost-effective manner, it is important to consider perspectives from multiple levels. This information is needed to guide the development of interventions such that the promise of genomic testing may be realized by all populations, regardless of race, ethnicity, and ability to pay. PMID:27910721

  4. Scheduling for anesthesia at geographic locations remote from the operating room.

    PubMed

    Dexter, Franklin; Wachtel, Ruth E

    2014-08-01

    Providing general anesthesia at locations away from the operating room, called remote locations, poses many medical and scheduling challenges. This review discusses how to schedule procedures at remote locations to maximize anesthesia productivity (see Video, Supplemental Digital Content 1). Anesthesia labour productivity can be maximized by assigning one or more 8-h or 10-h periods of allocated time every 2 weeks dedicated specifically to each remote specialty that has enough cases to fill those periods. Remote specialties can then schedule their cases themselves into their own allocated time. Periods of allocated time (called open, unblocked or first come first served time) can be used by remote locations that do not have their own allocated time. Unless cases are scheduled sequentially into allocated time, there will be substantial extra underutilized time (time during which procedures are not being performed and personnel sit idle even though staffing has been planned) and a concomitant reduction in percent productivity. Allocated time should be calculated on the basis of usage. Remote locations with sufficient hours of cases should be allocated time reserved especially for them in which to schedule their cases, with a maximum waiting time of 2 weeks, to achieve an average wait of 1 week.

  5. Comparison of developmental gradients for growth, ATPase, and fusicoccin-binding activity in mung bean hypocotyls

    NASA Technical Reports Server (NTRS)

    Basel, L. E.; Cleland, R. E.

    1992-01-01

    A comparison has been made of the developmental gradients along a mung bean (Vigna radiata L.) hypocotyl of the growth rate, plasma membrane ATPase, and fusicoccin-binding protein (FCBP) activity to determine whether they are interrelated. The hook and four sequential 7.5 millimeter segments of the hypocotyl below the hook were cut. A plasma membrane-enriched fraction was isolated from each section by aqueous two-phase partitioning and assayed for vanadate-sensitive ATPase and FCBP activity. Each gradient had a distinctive and different pattern. Endogenous growth rate was maximal in the second section and much lower in the others. Vanadate-sensitive ATPase activity was maximal in the third section, but remained high in the older sections. Amounts of ATPase protein, shown by specific antibody binding, did not correlate with the amount of vanadate-sensitive ATPase activity in the three youngest sections. FCBP activity was almost absent in the first section, then increased to a maximum in the oldest sections. These data show that the growth rate is not determined by the ATPase activity, and that there are no fixed ratios between the ATPase and FCBP.

  6. Systems Issues Pertaining to Holographic Optical Data Storage in Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Oezcan, Meric; Smithey, Daniel T.; Crew, Marshall; Lau, Sonie (Technical Monitor)

    1998-01-01

    The optical data storage capacity and raw bit-error-rate achievable with thick photochromic bacteriorhodopsin (BR) films are investigated for sequential recording and read-out of angularly- and shift-multiplexed digital holograms inside a thick blue-membrane D85N BR film. We address the determination of an exposure schedule that produces equal diffraction efficiencies among each of the multiplexed holograms. This exposure schedule is determined by numerical simulations of the holographic recording process within the BR material, and maximizes the total grating strength. We also experimentally measure the shift selectivity and compare the results to theoretical predictions. Finally, we evaluate the bit-error-rate of a single hologram, and of multiple holograms stored within the film.

  7. Influence of maneuverability on helicopter combat effectiveness

    NASA Technical Reports Server (NTRS)

    Falco, M.; Smith, R.

    1982-01-01

    A computational procedure employing a stochastic learning method in conjunction with dynamic simulation of helicopter flight and weapon system operation was used to derive helicopter maneuvering strategies. The derived strategies maximize either survival or kill probability and take the form of a feedback control based upon threat visual or warning system cues. Maneuverability parameters implicit in the strategy development include maximum longitudinal acceleration and deceleration, maximum sustained and transient load-factor turn rate at forward speed, and maximum pedal turn rate and lateral acceleration at hover. Results are presented in terms of probability of kill for all combat initial conditions for two threat categories.

  8. Optimal remote preparation of arbitrary multi-qubit real-parameter states via two-qubit entangled states

    NASA Astrophysics Data System (ADS)

    Wei, Jiahua; Shi, Lei; Luo, Junwen; Zhu, Yu; Kang, Qiaoyan; Yu, Longqiang; Wu, Hao; Jiang, Jun; Zhao, Boxin

    2018-06-01

    In this paper, we present an efficient scheme for remote state preparation of arbitrary n-qubit states with real coefficients. The quantum channel is composed of n maximally entangled two-qubit states, and several appropriate mutually orthogonal bases involving the real parameters of the prepared states are delicately constructed without introducing auxiliary particles. Notably, the success probability is 100% under the condition that the parameters of the prepared states are all real. Compared to schemes for general states, the probability of our protocol is improved at the cost of reduced information in the transmitted state.

  9. Probabilistic teleportation via multi-parameter measurements and partially entangled states

    NASA Astrophysics Data System (ADS)

    Wei, Jiahua; Shi, Lei; Han, Chen; Xu, Zhiyan; Zhu, Yu; Wang, Gang; Wu, Hao

    2018-04-01

    In this paper, a novel scheme for probabilistic teleportation is presented with multi-parameter measurements via a non-maximally entangled state. This contrasts with most previous schemes, in which the kinds of measurement used for quantum teleportation are fixed. The detailed implementation procedures for our proposal are given using appropriate local unitary operations. Moreover, the total success probability and classical information cost of this proposal are calculated. It is demonstrated that the success probability and classical cost vary with the multi-measurement parameters and the entanglement factor of the quantum channel. Our scheme could enlarge the research scope of probabilistic teleportation.

  10. Non-Maximal Tripartite Entanglement Degradation of Dirac and Scalar Fields in Non-Inertial Frames

    NASA Astrophysics Data System (ADS)

    Salman, Khan; Niaz, Ali Khan; M. K., Khan

    2014-03-01

    The π-tangle is used to study the behavior of entanglement of a nonmaximal tripartite state of both Dirac and scalar fields in an accelerated frame. For Dirac fields, the degree of degradation with acceleration of both the one-tangle of the accelerated observer and the π-tangle, for the same initial entanglement, differs simply upon interchanging the values of the probability amplitudes. A fraction of both the one-tangles and the π-tangle always survives for any choice of acceleration and degree of initial entanglement. For the scalar field, the one-tangle of the accelerated observer depends on the choice of the probability amplitudes and vanishes in the limit of infinite acceleration, whereas for the π-tangle this is not always true. The dependence of the π-tangle on the probability amplitudes varies with acceleration. In the lower range of acceleration, its behavior changes upon switching between the values of the probability amplitudes, and for larger accelerations this dependence on the probability amplitudes vanishes. Interestingly, unlike bipartite entanglement, the degradation of the π-tangle with acceleration is slower for scalar fields than for Dirac fields.

  11. Reuse of imputed data in microarray analysis increases imputation efficiency

    PubMed Central

    Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su

    2004-01-01

    Background: The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results: We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. It imputes missing values sequentially, starting from the gene with the fewest missing values, and reuses the imputed values in later imputations. Although it reuses imputed values, the new method substantially improves on the conventional KNN-based method and on methods based on maximum likelihood estimation in both accuracy and computational complexity. SKNN performed particularly well relative to other imputation methods for data with high missing rates and large numbers of experiments. Applying Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computation time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed accuracy similar to the SKNN method, with slightly higher dependence on the type of data set. Conclusions: Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for salvaging microarray experiments with large numbers of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
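
    A compact sketch of the sequential KNN idea described above is given below; the distance measure, tie handling, and treatment of rows with no complete neighbours are simplifications, so the published algorithm should be consulted for details.

        import numpy as np

        def sknn_impute(X, k=5):
            # Sequential KNN imputation sketch: rows are genes, columns are experiments.
            # Genes are imputed in order of increasing missingness; once imputed, a gene
            # joins the pool of candidate neighbours and can be reused later.
            X = X.astype(float).copy()
            complete = [i for i in range(X.shape[0]) if not np.isnan(X[i]).any()]
            incomplete = sorted(
                (i for i in range(X.shape[0]) if np.isnan(X[i]).any()),
                key=lambda i: np.isnan(X[i]).sum(),
            )
            for i in incomplete:
                missing = np.isnan(X[i])
                observed = ~missing
                # Distance over the columns observed in row i (candidate rows are complete).
                dists = sorted((np.linalg.norm(X[i, observed] - X[j, observed]), j) for j in complete)
                neighbours = [j for _, j in dists[:k]]
                X[i, missing] = X[neighbours][:, missing].mean(axis=0)
                complete.append(i)
            return X

        example = np.array([[1.0, 2.0, 3.0],
                            [1.1, np.nan, 2.9],
                            [0.9, 2.1, np.nan]])
        print(sknn_impute(example, k=1))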

  12. Selenium speciation in seleniferous agricultural soils under different cropping systems using sequential extraction and X-ray absorption spectroscopy.

    PubMed

    Qin, Hai-Bo; Zhu, Jian-Ming; Lin, Zhi-Qing; Xu, Wen-Po; Tan, De-Can; Zheng, Li-Rong; Takahashi, Yoshio

    2017-06-01

    Selenium (Se) speciation in soil is critically important for understanding the solubility, mobility, bioavailability, and toxicity of Se in the environment. In this study, Se fractionation and chemical speciation in agricultural soils from seleniferous areas were investigated using the elaborate sequential extraction and X-ray absorption near-edge structure (XANES) spectroscopy. The speciation results quantified by XANES technique generally agreed with those obtained by sequential extraction, and the combination of both approaches can reliably characterize Se speciation in soils. Results showed that dominant organic Se (56-81% of the total Se) and lesser Se(IV) (19-44%) were observed in seleniferous agricultural soils. A significant decrease in the proportion of organic Se to the total Se was found in different types of soil, i.e., paddy soil (81%) > uncultivated soil (69-73%) > upland soil (56-63%), while that of Se(IV) presented an inverse tendency. This suggests that Se speciation in agricultural soils can be significantly influenced by different cropping systems. Organic Se in seleniferous agricultural soils was probably derived from plant litter, which provides a significant insight for phytoremediation in Se-laden ecosystems and biofortification in Se-deficient areas. Furthermore, elevated organic Se in soils could result in higher Se accumulation in crops and further potential chronic Se toxicity to local residents in seleniferous areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Finding specific RNA motifs: Function in a zeptomole world?

    PubMed Central

    KNIGHT, ROB; YARUS, MICHAEL

    2003-01-01

    We have developed a new method for estimating the abundance of any modular (piecewise) RNA motif within a longer random region. We have used this method to estimate the size of the active motifs available to modern SELEX experiments (picomoles of unique sequences) and to a plausible RNA World (zeptomoles of unique sequences: 1 zmole = 602 sequences). Unexpectedly, activities such as specific isoleucine binding are almost certainly present in zeptomoles of molecules, and even ribozymes such as self-cleavage motifs may appear (depending on assumptions about the minimal structures). The number of specified nucleotides is not the only important determinant of a motif’s rarity: The number of modules into which it is divided, and the details of this division, are also crucial. We propose three maxims for easily isolated motifs: the Maxim of Minimization, the Maxim of Multiplicity, and the Maxim of the Median. These maxims together state that selected motifs should be small and composed of as many separate, equally sized modules as possible. For evenly divided motifs with four modules, the largest accessible activity in picomole scale (1–1000 pmole) pools of length 100 is about 34 nucleotides; while for zeptomole scale (1–1000 zmole) pools it is about 20 specific nucleotides (50% probability of occurrence). This latter figure includes some ribozymes and aptamers. Consequently, an RNA metabolism apparently could have begun with only zeptomoles of RNA molecules. PMID:12554865

  14. Treatment of Small Hepatocellular Carcinoma (≤2 cm) in the Caudate Lobe with Sequential Transcatheter Arterial Chemoembolization and Radiofrequency Ablation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyun, Dongho; Cho, Sung Ki, E-mail: sungkismc.cho@samsung.com; Shin, Sung Wook

    2016-07-15

    Purpose: To evaluate the technical feasibility and treatment results of sequential transcatheter arterial chemoembolization (TACE) and cone-beam computed tomography-guided percutaneous radiofrequency ablation (CBCT-RFA) for small hepatocellular carcinoma (HCC) in the caudate lobe. Materials and Methods: The institutional review board approved this retrospective study. A radiologic database was searched for patients referred for TACE and CBCT-RFA for small caudate HCCs (≤2 cm) between February 2009 and February 2014. A total of 14 patients (12 men and 2 women; mean age, 61.3 years) were included. Percutaneous ultrasonography-guided RFA (pUS-RFA) and surgery were infeasible due to poor conspicuity, absence of a safe electrode pathway, or poor hepatic reserve. Procedural success (completion of both TACE and CBCT-RFA), technique efficacy (absence of tumor enhancement at 1 month after treatment), and complications were evaluated. Treatment results including local tumor progression (LTP), intrahepatic distant recurrence (IDR), overall survival (OS), and progression-free survival (PFS) were analyzed. Results: Procedural success and technique efficacy rates were 78.6% (11/14) and 90.9% (10/11), respectively. The average follow-up period was 45.3 months (range, 13.4–64.6 months). The 1-, 3-, and 5-year LTP probabilities were 0%, 12.5%, and 12.5%, respectively. IDR occurred in seven patients (63.6%, 7/11). The 1-, 3-, and 5-year PFS probabilities were 81.8%, 51.9%, and 26%, respectively. The 1-, 3-, and 5-year OS probabilities were 100%, 80.8%, and 80.8%, respectively. Conclusion: The combination of TACE and CBCT-RFA seems feasible for small HCC in the caudate lobe not amenable to pUS-RFA and effective in local tumor control.

  15. LOPP: A Location Privacy Protected Anonymous Routing Protocol for Disruption Tolerant Network

    NASA Astrophysics Data System (ADS)

    Lu, Xiaofeng; Hui, Pan; Towsley, Don; Pu, Juhua; Xiong, Zhang

    In this paper, we propose an anonymous routing protocol, LOPP, to protect the originator's location privacy in Delay/Disruption Tolerant Networks (DTNs). The goals of our study are to minimize the originator's probability of being localized (Pl) and to maximize the destination's probability of receiving the message (Pr). The idea of LOPP is to divide a sensitive message into k segments and send each of them to n different neighbors. Although message fragmentation could reduce the destination's probability of receiving a complete message, LOPP can decrease the originator's Pl. We validate LOPP on a real-world human mobility dataset. The simulation results show that LOPP can decrease the originator's Pl by over 54% with only a 5.7% decrease in the destination's Pr. We address the physical localization issue of DTNs, which has not been studied in the literature.

  16. The effect of medium viscosity on kinetics of ATP hydrolysis by the chloroplast coupling factor CF1.

    PubMed

    Malyan, Alexander N

    2016-05-01

    The coupling factor CF1 is the catalytic part of chloroplast ATP synthase that is exposed to the stroma, whose viscosity is many-fold higher than that of the reaction mixtures commonly used to measure the kinetics of CF1-catalyzed ATP hydrolysis. This study focuses on the effect of medium viscosity, modulated by sucrose or bovine serum albumin (BSA), on the kinetics of Ca(2+)- and Mg(2+)-dependent ATP hydrolysis by CF1. These agents were shown to reduce the maximal rate of the Ca(2+)-dependent ATPase without changing the apparent Michaelis constant (Km), supporting the hypothesis that CF1 activity is viscosity dependent. For the sulfite- and ethanol-stimulated Mg(2+)-dependent reaction, the presence of sucrose increased Km without changing the maximal rate, which is many-fold higher than that of Ca(2+)-dependent hydrolysis. The hydrolysis reaction was stimulated by low concentrations of BSA and inhibited by higher concentrations, with the maximal reaction rate, estimated by extrapolation, increasing. Sucrose- or BSA-induced inhibition of the Mg(2+)-dependent ATPase reaction is believed to result from diffusion-caused deceleration, while its BSA-induced stimulation is probably caused by optimization of the enzyme structure. Molecular mechanisms of the inhibitory effect of viscosity are discussed. Taking into account the high protein concentrations in the chloroplast stroma, it was suggested that the kinetic parameters of ATP hydrolysis, and probably those of ATP synthesis in vivo as well, must be quite different from measurements taken at a viscosity level close to that of water.
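
    For context, the kinetic parameters discussed here are those of the standard Michaelis-Menten rate law relating reaction velocity to substrate concentration,

        v = \frac{V_{\max}\,[S]}{K_m + [S]},

    so an agent that lowers V_max without shifting K_m (as reported for the Ca(2+)-dependent reaction) lowers the plateau of this curve without moving its half-saturation point, whereas raising K_m at constant V_max (the sucrose effect on the Mg(2+)-dependent reaction) shifts the half-saturation point to higher substrate concentrations.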

  17. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    NASA Astrophysics Data System (ADS)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
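
    A small illustration of why the Poisson-Binomial distribution is the exact reference model here: its probability mass function can be built by convolving the Bernoulli distributions implied by the individual forecast probabilities, after which one can ask how surprising an observed event count is under those forecasts. The probabilities and the observed count below are made-up placeholders.

        import numpy as np

        def poisson_binomial_pmf(probs):
            # PMF of the number of events among independent Bernoulli trials with
            # (possibly different) success probabilities, via repeated convolution.
            pmf = np.array([1.0])
            for p in probs:
                pmf = np.convolve(pmf, [1.0 - p, p])
            return pmf

        forecast_probs = [0.1, 0.4, 0.7, 0.2, 0.9]  # hypothetical event forecasts
        observed_count = 4
        pmf = poisson_binomial_pmf(forecast_probs)
        p_at_least = pmf[observed_count:].sum()
        print(f"P(count >= {observed_count} | forecasts reliable) = {p_at_least:.3f}")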

  18. Can Statistical Modeling Increase Annual Fund Performance? An Experiment at the University of Maryland, College Park.

    ERIC Educational Resources Information Center

    Porter, Stephen R.

    Annual funds face pressures to contact all alumni to maximize participation, but these efforts are costly. This paper uses a logistic regression model to predict likely donors among alumni from the College of Arts & Humanities at the University of Maryland, College Park. Alumni were grouped according to their predicted probability of donating…
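
    A minimal sketch of the modeling approach, predicting the probability of donating with logistic regression and ranking alumni by that probability, is shown below; the synthetic predictors and coefficients are purely illustrative stand-ins for real alumni variables (e.g., years since graduation, event attendance, prior giving).

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))                      # hypothetical standardized predictors
        logits = 0.8 * X[:, 2] - 0.5                       # synthetic relationship for illustration
        y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)  # synthetic "donated" outcome

        model = LogisticRegression().fit(X, y)
        donation_prob = model.predict_proba(X)[:, 1]
        top_prospects = np.argsort(donation_prob)[::-1][:50]  # contact the most likely donors first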

  19. Assessment of pretest probability of pulmonary embolism in the emergency department by physicians in training using the Wells model.

    PubMed

    Penaloza, Andrea; Mélot, Christian; Dochy, Emmanuelle; Blocklet, Didier; Gevenois, Pierre Alain; Wautrecht, Jean-Claude; Lheureux, Philippe; Motte, Serge

    2007-01-01

    Assessment of pretest probability should be the initial step in investigation of patients with suspected pulmonary embolism (PE). In teaching hospitals physicians in training are often the first physicians to evaluate patients. To evaluate the accuracy of pretest probability assessment of PE by physicians in training using the Wells clinical model and to assess the safety of a diagnostic strategy including pretest probability assessment. 291 consecutive outpatients with clinical suspicion of PE were categorized as having a low, moderate or high pretest probability of PE by physicians in training who could take supervising physicians' advice when they deemed necessary. Then, patients were managed according to a sequential diagnostic algorithm including D-dimer testing, lung scan, leg compression ultrasonography and helical computed tomography. Patients in whom PE was deemed absent were followed up for 3 months. 34 patients (18%) had PE. Prevalence of PE in the low, moderate and high pretest probability groups categorized by physicians in training alone was 3% (95% confidence interval (CI): 1% to 9%), 31% (95% CI: 22% to 42%) and 100% (95% CI: 61% to 100%) respectively. One of the 152 untreated patients (0.7%, 95% CI: 0.1% to 3.6%) developed a thromboembolic event during the 3-month follow-up period. Physicians in training can use the Wells clinical model to determine pretest probability of PE. A diagnostic strategy including the use of this model by physicians in training with access to supervising physicians' advice appears to be safe.
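
    A sketch of the scoring logic behind such a pretest probability assessment is given below, using the point values and cut-offs commonly published for the Wells rule; these are reproduced from memory for illustration only and should be verified against the original rule before any use.

        # Commonly published Wells items and points (illustrative; verify before use).
        WELLS_POINTS = {
            "clinical_signs_of_dvt": 3.0,
            "pe_most_likely_diagnosis": 3.0,
            "heart_rate_over_100": 1.5,
            "immobilization_or_surgery_in_prior_4_weeks": 1.5,
            "previous_dvt_or_pe": 1.5,
            "hemoptysis": 1.0,
            "malignancy": 1.0,
        }

        def wells_pretest_category(findings):
            # findings: dict mapping item name -> bool
            score = sum(pts for item, pts in WELLS_POINTS.items() if findings.get(item))
            if score < 2:
                return score, "low"
            if score <= 6:
                return score, "moderate"
            return score, "high"

        print(wells_pretest_category({"heart_rate_over_100": True, "hemoptysis": True}))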

  20. Statistical thermodynamics of amphiphile chains in micelles

    PubMed Central

    Ben-Shaul, A.; Szleifer, I.; Gelbart, W. M.

    1984-01-01

    The probability distribution of amphiphile chain conformations in micelles of different geometries is derived through maximization of their packing entropy. A lattice model, first suggested by Dill and Flory, is used to represent the possible chain conformations in the micellar core. The polar heads of the chains are assumed to be anchored to the micellar surface, with the other chain segments occupying all lattice sites in the interior of the micelle. This “volume-filling” requirement, the connectivity of the chains, and the geometry of the micelle define constraints on the possible probability distributions of chain conformations. The actual distribution is derived by maximizing the chain's entropy subject to these constraints; “reversals” of the chains back towards the micellar surface are explicitly included. Results are presented for amphiphiles organized in planar bilayers and in cylindrical and spherical micelles of different sizes. It is found that, for all three geometries, the bond order parameters decrease as a function of the bond distance from the polar head, in accordance with recent experimental data. The entropy differences associated with geometrical changes are shown to be significant, suggesting thereby the need to include curvature (environmental)-dependent “tail” contributions in statistical thermodynamic treatments of micellization. PMID:16593492

  1. Maximizing phylogenetic diversity in biodiversity conservation: Greedy solutions to the Noah's Ark problem.

    PubMed

    Hartmann, Klaas; Steel, Mike

    2006-08-01

    The Noah's Ark Problem (NAP) is a comprehensive cost-effectiveness methodology for biodiversity conservation that was introduced by Weitzman (1998) and utilizes the phylogenetic tree containing the taxa of interest to assess biodiversity. Given a set of taxa, each of which has a particular survival probability that can be increased at some cost, the NAP seeks to allocate limited funds to conserving these taxa so that the future expected biodiversity is maximized. Finding optimal solutions using this framework is a computationally difficult problem to which a simple and efficient "greedy" algorithm has been proposed in the literature and applied to conservation problems. We show that, although algorithms of this type cannot produce optimal solutions for the general NAP, there are two restricted scenarios of the NAP for which a greedy algorithm is guaranteed to produce optimal solutions. The first scenario requires the taxa to have equal conservation cost; the second scenario requires an ultrametric tree. The NAP assumes a linear relationship between the funding allocated to conservation of a taxon and the increased survival probability of that taxon. This relationship is briefly investigated and one variation is suggested that can also be solved using a greedy algorithm.
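
    The flavor of the greedy approach discussed above can be sketched as follows: repeatedly fund the taxon with the best marginal gain in the objective per unit cost until the budget runs out. The objective used below is an additive stand-in, not Weitzman's expected phylogenetic diversity on a tree, and all names are illustrative.

        def greedy_allocate(costs, budget, marginal_gain):
            # costs: dict mapping taxon -> cost of boosting its survival probability.
            # marginal_gain(chosen, taxon): objective increase from adding `taxon`
            # given the taxa already chosen.
            chosen = []
            remaining = dict(costs)
            while remaining:
                affordable = {t: c for t, c in remaining.items() if c <= budget}
                if not affordable:
                    break
                best = max(affordable, key=lambda t: marginal_gain(chosen, t) / affordable[t])
                budget -= remaining.pop(best)
                chosen.append(best)
            return chosen

        # Toy usage with additive, taxon-specific benefits.
        benefits = {"A": 5.0, "B": 3.0, "C": 4.0}
        costs = {"A": 2.0, "B": 1.0, "C": 3.0}
        print(greedy_allocate(costs, budget=4.0, marginal_gain=lambda chosen, t: benefits[t]))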

  2. Optimal Control via Self-Generated Stochasticity

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2011-01-01

    The problem of finding global maxima of functionals has been examined. The mathematical roots of the local-maxima difficulty are the same as those of the much simpler problem of finding the global maximum of a multi-dimensional function. The second difficulty is instability: even if an optimal trajectory is found, there is no guarantee that it is stable. As a result, a fundamentally new approach to optimal control is introduced, based upon two new ideas. The first idea is to represent the functional to be maximized as the limit of a probability density governed by an appropriately selected Liouville equation. The corresponding ordinary differential equations (ODEs) then become stochastic, and the sample of the solution that has the largest value has the highest probability of appearing in an ODE simulation. The main advantages of the stochastic approach are that it is not sensitive to local maxima, the function to be maximized need only be integrable rather than differentiable, and global equality and inequality constraints do not pose significant obstacles. The second idea is to remove possible instability of the optimal solution by equipping the control system with a self-stabilizing device. The proposed methodology can be applied to optimize the performance of NASA spacecraft as well as robots.

  3. SHORT-TERM SOLAR FLARE PREDICTION USING MULTIRESOLUTION PREDICTORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Daren; Huang Xin; Hu Qinghua

    2010-01-20

    Multiresolution predictors of solar flares are constructed by a wavelet transform and a sequential feature extraction method. Three predictors (the maximum horizontal gradient, the length of the neutral line, and the number of singular points) are extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. A maximal overlap discrete wavelet transform is used to decompose the sequence of predictors into four frequency bands. In each band, four sequential features (the maximum, the mean, the standard deviation, and the root mean square) are extracted. The multiresolution predictors in the low-frequency band reflect trends in the evolution of newly emerging fluxes. The multiresolution predictors in the high-frequency band reflect the changing rates in emerging flux regions. The variation of emerging fluxes is decoupled by the wavelet transform into different frequency bands. The information content of these multiresolution predictors is evaluated by the information gain ratio. It is found that the multiresolution predictors in the lowest and highest frequency bands contain the most information. Based on these predictors, a C4.5 decision tree algorithm is used to build the short-term solar flare prediction model. It is found that the performance of the short-term solar flare prediction model based on the multiresolution predictors is greatly improved.

  4. Controlled catalytic and thermal sequential pyrolysis and hydrolysis of mixed polymer waste streams to sequentially recover monomers or other high value products

    DOEpatents

    Evans, Robert J.; Chum, Helena L.

    1994-01-01

    A process of using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents, selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent.

  5. Controlled catalytic and thermal sequential pyrolysis and hydrolysis of mixed polymer waste streams to sequentially recover monomers or other high value products

    DOEpatents

    Evans, Robert J.; Chum, Helena L.

    1994-01-01

    A process of using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents; selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent.

  6. Controlled catalytic and thermal sequential pyrolysis and hydrolysis of mixed polymer waste streams to sequentially recover monomers or other high value products

    DOEpatents

    Evans, Robert J.; Chum, Helena L.

    1993-01-01

    A process of using fast pyrolysis in a carrier gas to convert a plastic waste feedstream having a mixed polymeric composition in a manner such that pyrolysis of a given polymer to its high value monomeric constituent occurs prior to pyrolysis of other plastic components therein comprising: selecting a first temperature program range to cause pyrolysis of said given polymer to its high value monomeric constituent prior to a temperature range that causes pyrolysis of other plastic components; selecting a catalyst and support for treating said feed streams with said catalyst to effect acid or base catalyzed reaction pathways to maximize yield or enhance separation of said high value monomeric constituent in said temperature program range; differentially heating said feed stream at a heat rate within the first temperature program range to provide differential pyrolysis for selective recovery of optimum quantities of the high value monomeric constituent prior to pyrolysis of other plastic components; separating the high value monomeric constituents; selecting a second higher temperature range to cause pyrolysis of a different high value monomeric constituent of said plastic waste and differentially heating the feedstream at the higher temperature program range to cause pyrolysis of the different high value monomeric constituent; and separating the different high value monomeric constituent.

  7. Ethical issues across different fields of forensic science.

    PubMed

    Yadav, Praveen Kumar

    2017-01-01

    Many commentators have acknowledged the fact that the usual courtroom maxim to "tell the truth, the whole truth, and nothing but the truth" is not so easy to apply in practicality. In any given situation, what does the whole truth include? In case, the whole truth includes all the possible alternatives for a given situation, what should a forensic expert witness do when an important question is not asked by the prosecutor? Does the obligation to tell the whole truth mean that all possible, all probable, all reasonably probable, all highly probable, or only the most probable alternatives must be given in response to a question? In this paper, an attempt has been made to review the various ethical issues in different fields of forensic science, forensic psychology, and forensic DNA databases. Some of the ethical issues are common to all fields whereas some are field specific. These ethical issues are mandatory for ensuring high levels of reliability and credibility of forensic scientists.

  8. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function of the form ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  9. Chemical effects in ion mixing of a ternary system (metal-SiO2)

    NASA Technical Reports Server (NTRS)

    Banwell, T.; Nicolet, M.-A.; Sands, T.; Grunthaner, P. J.

    1987-01-01

    The mixing of Ti, Cr, and Ni thin films with SiO2 by low-temperature (-196 to 25 °C) irradiation with 290 keV Xe has been investigated. Comparison of the morphology of the intermixed region and the dose dependences of net metal transport into SiO2 reveals that long-range motion and phase formation probably occur as separate and sequential processes. Kinetic limitations suppress chemical effects in these systems during the initial transport process. Chemical interactions influence the subsequent phase formation.

  10. Betting on Illusory Patterns: Probability Matching in Habitual Gamblers.

    PubMed

    Gaissmaier, Wolfgang; Wilke, Andreas; Scheibehenne, Benjamin; McCanney, Paige; Barrett, H Clark

    2016-03-01

    Why do people gamble? A large body of research suggests that cognitive distortions play an important role in pathological gambling. Many of these distortions are specific cases of a more general misperception of randomness, specifically of an illusory perception of patterns in random sequences. In this article, we provide further evidence for the assumption that gamblers are particularly prone to perceiving illusory patterns. In particular, we compared habitual gamblers to a matched sample of community members with regard to how much they exhibit the choice anomaly 'probability matching'. Probability matching describes the tendency to match response proportions to outcome probabilities when predicting binary outcomes. It leads to a lower expected accuracy than the maximizing strategy of predicting the most likely event on each trial. Previous research has shown that an illusory perception of patterns in random sequences fuels probability matching. So does impulsivity, which is also reported to be higher in gamblers. We therefore hypothesized that gamblers will exhibit more probability matching than non-gamblers, which was confirmed in a controlled laboratory experiment. Additionally, gamblers scored much lower than community members on the cognitive reflection task, which indicates higher impulsivity. This difference could account for the difference in probability matching between the samples. These results suggest that gamblers are more willing to bet impulsively on perceived illusory patterns.

  11. Computer-aided mathematical analysis of probability of intercept for ground-based communication intercept system

    NASA Astrophysics Data System (ADS)

    Park, Sang Chul

    1989-09-01

    We develop a mathematical analysis model to calculate the probability of intercept (POI) for a ground-based communication intercept (COMINT) system. The POI is a measure of the effectiveness of the intercept system. We define the POI as the product of the probability of detection and the probability of coincidence. The probability of detection is a measure of the receiver's capability to detect a signal in the presence of noise. The probability of coincidence is the probability that an intercept system is available, actively listening in the proper frequency band, in the right direction, and at the same time that the signal is received. We investigate the behavior of the POI with respect to the observation time, the separation distance, antenna elevations, the frequency of the signal, and the receiver bandwidths. We observe that the coincidence between the receiver scanning parameters and the signal parameters is the key factor determining the time needed to obtain a given POI. This model can be used to find the optimal parameter combination to maximize the POI in a given scenario. We extend this model to multiple systems. The analysis is conducted on a personal computer to provide portability. The model is also flexible and can easily be implemented in different situations.
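
    To make the definition concrete, the sketch below combines a detection probability and a coincidence probability into a single-look POI and then, under an added independence assumption that is not part of the abstract, accumulates it over repeated scan opportunities; all numbers are placeholders.

        def probability_of_intercept(p_detection, p_coincidence):
            # Single-look POI as defined above: the signal is detectable and the receiver
            # is listening at the right frequency, in the right direction, at the right time.
            return p_detection * p_coincidence

        def poi_over_looks(poi_single_look, n_looks):
            # Cumulative POI over n looks, assuming independence between looks
            # (an illustrative assumption, not part of the original model).
            return 1.0 - (1.0 - poi_single_look) ** n_looks

        single = probability_of_intercept(p_detection=0.8, p_coincidence=0.05)
        print(single, poi_over_looks(single, n_looks=20))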

  12. Universal scheme for finite-probability perfect transfer of arbitrary multispin states through spin chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn

    In combination with the theories of open systems and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of the ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements, taking into account current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.

  13. Role of the evening eastward electric field and the seed perturbations in the sequential occurrence of plasma bubble

    NASA Astrophysics Data System (ADS)

    Abadi, P.; Otsuka, Y.; Shiokawa, K.; Yamamoto, M.; M Buhari, S.; Abdullah, M.

    2017-12-01

    We investigate 3-m ionospheric irregularities and the height variation of the equatorial F-region observed by the Equatorial Atmosphere Radar (EAR) at Kototabang (100.3°E, 0.2°S, dip lat. 10.1°S) in Indonesia and by ionosondes at Chumphon (99.3°E, 10.7°N, dip lat. 3°N) in Thailand and at Bac Lieu (105.7°E, 9.3°N, dip lat. 1.5°N) in Vietnam, during March-April from 2011 to 2014. We aim to clarify the relation between the pre-reversal enhancement (PRE) of the evening eastward electric field and the sequential occurrence of equatorial plasma bubbles (EPBs) in the period of 19-22 LT. In summary, (i) we found that the zonal spacing between consecutive EPBs ranges from less than 100 km up to 800 km with a maximum occurrence around 100-300 km, as shown in Figure 1(a), consistent with previous studies [e.g., Makela et al., 2010]; (ii) the probability of the sequential occurrence of EPBs increases with increasing PRE strength (see Figure 1(b)); and (iii) Figure 1(c) shows that the zonal spacing between consecutive EPBs is less than 300 km for weaker PRE (<30 m/s), whereas the zonal spacing is more varied for stronger PRE (≥30 m/s). Our results indicate that the PRE strength is a prominent factor in the sequential occurrence of EPBs. However, we also consider another factor, namely the zonal structure of the seed perturbation modulated by gravity waves (GWs), and the zonal spacing between consecutive EPBs may match the wavelength of this zonal structure. We particularly attribute result (iii) to the combined effects of the PRE and the seed perturbation on the sequential occurrence of EPBs; that is, we suggest that a weaker PRE can cause the sequential occurrence of EPBs when the zonal structure of the seed perturbation has a shorter wavelength. Further investigation is needed, however, to confirm the periodic seeding mechanism, and we will use a network of GPS receivers in the western part of Southeast Asia to analyze the zonal wavy structure in TEC as a manifestation of the seed perturbations.

  14. Goal-Directed Decision Making with Spiking Neurons.

    PubMed

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors 0270-6474/16/361529-18$15.00/0.

  15. Goal-Directed Decision Making with Spiking Neurons

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636

  16. Reproductive success of kittiwakes and murres in sequential stages of the nesting period: Relationships with diet and oceanography

    NASA Astrophysics Data System (ADS)

    Renner, Heather M.; Drummond, Brie A.; Benson, Anna-Marie; Paredes, Rosana

    2014-11-01

    Reproductive success is one of the most easily-measured and widely studied demographic parameters of colonial nesting seabirds. Nevertheless, factors affecting the sequential stages (egg laying, incubation, chick-rearing) of reproductive success are less understood. We investigated the separate sequential stages of reproductive success in piscivorous black-legged kittiwakes (Rissa tridactyla) and thick-billed murres (Uria lomvia) using a 36-year dataset (1975-2010) on the major Pribilof Islands (St. Paul and St. George), which have recently had contrasting population trajectories. Our objectives were to evaluate how the proportion of successful nests varied among stages, and to quantify factors influencing the probability of nest success at each stage in each island. We modeled the probability of nest success at each stage using General Linear Mixed Models incorporating broad-scale and local climate variables, and diet as covariates as well as other measures of reproduction such as timing of breeding and reproductive output in the previous year and previous stage. For both species we found: (1) Success in previous stages of the breeding cycle and success in the prior year better explained overall success than any environmental variables. Phenology was also an important predictor of laying success for kittiwakes. (2) Fledging success was lower when chick diets contained oceanic fish found farther from the colonies and small invertebrates, rather than coastal fish species. (3) Differences in reproductive variables at St. Paul and St. George islands did not correspond to population trends between the two islands. Our results highlight the potential importance of adult condition and annual survival to kittiwake and murre productivity and ultimately, populations. Adult condition carrying over from the previous year ultimately seems to drive annual breeding success in a cascade effect. Furthermore, condition and survival appear to be important contributors to population dynamics at each island. Therefore, adult condition and survival prior to breeding, and factors that influence these parameters such as foraging conditions in the non-breeding season, may be important datasets for understanding drivers of seabird demography at the Pribilof Islands.

  17. Effects of hydraulic resistance circuit training on physical fitness components of potential relevance to +Gz tolerance.

    PubMed

    Jacobs, I; Bell, D G; Pope, J; Lee, W

    1987-08-01

    Recent studies carried out in the United States and Sweden have demonstrated that strength training can improve +Gz acceleration tolerance. Based on these findings, the Canadian Forces have introduced a training program for aircrew of high performance aircraft. This report describes the changes in physical fitness components considered relevant to +Gz tolerance after 12 weeks of training with this program. Prior to beginning training, 45 military personnel were tested, but only 20 completed a minimum of 24 training sessions. The following variables were measured in these 20 subjects before and after training: maximal strength of several large muscle groups during isokinetic contractions, maximal aerobic power and an endurance fitness index, maximal anaerobic power, anthropometric characteristics, and maximal expiratory pressure generated during exhalation. Training involved hydraulic resistance circuit training 2-4 times/week. The circuit consisted of 3 consecutive sets at each of 8 stations using Hydra-Gym equipment. The exercise:rest ratio was 20:40 s for the initial 4 training weeks and was then changed to 30:50. After training, the changes in anthropometric measurements suggested that lean body mass was increased. Small, but significant, increases were also measured in muscle strength during bench press, biceps curls, squats, knee extension, and knee flexion. Neither maximal anaerobic power (i.e. muscular endurance) nor maximal expiratory pressure was changed after the training. Indices of endurance fitness were also increased in the present study. The relatively small increases in strength are probably due to the design of the exercise:rest ratio, which resulted in improved strength and aerobic fitness. (ABSTRACT TRUNCATED AT 250 WORDS)

  18. Comparison of Periodized and Non-Periodized Resistance Training on Maximal Strength: A Meta-Analysis.

    PubMed

    Williams, Tyler D; Tolusso, Danilo V; Fedewa, Michael V; Esco, Michael R

    2017-10-01

    Periodization is a logical method of organizing training into sequential phases and cyclical time periods in order to increase the potential for achieving specific performance goals while minimizing the potential for overtraining. Periodized resistance training plans are proposed to be superior to non-periodized training plans for enhancing maximal strength. The primary aim of this study was to examine the previous literature comparing periodized resistance training plans to non-periodized resistance training plans and determine a quantitative estimate of effect on maximal strength. All studies included in the meta-analysis met the following inclusion criteria: (1) peer-reviewed publication; (2) published in English; (3) comparison of a periodized resistance training group to a non-periodized resistance training group; (4) maximal strength measured by 1-repetition maximum (1RM) squat, bench press, or leg press. Data were extracted and independently coded by two authors. Random-effects models were used to aggregate a mean effect size (ES), 95% confidence intervals (CIs) and potential moderators. The cumulative results of 81 effects gathered from 18 studies published between 1988 and 2015 indicated that the magnitude of improvement in 1RM following periodized resistance training was greater than non-periodized resistance training (ES = 0.43, 95% CI 0.27-0.58; P < 0.001). Periodization model (β = 0.51; P = 0.0010), training status (β = -0.59; P = 0.0305), study length (β = 0.03; P = 0.0067), and training frequency (β = 0.46; P = 0.0123) were associated with a change in 1RM. These results indicate that undulating programs were more favorable for strength gains. Improvements in 1RM were greater among untrained participants. Additionally, higher training frequency and longer study length were associated with larger improvements in 1RM. These results suggest that periodized resistance training plans have a moderate effect on 1RM compared to non-periodized training plans. Variation in training stimuli appears to be vital for increasing maximal strength, and longer periods of higher training frequency may be preferred.
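    The aggregation step described in this record can be illustrated with a standard random-effects calculation. Below is a minimal DerSimonian-Laird sketch in Python; the effect sizes and variances passed in at the end are hypothetical illustration values, not data extracted from the meta-analysis above.

        import numpy as np

        def random_effects_summary(effects, variances):
            """DerSimonian-Laird random-effects aggregation of per-study effect sizes."""
            effects = np.asarray(effects, dtype=float)
            variances = np.asarray(variances, dtype=float)
            w = 1.0 / variances                                  # fixed-effect weights
            theta_fixed = np.sum(w * effects) / np.sum(w)
            q = np.sum(w * (effects - theta_fixed) ** 2)         # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance
            w_star = 1.0 / (variances + tau2)                    # random-effects weights
            theta = np.sum(w_star * effects) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            return theta, (theta - 1.96 * se, theta + 1.96 * se)

        # Hypothetical per-study standardized mean differences and their variances
        es, ci = random_effects_summary([0.30, 0.50, 0.60, 0.20], [0.04, 0.09, 0.05, 0.06])
        print(es, ci)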

  19. A new methodology to integrate planetary quarantine requirements into mission planning, with application to a Jupiter orbiter

    NASA Technical Reports Server (NTRS)

    Howard, R. A.; North, D. W.; Pezier, J. P.

    1975-01-01

    A new methodology is proposed for integrating planetary quarantine objectives into space exploration planning. This methodology is designed to remedy the major weaknesses inherent in the current formulation of planetary quarantine requirements. Application of the methodology is illustrated by a tutorial analysis of a proposed Jupiter Orbiter mission. The proposed methodology reformulates planetary quarantine planning as a sequential decision problem. Rather than concentrating on a nominal plan, all decision alternatives and possible consequences are laid out in a decision tree. Probabilities and values are associated with the outcomes, including the outcome of contamination. The process of allocating probabilities, which could not be made perfectly unambiguous and systematic, is replaced by decomposition and optimization techniques based on principles of dynamic programming. Thus, the new methodology provides logical integration of all available information and allows selection of the best strategy consistent with quarantine and other space exploration goals.
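    The decision-tree formulation described above can be made concrete with a small backward-induction sketch. The tree, probabilities and values below are hypothetical placeholders rather than figures from the Jupiter Orbiter analysis; the point is only that decision nodes take the maximum over alternatives while chance nodes take probability-weighted expectations.

        # Backward induction over a decision tree: decision nodes pick the best branch,
        # chance nodes take expectations. All numbers are hypothetical.
        def solve(node):
            if node["type"] == "outcome":
                return node["value"]
            if node["type"] == "chance":
                return sum(p * solve(child) for p, child in node["branches"])
            return max(solve(child) for child in node["branches"])   # decision node

        tree = {
            "type": "decision",
            "branches": [
                {"type": "chance", "branches": [                      # e.g. sterilize the spacecraft
                    (0.999, {"type": "outcome", "value": 80.0}),
                    (0.001, {"type": "outcome", "value": -1000.0}),   # contamination
                ]},
                {"type": "chance", "branches": [                      # e.g. no sterilization
                    (0.99, {"type": "outcome", "value": 100.0}),
                    (0.01, {"type": "outcome", "value": -1000.0}),
                ]},
            ],
        }
        print(solve(tree))    # expected value of the best strategy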

  20. Stimulation of abdominal and upper thoracic muscles with surface electrodes for respiration and cough: Acute studies in adult canines.

    PubMed

    Walter, James S; Posluszny, Joseph; Dieter, Raymond; Dieter, Robert S; Sayers, Scott; Iamsakul, Kiratipath; Staunton, Christine; Thomas, Donald; Rabbat, Mark; Singh, Sanjay

    2018-05-01

    To optimize maximal respiratory responses with surface stimulation over abdominal and upper thorax muscles using a 12-Channel Neuroprosthetic Platform. Following instrumentation, six anesthetized adult canines were hyperventilated sufficiently to produce respiratory apnea. Six abdominal tests optimized electrode arrangements and stimulation parameters using bipolar sets of 4.5 cm square electrodes. Tests in the upper thorax optimized electrode locations, and forelimb movement was limited to slight-to-moderate. During combined muscle stimulation tests, upper thoracic stimulation was followed immediately by abdominal stimulation. Finally, a model of glottal closure for cough was conducted with the goal of increased peak expiratory flow. Optimized stimulation of abdominal muscles included three sets of bilateral surface electrodes located 4.5 cm dorsal to the lateral line and from the 8th intercostal space to caudal to the 13th rib, 80 or 100 mA current, and 50 Hz stimulation frequency. The maximal expired volume was 343 ± 23 ml (n=3). Optimized upper thorax stimulation included a single bilateral set of electrodes located over the 2nd interspace, 60 to 80 mA, and 50 Hz. The maximal inspired volume was 304 ± 54 ml (n=4). Sequential stimulation of the two muscles increased the volume to 600 ± 152 ml (n=2), and the glottal closure maneuver increased the flow. Studies in an adult canine model identified optimal surface stimulation methods for upper thorax and abdominal muscles to induce sufficient volumes for ventilation and cough. Further study with this neuroprosthetic platform is warranted.

  1. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    PubMed

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted to an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). A total of 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the 70-year simulation. The incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time for patients with AS. Results obtained from the simulation are plausible.
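    A minimal sketch of the discrete event simulation idea follows, assuming a drastically simplified treatment pathway; the drug names, response probabilities, durations, costs and utilities are invented for illustration and do not reproduce the published AS model.

        import random

        DRUGS = [  # (name, response probability, mean years on drug, annual cost, utility while responding)
            ("nsaid_1", 0.35, 2.0, 500.0, 0.70),
            ("nsaid_2", 0.30, 2.0, 500.0, 0.68),
            ("tnfi_1", 0.60, 4.0, 12000.0, 0.78),
        ]
        BASE_UTILITY = 0.55   # utility with no effective drug
        HORIZON = 20.0        # years simulated per patient

        def simulate_patient(rng):
            t, qaly, cost = 0.0, 0.0, 0.0
            for _name, p_resp, mean_dur, annual_cost, utility in DRUGS:
                if t >= HORIZON:
                    break
                if rng.random() < p_resp:                        # drug works for this patient
                    dur = min(rng.expovariate(1.0 / mean_dur), HORIZON - t)
                    qaly += utility * dur
                    cost += annual_cost * dur
                    t += dur
            qaly += BASE_UTILITY * (HORIZON - t)                 # remaining time without an effective drug
            return qaly, cost

        rng = random.Random(1)
        results = [simulate_patient(rng) for _ in range(10000)]
        print(sum(q for q, _ in results) / len(results),         # mean QALYs per patient
              sum(c for _, c in results) / len(results))         # mean cost per patient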

  2. Historical demography of common carp estimated from individuals collected from various parts of the world using the pairwise sequentially Markovian coalescent approach.

    PubMed

    Yuan, Zihao; Huang, Wei; Liu, Shikai; Xu, Peng; Dunham, Rex; Liu, Zhanjiang

    2018-04-01

    The inference of the historical demography of a species is helpful for understanding species differentiation and population dynamics. However, such inference has previously been difficult due to the lack of proper analytical methods and of available genetic data. A recently developed method called the Pairwise Sequentially Markovian Coalescent (PSMC) offers the capability to estimate the trajectories of historical populations over considerable time periods using genomic sequences. In this study, we applied this approach to infer the historical demography of the common carp using samples collected from Europe, Asia and the Americas. Comparison between Asian and European common carp populations showed that the last glacial period starting 100 ka BP likely caused a significant decline in the population size of the wild common carp in Europe, while it did not have much of an impact on its counterparts in Asia. This was probably caused by differences in glacial activity in East Asia and Europe, suggesting a separation of the European and Asian clades before the last glacial maximum. The North American clade, which is an invasive population, shared a demographic history similar to that of the European populations, consistent with the idea that the North American common carp probably had European ancestral origins. Our analysis represents the first reconstruction of the historical population demography of the common carp, which is important to elucidate the separation of the European and Asian common carp clades during the Quaternary glaciation, as well as the dispersal of common carp across the world.

  3. Optimal Deployment of Unmanned Aerial Vehicles for Border Surveillance

    DTIC Science & Technology

    2014-06-01

    Border surveillance is an important concern for most nations wanting to detect and intercept intruders that are trying to trespass a border. These intruders can include terrorists, drug traffickers, smugglers, illegal immigrants... routes, altitudes, and speeds in order to maximize the probability of detecting intruders trying to trespass a given border. These models will...

  4. NTCP reduction for advanced head and neck cancer patients using proton therapy for complete or sequential boost treatment versus photon therapy.

    PubMed

    Jakobi, Annika; Stützer, Kristin; Bandurska-Luque, Anna; Löck, Steffen; Haase, Robert; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniel; Perez, Damien; Lühr, Armin; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Perrin, Rosalind; Richter, Christian

    2015-01-01

    To determine, by treatment plan comparison, the differences in toxicity risk reduction for patients with head and neck squamous cell carcinoma (HNSCC) from proton therapy used either for the complete treatment or for the sequential boost only. For 45 HNSCC patients, intensity-modulated photon (IMXT) and proton (IMPT) treatment plans were created, including a dose escalation via simultaneous integrated boost with a one-step adaptation strategy after 25 fractions for sequential boost treatment. Dose accumulation was performed for pure IMXT treatment, pure IMPT treatment and for a mixed-modality treatment with IMXT for the elective target followed by a sequential boost with IMPT. Treatment plan evaluation was based on modern normal tissue complication probability (NTCP) models for mucositis, xerostomia, aspiration, dysphagia, larynx edema and trismus. Individual NTCP differences between IMXT and IMPT (ΔNTCP_IMXT-IMPT) as well as between IMXT and the mixed-modality treatment (ΔNTCP_IMXT-Mix) were calculated. Target coverage was similar in all three scenarios. NTCP values could be reduced in all patients using IMPT treatment. However, ΔNTCP_IMXT-Mix values were a factor of 2-10 smaller than ΔNTCP_IMXT-IMPT. Assuming a threshold of ≥ 10% NTCP reduction in xerostomia or dysphagia risk as the criterion for patient assignment to IMPT, less than 15% of the patients would be selected for a proton boost, while about 50% would be assigned to pure IMPT treatment. For mucositis and trismus, ΔNTCP ≥ 10% occurred in six and four patients, respectively, with pure IMPT treatment, while no such difference was identified with the proton boost. The use of IMPT generally reduces the expected toxicity risk while maintaining good tumor coverage in the examined HNSCC patients. A mixed-modality treatment using IMPT solely for a sequential boost reduces the risk by 10% only in rare cases. In contrast, pure IMPT treatment may be reasonable for about half of the examined patient cohort considering the toxicities xerostomia and dysphagia, if a feasible strategy for patient anatomy changes is implemented.

  5. Altered Fermentation Performances, Growth, and Metabolic Footprints Reveal Competition for Nutrients between Yeast Species Inoculated in Synthetic Grape Juice-Like Medium.

    PubMed

    Rollero, Stephanie; Bloem, Audrey; Ortiz-Julien, Anne; Camarasa, Carole; Divol, Benoit

    2018-01-01

    The sequential inoculation of non-Saccharomyces yeasts and Saccharomyces cerevisiae in grape juice is becoming an increasingly popular practice to diversify wine styles and/or to obtain more complex wines with a peculiar microbial footprint. One of the main interactions is competition for nutrients, especially nitrogen sources, that directly impacts not only fermentation performance but also the production of aroma compounds. In order to better understand the interactions taking place between non-Saccharomyces yeasts and S. cerevisiae during alcoholic fermentation, sequential inoculations of three yeast species (Pichia burtonii, Kluyveromyces marxianus, Zygoascus meyerae) with S. cerevisiae were performed individually in a synthetic medium. Different species-dependent interactions were evidenced. Indeed, the three sequential inoculations resulted in three different behaviors in terms of growth. P. burtonii and Z. meyerae declined after the inoculation of S. cerevisiae, which promptly outcompeted the other two species. However, while the presence of P. burtonii did not impact the fermentation kinetics of S. cerevisiae, that of Z. meyerae rendered the overall kinetics very slow and with no clear exponential phase. K. marxianus and S. cerevisiae both declined and became undetectable before fermentation completion. The results also demonstrated that yeasts differed in their preference for nitrogen sources. Unlike Z. meyerae and P. burtonii, K. marxianus appeared to be a competitor for S. cerevisiae (as evidenced by the uptake of ammonium and amino acids), thereby explaining the resulting stuck fermentation. Nevertheless, the results suggested that competition for other nutrients (probably vitamins) occurred during the sequential inoculation of Z. meyerae with S. cerevisiae. The metabolic footprint of the non-Saccharomyces yeasts determined after 48 h of fermentation remained until the end of fermentation and combined with that of S. cerevisiae. For instance, fermentations performed with K. marxianus were characterized by the formation of phenylethanol and phenylethyl acetate, while those performed with P. burtonii or Z. meyerae displayed higher production of isoamyl alcohol and ethyl esters. When considering sequential inoculation of yeasts, the nutritional requirements of the yeasts used should be carefully considered and adjusted accordingly. Finally, our chemical data suggest that the organoleptic properties of the wine are altered in a species-specific manner.

  6. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
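    At its core, the scheduler-synthesis objective above amounts to computing the maximal probability of reaching a target state in a Markov decision process. The sketch below shows that underlying computation with plain value iteration on a hypothetical toy transition system; it is not Symbolic PathFinder and omits the symbolic-execution front end.

        def max_reach_probability(transitions, target, states, iters=1000, tol=1e-12):
            """Value iteration for the maximal probability of reaching `target`.
            transitions[s][a] is a list of (probability, next_state) pairs."""
            v = {s: (1.0 if s == target else 0.0) for s in states}
            for _ in range(iters):
                delta = 0.0
                for s in states:
                    if s == target or s not in transitions:
                        continue                                  # absorbing or dead-end state
                    best = max(sum(p * v[nxt] for p, nxt in branch)
                               for branch in transitions[s].values())
                    delta = max(delta, abs(best - v[s]))
                    v[s] = best
                if delta < tol:
                    break
            return v

        trans = {
            "s0": {"a": [(0.5, "s1"), (0.5, "fail")], "b": [(1.0, "s1")]},
            "s1": {"a": [(0.8, "goal"), (0.2, "fail")]},
        }
        print(max_reach_probability(trans, "goal", ["s0", "s1", "goal", "fail"])["s0"])   # 0.8, via action "b"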

  7. Recyclable amplification for single-photon entanglement from photon loss and decoherence

    NASA Astrophysics Data System (ADS)

    Zhou, Lan; Chen, Ling-Quan; Zhong, Wei; Sheng, Yu-Bo

    2018-01-01

    We put forward a highly efficient recyclable single-photon assisted amplification protocol, which can protect single-photon entanglement (SPE) from photon loss and decoherence. Making use of quantum nondemolition detection gates constructed with the help of cross-Kerr nonlinearity, our protocol has some attractive advantages. First, the parties can recover less-entangled SPE to be maximally entangled SPE, and reduce photon loss simultaneously. Second, if the protocol fails, the parties can repeat the protocol to reuse some discarded items, which can increase the success probability. Third, when the protocol is successful, they can similarly repeat the protocol to further increase the fidelity of the SPE. Thereby, our protocol provides a possible way to obtain high entanglement, high fidelity and high success probability simultaneously. In particular, our protocol shows higher success probability in the practical high photon loss channel. Based on the above features, our amplification protocol has potential for future application in long-distance quantum communication.

  8. Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.

    PubMed

    Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo

    2017-10-01

    This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses a learning automaton scheme to generate an action probability distribution based on his/her private information in order to maximize his/her own average utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the average utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
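    As one concrete illustration of a learning automaton, the sketch below implements a generic linear reward-inaction update for a single player choosing between two actions; the payoffs are hypothetical, and the paper's constrained energy-trading update is more elaborate than this.

        import random

        def lri_update(probs, action, reward, step=0.05):
            """Linear reward-inaction: shift probability mass toward `action`
            in proportion to the normalized reward in [0, 1]."""
            return [p + step * reward * ((1.0 if i == action else 0.0) - p)
                    for i, p in enumerate(probs)]

        def sample(probs, rng):
            r, acc = rng.random(), 0.0
            for i, p in enumerate(probs):
                acc += p
                if r <= acc:
                    return i
            return len(probs) - 1

        rng = random.Random(0)
        probs = [0.5, 0.5]                   # two candidate trading quantities
        payoff = [0.3, 0.8]                  # hypothetical expected utilities
        for _ in range(5000):
            a = sample(probs, rng)
            reward = min(1.0, max(0.0, payoff[a] + rng.uniform(-0.1, 0.1)))   # noisy feedback
            probs = lri_update(probs, a, reward)
        print(probs)                         # typically converges toward the higher-payoff action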

  9. Exploration of multiphoton entangled states by using weak nonlinearities

    PubMed Central

    He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting

    2016-01-01

    We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons. PMID:26751044

  10. Skeletal muscle work efficiency with age: the role of non-contractile processes.

    PubMed

    Layec, Gwenael; Hart, Corey R; Trinity, Joel D; Le Fur, Yann; Jeong, Eun-Kee; Richardson, Russell S

    2015-02-01

    Although skeletal muscle work efficiency probably plays a key role in limiting mobility of the elderly, the physiological mechanisms responsible for this diminished function remain incompletely understood. Thus, in the quadriceps of young (n=9) and old (n=10) subjects, we measured the cost of muscle contraction (ATP cost) with ³¹P-magnetic resonance spectroscopy (³¹P-MRS) during (i) maximal intermittent contractions to elicit a metabolic demand from both cross-bridge cycling and ion pumping and (ii) a continuous maximal contraction to predominantly tax cross-bridge cycling. The ATP cost of the intermittent contractions was significantly greater in the old (0.30±0.22 mM·min⁻¹·N·m⁻¹) compared with the young (0.13±0.03 mM·min⁻¹·N·m⁻¹, P<0.05). In contrast, at the end of the continuous contraction protocol, the ATP cost in the old (0.10±0.07 mM·min⁻¹·N·m⁻¹) was not different from the young (0.06±0.02 mM·min⁻¹·N·m⁻¹, P=0.2). In addition, the ATP cost of the intermittent contractions correlated significantly with the single leg peak power of the knee-extensors assessed during incremental dynamic exercise (r=-0.55; P<0.05). Overall, this study reveals an age-related increase in the ATP cost of contraction, probably mediated by an excessive energy demand from ion pumping, which probably contributes to both the decline in muscle efficiency and functional capacity associated with aging.

  11. Rotorcraft system identification techniques for handling qualities and stability and control evaluation

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.

    1978-01-01

    An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described that can be used to design control inputs that maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state variable models from the flight data, and consequently computing related stability and control design models.
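    The parameter-identification step can be illustrated with an ordinary least-squares fit of a first-order model; the sketch below uses a hypothetical scalar system rather than a rotorcraft model.

        import numpy as np

        # Identify a and b in x[k+1] = a*x[k] + b*u[k] from simulated input-output data.
        rng = np.random.default_rng(0)
        a_true, b_true, n = 0.95, 0.4, 400
        u = rng.normal(0.0, 1.0, n)                  # informative (persistently exciting) input
        x = np.zeros(n + 1)
        for k in range(n):
            x[k + 1] = a_true * x[k] + b_true * u[k] + rng.normal(0.0, 0.05)

        phi = np.column_stack([x[:-1], u])           # regressor matrix
        theta, *_ = np.linalg.lstsq(phi, x[1:], rcond=None)
        print(theta)                                 # close to (a_true, b_true)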

  12. Steps in the design, development and formative evaluation of obesity prevention-related behavior change trials.

    PubMed

    Baranowski, Tom; Cerin, Ester; Baranowski, Janice

    2009-01-21

    Obesity prevention interventions through dietary and physical activity change have generally not been effective. Limitations on possible program effectiveness are herein identified at every step in the mediating variable model, a generic conceptual framework for understanding how interventions may promote behavior change. To minimize these problems, and thereby enhance likely intervention effectiveness, four sequential types of formative studies are proposed: targeted behavior validation, targeted mediator validation, intervention procedure validation, and pilot feasibility intervention. Implementing these studies would establish the relationships at each step in the mediating variable model, thereby maximizing the likelihood that an intervention would work and its effects would be detected. Building consensus among researchers, funding agencies, and journal editors on distinct intervention development studies should avoid identified limitations and move the field forward.

  13. Steps in the design, development and formative evaluation of obesity prevention-related behavior change trials

    PubMed Central

    Baranowski, Tom; Cerin, Ester; Baranowski, Janice

    2009-01-01

    Obesity prevention interventions through dietary and physical activity change have generally not been effective. Limitations on possible program effectiveness are herein identified at every step in the mediating variable model, a generic conceptual framework for understanding how interventions may promote behavior change. To minimize these problems, and thereby enhance likely intervention effectiveness, four sequential types of formative studies are proposed: targeted behavior validation, targeted mediator validation, intervention procedure validation, and pilot feasibility intervention. Implementing these studies would establish the relationships at each step in the mediating variable model, thereby maximizing the likelihood that an intervention would work and its effects would be detected. Building consensus among researchers, funding agencies, and journal editors on distinct intervention development studies should avoid identified limitations and move the field forward. PMID:19159476

  14. Saccade selection when reward probability is dynamically manipulated using Markov chains

    PubMed Central

    Lovejoy, Lee P.; Krauzlis, Richard J.

    2012-01-01

    Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200–600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection. PMID:18330552
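    A minimal simulation of this kind of task is sketched below, assuming a hypothetical four-state transition matrix: the rewarded stimulus is assigned by a Markov chain, and an observer who exploits the chain by always choosing the most likely next rewarded stimulus earns a higher hit rate than one who probability matches.

        import random

        P = [  # P[i][j] = probability that stimulus j is rewarded next, given i was rewarded
            [0.60, 0.20, 0.10, 0.10],
            [0.10, 0.60, 0.20, 0.10],
            [0.10, 0.10, 0.60, 0.20],
            [0.20, 0.10, 0.10, 0.60],
        ]

        def run(strategy, trials=20000, seed=0):
            rng, prev, hits = random.Random(seed), 0, 0
            for _ in range(trials):
                rewarded = rng.choices(range(4), weights=P[prev])[0]
                if strategy == "maximize":
                    choice = max(range(4), key=lambda j: P[prev][j])
                else:                                        # probability matching
                    choice = rng.choices(range(4), weights=P[prev])[0]
                hits += (choice == rewarded)
                prev = rewarded
            return hits / trials

        print(run("maximize"), run("match"))   # maximizing yields the higher hit rate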

  15. Saccade selection when reward probability is dynamically manipulated using Markov chains.

    PubMed

    Nummela, Samuel U; Lovejoy, Lee P; Krauzlis, Richard J

    2008-05-01

    Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200-600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection.

  16. A monogamy-of-entanglement game with applications to device-independent quantum cryptography

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Fehr, Serge; Kaniewski, Jędrzej; Wehner, Stephanie

    2013-10-01

    We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement. We explore several applications of this result. Firstly, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device independent. Secondly, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marleau, Peter; Monterial, Mateusz; Clarke, Shaun

    A Bayesian approach is proposed for pulse shape discrimination of photons and neutrons in liquid organic scintillators. Instead of drawing a decision boundary, each pulse is assigned a photon or neutron confidence probability. In addition, this allows for photon and neutron classification on an event-by-event basis. The sum of those confidence probabilities is used to estimate the number of photon and neutron instances in the data. An iterative scheme, similar to an expectation-maximization algorithm for Gaussian mixtures, is used to infer the ratio of photons to neutrons in each measurement. Therefore, the probability space adapts to data with varying photon-to-neutron ratios. A time-correlated measurement of Am–Be and separate measurements of 137Cs, 60Co and 232Th photon sources were used to construct libraries of neutrons and photons. These libraries were then used to produce synthetic data sets with varying ratios of photons to neutrons. The probability-weighted method we implemented maintained a neutron acceptance rate of up to 90% for photon-to-neutron ratios up to 2000 and performed 9% better than the decision boundary approach. Furthermore, the iterative approach appropriately changed the probability space with an increasing number of photons, which kept the neutron population estimate from unrealistically increasing.
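    The iterative scheme resembles expectation-maximization for a two-component mixture. The sketch below runs that generic EM step on a one-dimensional pulse-shape parameter with simulated data; the distributions and numbers are hypothetical, and the published method operates on measured pulse libraries rather than Gaussian toys.

        import numpy as np

        def gauss(x, m, s):
            return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

        def em_two_component(x, iters=200):
            """EM for a two-component Gaussian mixture; returns per-event neutron
            probabilities and the estimated neutron fraction."""
            pi = 0.5                                           # neutron fraction
            mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
            sd = np.array([x.std(), x.std()])
            for _ in range(iters):
                w_n = pi * gauss(x, mu[1], sd[1])              # E-step: responsibilities
                w_g = (1.0 - pi) * gauss(x, mu[0], sd[0])
                r = w_n / (w_n + w_g)
                pi = r.mean()                                  # M-step: update parameters
                mu = np.array([np.average(x, weights=1.0 - r), np.average(x, weights=r)])
                sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=1.0 - r)),
                               np.sqrt(np.average((x - mu[1]) ** 2, weights=r))])
            return r, pi

        rng = np.random.default_rng(0)
        psd = np.concatenate([rng.normal(0.10, 0.02, 5000),    # photon-like tail-to-total ratios
                              rng.normal(0.25, 0.03, 500)])    # neutron-like tail-to-total ratios
        resp, frac = em_two_component(psd)
        print(frac * len(psd))                                 # estimated neutron count, close to 500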

  18. If Only my Leader Would just Do Something! Passive Leadership Undermines Employee Well-being Through Role Stressors and Psychological Resource Depletion.

    PubMed

    Barling, Julian; Frone, Michael R

    2017-08-01

    The goal of this study was to develop and test a sequential mediational model explaining the negative relationship of passive leadership to employee well-being. Based on role stress theory, we posit that passive leadership will predict higher levels of role ambiguity, role conflict and role overload. Invoking Conservation of Resources theory, we further hypothesize that these role stressors will indirectly and negatively influence two aspects of employee well-being, namely overall mental health and overall work attitude, through psychological work fatigue. Using a probability sample of 2467 US workers, structural equation modelling supported the model by showing that role stressors and psychological work fatigue partially mediated the negative relationship between passive leadership and both aspects of employee well-being. The hypothesized, sequential indirect relationships explained 47.9% of the overall relationship between passive leadership and mental health and 26.6% of the overall relationship between passive leadership and overall work attitude. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Teaching the principles of statistical dynamics

    PubMed Central

    Ghosh, Kingshuk; Dill, Ken A.; Inamdar, Mandar M.; Seitaridou, Effrosyni; Phillips, Rob

    2012-01-01

    We describe a simple framework for teaching the principles that underlie the dynamical laws of transport: Fick’s law of diffusion, Fourier’s law of heat flow, the Newtonian viscosity law, and the mass-action laws of chemical kinetics. In analogy with the way that the maximization of entropy over microstates leads to the Boltzmann distribution and predictions about equilibria, maximizing a quantity that E. T. Jaynes called “caliber” over all the possible microtrajectories leads to these dynamical laws. The principle of maximum caliber also leads to dynamical distribution functions that characterize the relative probabilities of different microtrajectories. A great source of recent interest in statistical dynamics has resulted from a new generation of single-particle and single-molecule experiments that make it possible to observe dynamics one trajectory at a time. PMID:23585693

  20. Teaching the principles of statistical dynamics.

    PubMed

    Ghosh, Kingshuk; Dill, Ken A; Inamdar, Mandar M; Seitaridou, Effrosyni; Phillips, Rob

    2006-02-01

    We describe a simple framework for teaching the principles that underlie the dynamical laws of transport: Fick's law of diffusion, Fourier's law of heat flow, the Newtonian viscosity law, and the mass-action laws of chemical kinetics. In analogy with the way that the maximization of entropy over microstates leads to the Boltzmann distribution and predictions about equilibria, maximizing a quantity that E. T. Jaynes called "caliber" over all the possible microtrajectories leads to these dynamical laws. The principle of maximum caliber also leads to dynamical distribution functions that characterize the relative probabilities of different microtrajectories. A great source of recent interest in statistical dynamics has resulted from a new generation of single-particle and single-molecule experiments that make it possible to observe dynamics one trajectory at a time.
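    The entropy-to-Boltzmann step invoked in both records can be written out compactly. The following is the standard textbook derivation (not text from the article), with the caliber analogue stated in one line:

        \documentclass{article}
        \usepackage{amsmath}
        \begin{document}
        Maximize $S = -\sum_i p_i \ln p_i$ subject to $\sum_i p_i = 1$ and
        $\sum_i p_i E_i = \langle E \rangle$. Introducing Lagrange multipliers
        $\alpha$ and $\beta$,
        \begin{align}
          \frac{\partial}{\partial p_i}\Big[-\sum_j p_j \ln p_j
            - \alpha\Big(\sum_j p_j - 1\Big)
            - \beta\Big(\sum_j p_j E_j - \langle E \rangle\Big)\Big] &= 0, \\
          -\ln p_i - 1 - \alpha - \beta E_i &= 0
          \quad\Rightarrow\quad
          p_i = \frac{e^{-\beta E_i}}{\sum_j e^{-\beta E_j}}.
        \end{align}
        The caliber principle replaces microstates $i$ by microtrajectories
        $\Gamma$ and the energy constraint by path-averaged dynamical quantities
        $A_k(\Gamma)$, giving $p_\Gamma \propto e^{-\sum_k \lambda_k A_k(\Gamma)}$.
        \end{document}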

  1. Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns.

    PubMed

    Lezon, Timothy R; Banavar, Jayanth R; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V

    2006-12-12

    We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems.
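    For continuous expression data, one common concrete realization of a pairwise maximum-entropy model is Gaussian, in which the couplings are read off from the inverse covariance (precision) matrix. The sketch below illustrates that idea on synthetic data; it is not the exact pipeline of the study above, which includes additional normalization and the extension to higher-order interactions.

        import numpy as np

        def pairwise_couplings(expression):
            """Gaussian maximum-entropy couplings from samples of shape (n_samples, n_genes)."""
            cov = np.cov(expression, rowvar=False)
            # a small ridge keeps the inversion stable when samples are scarce
            precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
            couplings = -precision
            np.fill_diagonal(couplings, 0.0)
            return couplings

        rng = np.random.default_rng(0)
        data = rng.normal(size=(200, 6))             # synthetic "expression" profiles
        data[:, 1] += 0.8 * data[:, 0]               # genes 0 and 1 co-vary
        print(np.round(pairwise_couplings(data), 2)) # strongest coupling links genes 0 and 1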

  2. Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget

    NASA Astrophysics Data System (ADS)

    Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong

    To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive method for adjusting the processing quality of multiple image stream tasks whose execution times vary widely. This adjustment method completes the worst-case executions of the tasks within a given energy budget, and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value when tasks can execute at an arbitrary processing quality, and a near-maximum value when tasks can execute only at a finite number of quality levels. Our evaluation on a prototype system shows that the proposed method achieves reward values up to 57% larger than the previous method.

  3. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    NASA Astrophysics Data System (ADS)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm that initially assumes a substantial amount of artificial measurement noise is present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC² method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
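    The tempering idea can be sketched in a few lines: a bootstrap particle filter estimates the log-likelihood with an artificial measurement variance added, and that variance is reduced over a schedule. The code below is a toy linear-Gaussian illustration only, not the full adaptive SMC sampler of the paper.

        import numpy as np

        def pf_loglik(y, theta, meas_var, n_particles=500, seed=0):
            """Bootstrap particle filter estimate of log p(y | theta) with an
            artificial measurement variance `meas_var` added to the model."""
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, 1.0, n_particles)
            loglik = 0.0
            for yt in y:
                x = theta * x + rng.normal(0.0, 0.5, n_particles)           # propagate particles
                w = np.exp(-0.5 * (yt - x) ** 2 / meas_var)
                if w.sum() == 0.0:                                          # guard against underflow
                    w = np.ones(n_particles)
                loglik += np.log(w.mean() / np.sqrt(2.0 * np.pi * meas_var))
                x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
            return loglik

        rng = np.random.default_rng(1)
        state, y = 0.0, []
        for _ in range(100):                         # nearly noise-free observations, true theta = 0.9
            state = 0.9 * state + rng.normal(0.0, 0.5)
            y.append(state + rng.normal(0.0, 1e-3))

        for meas_var in (1.0, 0.3, 0.1, 0.03, 0.01): # tempering schedule toward the original problem
            lls = {th: pf_loglik(y, th, meas_var) for th in (0.5, 0.7, 0.9)}
            print(meas_var, max(lls, key=lls.get))   # most likely theta at this noise level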

  4. Biochemical transport modeling, estimation, and detection in realistic environments

    NASA Astrophysics Data System (ADS)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detecting whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g. release time, intensity and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
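    A grid-based Bayesian localization step of the kind described above can be sketched as follows, with a deliberately crude isotropic decay model standing in for the reflected-diffusion transport simulation; sensor positions, noise level and source location are hypothetical.

        import numpy as np

        def predicted_signal(src, sensors, q=1.0):
            """Hypothetical isotropic decay model for the expected sensor reading."""
            d2 = np.sum((src - sensors) ** 2, axis=-1)
            return q / (1.0 + d2)

        sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
        true_src = np.array([2.5, 1.0])
        rng = np.random.default_rng(0)
        obs = predicted_signal(true_src, sensors) + rng.normal(0.0, 0.01, len(sensors))

        xs = np.linspace(0.0, 4.0, 81)
        grid = np.array([[x, y] for x in xs for y in xs])                  # candidate source locations
        pred = predicted_signal(grid[:, None, :], sensors[None, :, :])
        log_post = -0.5 * np.sum((obs - pred) ** 2, axis=1) / 0.01 ** 2    # Gaussian noise, flat prior
        print(grid[np.argmax(log_post)])                                   # near the true source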

  5. Concerted vs. Sequential. Two Activation Patterns of Vast Arrays of Intracellular Ca2+ Channels in Muscle

    PubMed Central

    Zhou, Jinsong; Brum, Gustavo; González, Adom; Launikonis, Bradley S.; Stern, Michael D.; Ríos, Eduardo

    2005-01-01

    To signal cell responses, Ca2+ is released from storage through intracellular Ca2+ channels. Unlike most plasmalemmal channels, these are clustered in quasi-crystalline arrays, which should endow them with unique properties. Two distinct patterns of local activation of Ca2+ release were revealed in images of Ca2+ sparks in permeabilized cells of amphibian muscle. In the presence of sulfate, an anion that enters the SR and precipitates Ca2+, sparks became wider than in the conventional, glutamate-based solution. Some of these were “protoplatykurtic” (had a flat top from early on), suggesting an extensive array of channels that activate simultaneously. Under these conditions the rate of production of signal mass was roughly constant during the rise time of the spark and could be as high as 5 μm³ ms⁻¹, consistent with a release current >50 pA since the beginning of the event. This pattern, called “concerted activation,” was observed also in rat muscle fibers. When sulfate was combined with a reduced cytosolic [Ca2+] (50 nM) these sparks coexisted (and interfered) with a sequential progression of channel opening, probably mediated by Ca2+-induced Ca2+ release (CICR). Sequential propagation, observed only in frogs, may require parajunctional channels, of RyR isoform β, which are absent in the rat. Concerted opening instead appears to be a property of RyR α in the amphibian and the homologous isoform 1 in the mammal. PMID:16186560

  6. Adaptive sequential Bayesian classification using Page's test

    NASA Astrophysics Data System (ADS)

    Lynch, Robert S., Jr.; Willett, Peter K.

    2002-03-01

    In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
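    A generic Page's (CUSUM) test of the kind extended here can be sketched in a few lines; the Gaussian class models and threshold below are hypothetical, and the sketch does not include the Mean-Field Bayesian Data Reduction Algorithm's feature selection and fusion.

        import random

        def llr(x, mu0=0.0, mu1=1.0, sigma=1.0):
            """Log-likelihood ratio of one Gaussian observation: class 1 vs class 0."""
            return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)

        def page_test(stream, h=5.0):
            """Accumulate LLRs, clip at zero, declare class 1 when the sum crosses h.
            Larger h lengthens the mean time between false alerts and the decision delay."""
            s = 0.0
            for n, x in enumerate(stream, start=1):
                s = max(0.0, s + llr(x))
                if s >= h:
                    return n                     # sample index at which the class is declared
            return None

        rng = random.Random(0)
        delays = [page_test(rng.gauss(1.0, 1.0) for _ in range(1000)) for _ in range(200)]
        hits = [d for d in delays if d is not None]
        print(sum(hits) / len(hits))             # average delay to declaring the target class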

  7. Quantification of type I error probabilities for heterogeneity LOD scores.

    PubMed

    Abreu, Paula C; Hodge, Susan E; Greenberg, David A

    2002-02-01

    Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both recombination fraction theta and admixture parameter alpha, and we compared this with the P values when one maximizes only with respect to theta (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families, sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model, and maximizing the HLOD over theta and alpha; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ₁² + (1/2)χ₂². Thus, maximizing the HLOD over theta and alpha appears to add considerably less than an additional degree of freedom to the associated χ₁² distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
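    The admixture HLOD itself is simple to evaluate once per-family LOD curves are available: for each candidate theta and alpha, HLOD(alpha, theta) = sum over families of log10[alpha * 10^(Z_i(theta)) + (1 - alpha)], maximized over both parameters. The sketch below does this by grid search over hypothetical toy LOD curves; it does not simulate pedigrees.

        import numpy as np

        thetas = np.linspace(0.0, 0.5, 26)
        alphas = np.linspace(0.0, 1.0, 101)

        # Toy per-family LOD curves Z_i(theta): two "linked" families peaking at
        # small theta and one unlinked family that favors theta = 0.5.
        lod = np.array([1.2 * (0.5 - thetas) / 0.5,
                        0.9 * (0.5 - thetas) / 0.5,
                        -0.8 * (0.5 - thetas) / 0.5])

        best = (-np.inf, None, None)
        for a in alphas:
            hlod = np.log10(a * 10.0 ** lod + (1.0 - a)).sum(axis=0)   # sum over families
            j = int(hlod.argmax())
            if hlod[j] > best[0]:
                best = (float(hlod[j]), float(a), float(thetas[j]))
        print(best)   # (maximized HLOD, alpha-hat, theta-hat)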

  8. Chronic low back pain in patients with systemic lupus erythematosus: prevalence and predictors of back muscle strength and its correlation with disability.

    PubMed

    Cezarino, Raíssa Sudré; Cardoso, Jefferson Rosa; Rodrigues, Kedma Neves; Magalhães, Yasmin Santana; Souza, Talita Yokoy de; Mota, Lícia Maria Henrique da; Bonini-Rocha, Ana Clara; McVeigh, Joseph; Martins, Wagner Rodrigues

    To determine the prevalence of chronic low back pain and the predictors of back muscle strength in patients with systemic lupus erythematosus. Cross-sectional study. Ninety-six ambulatory patients with lupus were selected by non-probability sampling and were interviewed and tested during medical consultation. The outcome measurements were: point prevalence of chronic low back pain, Oswestry Disability Index, Tampa Scale of Kinesiophobia, Fatigue Severity Scale, and maximal voluntary isometric contractions of handgrip and of the back muscles. Correlation coefficients and multiple linear regression were used in the statistical analysis. Of the 96 individuals interviewed, 25 had chronic low back pain, indicating a point prevalence of 26% (92% women). The correlation between the Oswestry Index and the maximal voluntary isometric contraction of the back muscles was r=-0.4, 95% CI [-0.68; -0.01], and between the maximal voluntary isometric contraction of handgrip and of the back muscles was r=0.72, 95% CI [0.51; 0.88]. The highest R² value (63%) was observed when the maximal voluntary isometric contraction of the back muscles was regressed on five independent variables. In this model, handgrip strength was the only predictive variable (β=0.61, p=0.001). The prevalence of chronic low back pain in individuals with systemic lupus erythematosus was 26%. The maximal voluntary isometric contraction of the back muscles was 63% predicted by five variables of interest; however, only handgrip strength was a statistically significant predictor. The maximal voluntary isometric contraction of the back muscles was directly proportional to handgrip strength and inversely proportional to the Oswestry Index, i.e., stronger back muscles are associated with lower disability scores. Copyright © 2017. Published by Elsevier Editora Ltda.

  9. Effect of risk aversion on prioritizing conservation projects.

    PubMed

    Tulloch, Ayesha I T; Maloney, Richard F; Joseph, Liana N; Bennett, Joseph R; Di Fonzo, Martina M I; Probert, William J M; O'Connor, Shaun M; Densem, Jodie P; Possingham, Hugh P

    2015-04-01

    Conservation outcomes are uncertain. Agencies making decisions about what threat mitigation actions to take to save which species frequently face the dilemma of whether to invest in actions with high probability of success and guaranteed benefits or to choose projects with a greater risk of failure that might provide higher benefits if they succeed. The answer to this dilemma lies in the decision maker's aversion to risk--their unwillingness to accept uncertain outcomes. Little guidance exists on how risk preferences affect conservation investment priorities. Using a prioritization approach based on cost effectiveness, we compared 2 approaches: a conservative probability threshold approach that excludes investment in projects with a risk of management failure greater than a fixed level, and a variance-discounting heuristic used in economics that explicitly accounts for risk tolerance and the probabilities of management success and failure. We applied both approaches to prioritizing projects for 700 of New Zealand's threatened species across 8303 management actions. Both decision makers' risk tolerance and our choice of approach to dealing with risk preferences drove the prioritization solution (i.e., the species selected for management). Use of a probability threshold minimized uncertainty, but more expensive projects were selected than with variance discounting, which maximized expected benefits by selecting the management of species with higher extinction risk and higher conservation value. Explicitly incorporating risk preferences within the decision making process reduced the number of species expected to be safe from extinction because lower risk tolerance resulted in more species being excluded from management, but the approach allowed decision makers to choose a level of acceptable risk that fit with their ability to accommodate failure. We argue for transparency in risk tolerance and recommend that decision makers accept risk in an adaptive management framework to maximize benefits and avoid potential extinctions due to inefficient allocation of limited resources. © 2014 Society for Conservation Biology.
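
    The sketch below contrasts the two approaches described above in one simple, assumed form: a probability threshold that excludes projects whose failure risk exceeds a cutoff, and a variance-discounted expected benefit per unit cost. The project names, the scoring formula, and the discount factor k are illustrative assumptions, not the authors' exact heuristic or data.

        import math

        # Hypothetical projects: (name, benefit if successful, probability of success, cost).
        projects = [
            ("species_A", 100.0, 0.60, 20.0),
            ("species_B",  40.0, 0.95,  5.0),
            ("species_C",  70.0, 0.80, 10.0),
        ]

        def threshold_rank(projects, max_failure_risk=0.25):
            """Exclude projects whose failure risk exceeds the cutoff, then rank
            the remainder by expected benefit per unit cost."""
            kept = [p for p in projects if (1.0 - p[2]) <= max_failure_risk]
            return sorted(kept, key=lambda p: p[1] * p[2] / p[3], reverse=True)

        def variance_discount_rank(projects, k=0.5):
            """Rank by (expected benefit - k * std. dev. of benefit) per unit cost,
            treating the benefit as a Bernoulli (all-or-nothing) outcome."""
            def score(p):
                name, b, prob, cost = p
                mean = b * prob
                sd = b * math.sqrt(prob * (1.0 - prob))
                return (mean - k * sd) / cost
            return sorted(projects, key=score, reverse=True)

        print([p[0] for p in threshold_rank(projects)])
        print([p[0] for p in variance_discount_rank(projects)])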

  10. Infrared observations of OB star formation in NGC 6334

    NASA Technical Reports Server (NTRS)

    Harvey, P. M.; Gatley, I.

    1982-01-01

    Infrared photometry and maps from 2 to 100 microns are presented for three of the principal far infrared sources in NGC 6334. Each region is powered by two or more very young stars. The distribution of dust and ionized gas is probably strongly affected by the presence of the embedded stars; one of the sources is a blister H II region, another has a bipolar structure, and the third exhibits asymmetric temperature structure. The presence of protostellar objects throughout the region suggests that star formation has occurred nearly simultaneously in the whole molecular cloud rather than having been triggered sequentially from within.

  11. Multivariate Analysis, Retrieval, and Storage System (MARS). Volume 1: MARS System and Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Vanderberg, J. D.; Woodbury, N. W.

    1974-01-01

    A method for rapidly examining the probable applicability of weight-estimating formulae to a specific aerospace vehicle design is presented. The Multivariate Analysis Retrieval and Storage System (MARS) comprises three computer programs which sequentially operate on the weight and geometry characteristics of past aerospace vehicle designs. Weight and geometric characteristics are stored in a set of fully computerized data bases. Additional data bases are readily added to the MARS system, and/or the existing data bases may be easily expanded to include additional vehicles or vehicle characteristics.

  12. A Collision Avoidance Strategy for a Potential Natural Satellite around the Asteroid Bennu for the OSIRIS-REx Mission

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Carpenter, J. Russell

    2016-01-01

    The cadence of proximity operations for the OSIRIS-REx mission may face an additional challenge given the potential detection of a natural satellite orbiting the asteroid Bennu. Current ground-based radar observations aimed at detecting objects orbiting Bennu have found no objects within specific bounds on size and rotation rate. If a natural satellite is detected during approach, a different proximity-operations cadence will need to be implemented, as well as a collision avoidance strategy, for mission success. A collision avoidance strategy is analyzed using the Wald Sequential Probability Ratio Test.
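
    A minimal sketch of the Wald Sequential Probability Ratio Test mentioned above, assuming hypothetical Gaussian measurement models and error rates; it shows only the generic test mechanics and is not the mission's actual collision-avoidance implementation.

        import math
        import numpy as np

        def wald_sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
            """Wald SPRT between H0: N(mu0, sigma^2) and H1: N(mu1, sigma^2).

            Returns ('H0' or 'H1', number of samples used), or ('undecided', n)
            if the log-likelihood ratio never leaves the continuation region.
            """
            upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
            lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
            llr = 0.0
            for n, x in enumerate(samples, start=1):
                llr += (x - (mu0 + mu1) / 2.0) * (mu1 - mu0) / sigma**2
                if llr >= upper:
                    return "H1", n
                if llr <= lower:
                    return "H0", n
            return "undecided", len(samples)

        rng = np.random.default_rng(1)
        # Example with H1 true: measurements drawn from the 'object present' distribution.
        print(wald_sprt(rng.normal(1.0, 1.0, 100), mu0=0.0, mu1=1.0, sigma=1.0))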

  14. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal-amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
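
    The 90/95 criterion above can be checked for a simple hit/miss sample with a one-sided binomial (Clopper-Pearson) lower confidence bound. This generic sketch is not the DOEPOD software, only the underlying binomial calculation.

        from scipy.stats import beta

        def pod_lower_bound(hits, trials, confidence=0.95):
            """One-sided Clopper-Pearson lower confidence bound on POD
            from hit/miss data at a single flaw size."""
            if hits == 0:
                return 0.0
            return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

        def demonstrates_90_95(hits, trials):
            """True if the data demonstrate at least 0.90 POD with 95% confidence."""
            return pod_lower_bound(hits, trials) >= 0.90

        print(pod_lower_bound(29, 29))     # ~0.902: the classic 29-of-29 demonstration
        print(demonstrates_90_95(28, 29))  # one miss is not sufficient at n = 29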

  15. Optimization of an incubation step to maximize sulforaphane content in pre-processed broccoli.

    PubMed

    Mahn, Andrea; Pérez, Carmen

    2016-11-01

    Sulforaphane is a powerful anticancer compound, found naturally in food, which comes from the hydrolysis of glucoraphanin, the main glucosinolate of broccoli. The aim of this work was to maximize the sulforaphane content of broccoli by designing an incubation step following an optimized blanching step applied to broccoli pieces. Incubation was optimized through a Box-Behnken design using ascorbic acid concentration, incubation temperature and incubation time as factors. The optimal incubation conditions were 38 °C for 3 h with 0.22 mg ascorbic acid per g fresh broccoli. The maximum sulforaphane concentration predicted by the model was 8.0 µmol g⁻¹, which was confirmed experimentally, yielding a value of 8.1 ± 0.3 µmol g⁻¹. This represents a 585% increase with respect to fresh broccoli and a 119% increase relative to blanched broccoli, equivalent to conversion of 94% of the glucoraphanin. The process proposed here maximizes sulforaphane content while avoiding artificial chemical synthesis. The compound could probably be isolated from broccoli, and may find application as a nutraceutical or functional ingredient.
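
    For reference, a three-factor Box-Behnken design of the kind used above (ascorbic acid concentration, incubation temperature, incubation time) consists of the twelve edge midpoints of the coded ±1 cube plus replicated centre points. The sketch below only generates that coded run matrix; the number of centre points is an assumption, not the authors' exact experimental plan.

        from itertools import combinations

        def box_behnken_3(center_points=3):
            """Coded run matrix (-1, 0, +1) for a three-factor Box-Behnken design:
            all (+/-1, +/-1) combinations over each pair of factors with the third
            factor at its centre level, plus replicated centre points."""
            runs = []
            for i, j in combinations(range(3), 2):
                for a in (-1, 1):
                    for b in (-1, 1):
                        run = [0, 0, 0]
                        run[i], run[j] = a, b
                        runs.append(run)
            runs.extend([[0, 0, 0]] * center_points)
            return runs

        # Coded factors: ascorbic acid concentration, temperature, time.
        for run in box_behnken_3():
            print(run)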

  16. Framing matters: Effects of framing on older adults’ exploratory decision-making

    PubMed Central

    Cooper, Jessica A.; Blanco, Nathaniel; Maddox, W. Todd

    2016-01-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults’ decision-making performance was preserved when maximizing gains, but declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains-maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subjects behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss-minimization from Experiment 1, and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults’ performance in exploratory decision-making is hindered when framed as loss-minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. PMID:27977218

  17. Contextuality in canonical systems of random variables

    NASA Astrophysics Data System (ADS)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue 'Second quantum revolution: foundational questions'.
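
    For two content-sharing binary random variables with success probabilities p and q, the maximal coupling referred to above makes them coincide with probability 1 - |p - q| (one minus the total variation distance). The sketch below constructs such a coupling explicitly; it is a generic illustration of the concept, not the authors' contextuality criterion.

        def maximal_coupling_binary(p, q):
            """Joint distribution of two binary variables with marginals
            P(A=1)=p and P(B=1)=q that maximizes P(A=B).

            The coincidence probability equals 1 - |p - q|.
            Returns a dict {(a, b): probability}.
            """
            return {
                (1, 1): min(p, q),
                (0, 0): min(1 - p, 1 - q),
                (1, 0): max(p - q, 0.0),
                (0, 1): max(q - p, 0.0),
            }

        joint = maximal_coupling_binary(0.7, 0.4)
        print(joint)
        print(joint[(1, 1)] + joint[(0, 0)])  # 0.7, i.e. 1 - |0.7 - 0.4|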

  18. Framing matters: Effects of framing on older adults' exploratory decision-making.

    PubMed

    Cooper, Jessica A; Blanco, Nathaniel J; Maddox, W Todd

    2017-02-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults' decision-making performance was preserved when maximizing gains, but it declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subject behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss minimization from Experiment 1 and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults' performance in exploratory decision-making is hindered when framed as loss minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Controlled Secure Direct Communication with Seven-Qubit Entangled States

    NASA Astrophysics Data System (ADS)

    Wang, Shu-Kai; Zha, Xin-Wei; Wu, Hao

    2018-01-01

    In this paper, a new controlled secure direct communication protocol based on a maximally entangled seven-qubit state is proposed. Based on the outcomes of measurements performed by the sender and the controller, the receiver can obtain different secret messages deterministically with unit success probability. In this scheme, by using entanglement swapping, no qubits carrying secret messages are transmitted; therefore, the protocol is completely secure.

  20. Wireless Sensor Network Metrics for Real-Time Systems

    DTIC Science & Technology

    2009-05-20

    to compute the probability of end-to-end packet delivery as a function of latency, the expected radio energy consumption on the nodes from relaying... schedules for WSNs. Particularly, we focus on the impact scheduling has on path diversity, using short repeating schedules and Greedy Maximal Matching... a greedy algorithm for constructing a mesh routing topology. Finally, we study the implications of using distributed scheduling schemes to generate
