Simultaneous sequential monitoring of efficacy and safety led to masking of effects.
van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg
2016-08-01
Usually, sequential designs for clinical trials are applied on the primary (=efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. Implications of simultaneous monitoring on trial decision making are yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.
Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L
2016-03-01
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015 Cognitive Science Society, Inc.
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description
ERIC Educational Resources Information Center
Hogarth, Robin M.; Soyer, Emre
2011-01-01
Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…
The Effects of the Previous Outcome on Probabilistic Choice in Rats
Marshall, Andrew T.; Kirkpatrick, Kimberly
2014-01-01
This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Summary Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMARTs) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTR from SMARTs data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
Sequential parallel comparison design with binary and time-to-event outcomes.
Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason
2018-04-30
Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials especially trials with possibly high placebo effect. Sequential parallel comparison design is conducted with 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypothesis, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio
2013-07-20
Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rate in clinical trials. The original use of SPCD focused on binary outcomes, but recent use has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for analysis of SPCD trial continuous data included methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amount of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
Laird, Robert A
2018-09-07
Cooperation is a central topic in evolutionary biology because (a) it is difficult to reconcile why individuals would act in a way that benefits others if such action is costly to themselves, and (b) it underpins many of the 'major transitions of evolution', making it essential for explaining the origins of successively higher levels of biological organization. Within evolutionary game theory, the Prisoner's Dilemma and Snowdrift games are the main theoretical constructs used to study the evolution of cooperation in dyadic interactions. In single-shot versions of these games, wherein individuals play each other only once, players typically act simultaneously rather than sequentially. Allowing one player to respond to the actions of its co-player-in the absence of any possibility of the responder being rewarded for cooperation or punished for defection, as in simultaneous or sequential iterated games-may seem to invite more incentive for exploitation and retaliation in single-shot games, compared to when interactions occur simultaneously, thereby reducing the likelihood that cooperative strategies can thrive. To the contrary, I use lattice-based, evolutionary-dynamical simulation models of single-shot games to demonstrate that under many conditions, sequential interactions have the potential to enhance unilaterally or mutually cooperative outcomes and increase the average payoff of populations, relative to simultaneous interactions-benefits that are especially prevalent in a spatially explicit context. This surprising result is attributable to the presence of conditional strategies that emerge in sequential games that can't occur in the corresponding simultaneous versions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine that next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques and it allows incorporation of patient's characteristics and treatment history to improve performance, hence enjoying advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
Accelerating Sequential Gaussian Simulation with a constant path
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus
2018-03-01
Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
Ji, Qiang; Shi, YunQing; Xia, LiMin; Ma, RunHua; Shen, JinQiang; Lai, Hao; Ding, WenJun; Wang, ChunSheng
2017-12-25
To evaluate in-hospital and mid-term outcomes of sequential vs. separate grafting of in situ skeletonized left internal mammary artery (LIMA) to the left coronary system in a single-center, propensity-matched study.Methods and Results:After propensity score-matching, 120 pairs of patients undergoing first scheduled isolated coronary artery bypass grafting (CABG) with in situ skeletonized LIMA grafting to the left anterior descending artery (LAD) territory were entered into a sequential group (sequential grafting of LIMA to the diagonal artery and then to the LAD) or a control group (separate grafting of LIMA to the LAD). The in-hospital and follow-up clinical outcomes and follow-up LIMA graft patency were compared. Both propensity score-matched groups had similar in-hospital and follow-up clinical outcomes. Sequential LIMA grafting was not found to be an independent predictor of adverse events. During a follow-up period of 27.0±7.3 months, 99.1% patency for the diagonal site and 98.3% for the LAD site were determined by coronary computed tomographic angiography after sequential LIMA grafting, both of which were similar with graft patency of separate grafting of in situ skeletonized LIMA to the LAD. Revascularization of the left coronary system using a skeletonized LIMA resulted in excellent in-hospital and mid-term clinical outcomes and graft patency using sequential grafting.
Chakraborty, Bibhas; Davidson, Karina W.
2015-01-01
Summary Implementation study is an important tool for deploying state-of-the-art treatments from clinical efficacy studies into a treatment program, with the dual goals of learning about effectiveness of the treatments and improving the quality of care for patients enrolled into the program. In this article, we deal with the design of a treatment program of dynamic treatment regimens (DTRs) for patients with depression post acute coronary syndrome. We introduce a novel adaptive randomization scheme for a sequential multiple assignment randomized trial of DTRs. Our approach adapts the randomization probabilities to favor treatment sequences having comparatively superior Q-functions used in Q-learning. The proposed approach addresses three main concerns of an implementation study: it allows incorporation of historical data or opinions, it includes randomization for learning purposes, and it aims to improve care via adaptation throughout the program. We demonstrate how to apply our method to design a depression treatment program using data from a previous study. By simulation, we illustrate that the inputs from historical data are important for the program performance measured by the expected outcomes of the enrollees, but also show that the adaptive randomization scheme is able to compensate poorly specified historical inputs by improving patient outcomes within a reasonable horizon. The simulation results also confirm that the proposed design allows efficient learning of the treatments by alleviating the curse of dimensionality. PMID:25354029
Suppressing correlations in massively parallel simulations of lattice models
NASA Astrophysics Data System (ADS)
Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle
2017-11-01
For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.
A framework of knowledge creation processes in participatory simulation of hospital work systems.
Andersen, Simone Nyholm; Broberg, Ole
2017-04-01
Participatory simulation (PS) is a method to involve workers in simulating and designing their own future work system. Existing PS studies have focused on analysing the outcome, and minimal attention has been devoted to the process of creating this outcome. In order to study this process, we suggest applying a knowledge creation perspective. The aim of this study was to develop a framework describing the process of how ergonomics knowledge is created in PS. Video recordings from three projects applying PS of hospital work systems constituted the foundation of process mining analysis. The analysis resulted in a framework revealing the sources of ergonomics knowledge creation as sequential relationships between the activities of simulation participants sharing work experiences; experimenting with scenarios; and reflecting on ergonomics consequences. We argue that this framework reveals the hidden steps of PS that are essential when planning and facilitating PS that aims at designing work systems. Practitioner Summary: When facilitating participatory simulation (PS) in work system design, achieving an understanding of the PS process is essential. By applying a knowledge creation perspective and process mining, we investigated the knowledge-creating activities constituting the PS process. The analysis resulted in a framework of the knowledge-creating process in PS.
Hajati, Omid; Zarrabi, Khalil; Karimi, Reza; Hajati, Azadeh
2012-01-01
There is still controversy over the differences in the patency rates of the sequential and individual coronary artery bypass grafting (CABG) techniques. The purpose of this paper was to non-invasively evaluate hemodynamic parameters using complete 3D computational fluid dynamics (CFD) simulations of the sequential and the individual methods based on the patient-specific data extracted from computed tomography (CT) angiography. For CFD analysis, the geometric model of coronary arteries was reconstructed using an ECG-gated 64-detector row CT. Modeling the sequential and individual bypass grafting, this study simulates the flow from the aorta to the occluded posterior descending artery (PDA) and the posterior left ventricle (PLV) vessel with six coronary branches based on the physiologically measured inlet flow as the boundary condition. The maximum calculated wall shear stress (WSS) in the sequential and the individual models were estimated to be 35.1 N/m(2) and 36.5 N/m(2), respectively. Compared to the individual bypass method, the sequential graft has shown a higher velocity at the proximal segment and lower spatial wall shear stress gradient (SWSSG) due to the flow splitting caused by the side-to-side anastomosis. Simulated results combined with its surgical benefits including the requirement of shorter vein length and fewer anastomoses advocate the sequential method as a more favorable CABG method.
Lung Volume Measured during Sequential Swallowing in Healthy Young Adults
ERIC Educational Resources Information Center
Hegland, Karen Wheeler; Huber, Jessica E.; Pitts, Teresa; Davenport, Paul W.; Sapienza, Christine M.
2011-01-01
Purpose: Outcomes from studying the coordinative relationship between respiratory and swallow subsystems are inconsistent for sequential swallows, and the lung volume at the initiation of sequential swallowing remains undefined. The first goal of this study was to quantify the lung volume at initiation of sequential swallowing ingestion cycles and…
Tang, Yongqiang
2018-04-30
The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.
Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L
2011-12-01
To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted based on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probability sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
Millroth, Philip; Guath, Mona; Juslin, Peter
2018-06-07
The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real-life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions and a default assumption is rather that information about outcomes and probabilities are simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability- and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This holds also for monetary-incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.
Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A
The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied, 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with mean follow up 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between simultaneous and sequential groups with regards to surgical success rates, complications, or reoperations (all p > 0.28). In the first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage for sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.
Saadeh, Charles K; Rosero, Eric B; Joshi, Girish P; Ozayar, Esra; Mau, Ted
2017-12-01
To determine the extent to which a sequential anesthetic technique 1) shortens time under sedation for thyroplasty with arytenoid adduction (TP-AA), 2) affects the total operative time, and 3) changes the voice outcome compared to TP-AA performed entirely under sedation/analgesia. Case-control study. A new sequential anesthetic technique of performing most of the TP-AA surgery under general anesthesia (GA), followed by transition to sedation/analgesia (SA) for voice assessment, was developed to achieve smooth emergence from GA. Twenty-five TP-AA cases performed with the sequential GA-SA technique were compared with 25 TP-AA controls performed completely under sedation/analgesia. The primary outcome measure was the time under sedation. Voice improvement, as assessed by Consensus Auditory-Perceptual Evaluation of Voice, and total operative time were secondary outcome measures. With the conventional all-SA anesthetic, the duration of SA was 209 ± 26.3 minutes. With the sequential GA-SA technique, the duration of SA was 79.0 ± 18.9 minutes, a 62.3% reduction (P < 0.0001). There was no significant difference in the total operative time (209.5 vs. 200.9 minutes; P = 0.42) or in voice outcome. This sequential anesthetic technique has been easily adopted by multiple anesthesiologists and nurse anesthetists at our institution. TP-AA is effectively performed under sequential GA-SA technique with a significant reduction in the duration of time under sedation. This allows the surgeon to perform the technically more challenging part of the surgery under GA, without having to contend with variability in patient tolerance for laryngeal manipulation under sedation. 3b. Laryngoscope, 127:2813-2817, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Nonparametric Subgroup Identification by PRIM and CART: A Simulation and Application Study
2017-01-01
Two nonparametric methods for the identification of subgroups with outstanding outcome values are described and compared to each other in a simulation study and an application to clinical data. The Patient Rule Induction Method (PRIM) searches for box-shaped areas in the given data which exceed a minimal size and average outcome. This is achieved via a combination of iterative peeling and pasting steps, where small fractions of the data are removed or added to the current box. As an alternative, Classification and Regression Trees (CART) prediction models perform sequential binary splits of the data to produce subsets which can be interpreted as subgroups of heterogeneous outcome. PRIM and CART were compared in a simulation study to investigate their strengths and weaknesses under various data settings, taking different performance measures into account. PRIM was shown to be superior in rather complex settings such as those with few observations, a smaller signal-to-noise ratio, and more than one subgroup. CART showed the best performance in simpler situations. A practical application of the two methods was illustrated using a clinical data set. For this application, both methods produced similar results but the higher amount of user involvement of PRIM became apparent. PRIM can be flexibly tuned by the user, whereas CART, although simpler to implement, is rather static. PMID:28611849
Nonparametric Subgroup Identification by PRIM and CART: A Simulation and Application Study.
Ott, Armin; Hapfelmeier, Alexander
2017-01-01
Two nonparametric methods for the identification of subgroups with outstanding outcome values are described and compared to each other in a simulation study and an application to clinical data. The Patient Rule Induction Method (PRIM) searches for box-shaped areas in the given data which exceed a minimal size and average outcome. This is achieved via a combination of iterative peeling and pasting steps, where small fractions of the data are removed or added to the current box. As an alternative, Classification and Regression Trees (CART) prediction models perform sequential binary splits of the data to produce subsets which can be interpreted as subgroups of heterogeneous outcome. PRIM and CART were compared in a simulation study to investigate their strengths and weaknesses under various data settings, taking different performance measures into account. PRIM was shown to be superior in rather complex settings such as those with few observations, a smaller signal-to-noise ratio, and more than one subgroup. CART showed the best performance in simpler situations. A practical application of the two methods was illustrated using a clinical data set. For this application, both methods produced similar results but the higher amount of user involvement of PRIM became apparent. PRIM can be flexibly tuned by the user, whereas CART, although simpler to implement, is rather static.
Sequential Computerized Mastery Tests--Three Simulation Studies
ERIC Educational Resources Information Center
Wiberg, Marie
2006-01-01
A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon
2018-05-18
Despite recent advancement in the prediction of cochlear implant outcome, the benefit of bilateral procedures compared to bimodal stimulation and how we predict speech perception outcomes of sequential bilateral cochlear implant based on bimodal auditory performance in children remain unclear. This investigation was performed: (1) to determine the benefit of sequential bilateral cochlear implant and (2) to identify the associated factors for the outcome of sequential bilateral cochlear implant. Observational and retrospective study. We retrospectively analyzed 29 patients with sequential cochlear implant following bimodal-fitting condition. Audiological evaluations were performed; the categories of auditory performance scores, speech perception with monosyllable and disyllables words, and the Korean version of Ling. Audiological evaluations were performed before sequential cochlear implant with the bimodal fitting condition (CI1+HA) and one year after the sequential cochlear implant with bilateral cochlear implant condition (CI1+CI2). The good Performance Group (GP) was defined as follows; 90% or higher in monosyllable and bisyllable tests with auditory-only condition or 20% or higher improvement of the scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression analysis. Compared to the CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of Ling results. Preoperative categories of auditory performance scores were the only associated factor for being GP (odds ratio=4.38, 95% confidence interval - 95%=1.07-17.93, p=0.04). The children with limited language development in bimodal condition should be considered as the sequential bilateral cochlear implant and preoperative categories of auditory performance score could be used as the predictor in speech perception after sequential cochlear implant. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Managing numerical errors in random sequential adsorption
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Nowak, Aleksandra
2016-09-01
Aim of this study is to examine the influence of a finite surface size and a finite simulation time on a packing fraction estimated using random sequential adsorption simulations. The goal of particular interest is providing hints on simulation setup to achieve desired level of accuracy. The analysis is based on properties of saturated random packing of disks on continuous and flat surfaces of different sizes.
Modeling snail breeding in a bioregenerative life support system
NASA Astrophysics Data System (ADS)
Kovalev, V. S.; Manukovsky, N. S.; Tikhomirov, A. A.; Kolmakova, A. A.
2015-07-01
The discrete-time model of snail breeding consists of two sequentially linked submodels: "Stoichiometry" and "Population". In both submodels, a snail population is split up into twelve age groups within one year of age. The first submodel is used to simulate the metabolism of a single snail in each age group via the stoichiometric equation; the second submodel is used to optimize the age structure and the size of the snail population. Daily intake of snail meat by crewmen is a guideline which specifies the population productivity. The mass exchange of the snail unit inhabited by land snails of Achatina fulica is given as an outcome of step-by-step modeling. All simulations are performed using Solver Add-In of Excel 2007.
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Cabello, Adán
2018-03-01
We consider an ideal experiment in which unlimited nonprojective quantum measurements are sequentially performed on a system that is initially entangled with a distant one. At each step of the sequence, the measurements are randomly chosen between two. However, regardless of which measurement is chosen or which outcome is obtained, the quantum state of the pair always remains entangled. We show that the classical simulation of the reduced state of the distant system requires not only unlimited rounds of communication, but also that the distant system has infinite memory. Otherwise, a thermodynamical argument predicts heating at a distance. Our proposal can be used for experimentally ruling out nonlocal finite-memory classical models of quantum theory.
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R.; Lambert, Scott R.
2015-01-01
OBJECTIVES To compare the incidence of adverse events, visual outcomes and economic costs of sequential versus simultaneous bilateral cataract surgery for infants with congenital cataracts. METHODS We retrospectively reviewed the incidence of adverse events, visual outcomes and medical payments associated with simultaneous versus sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months of age or younger at our institution. RESULTS Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (p=.25). We found a similar incidence of adverse events between the two treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean absolute interocular difference in logMAR visual acuities between the two treatment groups was 0.47±0.76 for the sequential group and 0.44±0.40 for the simultaneous group (p=.92). Hospital, drugs, supplies and professional payments were on average 21.9% lower per patient in the simultaneous group. CONCLUSIONS Simultaneous bilateral cataract surgery for infants with congenital cataracts was associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcome. PMID:20697007
Sukumaran, Lakshmi; McCarthy, Natalie L; Kharbanda, Elyse O; Weintraub, Eric S; Vazquez-Benitez, Gabriela; McNeil, Michael M; Li, Rongxia; Klein, Nicola P; Hambidge, Simon J; Naleway, Allison L; Lugg, Marlene M; Jackson, Michael L; King, Jennifer P; DeStefano, Frank; Omer, Saad B; Orenstein, Walter A
2015-11-01
To evaluate the safety of coadministering tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis (Tdap) and influenza vaccines during pregnancy by comparing adverse events after concomitant and sequential vaccination. We conducted a retrospective cohort study of pregnant women aged 14-49 years in the Vaccine Safety Datalink from January 1, 2007, to November 15, 2013. We compared medically attended acute events (fever, any acute reaction) and adverse birth outcomes (preterm delivery, low birth weight, small for gestational age) in women receiving concomitant Tdap and influenza vaccination and women receiving sequential vaccination. Among 36,844 pregnancies in which Tdap and influenza vaccines were administered, the vaccines were administered concomitantly in 8,464 (23%) pregnancies and sequentially in 28,380 (77%) pregnancies. Acute adverse events after vaccination were rare. We found no statistically significant increased risk of fever or any medically attended acute adverse event in pregnant women vaccinated concomitantly compared with sequentially. When analyzing women at 20 weeks of gestation or greater during periods of influenza vaccine administration, there were no differences in preterm delivery, low-birth-weight, or small-for-gestational-age neonates between women vaccinated concomitantly compared with sequentially in pregnancy. Concomitant administration of Tdap and influenza vaccines during pregnancy was not associated with a higher risk of medically attended adverse acute outcomes or birth outcomes compared with sequential vaccination. II.
Shen, J Q; Ji, Q; Ding, W J; Xia, L M; Wei, L; Wang, C S
2018-03-13
Objective: To evaluate in-hospital and mid-term outcomes of sequential versus separate grafting of in situ skeletonized left internal mammary artery (LIMA) to the left coronary system in a single-center, propensity-matched study. Methods: After propensity score matching, 120 pairs of patients undergoing first, scheduled, isolated coronary artery bypass grafting (CABG) with in situ skeletonized LIMA grafting to the left anterior descending artery (LAD) territory were entered into a sequential group (sequential grafting of LIMA to the diagonal artery and then to the LAD) or a control group (separate grafting of LIMA to the LAD). The in-hospital and follow-up clinical outcomes and follow-up LIMA graft patency were compared. Results: The two propensity score-matched groups had similar in-hospital and follow-up clinical outcomes. The number of bypass conduits ranged from 3 to 6 (with a mean of 3.5), and 91.3%(219/240)of the included patients received off-pump CABG surgery. No significant differences were found between the two propensity score-matched groups in the in-hospital outcomes, including in-hospital death and the incidence of complications associated with CABG (prolonged ventilation, peroperative stroke, re-operation before discharge, and deep sternal wound infection). During follow-up, 9 patients (4 patients from the sequential group and 5 patients from the control group) died, and the all-cause mortality rate was 3.9%. No significant difference was found in the all-cause mortality rate between the 2 groups[3.4% (4/116) vs 4.3% (5/115), P =0.748]. During follow-up period, 99.1% (115/116) patency for the diagonal site and 98.3% (114/116) for the LAD site were determined by coronary computed tomographic angiography after sequential LIMA grafting, both of which were similar with graft patency of separate grafting of in situ skeletonized LIMA to the LAD. Conclusions: Revascularization of the left coronary system using a skeletonized LIMA resulted in excellent in-hospital and mid-term clinical outcomes and graft patency using sequential grafting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirley, C.; Pohlmann, K.; Andricevic, R.
1996-09-01
Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguitymore » of high and low hydraulic conductivity regions.« less
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
Simultaneous versus sequential penetrating keratoplasty and cataract surgery.
Hayashi, Ken; Hayashi, Hideyuki
2006-10-01
To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and within 2 D in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). The regular and irregular astigmatism was not significantly different between the groups at 3 and more months after surgery. No significant difference was also found in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.
James, Erica; Freund, Megan; Booth, Angela; Duncan, Mitch J; Johnson, Natalie; Short, Camille E; Wolfenden, Luke; Stacey, Fiona G; Kay-Lambkin, Frances; Vandelanotte, Corneel
2016-08-01
Growing evidence points to the benefits of addressing multiple health behaviors rather than single behaviors. This review evaluates the relative effectiveness of simultaneous and sequentially delivered multiple health behavior change (MHBC) interventions. Secondary aims were to identify: a) the most effective spacing of sequentially delivered components; b) differences in efficacy of MHBC interventions for adoption/cessation behaviors and lifestyle/addictive behaviors, and; c) differences in trial retention between simultaneously and sequentially delivered interventions. MHBC intervention trials published up to October 2015 were identified through a systematic search. Eligible trials were randomised controlled trials that directly compared simultaneous and sequential delivery of a MHBC intervention. A narrative synthesis was undertaken. Six trials met the inclusion criteria and across these trials the behaviors targeted were smoking, diet, physical activity, and alcohol consumption. Three trials reported a difference in intervention effect between a sequential and simultaneous approach in at least one behavioral outcome. Of these, two trials favoured a sequential approach on smoking. One trial favoured a simultaneous approach on fat intake. There was no difference in retention between sequential and simultaneous approaches. There is limited evidence regarding the relative effectiveness of sequential and simultaneous approaches. Given only three of the six trials observed a difference in intervention effectiveness for one health behavior outcome, and the relatively consistent finding that the sequential and simultaneous approaches were more effective than a usual/minimal care control condition, it appears that both approaches should be considered equally efficacious. PROSPERO registration number: CRD42015027876. Copyright © 2016 Elsevier Inc. All rights reserved.
Placebo non-response measure in sequential parallel comparison design studies.
Rybin, Denis; Doros, Gheorghe; Pencina, Michael J; Fava, Maurizio
2015-07-10
The Sequential Parallel Comparison Design (SPCD) is one of the novel approaches addressing placebo response. The analysis of SPCD data typically classifies subjects as 'placebo responders' or 'placebo non-responders'. Most current methods employed for analysis of SPCD data utilize only a part of the data collected during the trial. A repeated measures model was proposed for analysis of continuous outcomes that permitted the inclusion of information from all subjects into the treatment effect estimation. We describe here a new approach using a weighted repeated measures model that further improves the utilization of data collected during the trial, allowing the incorporation of information that is relevant to the placebo response, and dealing with the problem of possible misclassification of subjects. Our simulations show that when compared to the unweighted repeated measures model method, our approach performs as well or, under certain conditions, better, in preserving the type I error, achieving adequate power and minimizing the mean squared error. Copyright © 2015 John Wiley & Sons, Ltd.
Outcomes of simultaneous resections for patients with synchronous colorectal liver metastases.
Slesser, A A P; Chand, M; Goldin, R; Brown, G; Tekkis, P P; Mudan, S
2013-12-01
The aim of this study was to determine the outcomes associated with simultaneous resections compared to patients undergoing sequential resections for synchronous colorectal liver metastases. Consecutive patients undergoing hepatic resections between 2000 and 2012 for synchronous colorectal liver metastases were identified from a prospectively maintained database. Of the 112 hepatic resections that were performed, 36 were simultaneous resections and 76 were sequential resections. There was no difference in disease severity: number of metastases (P 0.228), metastatic size (P 0.58), the primary tumour nodal status (P 0.283), CEA (P 0.387) or the presence of extra-hepatic metastases (P 1.0). Major hepatic resections were performed in 23 (64%) and 60 (79%) of patients in the simultaneous and sequential groups respectively (P 0.089). Intra-operatively no differences were found in blood loss (P 1.0), duration of surgery (P 0.284) or number of adverse events (P 1.0). There were no differences in post-operative complications (P 0.161) or post-operative mortality (P 0.241). The length of hospital stay was 14 (95% CI 12.0-18.0) and 18.5 (95% CI 16.0-23.0) days in the simultaneous and sequential groups respectively (P 0.03). The 3-year overall survival was 75% and 64% in the simultaneous and sequential groups respectively (P 0.379). The 3-year hepatic recurrence free survival was 61% and 46% in the simultaneous and sequential groups respectively (P 0.254). Simultaneous resections result in similar short-term and long-term outcomes as patients receiving sequential resections with comparable metastatic disease and are associated with a significant reduction in the length of stay. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo
2018-02-01
The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with a sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time point. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization indexR ⇒ L (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between simultaneous t0 and sequential technique was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients, it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.
Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K
2017-01-01
To assess longitudinal outcomes in a large and varied population of children receiving bilateral cochlear implants both simultaneously and sequentially. This observational non-randomized service evaluation collected localization and speech recognition in noise data from simultaneously and sequentially implanted children at four time points: before bilateral cochlear implants or before the sequential implant, 1 year, 2 years, and 3 years after bilateral implants. No inclusion criteria were applied, so children with additional difficulties, cochleovestibular anomalies, varying educational placements, 23 different home languages, a full range of outcomes and varying device use were included. 1001 children were included: 465 implanted simultaneously and 536 sequentially, representing just over 50% of children receiving bilateral implants in the UK in this period. In simultaneously implanted children the median age at implant was 2.1 years; 7% were implanted at less than 1 year of age. In sequentially implanted children the interval between implants ranged from 0.1 to 14.5 years. Children with simultaneous bilateral implants localized better than those with one implant. On average children receiving a second (sequential) cochlear implant showed improvement in localization and listening in background noise after 1 year of bilateral listening. The interval between sequential implants had no effect on localization improvement although a smaller interval gave more improvement in speech recognition in noise. Children with sequential implants on average were able to use their second device to obtain spatial release from masking after 2 years of bilateral listening. Although ranges were large, bilateral cochlear implants on average offered an improvement in localization and speech perception in noise over unilateral implants. These data represent the diverse population of children with bilateral cochlear implants in the UK from 2010 to 2012. Predictions of outcomes for individual patients are not possible from these data. However, there are no indications to preclude children with long inter-implant interval having the chance of a second cochlear implant.
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Korn, Christoph W; Bach, Dominik R
2018-01-23
Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
Palter, Vanessa N; Orzech, Neil; Reznick, Richard K; Grantcharov, Teodor P
2013-02-01
: To develop and validate an ex vivo comprehensive curriculum for a basic laparoscopic procedure. : Although simulators have been well validated as tools to teach technical skills, their integration into comprehensive curricula is lacking. Moreover, neither the effect of ex vivo training on learning curves in the operating room (OR), nor the effect on nontechnical proficiency has been investigated. : This randomized single-blinded prospective trial allocated 20 surgical trainees to a structured training and assessment curriculum (STAC) group or conventional residency training. The STAC consisted of case-based learning, proficiency-based virtual reality training, laparoscopic box training, and OR participation. After completion of the intervention, all participants performed 5 sequential laparoscopic cholecystectomies in the OR. The primary outcome measure was the difference in technical performance between the 2 groups during the first laparoscopic cholecystectomy. Secondary outcome measures included differences with respect to learning curves in the OR, technical proficiency of each sequential laparoscopic cholecystectomy, and nontechnical skills. : Residents in the STAC group outperformed residents in the conventional group in the first (P = 0.004), second (P = 0.036), third (P = 0.021), and fourth (P = 0.023) laparoscopic cholecystectomies. The conventional group demonstrated a significant learning curve in the OR (P = 0.015) in contrast to the STAC group (P = 0.032). Residents in the STAC group also had significantly higher nontechnical skills (P = 0.027). : Participating in the STAC shifted the learning curve for a basic laparoscopic procedure from the operating room into the simulation laboratory. STAC-trained residents had superior technical proficiency in the OR and nontechnical skills compared with conventionally trained residents. (The study registration ID is NCT01560494.).
Auctions with Dynamic Populations: Efficiency and Revenue Maximization
NASA Astrophysics Data System (ADS)
Said, Maher
We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.
Hall, David B; Meier, Ulrich; Diener, Hans-Cristoph
2005-06-01
The trial objective was to test whether a new mechanism of action would effectively treat migraine headaches and to select a dose range for further investigation. The motivation for a group sequential, adaptive, placebo-controlled trial design was (1) limited information about where across the range of seven doses to focus attention, (2) a need to limit sample size for a complicated inpatient treatment and (3) a desire to reduce exposure of patients to ineffective treatment. A design based on group sequential and up and down designs was developed and operational characteristics were explored by trial simulation. The primary outcome was headache response at 2 h after treatment. Groups of four treated and two placebo patients were assigned to one dose. Adaptive dose selection was based on response rates of 60% seen with other migraine treatments. If more than 60% of treated patients responded, then the next dose was the next lower dose; otherwise, the dose was increased. A stopping rule of at least five groups at the target dose and at least four groups at that dose with more than 60% response was developed to ensure that a selected dose would be statistically significantly (p=0.05) superior to placebo. Simulations indicated good characteristics in terms of control of type 1 error, sufficient power, modest expected sample size and modest bias in estimation. The trial design is attractive for phase 2 clinical trials when response is acute and simple, ideally binary, placebo comparator is required, and patient accrual is relatively slow allowing for the collection and processing of results as a basis for the adaptive assignment of patients to dose groups. The acute migraine trial based on this design was successful in both proof of concept and dose range selection.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects from different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than traditional Bayesian sequential design which sets equal critical values for all interim analyses. When compared with other alpha spending functions, O'Brien-Fleming alpha spending function has the largest power and is the most conservative in terms that at the same sample size, the null hypothesis is the least likely to be rejected at early stage of clinical trials. And finally, we show that adding a step of stop for futility in the Bayesian sequential design can reduce the overall type I error and reduce the actual sample sizes.
Multiuser signal detection using sequential decoding
NASA Astrophysics Data System (ADS)
Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.
1990-05-01
The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if its Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping
NASA Technical Reports Server (NTRS)
Leberl, F.
1975-01-01
Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.
2012-05-30
annealing-based or Bayesian sequential simulation approaches B. Dafflon1,2 and W. Barrash1 Received 13 May 2011; revised 12 March 2012; accepted 17 April 2012...the withheld porosity log are also withheld for this estimation process. For both cases we do this for two wells having locally variable stratigraphy ...borehole location is given at the bottom of each log comparison panel. For comparison with stratigraphy at the BHRS, contacts between Units 1 to 4
2014-01-01
Background Perinatal mortality and morbidity in the Netherlands is relatively high compared to other European countries. Our country has a unique system with an independent primary care providing care to low-risk pregnancies and a secondary/tertiary care responsible for high-risk pregnancies. About 65% of pregnant women in the Netherlands will be referred from primary to secondary care implicating multiple medical handovers. Dutch audits concluded that in the entire obstetric collaborative network process parameters could be improved. Studies have shown that obstetric team training improves perinatal outcome and that simulation-based obstetric team training implementing crew resource management (CRM) improves team performance. In addition, deliberate practice (DP) improves medical skills. The aim of this study is to analyse whether transmural multiprofessional simulation-based obstetric team training improves perinatal outcome. Methods/Design The study will be implemented in the south-eastern part of the Netherlands with an annual delivery rate of over 9,000. In this area secondary care is provided by four hospitals. Each hospital with referring primary care practices will form a cluster (study group). Within each cluster, teams will be formed of different care providers representing the obstetric collaborative network. CRM and elements of DP will be implemented in the training. To analyse the quality of care as perceived by patients, the Pregnancy and Childbirth Questionnaire (PCQ) will be used. Furthermore, self-reported collaboration between care providers will be assessed. Team performance will be measured by the Clinical Teamwork Scale (CTS). We employ a stepped-wedge trial design with a sequential roll-out of the trainings for the different study groups. Primary outcome will be perinatal mortality and/or admission to a NICU. Secondary outcome will be team performance, quality of care as perceived by patients, and collaboration among care providers. Conclusion The effect of transmural multiprofessional simulation-based obstetric team training on perinatal outcome has never been studied. We hypothesise that this training will improve perinatal outcome, team performance, and quality of care as perceived by patients and care providers. Trial registration The Netherlands National Trial Register, http://www.trialregister.nl/NTR4576, registered June 1, 2014 PMID:25145317
Serra-Guillén, Carlos; Nagore, Eduardo; Hueso, Luis; Traves, Victor; Messeguer, Francesc; Sanmartín, Onofre; Llombart, Beatriz; Requena, Celia; Botella-Estrada, Rafael; Guillén, Carlos
2012-04-01
Photodynamic therapy (PDT) and imiquimod are the treatments of choice for actinic keratosis (AK). As they have different mechanisms of action, it seems reasonable to assume that applying both treatments sequentially would be efficacious. We sought to determine which of these therapeutic modalities provides a better clinical and histologic response in patients with AK and whether sequential use of both was more efficacious than each separately. Patients were randomly assigned to one treatment group: group 1, PDT only; group 2, imiquimod only; or group 3, sequential use of PDT and imiquimod. The primary outcome measure was complete clinical response. Partial clinical response was defined as a reduction of more than 75% in the initial number of lesions. A complete clinicopathologic response was defined as lack of evidence of AK in the biopsy specimen. In all, 105 patients completed the study (group 1, 40 patients; group 2, 33 patients; group 3, 32 patients). Sequential application of PDT and imiquimod was more efficacious in all the outcome measures. More patients were satisfied with PDT than with the other two modalities (P = .003). No significant differences were observed among the 3 modalities and tolerance to treatment. Only one cycle of imiquimod was administered. The follow-up period was brief. Sequential application of PDT and imiquimod provides a significantly better clinical and histologic response in the treatment of AK than PDT or imiquimod monotherapy. It also produces less intense local reactions and better tolerance and satisfaction than imiquimod monotherapy. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R; Lambert, Scott R
2010-08-01
To compare the incidence of adverse events and visual outcomes and to compare the economic costs of sequential vs simultaneous bilateral cataract surgery for infants with congenital cataracts. Retrospective review of simultaneous vs sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months or younger at our institution. Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (P = .25). We found a similar incidence of adverse events between the 2 treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean (SD) absolute interocular difference in logMAR visual acuities between the 2 treatment groups was 0.47 (0.76) for the sequential group and 0.44 (0.40) for the simultaneous group (P = .92). Payments for the hospital, drugs, supplies, and professional services were on average 21.9% lower per patient in the simultaneous group. Simultaneous bilateral cataract surgery for infants with congenital cataracts is associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcomes. However, our small sample size limits our ability to make meaningful comparisons of the relative risks and visual benefits of the 2 procedures.
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
The United States's Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to have comparable Type I and Type II error rates as the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as SPRT to current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn
2018-05-01
Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
On the Lulejian-I Combat Model
1976-08-01
possible initial massing of the attacking side’s resources, the model tries to represent in a game -theoretic context the adversary nature of the...sequential game , as outlined in [A]. In principle, it is necessary to run the combat simulation once for each possible set of sequentially chosen...sequential game , in which the evaluative portion of the model (i.e., the combat assessment) serves to compute intermediate and terminal payoffs for the
Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes
2016-06-01
making (SDM) tasks in dynamic environments with simulated and physical robots . 15. SUBJECT TERMS Sequential decision making, lifelong learning, transfer...sequential decision-making (SDM) tasks in dynamic environments with both simple benchmark tasks and more complex aerial and ground robot tasks. Our work...and ground robots in the presence of disturbances: We applied our methods to the problem of learning controllers for robots with novel disturbances in
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371
Rispin, Amy; Farrar, David; Margosches, Elizabeth; Gupta, Kailash; Stitzel, Katherine; Carr, Gregory; Greene, Michael; Meyer, William; McCall, Deborah
2002-01-01
The authors have developed an improved version of the up-and-down procedure (UDP) as one of the replacements for the traditional acute oral toxicity test formerly used by the Organisation for Economic Co-operation and Development member nations to characterize industrial chemicals, pesticides, and their mixtures. This method improves the performance of acute testing for applications that use the median lethal dose (classic LD50) test while achieving significant reductions in animal use. It uses sequential dosing, together with sophisticated computer-assisted computational methods during the execution and calculation phases of the test. Staircase design, a form of sequential test design, can be applied to acute toxicity testing with its binary experimental endpoints (yes/no outcomes). The improved UDP provides a point estimate of the LD50 and approximate confidence intervals in addition to observed toxic signs for the substance tested. It does not provide information about the dose-response curve. Computer simulation was used to test performance of the UDP without the need for additional laboratory validation.
GOST: A generic ordinal sequential trial design for a treatment trial in an emerging pandemic.
Whitehead, John; Horby, Peter
2017-03-01
Conducting clinical trials to assess experimental treatments for potentially pandemic infectious diseases is challenging. Since many outbreaks of infectious diseases last only six to eight weeks, there is a need for trial designs that can be implemented rapidly in the face of uncertainty. Outbreaks are sudden and unpredictable and so it is essential that as much planning as possible takes place in advance. Statistical aspects of such trial designs should be evaluated and discussed in readiness for implementation. This paper proposes a generic ordinal sequential trial design (GOST) for a randomised clinical trial comparing an experimental treatment for an emerging infectious disease with standard care. The design is intended as an off-the-shelf, ready-to-use robust and flexible option. The primary endpoint is a categorisation of patient outcome according to an ordinal scale. A sequential approach is adopted, stopping as soon as it is clear that the experimental treatment has an advantage or that sufficient advantage is unlikely to be detected. The properties of the design are evaluated using large-sample theory and verified for moderate sized samples using simulation. The trial is powered to detect a generic clinically relevant difference: namely an odds ratio of 2 for better rather than worse outcomes. Total sample sizes (across both treatments) of between 150 and 300 patients prove to be adequate in many cases, but the precise value depends on both the magnitude of the treatment advantage and the nature of the ordinal scale. An advantage of the approach is that any erroneous assumptions made at the design stage about the proportion of patients falling into each outcome category have little effect on the error probabilities of the study, although they can lead to inaccurate forecasts of sample size. It is important and feasible to pre-determine many of the statistical aspects of an efficient trial design in advance of a disease outbreak. The design can then be tailored to the specific disease under study once its nature is better understood.
Decroocq, Justine; Itzykson, Raphaël; Vigouroux, Stéphane; Michallet, Mauricette; Yakoub-Agha, Ibrahim; Huynh, Anne; Beckerich, Florence; Suarez, Felipe; Chevallier, Patrice; Nguyen-Quoc, Stéphanie; Ledoux, Marie-Pierre; Clement, Laurence; Hicheri, Yosr; Guillerm, Gaëlle; Cornillon, Jérôme; Contentin, Nathalie; Carre, Martin; Maillard, Natacha; Mercier, Mélanie; Mohty, Mohamad; Beguin, Yves; Bourhis, Jean-Henri; Charbonnier, Amandine; Dauriac, Charles; Bay, Jacques-Olivier; Blaise, Didier; Deconinck, Eric; Jubert, Charlotte; Raus, Nicole; Peffault de Latour, Regis; Dhedin, Nathalie
2018-03-01
Patients with acute myeloid leukemia (AML) in relapse or refractory to induction therapy have a dismal prognosis. Allogeneic hematopoietic stem cell transplantation is the only curative option. In these patients, we aimed to compare the results of a myeloablative transplant versus a sequential approach consisting in a cytoreductive chemotherapy followed by a reduced intensity conditioning regimen and prophylactic donor lymphocytes infusions. We retrospectively analyzed 99 patients aged 18-50 years, transplanted for a refractory (52%) or a relapsed AML not in remission (48%). Fifty-eight patients received a sequential approach and 41 patients a myeloablative conditioning regimen. Only 6 patients received prophylactic donor lymphocytes infusions. With a median follow-up of 48 months, 2-year overall survival was 39%, 95% confidence interval (CI) (24-53) in the myeloablative group versus 33%, 95% CI (21-45) in the sequential groups (P = .39), and 2-year cumulative incidence of relapse (CIR) was 57% versus 50% respectively (P = .99). Nonrelapse mortality was not higher in the myeloablative group (17% versus 15%, P = .44). In multivariate analysis, overall survival, CIR and nonrelapse mortality remained similar between the two groups. However, in multivariate analysis, sequential conditioning led to fewer acute grade II-IV graft versus host disease (GVHD) (HR for sequential approach = 0.37; 95% CI: 0.21-0.65; P < .001) without a significant impact on chronic GVHD (all grades and extensive). In young patients with refractory or relapsed AML, myeloablative transplant and sequential approach offer similar outcomes except for a lower incidence of acute GvHD after a sequential transplant. © 2018 Wiley Periodicals, Inc.
Gekas, Jean; Gagné, Geneviève; Bujold, Emmanuel; Douillard, Daniel; Forest, Jean-Claude; Reinharz, Daniel; Rousseau, François
2009-02-13
To assess and compare the cost effectiveness of three different strategies for prenatal screening for Down's syndrome (integrated test, sequential screening, and contingent screenings) and to determine the most useful cut-off values for risk. Computer simulations to study integrated, sequential, and contingent screening strategies with various cut-offs leading to 19 potential screening algorithms. The computer simulation was populated with data from the Serum Urine and Ultrasound Screening Study (SURUSS), real unit costs for healthcare interventions, and a population of 110 948 pregnancies from the province of Québec for the year 2001. Cost effectiveness ratios, incremental cost effectiveness ratios, and screening options' outcomes. The contingent screening strategy dominated all other screening options: it had the best cost effectiveness ratio ($C26,833 per case of Down's syndrome) with fewer procedure related euploid miscarriages and unnecessary terminations (respectively, 6 and 16 per 100,000 pregnancies). It also outperformed serum screening at the second trimester. In terms of the incremental cost effectiveness ratio, contingent screening was still dominant: compared with screening based on maternal age alone, the savings were $C30,963 per additional birth with Down's syndrome averted. Contingent screening was the only screening strategy that offered early reassurance to the majority of women (77.81%) in first trimester and minimised costs by limiting retesting during the second trimester (21.05%). For the contingent and sequential screening strategies, the choice of cut-off value for risk in the first trimester test significantly affected the cost effectiveness ratios (respectively, from $C26,833 to $C37,260 and from $C35,215 to $C45,314 per case of Down's syndrome), the number of procedure related euploid miscarriages (from 6 to 46 and from 6 to 45 per 100,000 pregnancies), and the number of unnecessary terminations (from 16 to 26 and from 16 to 25 per 100,000 pregnancies). Contingent screening, with a first trimester cut-off value for high risk of 1 in 9, is the preferred option for prenatal screening of women for pregnancies affected by Down's syndrome.
Knowledge outcomes within rotational models of social work field education.
Birkenmaier, Julie; Curley, Jami; Rowan, Noell L
2012-01-01
This study assessed knowledge outcomes among concurrent, concurrent/sequential, and sequential rotation models of field instruction. Posttest knowledge scores of students ( n = 231) in aging-related field education were higher for students who participated in the concurrent rotation model, and for those who completed field education at a long-term care facility. Scores were also higher for students in programs that infused a higher number of geriatric competencies in their curriculum. Recommendations are provided to programs considering rotation models of field education related to older adults.
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potentials to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on design by changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or cost smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
Spatial versus sequential correlations for random access coding
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Marques, Breno; Pawłowski, Marcin; Bourennane, Mohamed
2016-03-01
Random access codes are important for a wide range of applications in quantum information. However, their implementation with quantum theory can be made in two very different ways: (i) by distributing data with strong spatial correlations violating a Bell inequality or (ii) using quantum communication channels to create stronger-than-classical sequential correlations between state preparation and measurement outcome. Here we study this duality of the quantum realization. We present a family of Bell inequalities tailored to the task at hand and study their quantum violations. Remarkably, we show that the use of spatial and sequential quantum correlations imposes different limitations on the performance of quantum random access codes: Sequential correlations can outperform spatial correlations. We discuss the physics behind the observed discrepancy between spatial and sequential quantum correlations.
Sequential Dependencies in Driving
ERIC Educational Resources Information Center
Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.
2012-01-01
The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…
J-adaptive estimation with estimated noise statistics
NASA Technical Reports Server (NTRS)
Jazwinski, A. H.; Hipkins, C.
1973-01-01
The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.
Parallelization and automatic data distribution for nuclear reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, L.M.
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directlymore » affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.« less
Crawford, Sara; Boulet, Sheree L; Mneimneh, Allison S; Perkins, Kiran M; Jamieson, Denise J; Zhang, Yujia; Kissin, Dmitry M
2016-02-01
To assess treatment and pregnancy/infant-associated medical costs and birth outcomes for assisted reproductive technology (ART) cycles in a subset of patients using elective double embryo (ET) and to project the difference in costs and outcomes had the cycles instead been sequential single ETs (fresh followed by frozen if the fresh ET did not result in live birth). Retrospective cohort study using 2012 and 2013 data from the National ART Surveillance System. Infertility treatment centers. Fresh, autologous double ETs performed in 2012 among ART patients younger than 35 years of age with no prior ART use who cryopreserved at least one embryo. Sequential single and double ETs. Actual live birth rates and estimated ART treatment and pregnancy/infant-associated medical costs for double ET cycles started in 2012 and projected ART treatment and pregnancy/infant-associated medical costs if the double ET cycles had been performed as sequential single ETs. The estimated total ART treatment and pregnancy/infant-associated medical costs were $580.9 million for 10,001 double ETs started in 2012. If performed as sequential single ETs, estimated costs would have decreased by $195.0 million to $386.0 million, and live birth rates would have increased from 57.7%-68.0%. Sequential single ETs, when clinically appropriate, can reduce total ART treatment and pregnancy/infant-associated medical costs by reducing multiple births without lowering live birth rates. Published by Elsevier Inc.
Bhoomiboonchoo, Piraya; Nisalak, Ananda; Chansatiporn, Natkamol; Yoon, In-Kyu; Kalayanarooj, Siripen; Thipayamongkolgul, Mathuros; Endy, Timothy; Rothman, Alan L; Green, Sharone; Srikiatkhachorn, Anon; Buddhari, Darunee; Mammen, Mammen P; Gibbons, Robert V
2015-03-14
The effect of prior dengue virus (DENV) exposure on subsequent heterologous infection can be beneficial or detrimental depending on many factors including timing of infection. We sought to evaluate this effect by examining a large database of DENV infections captured by both active and passive surveillance encompassing a wide clinical spectrum of disease. We evaluated datasets from 17 years of hospital-based passive surveillance and nine years of cohort studies, including clinical and subclinical DENV infections, to assess the outcomes of sequential heterologous infections. Chi square or Fisher's exact test was used to compare proportions of infection outcomes such as disease severity; ANOVA was used for continuous variables. Multivariate logistic regression was used to assess risk factors for infection outcomes. Of 38,740 DENV infections, two or more infections were detected in 502 individuals; 14 had three infections. The mean ages at the time of the first and second detected infections were 7.6 ± 3.0 and 11.2 ± 3.0 years. The shortest time between sequential infections was 66 days. A longer time interval between sequential infections was associated with dengue hemorrhagic fever (DHF) in the second detected infection (OR 1.3, 95% CI 1.2-1.4). All possible sequential serotype pairs were observed among 201 subjects with DHF at the second detected infection, except DENV-4 followed by DENV-3. Among DENV infections detected in cohort subjects by active study surveillance and subsequent non-study hospital-based passive surveillance, hospitalization at the first detected infection increased the likelihood of hospitalization at the second detected infection. Increasing time between sequential DENV infections was associated with greater severity of the second detected infection, supporting the role of heterotypic immunity in both protection and enhancement. Hospitalization was positively associated between the first and second detected infections, suggesting a possible predisposition in some individuals to more severe dengue disease.
Murray, Thomas A; Yuan, Ying; Thall, Peter F; Elizondo, Joan H; Hofstetter, Wayne L
2018-01-22
A design is proposed for randomized comparative trials with ordinal outcomes and prognostic subgroups. The design accounts for patient heterogeneity by allowing possibly different comparative conclusions within subgroups. The comparative testing criterion is based on utilities for the levels of the ordinal outcome and a Bayesian probability model. Designs based on two alternative models that include treatment-subgroup interactions are considered, the proportional odds model and a non-proportional odds model with a hierarchical prior that shrinks toward the proportional odds model. A third design that assumes homogeneity and ignores possible treatment-subgroup interactions also is considered. The three approaches are applied to construct group sequential designs for a trial of nutritional prehabilitation versus standard of care for esophageal cancer patients undergoing chemoradiation and surgery, including both untreated patients and salvage patients whose disease has recurred following previous therapy. A simulation study is presented that compares the three designs, including evaluation of within-subgroup type I and II error probabilities under a variety of scenarios including different combinations of treatment-subgroup interactions. © 2018, The International Biometric Society.
Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie
2018-05-18
Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
High data rate coding for the space station telemetry links.
NASA Technical Reports Server (NTRS)
Lumb, D. R.; Viterbi, A. J.
1971-01-01
Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are viewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of a large angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, which required significant advancements of the state-of-the art to accomplish. These were: (1) the development of an explicit mathematical model via symbol manipulation of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcomed development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full scale testing which has become impractical.
Qin, Fangjun; Chang, Lubin; Jiang, Sai; Zha, Feng
2018-05-03
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.
A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations
Qin, Fangjun; Jiang, Sai; Zha, Feng
2018-01-01
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S
2013-05-01
This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to historical branded product at four health plans from April to October 2009. Outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with 1.5 month delay, and nonsequential (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7% respectively with 1.5 month delay. We did not identify signals for either outcome regardless of extract timeframe, and slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by delay in analysis, when using a historical control group differential accrual between exposure and outcomes may theoretically bias near real-time risk estimates towards the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.
Lee, Seonah
2013-10-01
This study aimed to organize the system features of decision support technologies targeted at nursing practice into assessment, problem identification, care plans, implementation, and outcome evaluation. It also aimed to identify the range of the five stage-related sequential decision supports that computerized clinical decision support systems provided. MEDLINE, CINAHL, and EMBASE were searched. A total of 27 studies were reviewed. The system features collected represented the characteristics of each category from patient assessment to outcome evaluation. Several features were common across the reviewed systems. For the sequential decision support, all of the reviewed systems provided decision support in sequence for patient assessment and care plans. Fewer than half of the systems included problem identification. There were only three systems operating in an implementation stage and four systems in outcome evaluation. Consequently, the key steps for sequential decision support functions were initial patient assessment, problem identification, care plan, and outcome evaluation. Providing decision support in such a full scope will effectively help nurses' clinical decision making. By organizing the system features, a comprehensive picture of nursing practice-oriented computerized decision support systems was obtained; however, the development of a guideline for better systems should go beyond the scope of a literature review.
Analyses of group sequential clinical trials.
Koepcke, W
1989-12-01
In the first part of this article the methodology of group sequential plans is reviewed. After introducing the basic definition of such plans the main properties are shown. At the end of this section three different plans (Pocock, O'Brien-Fleming, Koepcke) are compared. In the second part of the article some unresolved issues and recent developments in the application of group sequential methods to long-term controlled clinical trials are discussed. These include deviation from the assumptions, life table methods, multiple-arm clinical trials, multiple outcome measures, and confidence intervals.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Sequential use of simulation and optimization in analysis and planning
Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones
2000-01-01
Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W.; Tremlett, Helen
2017-01-01
In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models (MSCMs) are frequently used to deal with such confounding. To avoid some of the problems of fitting MSCM, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as MSCM in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995 – 2008). PMID:27659168
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen
2018-06-01
In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
Gaudrain, Etienne; Carlyon, Robert P
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.
Gaudrain, Etienne; Carlyon, Robert P.
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish target and masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed. PMID:23297922
Simulations of 6-DOF Motion with a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2003-01-01
Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction. after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
Monte Carlo Simulation of Sudden Death Bearing Testing
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated hearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed independent of the test method used and not the total number of bearings tested. Variation in L10 life as a function of number of bearings failed were similar to variations in lift obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L(sub 50) life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
Adverse Outcome Pathways (AOPs) describe toxicant effects as a sequential chain of causally linked events beginning with a molecular perturbation and culminating in an adverse outcome at an individual or population level. Strategies for developing AOPs are still evolving and dep...
Leivo, Tiina; Sarikkola, Anna-Ulrika; Uusitalo, Risto J; Hellstedt, Timo; Ess, Sirje-Linda; Kivelä, Tero
2011-06-01
To present an economic-analysis comparison of simultaneous and sequential bilateral cataract surgery. Helsinki University Eye Hospital, Helsinki, Finland. Economic analysis. Effects were estimated from data in a study in which patients were randomized to have bilateral cataract surgery on the same day (study group) or sequentially (control group). The main clinical outcomes were corrected distance visual acuity, refraction, complications, Visual Function Index-7 (VF-7) scores, and patient-rated satisfaction with vision. Health-care costs of surgeries and preoperative and postoperative visits were estimated, including the cost of staff, equipment, material, floor space, overhead, and complications. The data were obtained from staff measurements, questionnaires, internal hospital records, and accountancy. Non-health-care costs of travel, home care, and time were estimated based on questionnaires from a random subset of patients. The main economic outcome measures were cost per VF-7 score unit change and cost per patient in simultaneous versus sequential surgery. The study comprised 520 patients (241 patients included non-health-care and time cost analyses). Surgical outcomes and patient satisfaction were similar in both groups. Simultaneous cataract surgery saved 449 Euros (€) per patient in health-care costs and €739 when travel and paid home-care costs were included. The savings added up to €849 per patient when the cost of lost working time was included. Compared with sequential bilateral cataract surgery, simultaneous bilateral cataract surgery provided comparable clinical outcomes with substantial savings in health-care and non-health-care-related costs. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Estenssoro, Elisa; Kanoore Edul, Vanina S; Loudet, Cecilia I; Osatnik, Javier; Ríos, Fernando G; Vázquez, Daniela N; Pozo, Mario O; Lattanzio, Bernardo; Pálizas, Fernando; Klein, Francisco; Piezny, Damián; Rubatto Birri, Paolo N; Tuhay, Graciela; Díaz, Anatilde; Santamaría, Analía; Zakalik, Graciela; Dubin, Arnaldo
2018-05-08
The new Sepsis-3 definitions have been scarcely assessed in low- and middle-income countries; besides, regional information of sepsis outcomes is sparse. Our objective was to evaluate Sepsis-3 definition performance in Argentina. Cohort study of 3-month duration beginning on July 1, 2016. Forty-nine ICUs. Consecutive patients admitted to the ICU with suspected infection that triggered blood cultures and antibiotic administration. None. Patients were classified as having infection, sepsis (infection + change in Sequential Organ Failure Assessment ≥ 2 points), and septic shock (vasopressors + lactate > 2 mmol/L). Patients on vasopressors and lactate less than or equal to 2 mmol/L (cardiovascular dysfunction) were analyzed separately, as those on vasopressors without serum lactate measurement. Systemic inflammatory response syndrome was also recorded. Main outcome was hospital mortality. Of 809 patients, 6% had infection, 29% sepsis, 20% cardiovascular dysfunction, 40% septic shock, and 3% received vasopressors with lactate unmeasured. Hospital mortality was 13%, 20%, 39%, 51%, and 41%, respectively (p = 0.000). Independent predictors of outcome were lactate, Sequential Organ Failure Assessment score, comorbidities, prior duration of symptoms (hr), mechanical ventilation requirement, and infection by highly resistant microorganisms. Area under the receiver operating characteristic curves for mortality for systemic inflammatory response syndrome and Sequential Organ Failure Assessment were 0.53 (0.48-0.55) and 0.74 (0.69-0.77), respectively (p = 0.000). Increasing severity of Sepsis-3 categories adequately tracks mortality; cardiovascular dysfunction subgroup, not included in Sepsis-3, has distinct characteristics. Sequential Organ Failure Assessment score shows adequate prognosis accuracy-contrary to systemic inflammatory response syndrome. This study supports the predictive validity of Sepsis-3 definitions.
Dry minor mergers and size evolution of high-z compact massive early-type galaxies
NASA Astrophysics Data System (ADS)
Oogi, Taira; Habe, Asao
2012-09-01
Recent observations show evidence that high-z (z ~ 2 - 3) early-type galaxies (ETGs) are quite compact than that with comparable mass at z ~ 0. Dry merger scenario is one of the most probable one that can explain such size evolution. However, previous studies based on this scenario do not succeed to explain both properties of high-z compact massive ETGs and local ETGs, consistently. We investigate effects of sequential, multiple dry minor (stellar mass ratio M2/M1<1/4) mergers on the size evolution of compact massive ETGs. We perform N-body simulations of the sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. We show that the sequential minor mergers of compact satellite galaxies are the most efficient in the size growth and in decrease of the velocity dispersion of the compact massive ETGs. The change of stellar size and density of the merger remnant is consistent with the recent observations. Furthermore, we construct the merger histories of candidates of high-z compact massive ETGs using the Millennium Simulation Database, and estimate the size growth of the galaxies by dry minor mergers. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained in the case of the sequential minor mergers in our simulations.
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicate that a better performance was obtained with the STSIS method.
NASA Astrophysics Data System (ADS)
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-01
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicate that a better performance was obtained with the STSIS method.
A randomized trial of Foley balloon induction of labor trial in nulliparas (FIAT-N).
Connolly, Katherine A; Kohari, Katherine S; Rekawek, Patricia; Smilen, Brooke S; Miller, Meredith R; Moshier, Erin; Factor, Stephanie H; Stone, Joanne L; Bianco, Angela T
2016-09-01
With an increasing rate of induction of labor, it is important to choose induction methods that are safe and efficient in achieving a vaginal delivery. The optimal method for inducing nulliparous women with an unfavorable cervix is not known. We sought to determine if induction of labor with simultaneous use of oxytocin and Foley balloon vs sequential use of Foley balloon followed by oxytocin decreases the time to delivery in nulliparous women. We conducted a randomized controlled trial of nulliparous women presenting for induction at a single institution from December 2013 through March 2015. After decision for induction was made by their primary provider, women with gestational age ≥24 weeks with a nonanomalous, singleton fetus in vertex presentation with intact membranes were offered participation. Exclusion criteria included history of uterine surgery, unexplained vaginal bleeding, latex allergy, or contraindication to vaginal delivery. Participants were randomized to either simultaneous (oxytocin and Foley balloon) or sequential (oxytocin after expulsion of Foley balloon) induction group. The primary outcome was time from induction to delivery. Secondary outcomes included mode of delivery, estimated blood loss, postpartum hemorrhage, chorioamnionitis, and composite neonatal outcome. Maternal and neonatal outcomes were collected via chart review. Analyses were done on an intention-to-treat basis. A total of 166 patients were enrolled; 82 in the simultaneous and 84 in the sequential group. There were no differences in baseline characteristics in the 2 groups. Patients who received simultaneous oxytocin with insertion of a Foley balloon delivered significantly earlier (15.92 vs 18.87 hours, P = .004) than those in the sequential group. There was no difference in rate of cesarean delivery, estimated blood loss, postpartum hemorrhage, chorioamnionitis, or composite neonatal outcome. Simultaneous use of oxytocin and Foley balloon for induction of labor results in a significantly shorter interval to delivery in nulliparas. Copyright © 2016 Elsevier Inc. All rights reserved.
Real-Time Mobile Device-Assisted Chest Compression During Cardiopulmonary Resuscitation.
Sarma, Satyam; Bucuti, Hakiza; Chitnis, Anurag; Klacman, Alex; Dantu, Ram
2017-07-15
Prompt administration of high-quality cardiopulmonary resuscitation (CPR) is a key determinant of survival from cardiac arrest. Strategies to improve CPR quality at point of care could improve resuscitation outcomes. We tested whether a low cost and scalable mobile phone- or smart watch-based solution could provide accurate measures of compression depth and rate during simulated CPR. Fifty health care providers (58% intensive care unit nurses) performed simulated CPR on a calibrated training manikin (Resusci Anne, Laerdal) while wearing both devices. Subjects received real-time audiovisual feedback from each device sequentially. Primary outcome was accuracy of compression depth and rate compared with the calibrated training manikin. Secondary outcome was improvement in CPR quality as defined by meeting both guideline-recommend compression depth (5 to 6 cm) and rate (100 to 120/minute). Compared with the training manikin, typical error for compression depth was <5 mm (smart phone 4.6 mm; 95% CI 4.1 to 5.3 mm; smart watch 4.3 mm; 95% CI 3.8 to 5.0 mm). Compression rates were similarly accurate (smart phone Pearson's R = 0.93; smart watch R = 0.97). There was no difference in improved CPR quality defined as the number of sessions meeting both guideline-recommended compression depth (50 to 60 mm) and rate (100 to 120 compressions/minute) with mobile device feedback (60% vs 50%; p = 0.3). Sessions that did not meet guideline recommendations failed primarily because of inadequate compression depth (46 ± 2 mm). In conclusion, a mobile device application-guided CPR can accurately track compression depth and rate during simulation in a practice environment in accordance with resuscitation guidelines. Copyright © 2017 Elsevier Inc. All rights reserved.
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics including power, Type I error rate, maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its’ impact on the power and Type I error rate via a simulation study. PMID:26892481
Improved coverage of cDNA-AFLP by sequential digestion of immobilized cDNA.
Weiberg, Arne; Pöhler, Dirk; Morgenstern, Burkhard; Karlovsky, Petr
2008-10-13
cDNA-AFLP is a transcriptomics technique which does not require prior sequence information and can therefore be used as a gene discovery tool. The method is based on selective amplification of cDNA fragments generated by restriction endonucleases, electrophoretic separation of the products and comparison of the band patterns between treated samples and controls. Unequal distribution of restriction sites used to generate cDNA fragments negatively affects the performance of cDNA-AFLP. Some transcripts are represented by more than one fragment while other escape detection, causing redundancy and reducing the coverage of the analysis, respectively. With the goal of improving the coverage of cDNA-AFLP without increasing its redundancy, we designed a modified cDNA-AFLP protocol. Immobilized cDNA is sequentially digested with several restriction endonucleases and the released DNA fragments are collected in mutually exclusive pools. To investigate the performance of the protocol, software tool MECS (Multiple Enzyme cDNA-AFLP Simulation) was written in Perl. cDNA-AFLP protocols described in the literature and the new sequential digestion protocol were simulated on sets of cDNA sequences from mouse, human and Arabidopsis thaliana. The redundancy and coverage, the total number of PCR reactions, and the average fragment length were calculated for each protocol and cDNA set. Simulation revealed that sequential digestion of immobilized cDNA followed by the partitioning of released fragments into mutually exclusive pools outperformed other cDNA-AFLP protocols in terms of coverage, redundancy, fragment length, and the total number of PCRs. Primers generating 30 to 70 amplicons per PCR provided the highest fraction of electrophoretically distinguishable fragments suitable for normalization. For A. thaliana, human and mice transcriptome, the use of two marking enzymes and three sequentially applied releasing enzymes for each of the marking enzymes is recommended.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
Poster error probability in the Mu-11 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
Koopmeiners, Joseph S.; Feng, Ziding
2015-01-01
Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence can not be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curve. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
Garatti, Andrea; Castelvecchio, Serenella; Canziani, Alberto; Corain, Livio; Generali, Tommaso; Mossuto, Eugenio; Gagliardotto, Piervincenzo; Anastasia, Luigi; Salmaso, Luigi; Giacomazzi, Francesca; Menicanti, Lorenzo
2014-12-01
The aim of the study was to analyse the early and long-term outcomes of a consecutive series of patients who underwent sequential coronary artery bypass grafting (CABG) and to compare them with a matched population of totally arterial revascularized patients. From January 1994 to December 1996, 209 patients underwent total arterial myocardial revascularization at our institution [arterial (ART) group]. In the same period, 2097 patients underwent CABG with left internal thoracic artery on left anterior descending and great saphenous vein on the right and circumflex branches sequentially [sequential vein (SV) group]. The propensity score methodology was used to obtain risk-adjusted outcome comparisons between the two groups (209 vs 243 patients in the ART group and SV group, respectively). In-hospital mortality was 1% in the ART group and 0.4% in the SV group (P = 0.86). Mean follow-up was 14 ± 4 years. Long-term survival was comparable among the two study groups [actuarial 5- and 15-year survival rates were 97 vs 93% and 82 vs 79% in the ART group and the SV group, respectively (P = 0.29)]. At follow-up, recurrence of angina (17 vs 18%; P = 0.99), acute myocardial infarction (MI) (3 vs 5%; P = 0.72) and repeated percutaneous coronary intervention (19 vs 21%; P = 0.69) were similar in the ART group compared with the SV group. In the Cox regression analysis, type of revascularization was not an independent predictor of any long-term outcomes (death or major adverse cardiac events). In asymptomatic patients, exercise stress test at follow-up was comparable between the two groups (P = 0.14). Sequential vein CABG appears to have good early and long-term clinical outcomes. Also, early and long-term incidence of acute MI was not significantly higher in the SV group. However, further studies with a larger population are warranted in order to confirm the present results. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Kraaijenga, Véronique J C; Ramakers, Geerte G J; Smulders, Yvette E; van Zon, Alice; Stegeman, Inge; Smit, Adriana L; Stokroos, Robert J; Hendrice, Nadia; Free, Rolien H; Maat, Bert; Frijns, Johan H M; Briaire, Jeroen J; Mylanus, E A M; Huinck, Wendy J; Van Zanten, Gijsbert A; Grolman, Wilko
2017-09-01
To date, no randomized clinical trial on the comparison between simultaneous and sequential bilateral cochlear implants (BiCIs) has been performed. To investigate the hearing capabilities and the self-reported benefits of simultaneous BiCIs compared with those of sequential BiCIs. A multicenter randomized clinical trial was conducted between January 12, 2010, and September 2, 2012, at 5 tertiary referral centers among 40 participants eligible for BiCIs. Main inclusion criteria were postlingual severe to profound hearing loss, age 18 to 70 years, and a maximum duration of 10 years without hearing aid use in both ears. Data analysis was conducted from May 24 to June 12, 2016. The simultaneous BiCI group received 2 cochlear implants during 1 surgical procedure. The sequential BiCI group received 2 cochlear implants with an interval of 2 years between implants. First, the results 1 year after receiving simultaneous BiCIs were compared with the results 1 year after receiving sequential BiCIs. Second, the results of 3 years of follow-up for both groups were compared separately. The primary outcome measure was speech intelligibility in noise from straight ahead. Secondary outcome measures were speech intelligibility in noise from spatially separated sources, speech intelligibility in silence, localization capabilities, and self-reported benefits assessed with various hearing and quality of life questionnaires. Nineteen participants were randomized to receive simultaneous BiCIs (11 women and 8 men; median age, 52 years [interquartile range, 36-63 years]), and another 19 participants were randomized to undergo sequential BiCIs (8 women and 11 men; median age, 54 years [interquartile range, 43-64 years]). Three patients did not receive a second cochlear implant and were unavailable for follow-up. Comparable results were found 1 year after simultaneous or sequential BiCIs for speech intelligibility in noise from straight ahead (difference, 0.9 dB [95% CI, -3.1 to 4.4 dB]) and all secondary outcome measures except for localization with a 30° angle between loudspeakers (difference, -10% [95% CI, -20.1% to 0.0%]). In the sequential BiCI group, all participants performed significantly better after the BiCIs on speech intelligibility in noise from spatially separated sources and on all localization tests, which was consistent with most of the participants' self-reported hearing capabilities. Speech intelligibility-in-noise results improved in the simultaneous BiCI group up to 3 years following the BiCIs. This study shows comparable objective and subjective hearing results 1 year after receiving simultaneous BiCIs and sequential BiCIs with an interval of 2 years between implants. It also shows a significant benefit of sequential BiCIs over a unilateral cochlear implant. Until 3 years after receiving simultaneous BiCIs, speech intelligibility in noise significantly improved compared with previous years. trialregister.nl Identifier: NTR1722.
Melo, Adma Nadja Ferreira de; Souza, Geany Targino de; Schaffner, Donald; Oliveira, Tereza C Moreira de; Maciel, Janeeyre Ferreira; Souza, Evandro Leite de; Magnani, Marciane
2017-06-19
This study assessed changes in thermo-tolerance and capability to survive to simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 inoculated in chicken breast meat following exposure to stresses (cold, acid and osmotic) commonly imposed during food processing. The effects of the stress imposed by exposure to oregano (Origanum vulgare L.) essential oil (OVEO) on thermo-tolerance were also assessed. After exposure to cold stress (5°C for 5h) in chicken breast meat the test strains were sequentially exposed to the different stressing substances (lactic acid, NaCl or OVEO) at sub-lethal amounts, which were defined considering previously determined minimum inhibitory concentrations, and finally to thermal treatment (55°C for 30min). Resistant cells from distinct sequential treatments were exposed to simulated gastrointestinal conditions. The exposure to cold stress did not result in increased tolerance to acid stress (lactic acid: 5 and 2.5μL/g) for both strains. Cells of S. Typhimurium PT4 and S. Enteritidis PT4 previously exposed to acid stress showed higher (p<0.05) tolerance to osmotic stress (NaCl: 75 or 37.5mg/g) compared to non-acid-exposed cells. Exposure to osmotic stress without previous exposure to acid stress caused a salt-concentration dependent decrease in counts for both strains. Exposure to OVEO (1.25 and 0.62μL/g) decreased the acid and osmotic tolerance of both S. Enteritidis PT4 and S. Typhimurium PT4. Sequential exposure to acid and osmotic stress conditions after cold exposure increased (p<0.05) the thermo-tolerance in both strains. The cells that survived the sequential stress exposure (resistant) showed higher tolerance (p<0.05) to acidic conditions during continuous exposure (182min) to simulated gastrointestinal conditions. Resistant cells of S. Enteritidis PT4 and S. Typhimurium PT4 showed higher survival rates (p<0.05) than control cells at the end of the in vitro digestion. These results show that sequential exposure to multiple sub-lethal stresses may increase the thermo-tolerance and enhance the survival under gastrointestinal conditions of S. Enteritidis PT4 and S. Typhimurium PT4. Copyright © 2017 Elsevier B.V. All rights reserved.
Bennett, Casey C; Hauser, Kris
2013-01-01
In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments. Future work is described that outlines potential lines of research and integration of machine learning algorithms for personalized medicine. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Reeder, Ruth M.; Firszt, Jill B.; Holden, Laura K.; Strube, Michael J.
2014-01-01
Purpose: The purpose of this study was to examine the rate of progress in the 2nd implanted ear as it relates to the 1st implanted ear and to bilateral performance in adult sequential cochlear implant recipients. In addition, this study aimed to identify factors that contribute to patient outcomes. Method: The authors performed a prospective…
Li, Guoqi; Deng, Lei; Wang, Dong; Wang, Wei; Zeng, Fei; Zhang, Ziyang; Li, Huanglong; Song, Sen; Pei, Jing; Shi, Luping
2016-01-01
Chunking refers to a phenomenon whereby individuals group items together when performing a memory task to improve the performance of sequential memory. In this work, we build a bio-plausible hierarchical chunking of sequential memory (HCSM) model to explain why such improvement happens. We address this issue by linking hierarchical chunking with synaptic plasticity and neuromorphic engineering. We uncover that a chunking mechanism reduces the requirements of synaptic plasticity since it allows applying synapses with narrow dynamic range and low precision to perform a memory task. We validate a hardware version of the model through simulation, based on measured memristor behavior with narrow dynamic range in neuromorphic circuits, which reveals how chunking works and what role it plays in encoding sequential memory. Our work deepens the understanding of sequential memory and enables incorporating it for the investigation of the brain-inspired computing on neuromorphic architecture. PMID:28066223
Recent advances in lossless coding techniques
NASA Astrophysics Data System (ADS)
Yovanof, Gregory S.
Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmewtic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
Analyzing multicomponent receptive fields from neural responses to natural stimuli
Rowekamp, Ryan; Sharpee, Tatyana O
2011-01-01
The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916
Olusesi, A D; Oyeniran, O
2017-05-01
Few studies have compared bilateral same-day with staged tympanoplasty using cartilage graft materials. A prospective randomised observational study was performed of 38 chronic suppurative otitis media patients (76 ears) who were assigned to undergo bilateral sequential same-day tympanoplasty (18 patients, 36 ears) or bilateral sequential tympanoplasty performed 3 months apart (20 patients, 40 ears). Disease duration, intra-operative findings, combined duration of surgery, post-operative graft appearance at 6 weeks, post-operative complications, re-do rate and relative cost of surgery were recorded. Tympanic membrane perforations were predominantly subtotal (p = 0.36, odds ratio = 0.75). Most grafts were harvested from the conchal cartilage and fewer from the tragus (p = 0.59, odds ratio = 1.016). Types of complication, post-operative hearing gain and revision rates were similar in both patient groups. Surgical outcomes are not significantly different for same-day and bilateral cartilage tympanoplasty, but same-day surgery has the added benefit of a lower cost.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
Dry minor mergers and size evolution of high-z compact massive early-type galaxies
NASA Astrophysics Data System (ADS)
Oogi, Taira; Habe, Asao
2013-01-01
Recent observations show evidence that high-z (z ˜ 2-3) early-type galaxies (ETGs) are more compact than those with comparable mass at z ˜ 0. Such size evolution is most likely explained by the `dry merger sceanario'. However, previous studies based on this scenario cannot consistently explain the properties of both high-z compact massive ETGs and local ETGs. We investigate the effect of multiple sequential dry minor mergers on the size evolution of compact massive ETGs. From an analysis of the Millennium Simulation Data Base, we show that such minor (stellar mass ratio M2/M1 < 1/4) mergers are extremely common during hierarchical structure formation. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. Typical mass ratios of these minor mergers are 1/20 < M2/M1 ≤q 1/10. We show that sequential minor mergers of compact satellite galaxies are the most efficient at promoting size growth and decreasing the velocity dispersion of compact massive ETGs in our simulations. The change of stellar size and density of the merger remnants is consistent with recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Data Base and estimate the size growth of the galaxies through the dry minor merger scenario. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained during sequential minor mergers in our simulations. However, we note that our numerical result is only valid for merger histories with typical mass ratios between 1/20 and 1/10 with parabolic and head-on orbits and that our most efficient size-growth efficiency is likely an upper limit.
Sell in May and Go Away? Learning and Risk Taking in Nonmonotonic Decision Problems
ERIC Educational Resources Information Center
Frey, Renato; Rieskamp, Jörg; Hertwig, Ralph
2015-01-01
In nonmonotonic decision problems, the magnitude of outcomes can both increase and decrease over time depending on the state of the decision problem. These increases and decreases may occur repeatedly and result in a variety of possible outcome distributions. In many previously investigated sequential decision problems, in contrast, outcomes (or…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It ismore » recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.« less
Causal mediation analysis with a latent mediator.
Albert, Jeffrey M; Geng, Cuiyu; Nelson, Suchitra
2016-05-01
Health researchers are often interested in assessing the direct effect of a treatment or exposure on an outcome variable, as well as its indirect (or mediation) effect through an intermediate variable (or mediator). For an outcome following a nonlinear model, the mediation formula may be used to estimate causally interpretable mediation effects. This method, like others, assumes that the mediator is observed. However, as is common in structural equations modeling, we may wish to consider a latent (unobserved) mediator. We follow a potential outcomes framework and assume a generalized structural equations model (GSEM). We provide maximum-likelihood estimation of GSEM parameters using an approximate Monte Carlo EM algorithm, coupled with a mediation formula approach to estimate natural direct and indirect effects. The method relies on an untestable sequential ignorability assumption; we assess robustness to this assumption by adapting a recently proposed method for sensitivity analysis. Simulation studies show good properties of the proposed estimators in plausible scenarios. Our method is applied to a study of the effect of mother education on occurrence of adolescent dental caries, in which we examine possible mediation through latent oral health behavior. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Wang, Hung-Yuan; Duh, Henry Been-Lirn; Li, Nai; Lin, Tzung-Jin; Tsai, Chin-Chung
2014-01-01
The purpose of this study is to investigate and compare students' collaborative inquiry learning behaviors and their behavior patterns in an augmented reality (AR) simulation system and a traditional 2D simulation system. Their inquiry and discussion processes were analyzed by content analysis and lag sequential analysis (LSA). Forty…
ERIC Educational Resources Information Center
Ikeda, Kenji; Ueno, Taiji; Ito, Yuichi; Kitagami, Shinji; Kawaguchi, Jun
2017-01-01
Humans can pronounce a nonword (e.g., rint). Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing (PDP) models in English have simulated human nonword reading accuracy without a…
The impact of eyewitness identifications from simultaneous and sequential lineups.
Wright, Daniel B
2007-10-01
Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect,increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.
Transition play in team performance of volleyball: a log-linear analysis.
Eom, H J; Schutz, R W
1992-09-01
The purpose of this study was to develop and test a method to analyze and evaluate sequential skill performances in a team sport. An on-line computerized system was developed to record and summarize the sequential skill performances in volleyball. Seventy-two sample games from the third Federation of International Volleyball Cup men's competition were videotaped and grouped into two categories according to the final team standing and game outcome. Log-linear procedures were used to investigate the nature and degree of the relationship in the first-order (pass-to-set, set-to-spike) and second-order (pass-to-spike) transition plays. Results showed that there was a significant dependency in both the first-order and second-order transition plays, indicating that the outcome of a skill performance is highly influenced by the quality of a preceding skill performance. In addition, the pattern of the transition plays was stable and consistent, regardless of the classification status: Game Outcome, Team Standing, or Transition Process. The methodology and subsequent results provide valuable aids for a thorough understanding of the characteristics of transition plays in volleyball. In addition, the concept of sequential performance analysis may serve as an example for sport scientists in investigating probabilistic patterns of motor performance.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from the stage where they are used to validate the final designs to the main stream development of products driven by the simulation. However, there are still niche areas of applications liking oiling simulations, where the traditional CFD simulation times are probative to use them in product development and have to rely on experimental methods, which are expensive. In this paper a unique example of Sprocket-Chain simulation will be presented using nanoFluidx a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The grid less nature of the of SPH method has inherent advantages in the areas of application with complex geometry which pose severe challenge to classical finite volume CFD methods due to complex moving geometries, moving meshes and high resolution requirements leading to long simulation times. The simulations times using nanoFluidx can be reduced from weeks to days allowing the flexibility to run more simulation and can be in used in main stream product development. The example problem under consideration is a classical Multiphysics problem and a sequentially coupled solution of Motion Solve and nanoFluidX will be presented. This abstract is replacing DFD16-2016-000045.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
Understanding and simulating the material behavior during multi-particle irradiations
Mir, Anamul H.; Toulemonde, M.; Jegou, C.; Miro, S.; Serruys, Y.; Bouffard, S.; Peuget, S.
2016-01-01
A number of studies have suggested that the irradiation behavior and damage processes occurring during sequential and simultaneous particle irradiations can significantly differ. Currently, there is no definite answer as to why and when such differences are seen. Additionally, the conventional multi-particle irradiation facilities cannot correctly reproduce the complex irradiation scenarios experienced in a number of environments like space and nuclear reactors. Therefore, a better understanding of multi-particle irradiation problems and possible alternatives are needed. This study shows ionization induced thermal spike and defect recovery during sequential and simultaneous ion irradiation of amorphous silica. The simultaneous irradiation scenario is shown to be equivalent to multiple small sequential irradiation scenarios containing latent damage formation and recovery mechanisms. The results highlight the absence of any new damage mechanism and time-space correlation between various damage events during simultaneous irradiation of amorphous silica. This offers a new and convenient way to simulate and understand complex multi-particle irradiation problems. PMID:27466040
Szakmany, T; Pugh, R; Kopczynska, M; Lundin, R M; Sharif, B; Morgan, P; Ellis, G; Abreu, J; Kulikouskaya, S; Bashir, K; Galloway, L; Al-Hassan, H; Grother, T; McNulty, P; Seal, S T; Cains, A; Vreugdenhil, M; Abdimalik, M; Dennehey, N; Evans, G; Whitaker, J; Beasant, E; Hall, C; Lazarou, M; Vanderpump, C V; Harding, K; Duffy, L; Guerrier Sadler, A; Keeling, R; Banks, C; Ng, S W Y; Heng, S Y; Thomas, D; Puw, E W; Otahal, I; Battle, C; Minik, O; Lyons, R A; Hall, J E
2018-02-01
Our aim was to prospectively determine the predictive capabilities of SEPSIS-1 and SEPSIS-3 definitions in the emergency departments and general wards. Patients with National Early Warning Score (NEWS) of 3 or above and suspected or proven infection were enrolled over a 24-h period in 13 Welsh hospitals. The primary outcome measure was mortality within 30 days. Out of the 5422 patients screened, 431 fulfilled inclusion criteria and 380 (88%) were recruited. Using the SEPSIS-1 definition, 212 patients had sepsis. When using the SEPSIS-3 definitions with Sequential Organ Failure Assessment (SOFA) score ≥ 2, there were 272 septic patients, whereas with quickSOFA score ≥ 2, 50 patients were identified. For the prediction of primary outcome, SEPSIS-1 criteria had a sensitivity (95%CI) of 65% (54-75%) and specificity of 47% (41-53%); SEPSIS-3 criteria had a sensitivity of 86% (76-92%) and specificity of 32% (27-38%). SEPSIS-3 and SEPSIS-1 definitions were associated with a hazard ratio (95%CI) 2.7 (1.5-5.6) and 1.6 (1.3-2.5), respectively. Scoring system discrimination evaluated by receiver operating characteristic curves was highest for Sequential Organ Failure Assessment score (0.69 (95%CI 0.63-0.76)), followed by NEWS (0.58 (0.51-0.66)) (p < 0.001). Systemic inflammatory response syndrome criteria (0.55 (0.49-0.61)) and quickSOFA score (0.56 (0.49-0.64)) could not predict outcome. The SEPSIS-3 definition identified patients with the highest risk. Sequential Organ Failure Assessment score and NEWS were better predictors of poor outcome. The Sequential Organ Failure Assessment score appeared to be the best tool for identifying patients with high risk of death and sepsis-induced organ dysfunction. © 2017 The Association of Anaesthetists of Great Britain and Ireland.
GEOMETRIC CROSS SECTIONS OF DUST AGGREGATES AND A COMPRESSION MODEL FOR AGGREGATE COLLISIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suyama, Toru; Wada, Koji; Tanaka, Hidekazu
2012-07-10
Geometric cross sections of dust aggregates determine their coupling with disk gas, which governs their motions in protoplanetary disks. Collisional outcomes also depend on geometric cross sections of initial aggregates. In a previous paper, we performed three-dimensional N-body simulations of sequential collisions of aggregates composed of a number of sub-micron-sized icy particles and examined radii of gyration (and bulk densities) of the obtained aggregates. We showed that collisional compression of aggregates is not efficient and that aggregates remain fluffy. In the present study, we examine geometric cross sections of the aggregates. Their cross sections decrease due to compression as wellmore » as to their gyration radii. It is found that a relation between the cross section and the gyration radius proposed by Okuzumi et al. is valid for the compressed aggregates. We also refine the compression model proposed in our previous paper. The refined model enables us to calculate the evolution of both gyration radii and cross sections of growing aggregates and reproduces well our numerical results of sequential aggregate collisions. The refined model can describe non-equal-mass collisions as well as equal-mass cases. Although we do not take into account oblique collisions in the present study, oblique collisions would further hinder compression of aggregates.« less
Solar wind interaction with Venus and Mars in a parallel hybrid code
NASA Astrophysics Data System (ADS)
Jarvinen, Riku; Sandroos, Arto
2013-04-01
We discuss the development and applications of a new parallel hybrid simulation, where ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform also developed at the FMI. The FMI's sequential hybrid model has been used for studies of plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. Especially, the model has been used to interpret in situ particle and magnetic field observations from plasma environments of Mars, Venus and Titan. Further, Corsair is an open source MPI (Message Passing Interface) particle and mesh simulation platform, mainly aimed for simulations of diffusive shock acceleration in solar corona and interplanetary space, but which is now also being extended for global planetary hybrid simulations. In this presentation we discuss challenges and strategies of parallelizing a legacy simulation code as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.
Huddy, Jeremy R; Weldon, Sharon-Marie; Ralhan, Shvaita; Painter, Tim; Hanna, George B; Kneebone, Roger; Bello, Fernando
2016-01-01
Objectives Public and patient engagement (PPE) is fundamental to healthcare research. To facilitate effective engagement in novel point-of-care tests (POCTs), the test and downstream consequences of the result need to be considered. Sequential simulation (SqS) is a tool to represent patient journeys and the effects of intervention at each and subsequent stages. This case study presents a process evaluation of SqS as a tool for PPE in the development of a volatile organic compound-based breath test POCT for the diagnosis of oesophagogastric (OG) cancer. Setting Three 3-hour workshops in central London. Participants 38 members of public attended a workshop, 26 (68%) had no prior experience of the OG cancer diagnostic pathway. Interventions Clinical pathway SqS was developed from a storyboard of a patient, played by an actor, noticing symptoms of oesophageal cancer and following a typical diagnostic pathway. The proposed breath testing strategy was then introduced and incorporated into a second SqS to demonstrate pathway impact. Facilitated group discussions followed each SqS. Primary and secondary outcome measures Evaluation was conducted through pre-event and postevent questionnaires, field notes and analysis of audiovisual recordings. Results 38 participants attended a workshop. All participants agreed they were able to contribute to discussions and like the idea of an OG cancer breath test. Five themes emerged related to the proposed new breath test including awareness of OG cancer, barriers to testing and diagnosis, design of new test device, new clinical pathway and placement of test device. 3 themes emerged related to the use of SqS: participatory engagement, simulation and empathetic engagement, and why participants attended. Conclusions SqS facilitated a shared immersive experience for participants and researchers that led to the coconstruction of knowledge that will guide future research activities and be of value to stakeholders concerned with the invention and adoption of POCT. PMID:27625053
Time and Order Effects on Causal Learning
ERIC Educational Resources Information Center
Alvarado, Angelica; Jara, Elvia; Vila, Javier; Rosas, Juan M.
2006-01-01
Five experiments were conducted to explore trial order and retention interval effects upon causal predictive judgments. Experiment 1 found that participants show a strong effect of trial order when a stimulus was sequentially paired with two different outcomes compared to a condition where both outcomes were presented intermixed. Experiment 2…
1998-06-01
4] By 2010, we should be able to change how we conduct the most intense joint operations. Instead of relying on massed forces and sequential ...not independent, sequential steps. Data probes to support the analysis phase were required to complete the logical models. This generated a need...Networks) Identify Granularity (System Level) - Establish Physical Bounds or Limits to Systems • Determine System Test Configuration and Lineup
Spatial interpolation of forest conditions using co-conditional geostatistical simulation
H. Todd Mowrer
2000-01-01
In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick; Wendt, Fabian; Musial, Walter
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, themore » turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.« less
Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing
2013-03-01
Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were presented based on the combination of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with the increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with the increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.
Adverse Outcomes in Infantile Bilateral Developmental Dysplasia of the Hip.
Morbi, Abigail H M; Carsi, Belen; Gorianinov, Vitalli; Clarke, Nicholas M P
2015-01-01
It is believed that bilateral developmental dysplasia of the hip (DDH) has poorer outcomes with higher rates of avascular necrosis (AVN) and reintervention, compared with unilateral DDH. However, there is limited evidence in the literature, with few studies looking specifically at bilateral cases. A retrospective review of 36 patients (72 hips) with >4 years of follow-up. Patient population included surgically treated DDH including late presentations and failures of conservative treatment. The dislocated hips underwent either simultaneous closed or 1 open and 1 closed, or sequential open reduction. AVN and secondary procedures were used as endpoints for analysis as well as clinical and radiologic outcomes. At the last follow-up, 33% of hips had radiologic signs of AVN. Those hips that had no ossific nucleus (ON) at the time of surgery had an odds ratio of developing AVN of 3.05 and a statistically significant association between the 2 variables, whereas open/closed or simultaneous/sequential reduction did not increase the risk for AVN. In addition, 45.8% of those hips required further surgery. The estimated odds ratio of needing additional surgery after simultaneous reduction was 4.04. Clinically, 79.2% of the hips were graded as McKay I, whereas radiologically only 38.8% were Severin I. The AVN rate in bilateral DDH treated surgically is greater than the rate noted in unilateral cases from the same institution undergoing identical protocols. There was no difference in AVN rates between simultaneous and sequential or between the first and second hip to be sequentially reduced. Presence of ON decreases the risk for AVN, suggesting that in bilateral cases, awaiting the appearance of the ON is an important tool to reduce the incidence of AVN. IV.
An energy function for dynamics simulations of polypeptides in torsion angle space
NASA Astrophysics Data System (ADS)
Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.
1998-05-01
Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used, whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single of these window MC moves generates local conformational changes only. But, the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. Also, the same temperature dependence of the stability and build-up of α helices of 18-alanine as found in MD simulations is observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that also dynamical aspects of MD trajectories are faithfully reproduced. Finally, it is demonstrated that even for high temperature unfolded polypeptides the MC simulation is more efficient by a factor of 10 than conventional MD simulations.
Spiegelhalter, David; Grigg, Olivia; Kinsman, Robin; Treasure, Tom
2003-02-01
To investigate the use of the risk-adjusted sequential probability ratio test in monitoring the cumulative occurrence of adverse clinical outcomes. Retrospective analysis of three longitudinal datasets. Patients aged 65 years and over under the care of Harold Shipman between 1979 and 1997, patients under 1 year of age undergoing paediatric heart surgery in Bristol Royal Infirmary between 1984 and 1995, adult patients receiving cardiac surgery from a team of cardiac surgeons in London,UK. Annual and 30-day mortality rates. Using reasonable boundaries, the procedure could have indicated an 'alarm' in Bristol after publication of the 1991 Cardiac Surgical Register, and in 1985 or 1997 for Harold Shipman depending on the data source and the comparator. The cardiac surgeons showed no significant deviation from expected performance. The risk-adjusted sequential probability test is simple to implement, can be applied in a variety of contexts, and might have been useful to detect specific instances of past divergent performance. The use of this and related techniques deserves further attention in the context of prospectively monitoring adverse clinical outcomes.
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact in modern modeling and simulation systems especially in diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and their surrounding entities. However, none of the existing building damage simulation systems sufficiently faithfully realize the criteria of realism required for effective military applications. In this paper, we present a novel physics-based high-fidelity and runtime efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also takes account of rubble pile formation and applies a generic and scalable multi-component based object representation to describe scene entities and highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.
Gstat: a program for geostatistical modelling, prediction and simulation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer J.; Wesseling, Cees G.
1998-01-01
Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
MaMiCo: Software design for parallel molecular-continuum flow simulations
NASA Astrophysics Data System (ADS)
Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim
2016-03-01
The macro-micro-coupling tool (MaMiCo) was developed to ease the development of and modularize molecular-continuum simulations, retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling the spatially adaptive Lattice Boltzmann framework waLBerla with four molecular dynamics (MD) codes: the light-weight Lennard-Jones-based implementation SimpleMD, the node-level optimized software ls1 mardyn, and the community codes ESPResSo and LAMMPS. We detail interface implementations to connect each solver with MaMiCo. The coupling for each waLBerla-MD setup is validated in three-dimensional channel flow simulations which are solved by means of a state-based coupling method. We provide sequential and strong scaling measurements for the four molecular-continuum simulations. The overhead of MaMiCo is found to come at 10%-20% of the total (MD) runtime. The measurements further show that scalability of the hybrid simulations is reached on up to 500 Intel SandyBridge, and more than 1000 AMD Bulldozer compute cores.
A behavioural and neural evaluation of prospective decision-making under risk
Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.
2010-01-01
Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single choice contexts there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal pre-determined strategy, irrespective of the particular order in which options are presented. An alternative model involves continuously re-evaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and use these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of re-evaluating decision utilities, where available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously-acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes. PMID:20980595
De Britto, R L; Vanamail, P; Sankari, T; Vijayalakshmi, G; Das, L K; Pani, S P
2015-06-01
Till today, there is no effective treatment protocol for the complete clearance of Wuchereria bancrofti (W.b) infection that causes secondary lymphoedema. In a double blind randomized control trial (RCT), 146 asymptomatic W. b infected individuals were randomly assigned to one of the four regimens for 12 days, DEC 300 mg + Doxycycline 100 mg coadministration or DEC 300 mg + Albendazole 400 mg co-administration or DEC 300 mg + Albendazole 400 mg sequential administration or control regimen DEC 300 mg and were followed up at 13, 26 and 52 weeks post-treatment for the clearance of infection. At intake, there was no significant variation in mf counts (F(3,137)=0.044; P=0.988) and antigen levels (F(3,137)=1.433; P=0.236) between the regimens. Primary outcome analysis showed that DEC + Albendazole sequential administration has an enhanced efficacy over DEC + Albendazole co-administration (80.6 Vs 64.7%), and this regimen is significantly different when compared to DEC + doxycycline co-administration and control (P<0.05), in clearing microfilaria in 13 weeks. Secondary outcome analysis showed that, all the trial regimens were comparable to control regimen in clearing antigen (F(3, 109)=0.405; P=0.750). Therefore, DEC + Albendazole sequential administration appears to be a better option for rapid clearance of W. b microfilariae in 13 weeks time. (Clinical trials.gov identifier - NCT02005653).
A behavioral and neural evaluation of prospective decision-making under risk.
Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J
2010-10-27
Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single-choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal predetermined strategy, regardless of the particular order in which options are presented. An alternative model involves continuously reevaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and use these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of reevaluating decision utilities, in which available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance, and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes.
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data System (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
A sequential coalescent algorithm for chromosomal inversions
Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M
2013-01-01
Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894
Luo, Yuan; Castro, Jose; Barton, Jennifer K.; Kostuk, Raymond K.; Barbastathis, George
2010-01-01
A new methodology describing the effects of aperiodic and multiplexed gratings in volume holographic imaging systems (VHIS) is presented. The aperiodic gratings are treated as an ensemble of localized planar gratings using coupled wave methods in conjunction with sequential and non-sequential ray-tracing techniques to accurately predict volumetric diffraction effects in VHIS. Our approach can be applied to aperiodic, multiplexed gratings and used to theoretically predict the performance of multiplexed volume holographic gratings within a volume hologram for VHIS. We present simulation and experimental results for the aperiodic and multiplexed imaging gratings formed in PQ-PMMA at 488nm and probed with a spherical wave at 633nm. Simulation results based on our approach that can be easily implemented in ray-tracing packages such as Zemax® are confirmed with experiments and show proof of consistency and usefulness of the proposed models. PMID:20940823
Engesgaard, Peter; Kipp, Kenneth L.
1992-01-01
A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification has been observed by oxidation of pyrite. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m yr−1, which agreed with calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.
Philip A. Araman
1977-01-01
The design of a rough mill for the production of interior furniture parts is used to illustrate a simulation technique for analyzing and evaluating established and proposed sequential production systems. Distributions representing the real-world random characteristics of lumber, equipment feed speeds and delay times are programmed into the simulation. An example is...
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such asmore » first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.« less
ERIC Educational Resources Information Center
Soltero-González, Lucinda; Sparrow, Wendy; Butvilofsky, Sandra; Escamilla, Kathy; Hopewell, Susan
2016-01-01
This longitudinal study examined whether the implementation of a Spanish-English paired literacy approach provides an academic advantage to emerging bilingual students over a sequential literacy approach. The study employed a quasi-experimental design. It compared the biliteracy outcomes of third-grade emerging bilingual learners participating in…
Hamilton, Kyra; Vayro, Caitlin; Schwarzer, Ralf
2015-01-01
To examine a mechanism by which social cognitive factors may predict fruit and vegetable consumption in long-haul truck drivers. Dietary self-efficacy, positive outcome expectancies, and intentions were assessed in 148 Australian truck drivers, and 1 week later they reported their fruit and vegetable consumption. A theory-guided sequential mediation model was specified that postulated self-efficacy and intention as mediators between outcome expectancies and behavior. The hypothesized model was confirmed. A direct effect of outcome expectancies was no longer present when mediators were included, and all indirect effects were significant, including the 2-mediator chain (β = .15; P < .05; 95% confidence interval, 0.05-0.32). Truck drivers who expected benefits from dietary change, felt confident about being capable to do so, and formed an intention were likely to report larger amounts of fruit and vegetable intake. The results suggest that the role of outcome expectancies and self-efficacy are important to consider for understanding and predicting healthy eating intentions in truck drivers. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei
The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method to obtain the cycles by running a single simulation through many engine cycles sequentially can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual enginemore » cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Directinjection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5- 10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.« less
Optimization of multi-stage dynamic treatment regimes utilizing accumulated data.
Huang, Xuelin; Choi, Sangbum; Wang, Lu; Thall, Peter F
2015-11-20
In medical therapies involving multiple stages, a physician's choice of a subject's treatment at each stage depends on the subject's history of previous treatments and outcomes. The sequence of decisions is known as a dynamic treatment regime or treatment policy. We consider dynamic treatment regimes in settings where each subject's final outcome can be defined as the sum of longitudinally observed values, each corresponding to a stage of the regime. Q-learning, which is a backward induction method, is used to first optimize the last stage treatment then sequentially optimize each previous stage treatment until the first stage treatment is optimized. During this process, model-based expectations of outcomes of late stages are used in the optimization of earlier stages. When the outcome models are misspecified, bias can accumulate from stage to stage and become severe, especially when the number of treatment stages is large. We demonstrate that a modification of standard Q-learning can help reduce the accumulated bias. We provide a computational algorithm, estimators, and closed-form variance formulas. Simulation studies show that the modified Q-learning method has a higher probability of identifying the optimal treatment regime even in settings with misspecified models for outcomes. It is applied to identify optimal treatment regimes in a study for advanced prostate cancer and to estimate and compare the final mean rewards of all the possible discrete two-stage treatment sequences. Copyright © 2015 John Wiley & Sons, Ltd.
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the the random sample size can take only two values, N = n or N = 2 n . In this paper, we consider the more practically useful setting of sample sizes in a the finite set { n 1 , n 2 , …, n L }. It is shown that the sample average is then a justifiable estimator , in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
Ganesh, J S; Rogers, C A; Bonser, R S; Banner, N R
2005-06-01
Cystic fibrosis (CF) patients requiring transplantation for respiratory failure may undergo either heart-lung (HLT) or bilateral sequential lung (BSLT) transplantation. The choice of operation varies between surgeons, centres and countries. The current authors investigated whether operation type influenced outcome in adult CF patients transplanted in the UK between July 1995 and June 2002. Propensity scores for receipt of BSLT versus HLT were derived using logistic regression. Cox regression was used to compare survival. In total, 88 BSLTs and 93 HLTs were identified. Patient characteristics were similar overall, but HLT recipients were more likely to be on long-term oxygen therapy and to have had prior resuscitation. There were 72 deaths (29 BSLT and 43 HLT) within 4 yrs. There was a trend towards higher unadjusted survival following BSLT, but, after adjustment, no difference was found (hazard ratio = 0.77; 95% confidence interval 0.29-2.06). Time to the first rejection episode and infection rates were also similar. A total of 82% of hearts from HLT recipients were used as domino heart transplants. In conclusion, after adjusting for comorbidity, donor factors and ischaemia time, it was found that heart-lung and bilateral sequential lung transplantation achieved a similar outcome. The use of domino heart transplantation ameliorated the impact of heart-lung transplantation on total organ availability.
Moshirfar, Majid; Fenzl, Carlton R; Meyer, Jay J; Neuffer, Marcus C; Espandar, Ladan; Mifflin, Mark D
2011-02-01
To evaluate the safety, efficacy, and visual outcomes of simultaneous and sequential implantation of Intacs (Addition Technology, Inc, Sunnyvale, CA) and Verisyse phakic intraocular lens (AMO, Santa Ana, CA) in selected cases of ectatic corneal disease. John A. Moran Eye Center, University of Utah, UT. Prospective data were collected from 19 eyes of 12 patients (5 eyes, post-laser in situ keratomileusis ectasia and 14 eyes, keratoconus). Intacs segments were implanted followed by insertion of a phakic Verisyse lens at the same session (12 eyes) in the simultaneous group or several months later (7 eyes) in the sequential group. The uncorrected visual acuity, best spectacle-corrected visual acuity (BSCVA), and manifest refraction were recorded at each visit. No intraoperative or postoperative complications were observed. At the last follow-up (19 ± 6 months), in the simultaneous group, mean spherical error was -0.79 ± 1.0 diopter (D) (range, -2.0 to +1.50 D) and cylindrical error +2.06 ± 1.21 D (range, +0.5 to +3.75 D). In the sequential group, at the last follow-up, at 36 ± 21 months, the mean spherical error was -1.64 ± 1.31 D (range, -3.25 to +1.0 D) and cylindrical error +2.07 ± 1.03 D (range, +0.75 to +3.25 D). There were no significant differences in mean uncorrected visual acuity or BSCVA between the 2 groups preoperatively or postoperatively. No eye lost lines of preoperative BSCVA. Combined insertion of Intacs and Verisyse was safe and effective in all cases. The outcomes of the simultaneous implantation of the Intacs and Verisyse lens in 1 surgery were similar to the results achieved with sequential implantation using 2 surgeries.
Xu, Shuozhi; Xiong, Liming; Chen, Youping; ...
2016-01-29
Sequential slip transfer across grain boundaries (GB) has an important role in size-dependent propagation of plastic deformation in polycrystalline metals. For example, the Hall–Petch effect, which states that a smaller average grain size results in a higher yield stress, can be rationalised in terms of dislocation pile-ups against GBs. In spite of extensive studies in modelling individual phases and grains using atomistic simulations, well-accepted criteria of slip transfer across GBs are still lacking, as well as models of predicting irreversible GB structure evolution. Slip transfer is inherently multiscale since both the atomic structure of the boundary and the long-range fieldsmore » of the dislocation pile-up come into play. In this work, concurrent atomistic-continuum simulations are performed to study sequential slip transfer of a series of curved dislocations from a given pile-up on Σ3 coherent twin boundary (CTB) in Cu and Al, with dominant leading screw character at the site of interaction. A Frank-Read source is employed to nucleate dislocations continuously. It is found that subject to a shear stress of 1.2 GPa, screw dislocations transfer into the twinned grain in Cu, but glide on the twin boundary plane in Al. Moreover, four dislocation/CTB interaction modes are identified in Al, which are affected by (1) applied shear stress, (2) dislocation line length, and (3) dislocation line curvature. Our results elucidate the discrepancies between atomistic simulations and experimental observations of dislocation-GB reactions and highlight the importance of directly modeling sequential dislocation slip transfer reactions using fully 3D models.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shuozhi; Xiong, Liming; Chen, Youping
Sequential slip transfer across grain boundaries (GB) has an important role in size-dependent propagation of plastic deformation in polycrystalline metals. For example, the Hall–Petch effect, which states that a smaller average grain size results in a higher yield stress, can be rationalised in terms of dislocation pile-ups against GBs. In spite of extensive studies in modelling individual phases and grains using atomistic simulations, well-accepted criteria of slip transfer across GBs are still lacking, as well as models of predicting irreversible GB structure evolution. Slip transfer is inherently multiscale since both the atomic structure of the boundary and the long-range fieldsmore » of the dislocation pile-up come into play. In this work, concurrent atomistic-continuum simulations are performed to study sequential slip transfer of a series of curved dislocations from a given pile-up on Σ3 coherent twin boundary (CTB) in Cu and Al, with dominant leading screw character at the site of interaction. A Frank-Read source is employed to nucleate dislocations continuously. It is found that subject to a shear stress of 1.2 GPa, screw dislocations transfer into the twinned grain in Cu, but glide on the twin boundary plane in Al. Moreover, four dislocation/CTB interaction modes are identified in Al, which are affected by (1) applied shear stress, (2) dislocation line length, and (3) dislocation line curvature. Our results elucidate the discrepancies between atomistic simulations and experimental observations of dislocation-GB reactions and highlight the importance of directly modeling sequential dislocation slip transfer reactions using fully 3D models.« less
An analog scrambler for speech based on sequential permutations in time and frequency
NASA Astrophysics Data System (ADS)
Cox, R. V.; Jayant, N. S.; McDermott, B. J.
Permutation of speech segments is an operation that is frequently used in the design of scramblers for analog speech privacy. In this paper, a sequential procedure for segment permutation is considered. This procedure can be extended to two dimensional permutation of time segments and frequency bands. By subjective testing it is shown that this combination gives a residual intelligibility for spoken digits of 20 percent with a delay of 256 ms. (A lower bound for this test would be 10 percent). The complexity of implementing such a system is considered and the issues of synchronization and channel equalization are addressed. The computer simulation results for the system using both real and simulated channels are examined.
Performance evaluation of an asynchronous multisensor track fusion filter
NASA Astrophysics Data System (ADS)
Alouani, Ali T.; Gray, John E.; McCabe, D. H.
2003-08-01
Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communications delay between sensors platform and fusion center. It also deals with out of sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the new asynchronous Multisensor track fusion filter (AMSTFF) performance is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.
Modeling of a Sequential Two-Stage Combustor
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.
2005-01-01
A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
Le Bras, Fabien; Molinier-Frenkel, Valerie; Guellich, Aziz; Dupuis, Jehan; Belhadj, Karim; Guendouz, Soulef; Ayad, Karima; Colombat, Magali; Benhaiem, Nicole; Tissot, Claire Marie; Hulin, Anne; Jaccard, Arnaud; Damy, Thibaud
2017-05-01
Chemotherapy combining cyclophosphamide, bortezomib and dexamethasone is widely used in light-chain amyloidosis. The benefit is limited in patients with cardiac amyloidosis mainly because of adverse cardiac events. Retrospective analysis of our cohort showed that 39 patients died with 42% during the first month. A new escalation-sequential regimen was set to improve the outcomes. Nine newly-diagnosed patients were prospectively treated with close monitoring of serum N-terminal pro-brain natriuretic peptide, troponin-T and free light chains. The results show that corticoids may destabilise the heart through fluid retention. Thus, a sequential protocol may be a promising approach to treat these patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Goodman, Geoff; Chung, Hyewon; Fischel, Leah; Athey-Lloyd, Laura
2017-07-01
This study examined the sequential relations among three pertinent variables in child psychotherapy: therapeutic alliance (TA) (including ruptures and repairs), autism symptoms, and adherence to child-centered play therapy (CCPT) process. A 2-year CCPT of a 6-year-old Caucasian boy diagnosed with autism spectrum disorder was conducted weekly with two doctoral-student therapists, working consecutively for 1 year each, in a university-based community mental-health clinic. Sessions were video-recorded and coded using the Child Psychotherapy Process Q-Set (CPQ), a measure of the TA, and an autism symptom measure. Sequential relations among these variables were examined using simulation modeling analysis (SMA). In Therapist 1's treatment, unexpectedly, autism symptoms decreased three sessions after a rupture occurred in the therapeutic dyad. In Therapist 2's treatment, adherence to CCPT process increased 2 weeks after a repair occurred in the therapeutic dyad. The TA decreased 1 week after autism symptoms increased. Finally, adherence to CCPT process decreased 1 week after autism symptoms increased. The authors concluded that (1) sequential relations differ by therapist even though the child remains constant, (2) therapeutic ruptures can have an unexpected effect on autism symptoms, and (3) changes in autism symptoms can precede as well as follow changes in process variables.
A novel method for the sequential removal and separation of multiple heavy metals from wastewater.
Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang
2018-01-15
A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg 2+ , Cu 2+ , Pb 2+ and Cd 2+ . The removal efficiencies of Hg 2+ , Cu 2+ , Pb 2+ and Cd 2+ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that solubility product (K sp ) of heavy metal sulfides was closely related to adsorption selectivity of various heavy metals on the sorbent. The removal efficiency of Hg 2+ was higher than that of Cd 2+ , while the K sp of HgS was lower than that of CdS. It indicated that preferential adsorption of heavy metals occurred when the K sp of the heavy metal sulfide was lower. In addition, the differences in the K sp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the K sp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
Hunter, Christopher L; Silvestri, Salvatore; Ralls, George; Stone, Amanda; Walker, Ayanna; Mangalat, Neal; Papa, Linda
2018-05-01
Early identification of sepsis significantly improves outcomes, suggesting a role for prehospital screening. An end-tidal carbon dioxide (ETCO 2 ) value ≤ 25 mmHg predicts mortality and severe sepsis when used as part of a prehospital screening tool. Recently, the Quick Sequential Organ Failure Assessment (qSOFA) score was also derived as a tool for predicting poor outcomes in potentially septic patients. We conducted a retrospective cohort study among patients transported by emergency medical services to compare the use of ETCO 2 ≤ 25 mmHg with qSOFA score of ≥ 2 as a predictor of mortality or diagnosis of severe sepsis in prehospital patients with suspected sepsis. By comparison of receiver operator characteristic curves, ETCO 2 had a higher discriminatory power to predict mortality, sepsis, and severe sepsis than qSOFA. Both non-invasive measures were easily obtainable by prehospital personnel, with ETCO 2 performing slightly better as an outcome predictor.
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for the research of radioactive detection. Compared with the commonly adopted detection methods incorporated with statistical theory, the sequential Bayesian approach offers the advantages of shorter verification time during the analysis of spectra that contain low total counts, especially in complex radionuclide components. In this paper, a simulation experiment platform implanted with the methodology of sequential Bayesian approach was developed. Events sequences of γ-rays associating with the true parameters of a LaBr3(Ce) detector were obtained based on an events sequence generator using Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, respectively represented by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve an optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
The effectiveness of zinc supplementation in men with isolated hypogonadotropic hypogonadism.
Liu, Yan-Ling; Zhang, Man-Na; Tong, Guo-Yu; Sun, Shou-Yue; Zhu, Yan-Hua; Cao, Ying; Zhang, Jie; Huang, Hong; Niu, Ben; Li, Hong; Guo, Qing-Hua; Gao, Yan; Zhu, Da-Long; Li, Xiao-Ying
2017-01-01
A multicenter, open-label, randomized, controlled superiority trial with 18 months of follow-up was conducted to investigate whether oral zinc supplementation could further promote spermatogenesis in males with isolated hypogonadotropic hypogonadism (IHH) receiving sequential purified urinary follicular-stimulating hormone/human chorionic gonadotropin (uFSH/hCG) replacement. Sixty-seven Chinese male IHH patients were recruited from the Departments of Endocrinology in eight tertiary hospitals and randomly allocated into the sequential uFSH/hCG group (Group A, n = 34) or the sequential uFSH plus zinc supplementation group (Group B, n = 33). In Group A, patients received sequential uFSH (75 U, three times a week every other 3 months) and hCG (2000 U, twice a week) treatments. In Group B, patients received oral zinc supplementation (40 mg day-1 ) in addition to the sequential uFSH/hCG treatment given to patients in Group A. The primary outcome was the proportion of patients with a sperm concentration ≥1.0 × 106 ml-1 during the 18 months. The comparison of efficacy between Groups A and B was analyzed. Nineteen of 34 (55.9%) patients receiving sequential uFSH/hCG and 20 of 33 (60.6%) patients receiving sequential uFSH/hCG plus zinc supplementation achieved sperm concentrations ≥1.0 × 106 ml-1 by intention to treat analyses. No differences between Group A and Group B were observed as far as the efficacy of inducing spermatogenesis (P = 0.69). We concluded that the sequential uFSH/hCG plus zinc supplementation regimen had a similar efficacy to the sequential uFSH/hCG treatment alone. The additional improvement of 40 mg day-1 oral zinc supplementation on spermatogenesis and masculinization in male IHH patients is very subtle.
From In-Session Behaviors to Drinking Outcomes: A Causal Chain for Motivational Interviewing
ERIC Educational Resources Information Center
Moyers, Theresa B.; Martin, Tim; Houck, Jon M.; Christopher, Paulette J.; Tonigan, J. Scott
2009-01-01
Client speech in favor of change within motivational interviewing sessions has been linked to treatment outcomes, but a causal chain has not yet been demonstrated. Using a sequential behavioral coding system for client speech, the authors found that, at both the session and utterance levels, specific therapist behaviors predict client change talk.…
Sequential causal inference: Application to randomized trials of adaptive treatment strategies
Dawson, Ree; Lavori, Philip W.
2009-01-01
SUMMARY Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714
Inoue, Tadahisa; Ishii, Norimitsu; Kobayashi, Yuji; Kitano, Rena; Sakamoto, Kazumasa; Ohashi, Tomohiko; Nakade, Yukiomi; Sumida, Yoshio; Ito, Kiyoaki; Nakao, Haruhisa; Yoneda, Masashi
2017-09-01
Endoscopic bilateral self-expandable metallic stent (SEMS) placement for malignant hilar biliary obstructions (MHBOs) is technically demanding, and a second SEMS insertion is particularly challenging. A simultaneous side-by-side (SBS) placement technique using a thinner delivery system may mitigate these issues. We aimed to examine the feasibility and efficacy of simultaneous SBS SEMS placement for treating MHBOs using a novel SEMS that has a 5.7-Fr ultra-thin delivery system. Thirty-four patients with MHBOs underwent SBS SEMS placement between 2010 and 2016. We divided the patient cohort into those who underwent sequential (conventional) SBS placement between 2010 and 2014 (sequential group) and those who underwent simultaneous SBS placement between 2015 and 2016 (simultaneous group), and compared the groups with respect to the clinical outcomes. The technical success rates were 71% (12/17) and 100% (17/17) in the sequential and simultaneous groups, respectively, a difference that was significant (P = .045). The median procedure time was significantly shorter in the simultaneous group (22 min) than in the sequential group (52 min) (P = .017). There were no significant group differences in the time to recurrent biliary obstruction (sequential group: 113 days; simultaneous group: 140 days) or other adverse event rates (sequential group: 12%; simultaneous group: 12%). Simultaneous SBS placement using the novel 5.7-Fr SEMS delivery system may be more straightforward and have a higher success rate compared to that with sequential SBS placement. This new method may be useful for bilateral stenting to treat MHBOs.
Lin, Lien-Chieh; Hsu, Tzu-Herng; Huang, Kuang-Wei; Tam, Ka-Wai
2016-01-01
AIM: To evaluate the applicability of nonbismuth concomitant quadruple therapy for Helicobacter pylori (H. pylori) eradication in Chinese regions. METHODS: A systematic review and meta-analysis of randomized controlled trials was performed to evaluate the efficacy of nonbismuth concomitant quadruple therapy between sequential therapy or triple therapy for H. pylori eradication in Chinese regions. The defined Chinese regions include China, Hong Kong, Taiwan, and Singapore. The primary outcome was the H. pylori eradication rate; the secondary outcome was the compliance with therapy. The PubMed, Embase, Scopus, and Cochrane databases were searched for studies published in the period up to March 2016 with no language restriction. RESULTS: We reviewed six randomized controlled trials and 1616 patients. In 3 trials comparing concomitant quadruple therapy with triple therapy, the H. pylori eradication rate was significantly higher for 7-d nonbismuth concomitant quadruple therapy than for 7-d triple therapy (91.2% vs 77.9%, risk ratio = 1.17, 95%CI: 1.09-1.25). In 3 trials comparing quadruple therapy with sequential therapy, the eradication rate was not significant between groups (86.9% vs 86.0%). However, higher compliance was achieved with concomitant therapy than with sequential therapy. CONCLUSION: The H. pylori eradication rate was higher for nonbismuth concomitant quadruple therapy than for triple therapy. Moreover, higher compliance was achieved with nonbismuth concomitant quadruple therapy than with sequential therapy. Thus, nonbismuth concomitant quadruple therapy should be the first-line treatment in Chinese regions. PMID:27340362
Immediately sequential bilateral cataract surgery: advantages and disadvantages.
Singh, Ranjodh; Dohlman, Thomas H; Sun, Grace
2017-01-01
The number of cataract surgeries performed globally will continue to rise to meet the needs of an aging population. This increased demand will require healthcare systems and providers to find new surgical efficiencies while maintaining excellent surgical outcomes. Immediately sequential bilateral cataract surgery (ISBCS) has been proposed as a solution and is increasingly being performed worldwide. The purpose of this review is to discuss the advantages and disadvantages of ISBCS. When appropriate patient selection occurs and guidelines are followed, ISBCS is comparable with delayed sequential bilateral cataract surgery in long-term patient satisfaction, visual acuity and complication rates. In addition, the risk of bilateral postoperative endophthalmitis and concerns of poorer refractive outcomes have not been supported by the literature. ISBCS is cost-effective for the patient, healthcare payors and society, but current reimbursement models in many countries create significant financial barriers for facilities and surgeons. As demand for cataract surgery rises worldwide, ISBCS will become increasingly important as an alternative to delayed sequential bilateral cataract surgery. Advantages include potentially decreased wait times for surgery, patient convenience and cost savings for healthcare payors. Although they are comparable in visual acuity and complication rates, hurdles that prevent wide adoption include liability concerns as ISBCS is not an established standard of care, economic constraints for facilities and surgeons and inability to fine-tune intraocular lens selection in the second eye. Given these considerations, an open discussion regarding the advantages and disadvantages of ISBCS is important for appropriate patient selection.
ERIC Educational Resources Information Center
Yin, Chengjiu; Song, Yanjie; Tabata, Yoshiyuki; Ogata, Hiroaki; Hwang, Gwo-Jen
2013-01-01
This paper proposes a conceptual framework, scaffolding participatory simulation for mobile learning (SPSML), used on mobile devices for helping students learn conceptual knowledge in the classroom. As the pedagogical design, the framework adopts an experiential learning model, which consists of five sequential but cyclic steps: the initial stage,…
Computer simulation of a space SAR using a range-sequential processor for soil moisture mapping
NASA Technical Reports Server (NTRS)
Fujita, M.; Ulaby, F. (Principal Investigator)
1982-01-01
The ability of a spaceborne synthetic aperture radar (SAR) to detect soil moisture was evaluated by means of a computer simulation technique. The computer simulation package includes coherent processing of the SAR data using a range-sequential processor, which can be set up through hardware implementations, thereby reducing the amount of telemetry involved. With such a processing approach, it is possible to monitor the earth's surface on a continuous basis, since data storage requirements can be easily met through the use of currently available technology. The Development of the simulation package is described, followed by an examination of the application of the technique to actual environments. The results indicate that in estimating soil moisture content with a four-look processor, the difference between the assumed and estimated values of soil moisture is within + or - 20% of field capacity for 62% of the pixels for agricultural terrain and for 53% of the pixels for hilly terrain. The estimation accuracy for soil moisture may be improved by reducing the effect of fading through non-coherent averaging.
Hu, B.X.; He, C.
2008-01-01
An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. ?? International Association for Mathematical Geology 2008.
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation {und G}more » for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G {yields} {und G}, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.« less
Signorelli, Mauro; Lissoni, Andrea Alberto; De Ponti, Elena; Grassi, Tommaso; Ponti, Serena
2015-01-01
Objective Evaluation of the impact of sequential chemoradiotherapy in high risk endometrial cancer (EC). Methods Two hundred fifty-four women with stage IB grade 3, II and III EC (2009 FIGO staging), were included in this retrospective study. Results Stage I, II, and III was 24%, 28.7%, and 47.3%, respectively. Grade 3 tumor was 53.2% and 71.3% had deep myometrial invasion. One hundred sixty-five women (65%) underwent pelvic (+/- aortic) lymphadenectomy and 58 (22.8%) had nodal metastases. Ninety-eight women (38.6%) underwent radiotherapy, 59 (23.2%) chemotherapy, 42 (16.5%) sequential chemoradiotherapy, and 55 (21.7%) were only observed. After a median follow-up of 101 months, 78 women (30.7%) relapsed and 91 women (35.8%) died. Sequential chemoradiotherapy improved survival rates in women who did not undergo nodal evaluation (disease-free survival [DFS], p=0.040; overall survival [OS], p=0.024) or pelvic (+/- aortic) lymphadenectomy (DFS, p=0.008; OS, p=0.021). Sequential chemoradiotherapy improved both DFS (p=0.015) and OS (p=0.014) in stage III, while only a trend was found for DFS (p=0.210) and OS (p=0.102) in stage I-II EC. In the multivariate analysis, only age (≤65 years) and sequential chemoradiotherapy were statistically related to the prognosis. Conclusion Sequential chemoradiotherapy improves survival rates in high risk EC compared with chemotherapy or radiotherapy alone, in particular in stage III. PMID:26197768
Keenan, Jeffrey E; Speicher, Paul J; Nussbaum, Daniel P; Adam, Mohamed Abdelgadir; Miller, Timothy E; Mantyh, Christopher R; Thacker, Julie K M
2015-08-01
The purpose of this study was to examine the impact of the sequential implementation of the enhanced recovery program (ERP) and surgical site infection bundle (SSIB) on short-term outcomes in colorectal surgery (CRS) to determine if the presence of multiple standardized care programs provides additive benefit. Institutional ACS-NSQIP data were used to identify patients who underwent elective CRS from September 2006 to March 2013. The cohort was stratified into 3 groups relative to implementation of the ERP (February 1, 2010) and SSIB (July 1, 2011). Unadjusted characteristics and 30-day outcomes were assessed, and inverse proportional weighting was then used to determine the adjusted effect of these programs. There were 787 patients included: 337, 165, and 285 in the pre-ERP/SSIB, post-ERP/pre-SSIB, and post-ERP/SSIB periods, respectively. After inverse probability weighting (IPW) adjustment, groups were balanced with respect to patient and procedural characteristics considered. Compared with the pre-ERP/SSIB group, the post-ERP/pre-SSIB group had significantly reduced length of hospitalization (8.3 vs 6.6 days, p = 0.01) but did not differ with respect to postoperative wound complications and sepsis. Subsequent introduction of the SSIB then resulted in a significant decrease in superficial SSI (16.1% vs 6.3%, p < 0.01) and postoperative sepsis (11.2% vs 1.8%, p < 0.01). Finally, inflation-adjusted mean hospital cost for a CRS admission fell from $31,926 in 2008 to $22,044 in 2013 (p < 0.01). Sequential implementation of the ERP and SSIB provided incremental improvements in CRS outcomes while controlling hospital costs, supporting their combined use as an effective strategy toward improving the quality of patient care. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Yeh, Ting-Ting; Wu, Ching-Yi; Hsieh, Yu-Wei; Chang, Ku-Chou; Lee, Lin-Chien; Hung, Jen-Wen; Lin, Keh-Chung; Teng, Ching-Hung; Liao, Yi-Han
2017-08-31
Aerobic exercise and cognitive training have been effective in improving cognitive functions; however, whether the combination of these two can further enhance cognition and clinical outcomes in stroke survivors with cognitive decline remains unknown. This study aimed to determine the treatment effects of a sequential combination of aerobic exercise and cognitive training on cognitive function and clinical outcomes. Stroke survivors (n = 75) with cognitive decline will be recruited and randomly assigned to cognitive training, aerobic exercise, and sequential combination of aerobic exercise and cognitive training groups. All participants will receive training for 60 minutes per day, 3 days per week for 12 weeks. The aerobic exercise group will receive stationary bicycle training, the cognitive training group will receive cognitive-based training, and the sequential group will first receive 30 minutes of aerobic exercise, followed by 30 minutes of cognitive training. The outcome measures involve cognitive functions, physiological biomarkers, daily function and quality of life, physical functions, and social participation. Participants will be assessed before and immediately after the interventions, and 6 months after the interventions. Repeated measures of analysis of variance will be used to evaluate the changes in outcome measures at the three assessments. This trial aims to explore the benefits of innovative intervention approaches to improve the cognitive function, physiological markers, daily function, and quality of life in stroke survivors with cognitive decline. The findings will provide evidence to advance post-stroke cognitive rehabilitation. ClinicalTrials.gov, NCT02550990 . Registered on 6 September 2015.
Numerical simulation of double‐diffusive finger convection
Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard
2005-01-01
A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double‐diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density‐dependent, saturated‐unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute‐transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute‐transport equations is accounted for and handled using a sequential implicit Picard iterative scheme. High‐resolution data from a double‐diffusive Hele‐Shaw experiment, initially in a density‐stable configuration, is used to verify the numerical model. The temporal and spatial evolution of simulated double‐diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond closest to experimental results when element sizes adequately define the spatial resolution of observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
Sample size determination for logistic regression on a logit-normal distribution.
Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance
2017-06-01
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
Faria, Rita; Spackman, Eldon; Burch, Jane; Corbacho, Belen; Todd, Derick; Pepper, Chris; Woolacott, Nerys; Palmer, Stephen
2013-07-01
The National Institute for Health and Clinical Excellence (NICE) invited the manufacturer of dabigatran etexilate (Boehringer Ingelheim Ltd, UK) to submit evidence for the clinical and cost-effectiveness of this drug for the prevention of stroke and systemic embolism in patients with non-valvular atrial fibrillation (AF) as part of the NICE single technology appraisal process. The Centre for Reviews and Dissemination and the Centre for Health Economics at the University of York were commissioned to act as the evidence review group (ERG). This article presents a summary of the manufacturer's submission, the ERG report and the subsequent development of NICE guidance for the use of dabigatran within the UK National Health Service. Dabigatran was granted marketing authorisation by the European Medicines Agency for a sequential dosing regimen (DBG sequential), in which patients under 80 years are treated with dabigatran 150 mg twice daily (DBG150) and patients 80 years and over are given dabigatran 110 mg twice daily (DBG110). NICE decisions are bound by the marketing authorisation; therefore, the decision problem faced by the committee was whether the DBG sequential regimen was effective and cost-effective compared with warfarin or aspirin for patients with non-valvular AF and one or more risk factors. The RE-LY trial, a large multi-centre non-inferiority randomised clinical trial, was the primary source of clinical evidence. DBG150 was shown to be non-inferior, and subsequently superior to warfarin, for the primary outcome of all stroke/systemic embolism. DBG110 was found to be non-inferior to warfarin. Results were presented for a post hoc subgroup analysis for patients under and over 80 years of age, where DBG110 showed a statistically significant reduction of haemorrhagic stroke and intracranial haemorrhage in comparison to warfarin in patients over 80 years of age. This post hoc subgroup analysis by age was the basis for the licensed DBG sequential regimen. The economic evaluation compared the costs and outcomes of DBG110, DBG150 and DBG sequential against warfarin, aspirin, and aspirin plus clopidogrel. Across the three dosing regimens, dabigatran was associated with greater costs and better health outcomes than warfarin; however, DBG150 offered the most benefits and dominated DBG110 and DBG sequential (i.e. less costly and more effective). The cost-effectiveness of DBG150 was less favourable for patients well controlled on warfarin. In the first appraisal meeting, the committee issued a 'minded no' decision until additional analyses on the licensed DBG sequential regimen were presented by the manufacturer. These additional analyses indicated that the incremental cost-effectiveness ratio (ICER) of the DBG sequential regimen compared with warfarin ranged from £8,388 to £18,987 per quality-adjusted life year (QALY) gained depending on the level of monitoring costs assumed for warfarin. Patients on warfarin would need to be within therapeutic range 83-85 % of the time for the ICER to exceed £30,000 per additional QALY. Following consideration of the additional evidence and the responses from a large number of consultees and commentators, the committee recommended dabigatran as DBG sequential as an option for the prevention of stroke and systemic embolism in people with non-valvular AF with one or more risk factors for ischaemic stroke.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
Acceleration of discrete stochastic biochemical simulation using GPGPU
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936
Wild, Aaron T; Gandhi, Nishant; Chettiar, Sivarajan T; Aziz, Khaled; Gajula, Rajendra P; Williams, Russell D; Kumar, Rachit; Taparra, Kekoa; Zeng, Jing; Cades, Jessica A; Velarde, Esteban; Menon, Siddharth; Geschwind, Jean F; Cosgrove, David; Pawlik, Timothy M; Maitra, Anirban; Wong, John; Hales, Russell K; Torbenson, Michael S; Herman, Joseph M; Tran, Phuoc T
2013-01-01
Sorafenib (SOR) is the only systemic agent known to improve survival for hepatocellular carcinoma (HCC). However, SOR prolongs survival by less than 3 months and does not alter symptomatic progression. To improve outcomes, several phase I-II trials are currently examining SOR with radiation (RT) for HCC utilizing heterogeneous concurrent and sequential treatment regimens. Our study provides preclinical data characterizing the effects of concurrent versus sequential RT-SOR on HCC cells both in vitro and in vivo. Concurrent and sequential RT-SOR regimens were tested for efficacy among 4 HCC cell lines in vitro by assessment of clonogenic survival, apoptosis, cell cycle distribution, and γ-H2AX foci formation. Results were confirmed in vivo by evaluating tumor growth delay and performing immunofluorescence staining in a hind-flank xenograft model. In vitro, concurrent RT-SOR produced radioprotection in 3 of 4 cell lines, whereas sequential RT-SOR produced decreased colony formation among all 4. Sequential RT-SOR increased apoptosis compared to RT alone, while concurrent RT-SOR did not. Sorafenib induced reassortment into less radiosensitive phases of the cell cycle through G1-S delay and cell cycle slowing. More double-strand breaks (DSBs) persisted 24 h post-irradiation for RT alone versus concurrent RT-SOR. In vivo, sequential RT-SOR produced the greatest tumor growth delay, while concurrent RT-SOR was similar to RT alone. More persistent DSBs were observed in xenografts treated with sequential RT-SOR or RT alone versus concurrent RT-SOR. Sequential RT-SOR additionally produced a greater reduction in xenograft tumor vascularity and mitotic index than either concurrent RT-SOR or RT alone. In conclusion, sequential RT-SOR demonstrates greater efficacy against HCC than concurrent RT-SOR both in vitro and in vivo. These results may have implications for clinical decision-making and prospective trial design.
Chettiar, Sivarajan T.; Aziz, Khaled; Gajula, Rajendra P.; Williams, Russell D.; Kumar, Rachit; Taparra, Kekoa; Zeng, Jing; Cades, Jessica A.; Velarde, Esteban; Menon, Siddharth; Geschwind, Jean F.; Cosgrove, David; Pawlik, Timothy M.; Maitra, Anirban; Wong, John; Hales, Russell K.; Torbenson, Michael S.; Herman, Joseph M.; Tran, Phuoc T.
2013-01-01
Sorafenib (SOR) is the only systemic agent known to improve survival for hepatocellular carcinoma (HCC). However, SOR prolongs survival by less than 3 months and does not alter symptomatic progression. To improve outcomes, several phase I-II trials are currently examining SOR with radiation (RT) for HCC utilizing heterogeneous concurrent and sequential treatment regimens. Our study provides preclinical data characterizing the effects of concurrent versus sequential RT-SOR on HCC cells both in vitro and in vivo. Concurrent and sequential RT-SOR regimens were tested for efficacy among 4 HCC cell lines in vitro by assessment of clonogenic survival, apoptosis, cell cycle distribution, and γ-H2AX foci formation. Results were confirmed in vivo by evaluating tumor growth delay and performing immunofluorescence staining in a hind-flank xenograft model. In vitro, concurrent RT-SOR produced radioprotection in 3 of 4 cell lines, whereas sequential RT-SOR produced decreased colony formation among all 4. Sequential RT-SOR increased apoptosis compared to RT alone, while concurrent RT-SOR did not. Sorafenib induced reassortment into less radiosensitive phases of the cell cycle through G1-S delay and cell cycle slowing. More double-strand breaks (DSBs) persisted 24 h post-irradiation for RT alone versus concurrent RT-SOR. In vivo, sequential RT-SOR produced the greatest tumor growth delay, while concurrent RT-SOR was similar to RT alone. More persistent DSBs were observed in xenografts treated with sequential RT-SOR or RT alone versus concurrent RT-SOR. Sequential RT-SOR additionally produced a greater reduction in xenograft tumor vascularity and mitotic index than either concurrent RT-SOR or RT alone. In conclusion, sequential RT-SOR demonstrates greater efficacy against HCC than concurrent RT-SOR both in vitro and in vivo. These results may have implications for clinical decision-making and prospective trial design. PMID:23762417
Zhang, Ying; Ji, Yajie; Li, Jianwei; Lei, Li; Wu, Siyu; Zuo, Wenjia; Jia, Xiaoqing; Wang, Yujie; Mo, Miao; Zhang, Na; Shen, Zhenzhou; Wu, Jiong; Shao, Zhimin; Liu, Guangyu
2018-04-01
To investigate ovarian function and therapeutic efficacy among estrogen receptor (ER)-positive, premenopausal breast cancer patients treated with gonadotropin-releasing hormone agonist (GnRHa) and chemotherapy simultaneously or sequentially. This study was a phase 3, open-label, parallel, randomized controlled trial (NCT01712893). Two hundred sixteen premenopausal patients (under 45 years) diagnosed with invasive ER-positive breast cancer were enrolled from July 2009 to May 2013 and randomized at a 1:1 ratio to receive (neo)adjuvant chemotherapy combined with sequential or simultaneous GnRHa treatment. All patients were advised to receive GnRHa for at least 2 years. The primary outcome was the incidence of early menopause, defined as amenorrhea lasting longer than 12 months after the last chemotherapy or GnRHa dose, with postmenopausal or unknown follicle-stimulating hormone and estradiol levels. The menstrual resumption period and survivals were the secondary endpoints. The median follow-up time was 56.9 months (IQR 49.5-72.4 months). One hundred and eight patients were enrolled in each group. Among them, 92 and 78 patients had complete primary endpoint data in the sequential and simultaneous groups, respectively. The rates of early menopause were 22.8% (21/92) in the sequential group and 23.1% (18/78) in the simultaneous group [simultaneous vs. sequential: OR 1.01 (95% CI 0.50-2.08); p = 0.969; age-adjusted OR 1.13; (95% CI 0.54-2.37); p = 0.737]. The median menstruation resumption period was 12.0 (95% CI 9.3-14.7) months and 10.3 (95% CI 8.2-12.4) months for the sequential and simultaneous groups, respectively [HR 0.83 (95% CI 0.59-1.16); p = 0.274; age-adjusted HR 0.90 (95%CI 0.64-1.27); p = 0.567]. No significant differences were evident for disease-free survival (p = 0.290) or overall survival (p = 0.514) between the two groups. For ER-positive premenopausal patients, the sequential use of GnRHa and chemotherapy showed ovarian preservation and survival outcomes that were no worse than simultaneous use. The application of GnRHa can probably be delayed until menstruation resumption after chemotherapy.
Some theoretical issues on computer simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, C.L.; Reidys, C.M.
1998-02-01
The subject of this paper is the development of mathematical foundations for a theory of simulation. Sequentially updated cellular automata (sCA) over arbitrary graphs are employed as a paradigmatic framework. In the development of the theory, the authors focus on the properties of causal dependencies among local mappings in a simulation. The main object of and study is the mapping between a graph representing the dependencies among entities of a simulation and a representing the equivalence classes of systems obtained by all possible updates.
Blocking for Sequential Political Experiments
Moore, Sally A.
2013-01-01
In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
On the origin of reproducible sequential activity in neural circuits
NASA Astrophysics Data System (ADS)
Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.
2004-12-01
Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
On the origin of reproducible sequential activity in neural circuits.
Afraimovich, V S; Zhigulin, V P; Rabinovich, M I
2004-12-01
Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
Cao, Youfang; Liang, Jie
2013-01-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
NASA Astrophysics Data System (ADS)
Cao, Youfang; Liang, Jie
2013-07-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.
Cao, Youfang; Liang, Jie
2013-07-14
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.
Koning, Ina M; Maric, Marija; MacKinnon, David; Vollebergh, Wilma A M
2015-08-01
Previous work revealed that the combined parent-student alcohol prevention program (PAS) effectively postponed alcohol initiation through its hypothesized intermediate factors: increase in strict parental rule setting and adolescents' self-control (Koning, van den Eijnden, Verdurmen, Engels, & Vollebergh, 2011). This study examines whether the parental strictness precedes an increase in adolescents' self-control by testing a sequential mediation model. A cluster randomized trial including 3,245 Dutch early adolescents (M age = 12.68, SD = 0.50) and their parents randomized over 4 conditions: (1) parent intervention, (2) student intervention, (3) combined intervention, and (4) control group. Outcome measure was amount of weekly drinking measured at age 12 to 15; baseline assessment (T0) and 3 follow-up assessments (T1-T3). Main effects of the combined and parent intervention on weekly drinking at T3 were found. The effect of the combined intervention on weekly drinking (T3) was mediated via an increase in strict rule setting (T1) and adolescents' subsequent self-control (T2). In addition, the indirect effect of the combined intervention via rule setting (T1) was significant. No reciprocal sequential mediation (self-control at T1 prior to rules at T2) was found. The current study is 1 of the few studies reporting sequential mediation effects of youth intervention outcomes. It underscores the need of involving parents in youth alcohol prevention programs, and the need to target both parents and adolescents, so that change in parents' behavior enables change in their offspring. (c) 2015 APA, all rights reserved).
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi- Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility by considering all related negotiations are dynamically calculated based on a MNN. Secondly, by employing a MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, through comparing expected utilities between all possible policies to conduct MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.
Biocellion: accelerating computer simulation of multicellular biological system models.
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-11-01
Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Multiphysics Code Demonstrated for Propulsion Applications
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Melis, Matthew E.
1998-01-01
The utility of multidisciplinary analysis tools for aeropropulsion applications is being investigated at the NASA Lewis Research Center. The goal of this project is to apply Spectrum, a multiphysics code developed by Centric Engineering Systems, Inc., to simulate multidisciplinary effects in turbomachinery components. Many engineering problems today involve detailed computer analyses to predict the thermal, aerodynamic, and structural response of a mechanical system as it undergoes service loading. Analysis of aerospace structures generally requires attention in all three disciplinary areas to adequately predict component service behavior, and in many cases, the results from one discipline substantially affect the outcome of the other two. There are numerous computer codes currently available in the engineering community to perform such analyses in each of these disciplines. Many of these codes are developed and used in-house by a given organization, and many are commercially available. However, few, if any, of these codes are designed specifically for multidisciplinary analyses. The Spectrum code has been developed for performing fully coupled fluid, thermal, and structural analyses on a mechanical system with a single simulation that accounts for all simultaneous interactions, thus eliminating the requirement for running a large number of sequential, separate, disciplinary analyses. The Spectrum code has a true multiphysics analysis capability, which improves analysis efficiency as well as accuracy. Centric Engineering, Inc., working with a team of Lewis and AlliedSignal Engines engineers, has been evaluating Spectrum for a variety of propulsion applications including disk quenching, drum cavity flow, aeromechanical simulations, and a centrifugal compressor flow simulation.
[Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland.
Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui
2018-05-01
The heavy metals in soil have serious impacts on safety, ecological environment and human health due to their toxicity and accumulation. It is necessary to efficiently identify the risk area of heavy metals in farmland soil, which is of important significance for environment protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven kinds of heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. In order to overcame the problems of the data, including the limitation of abnormal values and skewness distribution and the smooth effect with the traditional kriging methods, we used sequential indicator simulation method (SISIM) to define the spatial distribution of heavy metals, and combined Hakanson index method to identify potential ecological risk area of heavy metals in farmland. The results showed that: (1) Based on the similar accuracy of spatial prediction of soil heavy metals, the SISIM had a better expression of detail rebuild than ordinary kriging in small scale area. Compared to indicator kriging, the SISIM had less error rate (4.9%-17.1%) in uncertainty evaluation of heavy-metal risk identification. The SISIM had less smooth effect and was more applicable to simulate the spatial uncertainty assessment of soil heavy metals and risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of study area due to enterprise production, human activities, and river sediments. This study combined the sequential indicator simulation with Hakanson risk index method, and effectively overcame the outlier information loss and smooth effect of traditional kriging method. It provided a new way to identify the soil heavy metal risk area of farmland in uneven sampling.
NASA Technical Reports Server (NTRS)
Shyy, W.; Thakur, S.; Udaykumar, H. S.
1993-01-01
A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate the longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment has been made. By comparing with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.
Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B
2018-03-05
Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown were trialled over 78days; in total extracting over 5m 3 of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3mL/s) across all remediation techniques each operating over periods of 4-14days over the 78day trial. The code also approximated order of magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer term prediction of the effectiveness of remediation approaches allowing better determination of remediation endpoints and long-term risks. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation. Published by Elsevier B.V. All rights reserved.
Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh
2009-01-01
This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the ChiChi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of NDVI image results demonstrate that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. However, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variogram, kriging and sequential Gaussian simulation in remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial variability and heterogeneity.
Optimal medication dosing from suboptimal clinical examples: a deep reinforcement learning approach.
Nemati, Shamim; Ghassemi, Mohammad M; Clifford, Gari D
2016-08-01
Misdosing medications with sensitive therapeutic windows, such as heparin, can place patients at unnecessary risk, increase length of hospital stay, and lead to wasted hospital resources. In this work, we present a clinician-in-the-loop sequential decision making framework, which provides an individualized dosing policy adapted to each patient's evolving clinical phenotype. We employed retrospective data from the publicly available MIMIC II intensive care unit database, and developed a deep reinforcement learning algorithm that learns an optimal heparin dosing policy from sample dosing trails and their associated outcomes in large electronic medical records. Using separate training and testing datasets, our model was observed to be effective in proposing heparin doses that resulted in better expected outcomes than the clinical guidelines. Our results demonstrate that a sequential modeling approach, learned from retrospective data, could potentially be used at the bedside to derive individualized patient dosing policies.
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation-model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the numbers of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm, to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential-simulation of most queueing network models.
Chen, Qi-Fen; Zhang, Yi-Wei
2018-02-01
To investigate the clinical effect of Saccharomyces boulardii powder combined with azithromycin sequential therapy in the treatment of children with diarrhea secondary to Mycoplasma pneumoniae pneumonia. A total of 88 children with diarrhea secondary to Mycoplasma pneumoniae pneumonia between June 2015 and March 2017 were divided into control group and study group using a random number table, with 44 children in each group. The children in the control group were given routine treatment combined with azithromycin sequential therapy, and those in the study group were given oral Saccharomyces boulardii powder in addition to the treatment in the control group until the end of azithromycin sequential therapy. After the treatment ended, the two groups were compared in terms of time to improvement of clinical symptoms, length of hospital stay, clinical outcome, defecation frequency before and after treatment, condition of intestinal dysbacteriosis, and incidence of adverse events. Compared with the control group, the study group had significantly shorter time to improvement of clinical symptoms and length of hospital stay (P<0.05). The study group had a significantly higher response rate than the control group (P<0.05). On days 3 and 5 of treatment, the study group had a significant reduction in defecation frequency compared with the control group (P<0.05). The study group had a significantly lower rate of intestinal dysbacteriosis than the control group (P<0.05). There was no significant difference in the incidence of adverse events between the two groups (P>0.05). In the treatment of children with diarrhea secondary to Mycoplasma pneumoniae pneumonia, Saccharomyces boulardii powder combined with azithromycin sequential therapy can improve clinical symptoms, shorten the length of hospital stay, reduce defecation frequency and the incidence of intestinal dysbacteriosis, and improve clinical outcomes, and does not increase the risk of adverse events.
Tinnitus after Simultaneous and Sequential Bilateral Cochlear Implantation.
Ramakers, Geerte G J; Kraaijenga, Véronique J C; Smulders, Yvette E; van Zon, Alice; Stegeman, Inge; Stokroos, Robert J; Free, Rolien H; Frijns, Johan H M; Huinck, Wendy J; Van Zanten, Gijsbert A; Grolman, Wilko
2017-01-01
There is an ongoing global discussion on whether or not bilateral cochlear implantation should be standard care for bilateral deafness. Contrary to unilateral cochlear implantation, however, little is known about the effect of bilateral cochlear implantation on tinnitus. To investigate tinnitus outcomes 1 year after bilateral cochlear implantation. Secondarily, to compare tinnitus outcomes between simultaneous and sequential bilateral cochlear implantation and to investigate long-term follow-up (3 years). This study is a secondary analysis as part of a multicenter randomized controlled trial. Thirty-eight postlingually deafened adults were included in the original trial, in which the presence of tinnitus was not an inclusion criterion. All participants received cochlear implants (CIs) because of profound hearing loss. Nineteen participants received bilateral CIs simultaneously and 19 participants received bilateral CIs sequentially with an inter-implant interval of 2 years. The prevalence and severity of tinnitus before and after simultaneous and sequential bilateral cochlear implantation were measured preoperatively and each year after implantation with the Tinnitus Handicap Inventory (THI) and Tinnitus Questionnaire (TQ). The prevalence of preoperative tinnitus was 42% (16/38). One year after bilateral implantation, there was a median difference of -8 (inter-quartile range (IQR): -28 to 4) in THI score and -9 (IQR: -17 to -9) in TQ score in the participants with preoperative tinnitus. Induction of tinnitus occurred in five participants, all in the simultaneous group, in the year after bilateral implantation. Although the preoperative and also the postoperative median THI and TQ scores were higher in the simultaneous group, the median difference scores were equal in both groups. In the simultaneous group, tinnitus scores fluctuated in the 3 years after implantation. In the sequential group, four patients had an additional benefit of the second CI: a total suppression of tinnitus compared with their unilateral situation. While bilateral cochlear implantation can have a positive effect on preoperative tinnitus complaints, the induction of (temporary or permanent) tinnitus was also reported. Dutch Trial Register NTR1722.
Food matrix effects on in vitro digestion of microencapsulated tuna oil powder.
Shen, Zhiping; Apriani, Christina; Weerakkody, Rangika; Sanguansri, Luz; Augustin, Mary Ann
2011-08-10
Tuna oil, containing 53 mg of eicosapentaenoic acid (EPA) and 241 mg of docosahexaenoic acid (DHA) per gram of oil, delivered as a neat microencapsulated tuna oil powder (25% oil loading) or in food matrices (orange juice, yogurt, or cereal bar) fortified with microencapsulated tuna oil powder was digested in simulated gastric fluid or sequentially in simulated gastric fluid and simulated intestinal fluid. The level of fortification was equivalent to 1 g of tuna oil per recommended serving size (i.e., per 200 g of orange juice or yogurt or 60 g of cereal bar). The changes in particle size of oil droplets during digestion were influenced by the method of delivery of the microencapsulated tuna oil powder. Lipolysis in simulated gastric fluid was low, with only 4.4-6.1% EPA and ≤1.5% DHA released after digestion (as a % of total fatty acids present). After sequential exposure to simulated gastric and intestinal fluids, much higher extents of lipolysis of both glycerol-bound EPA and DHA were obtained (73.2-78.6% for the neat powder, fortified orange juice, and yogurt; 60.3-64.0% for the fortified cereal bar). This research demonstrates that the choice of food matrix may influence the lipolysis of microencapsulated tuna oil.
Burden, C; Preshaw, J; White, P; Draycott, T J; Grant, S; Fox, R
2013-08-01
To assess the usability of virtual-reality (VR) simulation for obstetric ultrasound trainees. Twenty-six participants were recruited: 18 obstetric ultrasound trainees (with little formal ultrasonography training) and eight certified experts. All performed five sequential VR-simulated crown-rump length (CRL) scans in a single session and three repetitions of biparietal diameter (BPD), occipitofrontal diameter (OFD) and femur length (FL) measurements. Outcome measures included mean percentage deviation from target for all measurements. Time taken to perform each type of scan was recorded. The mean percentage difference for the first scan was significantly greater for the trainee group than for the expert group for BPD (P = 0.035), OFD (P = 0.010) and FL (P = 0.008) and for time taken for the first CRL (P < 0.001) and fetal biometry (including BPD, OFD and FL measurements) scan (P < 0.001), demonstrating that trainees were initially significantly less accurate and less efficient. Over subsequent scans, the trainees became more accurate for all measurements with a significant improvement shown for OFD and FL (P < 0.05). The time taken for trainees to complete CRL and fetal biometry scans decreased significantly (all P < 0.05) with repetition, to near-expert efficiency. All participants were able to use the simulator and produce clinically meaningful biometry results. With repetition, beginners quickly approached near-expert levels of accuracy and speed. These data demonstrate that obstetricians with minimal experience can improve their ultrasonographic skills with short-phase VR-simulation training. The speed of improvement suggests that VR simulation might be useful as a warm-up exercise before clinical training sessions in order to reduce their impact on clinical service. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
Burstein, Harold J.; Prestrud, Ann Alexis; Seidenfeld, Jerome; Anderson, Holly; Buchholz, Thomas A.; Davidson, Nancy E.; Gelmon, Karen E.; Giordano, Sharon H.; Hudis, Clifford A.; Malin, Jennifer; Mamounas, Eleftherios P.; Rowden, Diana; Solky, Alexander J.; Sowers, MaryFran R.; Stearns, Vered; Winer, Eric P.; Somerfield, Mark R.; Griggs, Jennifer J.
2010-01-01
Purpose To develop evidence-based guidelines, based on a systematic review, for endocrine therapy for postmenopausal women with hormone receptor–positive breast cancer. Methods A literature search identified relevant randomized trials. Databases searched included MEDLINE, PREMEDLINE, the Cochrane Collaboration Library, and those for the Annual Meetings of the American Society of Clinical Oncology (ASCO) and the San Antonio Breast Cancer Symposium (SABCS). The primary outcomes of interest were disease-free survival, overall survival, and time to contralateral breast cancer. Secondary outcomes included adverse events and quality of life. An expert panel reviewed the literature, especially 12 major trials, and developed updated recommendations. Results An adjuvant treatment strategy incorporating an aromatase inhibitor (AI) as primary (initial endocrine therapy), sequential (using both tamoxifen and an AI in either order), or extended (AI after 5 years of tamoxifen) therapy reduces the risk of breast cancer recurrence compared with 5 years of tamoxifen alone. Data suggest that including an AI as primary monotherapy or as sequential treatment after 2 to 3 years of tamoxifen yields similar outcomes. Tamoxifen and AIs differ in their adverse effect profiles, and these differences may inform treatment preferences. Conclusion The Update Committee recommends that postmenopausal women with hormone receptor–positive breast cancer consider incorporating AI therapy at some point during adjuvant treatment, either as up-front therapy or as sequential treatment after tamoxifen. The optimal timing and duration of endocrine treatment remain unresolved. The Update Committee supports careful consideration of adverse effect profiles and patient preferences in deciding whether and when to incorporate AI therapy. PMID:20625130
Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K
2017-01-01
This fourteen-centre project used professional rating scales and parent questionnaires to assess longitudinal outcomes in a large non-selected population of children receiving simultaneous and sequential bilateral cochlear implants. This was an observational non-randomized service evaluation. Data were collected at four time points: before bilateral cochlear implants or before the sequential implant, one year, two years, and three years after. The measures reported are Categories of Auditory Performance II (CAPII), Speech Intelligibility Rating (SIR), Bilateral Listening Skills Profile (BLSP) and Parent Outcome Profile (POP). Thousand and one children aged from 8 months to almost 18 years were involved, although there were many missing data. In children receiving simultaneous implants after one, two, and three years respectively, median CAP scores were 4, 5, and 6; median SIR were 1, 2, and 3. Three years after receiving simultaneous bilateral cochlear implants, 61% of children were reported to understand conversation without lip-reading and 66% had intelligible speech if the listener concentrated hard. Auditory performance and speech intelligibility were significantly better in female children than males. Parents of children using sequential implants were generally positive about their child's well-being and behaviour since receiving the second device; those who were less positive about well-being changes also generally reported their children less willing to wear the second device. Data from 78% of paediatric cochlear implant centres in the United Kingdom provide a real-world picture of outcomes of children with bilateral implants in the UK. This large reference data set can be used to identify children in the lower quartile for targeted intervention.
Wasser, Tobias; Pollard, Jessica; Fisk, Deborah; Srihari, Vinod
2017-10-01
In first-episode psychosis there is a heightened risk of aggression and subsequent criminal justice involvement. This column reviews the evidence pointing to these heightened risks and highlights opportunities, using a sequential intercept model, for collaboration between mental health services and existing diversionary programs, particularly for patients whose behavior has already brought them to the attention of the criminal justice system. Coordinating efforts in these areas across criminal justice and clinical spheres can decrease the caseload burden on the criminal justice system and optimize clinical and legal outcomes for this population.
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 80s. In the last two decades, state of the art simulation methods have changed from being based on covariance-based 2-point statistics to multiple-point statistics (MPS), that allow simulation of more realistic Earth-structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This pose the problem of integration of these different sources of information, such that decisions related to reservoir models can be taken on an as informed base as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential simulation based MPS simulation methods conditional to uncertain (soft) data, as a computational efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information, and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first make use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provides a computationally attractive approach for integration of information about a reservoir model.
Luby, Joan L; Barch, Deanna; Whalen, Diana; Tillman, Rebecca; Belden, Andy
2017-12-01
Adverse childhood experiences (ACEs) have been associated with poor mental and physical health outcomes. However, the mechanism of this effect, critical to enhancing public health, remains poorly understood. To investigate the neurodevelopmental trajectory of the association between early ACEs and adolescent general and emotional health outcomes. A prospective longitudinal study that began when patients were aged 3 to 6 years who underwent neuroimaging later at ages 7 to 12 years and whose mental and physical health outcomes were observed at ages 9 to 15 years. Sequential mediation models were used to investigate associations between early ACEs and brain structure, emotion development, and health outcomes longitudinally. Children were recruited from an academic medical center research unit. Early life adversity. Early ACEs in children aged 3 to 7 years; volume of a subregion of the prefrontal cortex, the inferior frontal gyrus, in children aged 6 to 12 years; and emotional awareness, depression severity, and general health outcomes in children and adolescents aged 9 to 15 years. The mean (SD) age of 119 patients was 9.65 (1.31) years at the time of scan. The mean (SD) ACE score was 5.44 (3.46). The mean (SD) depression severity scores were 2.61 (1.78) at preschool, 1.77 (1.58) at time 2, and 2.16 (1.64) at time 3. The mean (SD) global physical health scores at time 2 and time 3 were 0.30 (0.38) and 0.33 (0.42), respectively. Sequential mediation in the association between high early ACEs and emotional and physical health outcomes were found. Smaller inferior frontal gyrus volumes and poor emotional awareness sequentially mediated the association between early ACEs and poor general health (model parameter estimate = 0.002; 95% CI, 0.0002-0.056) and higher depression severity (model parameter estimate = 0.007; 95% CI, 0.001-0.021) in adolescence. An increase from 0 to 3 early ACEs was associated with 15% and 25% increases in depression severity and physical health problems, respectively. Study findings highlight 1 putative neurodevelopmental mechanism by which the association between early ACEs and later poor mental and physical health outcomes may operate. This identified risk trajectory may be useful to target preventive interventions.
Yoo, Myung Hoon; Lim, Won Sub; Park, Joo Hyun; Kwon, Joong Keun; Lee, Tae-Hoon; An, Yong-Hwi; Kim, Young-Jin; Kim, Jong Yang; Lim, Hyun Woo; Park, Hong Ju
2016-01-01
Severe-to-profound sudden sensorineural hearing loss (SSNHL) has a poor prognosis. We aimed to compare the efficacy of simultaneous and sequential oral and intratympanic steroids for this condition. Fifty patients with severe-to-profound SSNHL (>70 dB HL) were included from 7 centers. The simultaneous group (27 patients) received oral and intratympanic steroid injections for 2 weeks. The sequential group (23 patients) was treated with oral steroids for 2 weeks and intratympanic steroids for the subsequent 2 weeks. Pure-tone averages (PTA) and word discrimination scores (WDS) were compared before treatment and 2 weeks and 1 and 2 months after treatment. Treatment outcomes according to the modified American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) criteria were also analyzed. The improvement in PTA and WDS at the 2-week follow-up was 23 ± 21 dB HL and 20 ± 39% in the simultaneous group and 31 ± 29 dB HL and 37 ± 42% in the sequential group; this was not statistically significant. Complete or partial recovery at the 2-week follow-up was observed in 26% of the simultaneous group and 30% of the sequential group; this was also not significant. The improvement in PTA and WDS at the 2-month follow-up was 40 ± 20 dB HL and 37 ± 35% in the simultaneous group and 41 ± 25 dB HL and 48 ± 41% in the sequential group; this was not statistically significant. Complete or partial recovery at the 2-month follow-up was observed in 33% of the simultaneous group and 35% of the sequential group; this was also not significant. Seven patients in the sequential group did not need intratympanic steroid injections for sufficient improvement after oral steroids alone. Simultaneous oral/intratympanic steroid treatment yielded a recovery similar to that produced by sequential treatment. Because the addition of intratympanic steroids can be decided upon based on the improvement after an oral steroid, the sequential regimen can be recommended to avoid unnecessary intratympanic injections. © 2017 S. Karger AG, Basel.
Effective Identification of Similar Patients Through Sequential Matching over ICD Code Embedding.
Nguyen, Dang; Luo, Wei; Venkatesh, Svetha; Phung, Dinh
2018-04-11
Evidence-based medicine often involves the identification of patients with similar conditions, which are often captured in ICD (International Classification of Diseases (World Health Organization 2013)) code sequences. With no satisfying prior solutions for matching ICD-10 code sequences, this paper presents a method which effectively captures the clinical similarity among routine patients who have multiple comorbidities and complex care needs. Our method leverages the recent progress in representation learning of individual ICD-10 codes, and it explicitly uses the sequential order of codes for matching. Empirical evaluation on a state-wide cancer data collection shows that our proposed method achieves significantly higher matching performance compared with state-of-the-art methods ignoring the sequential order. Our method better identifies similar patients in a number of clinical outcomes including readmission and mortality outlook. Although this paper focuses on ICD-10 diagnosis code sequences, our method can be adapted to work with other codified sequence data.
SMA texture and reorientation: simulations and neutron diffraction studies
NASA Astrophysics Data System (ADS)
Gao, Xiujie; Brown, Donald W.; Brinson, L. Catherine
2005-05-01
With increased usage of shape memory alloys (SMA) for applications in various fields, it is important to understand how the material behavior is affected by factors such as texture, stress state and loading history, especially for complex multiaxial loading states. Using the in-situ neutron diffraction loading facility (SMARTS diffractometer) and ex situ inverse pole figure measurement facility (HIPPO diffractometer) at the Los Alamos Neutron Science Center (LANCE), the macroscopic mechanical behavior and texture evolution of Nickel-Titanium (Nitinol) SMAs under sequential compression in alternating directions were studied. The simplified multivariant model developed at Northwestern University was then used to simulate the macroscopic behavior and the microstructural change of Nitinol under this sequential loading. Pole figures were obtained via post-processing of the multivariant results for volume fraction evolution and compared quantitatively well to the experimental results. The experimental results can also be used to test or verify other SMA constitutive models.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
Bush, Terry; Lovejoy, Jennifer; Javitz, Harold; Torres, Alula Jimenez; Wassum, Ken; Tan, Marcia M; Spring, Bonnie
2018-05-31
Smoking cessation often results in weight gain which discourages many smokers from quitting and can increase health risks. Treatments to reduce cessation-related weight gain have been tested in highly controlled trials of in-person treatment, but have never been tested in a real-world setting, which has inhibited dissemination. The Best Quit Study (BQS) is a replication and "real world" translation using telephone delivery of a prior in-person efficacy trial. randomized control trial in a quitline setting. Eligible smokers (n = 2540) were randomized to the standard 5-call quitline intervention or quitline plus simultaneous or sequential weight management. Regression analyses tested effectiveness of treatments on self-reported smoking abstinence and weight change at 6 and 12 months. Study enrollees were from 10 commercial employer groups and three state quitlines. Participants were between ages 18-72, 65.8% female, 68.2% white; 23.0% Medicaid-insured, and 76.3% overweight/obese. The follow-up response rate was lower in the simultaneous group than the control group at 6 months (p = 0.01). While a completers analysis of 30-day point prevalence abstinence detected no differences among groups at 6 or 12 months, multiply imputed abstinence showed quit rate differences at 6 months for:simultaneous (40.3%) vs. sequential (48.3%), p = 0.034 and simultaneous vs. control (44.9%), p = 0.043. At 12 months, multiply imputed abstinence, was significantly lower for the simultaneous group (40.7%) vs. control (46.0%), p < 0.05 and vs. sequential (46.3%), p < 0.05. Weight gain at 6 and 12 months was minimal and not different among treatment groups. The sequential group completed fewer total calls (3.75) vs. control (4.16) and vs. simultaneous group (3.83), p = 0.01, and fewer weight calls (0.94) than simultaneous (2.33), p < 0.0001. The number of calls completed predicted 30-day abstinence, p < 0.001, but not weight outcomes. This study offers a model for evaluating population-level public health interventions conducted in partnership with tobacco quitlines. Simultaneous (vs. sequential) delivery of phone/web weight management with cessation treatment in the quitline setting may adversely affect quit rate. Neither a simultaneous nor sequential approach to addressing weight produced any benefit on suppressing weight gain. This study highlights the need and the challenges of testing intensive interventions in real-world settings. ClinicalTrials.gov Identifier: NCT01867983 . Registered: May 30, 2013.
NASA Astrophysics Data System (ADS)
Klise, K. A.; Weissmann, G. S.; McKenna, S. A.; Tidwell, V. C.; Frechette, J. D.; Wawrzyniec, T. F.
2007-12-01
Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties can not be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Parallel discrete event simulation: A shared memory approach
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to insure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of electrohydrodynamic (EHD) ion-drag micropump. Finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, simulation results are also compared to examine the difference between the results. The main objective was to analyze the computational time required by both the methods with respect to different grid sizes and parallelize the Jacobi method to reduce the computational time. In common, the SGS method is faster than the SJ method but the data parallelism of Jacobi method may produce good speedup over SGS method. In this study, the feasibility of using parallel Jacobi (PJ) method is attempted in relation to SGS method. MATLAB Parallel/Distributed computing environment is used and a parallel code for SJ method is implemented. It was found that for small grid size the SGS method remains dominant over SJ method and PJ method while for large grid size both the sequential methods may take nearly too much processing time to converge. Yet, the PJ method reduces computational time to some extent for large grid sizes.
Grover, Elise; Hossain, Mohammed Kamal; Uddin, Saker; Venkatesh, Mohini; Ram, Pavani K; Dreibelbis, Robert
2018-01-01
To determine the impact of environmental nudges on handwashing behaviours among primary school children as compared to a high-intensity hygiene education intervention. In a cluster-randomised trial (CRT), we compared the rates of handwashing with soap (HWWS) after a toileting event among primary school students in rural Bangladesh. Eligible schools (government run, on-site sanitation and water, no hygiene interventions in last year, fewer than 450 students) were identified, and 20 schools were randomly selected and allocated without blinding to one of four interventions, five schools per group: simultaneous handwashing infrastructure and nudge construction, sequential infrastructure then nudge construction, simultaneous infrastructure and high-intensity hygiene education (HE) and sequential handwashing infrastructure and HE. The primary outcome, incidence of HWWS after a toileting event, was compared between the intervention groups at different data collection points with robust-Poisson regression analysis with generalised estimating equations, adjusting for school-level clustering of outcomes. The nudge intervention and the HE intervention were found to be equally effective at sustained impact over 5 months post-intervention (adjusted IRR 0.81, 95% CI 0.61-1.09). When comparing intervention delivery timing, the simultaneous delivery of the HE intervention significantly outperformed the sequential HE delivery (adjusted IRR 1.58 CI 1.20-2.08), whereas no significant difference was observed between sequential and simultaneous nudge intervention delivery (adjusted IRR 0.75, 95% CI 0.48-1.17). Our trial demonstrates sustained improved handwashing behaviour 5 months after the nudge intervention. The nudge intervention's comparable performance to a high-intensity hygiene education intervention is encouraging. © 2017 John Wiley & Sons Ltd.
Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras
2016-07-01
OBJECT Currently, bilateral procedures involve 2 sequential implants in each of the hemispheres. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional stereotactic procedures.
2014-01-01
Background End-to-side anastomoses to connect the distal end of the great saphenous vein (GSV) to small target coronary arteries are commonly performed in sequential coronary artery bypass grafting (CABG). However, the oversize diameter ratio between the GSV and small target vessels at end-to-side anastomoses might induce adverse hemodynamic condition. The purpose of this study was to describe a distal end side-to-side anastomosis technique and retrospectively compare the effect of distal end side-to-side versus end-to-side anastomosis on graft flow characteristics. Methods We performed side-to-side anastomoses to connect the distal end of the GSV to small target vessels on 30 patients undergoing off-pump sequential CABG in our hospital between October 2012 and July 2013. Among the 30 patients, end-to-side anastomoses at the distal end of the GSV were initially performed on 14 patients; however, due to poor graft flow, those anastomoses were revised into side-to-side anastomoses. We retrospectively compared the intraoperative graft flow characteristics of the end-to-side versus side-to-side anastomoses in the 14 patients. The patient outcomes were also evaluated. Results We found that the side-to-side anastomosis reconstruction improved intraoperative flow and reduced pulsatility index in all the 14 patients significantly. The 16 patients who had the distal end side-to-side anastomoses performed directly also exhibited satisfactory intraoperative graft flow. Three-month postoperative outcomes for all the patients were satisfactory. Conclusions Side-to-side anastomosis at the distal end of sequential vein grafts might be a promising strategy to connect small target coronary arteries to the GSV. PMID:24884776
Hennings, Justin M.; Zimmer, Randall L.; Nabli, Henda; Davis, J. Wade; Sutovsky, Peter; Sutovsky, Miriam; Sharpe-Timms, Kathy L.
2015-01-01
Objective: Validate single versus sequential culture media for murine embryo development. Design: Prospective laboratory experiment. Setting: Assisted Reproduction Laboratory. Animals: Murine embryos. Interventions: Thawed murine zygotes cultured for 3 or 5 days (d3 or d5) in single or sequential embryo culture media developed for human in vitro fertilization. Main Outcome Measures: On d3, zygotes developing to the 8 cell (8C) stage or greater were quantified using 4’,6-diamidino-2-phenylindole (DAPI), and quality was assessed by morphological analysis. On d5, the number of embryos reaching the blastocyst stage was counted. DAPI was used to quantify total nuclei and inner cell mass nuclei. Localization of ubiquitin C-terminal hydrolase L1 (UCHL1) and ubiquitin C-terminal hydrolase L3 (UCHL3) was reference points for evaluating cell quality. Results: Comparing outcomes in single versus to sequential media, the odds of embryos developing to the 8C stage on d3 were 2.34 time greater (P = .06). On d5, more embryos reached the blastocyst stage (P = <.0001), hatched, and had significantly more trophoblast cells (P = .005) contributing to the increased total cell number. Also at d5, localization of distinct cytoplasmic UCHL1 and nuclear UCHL3 was found in high-quality hatching blastocysts. Localization of UCHL1 and UCHL3 was diffuse and inappropriately dispersed throughout the cytoplasm in low-quality nonhatching blastocysts. Conclusions: Single medium yields greater cell numbers, an increased growth rate, and more hatching of murine embryos. Cytoplasmic UCHL1 and nuclear UHCL3 localization patterns were indicative of embryo quality. Our conclusions are limited to murine embryos but one might speculate that single medium may also be more beneficial for human embryo culture. Human embryo studies are needed. PMID:26668049
NASA Astrophysics Data System (ADS)
Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria
2017-08-01
Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.
2012-04-01
Data assimilation methods have received increased attention to accomplish uncertainty assessment and enhancement of forecasting capability in various areas. Despite of their potentials, applicable software frameworks to probabilistic approaches and data assimilation are still limited because the most of hydrologic modeling software are based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, so called MPI-OHyMoS. MPI-OHyMoS allows user to develop his/her own element models and to easily build a total simulation system model for hydrological simulations. Unlike process-based modeling framework, this software framework benefits from its object-oriented feature to flexibly represent hydrological processes without any change of the main library. Sequential data assimilation based on the particle filters is available for any hydrologic models based on MPI-OHyMoS considering various sources of uncertainty originated from input forcing, parameters and observations. The particle filters are a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of high performance computing (HPC) system. We applied this software framework for short-term streamflow forecasting of several catchments in Japan using a distributed hydrologic model. Uncertainty of model parameters and remotely-sensed rainfall data such as X-band or C-band radar is estimated and mitigated in the sequential data assimilation.
Cost-Utility Analysis of Cochlear Implantation in Australian Adults.
Foteff, Chris; Kennedy, Steven; Milton, Abul Hasnat; Deger, Melike; Payk, Florian; Sanderson, Georgina
2016-06-01
Sequential and simultaneous bilateral cochlear implants are emerging as appropriate treatment options for Australian adults with sensory deficits in both cochleae. Current funding of Australian public hospitals does not provide for simultaneous bilateral cochlear implantation (CI) as a separate surgical procedure. Previous cost-effectiveness studies of sequential and simultaneous bilateral CI assumed 100% of unilaterally treated patients' transition to a sequential bilateral CI. This assumption does not place cochlear implantation in the context of the generally treated population. When mutually exclusive treatment options exist, such as unilateral CI, sequential bilateral CI, and simultaneous bilateral CI, the mean costs of the treated populations are weighted in the calculation of incremental cost-utility ratios. The objective was to evaluate the cost-utility of bilateral hearing aids (HAs) compared with unilateral, sequential, and simultaneous bilateral CI in Australian adults with bilateral severe to profound sensorineural hearing loss. Cost-utility analysis of secondary sources input to a Markov model. Australian health care perspective, lifetime horizon with costs and outcomes discounted 5% annually. Bilateral HAs as treatment for bilateral severe to profound sensorineural hearing loss compared with unilateral, sequential, and simultaneous bilateral CI. Incremental costs per quality adjusted life year (AUD/QALY). When compared with bilateral hearing aids the incremental cost-utility ratio for the CI treatment population was AUD11,160/QALY. The incremental cost-utility ratio was weighted according to the number of patients treated unilaterally, sequentially, and simultaneously, as these were mutually exclusive treatment options. No peer-reviewed articles have reported the incremental analysis of cochlear implantation in a continuum of care for surgically treated populations with bilateral severe to profound sensorineural hearing loss. Unilateral, sequential, and simultaneous bilateral CI were cost-effective when compared with bilateral hearing aids. Technologies that reduce the total number of visits for a patient could introduce additional cost efficiencies into clinical practice.
Multiplexed Holographic Optical Data Storage In Thick Bacteriorhodopsin Films
NASA Technical Reports Server (NTRS)
Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Ozcan, Meric; Smithey, Daniel T.; Crew, Marshall
1998-01-01
The optical data storage capacity of photochromic bacteriorhodopsin films is investigated by means of theoretical calculations, numerical simulations, and experimental measurements on sequential recording of angularly multiplexed diffraction gratings inside a thick D85N BR film.
Lalor, Joan G; Casey, Dympna; Elliott, Naomi; Coyne, Imelda; Comiskey, Catherine; Higgins, Agnes; Murphy, Kathy; Devane, Declan; Begley, Cecily
2013-04-08
The role of the clinical nurse/midwife specialist and advanced nurse/midwife practitioner is complex not least because of the diversity in how the roles are operationalised across health settings and within multidisciplinary teams. This aim of this paper is to use The SCAPE Study: Specialist Clinical and Advanced Practitioner Evaluation in Ireland to illustrate how case study was used to strengthen a Sequential Explanatory Design. In Phase 1, clinicians identified indicators of specialist and advanced practice which were then used to guide the instrumental case study design which formed the second phase of the larger study. Phase 2 used matched case studies to evaluate the effectiveness of specialist and advanced practitioners on clinical outcomes for service users. Data were collected through observation, documentary analysis, and interviews. Observations were made of 23 Clinical Specialists or Advanced Practitioners, and 23 matched clinicians in similar matched non-postholding sites, while they delivered care. Forty-one service users, 41 clinicians, and 23 Directors of Nursing or Midwifery were interviewed, and 279 service users completed a survey based on the components of CS and AP practice identified in Phase 1. A coding framework, and the generation of cross tabulation matrices in NVivo, was used to make explicit how the outcome measures were confirmed and validated from multiple sources. This strengthened the potential to examine single cases that seemed 'different', and allowed for cases to be redefined. Phase 3 involved interviews with policy-makers to set the findings in context. Case study is a powerful research strategy to use within sequential explanatory mixed method designs, and adds completeness to the exploration of complex issues in clinical practice. The design is flexible, allowing the use of multiple data collection methods from both qualitative and quantitative paradigms. Multiple approaches to data collection are needed to evaluate the impact of complex roles and interventions in health care outcomes and service delivery. Case study design is an appropriate methodology to use when study outcomes relate to clinical practice.
Statistical Emulator for Expensive Classification Simulators
NASA Technical Reports Server (NTRS)
Ross, Jerret; Samareh, Jamshid A.
2016-01-01
Expensive simulators prevent any kind of meaningful analysis to be performed on the phenomena they model. To get around this problem the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980's. Presently, simulators have become more and more complex and as a result running a single example on these simulators is very expensive and can take days to weeks or even months. Many new techniques have been introduced, termed criteria, which sequentially select the next best (most informative to the emulator) point that should be run on the simulator. These criteria methods allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
Hemodynamic analysis of sequential graft from right coronary system to left coronary system.
Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun
2016-12-28
Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear if the sequential graft can be used between the right and left coronary artery system. The purpose of this paper is to clarify the possibility of right coronary artery system anastomosis to left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to right coronary artery (RCA) and the sequential graft was adopted to anastomose left anterior descending (LAD) and left circumflex artery (LCX). While in Model 2, the single graft was anastomosed to LAD and the sequential graft was adopted to anastomose RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and two post-operative models. Flow rates in the coronary artery and grafts were obtained. The hemodynamic parameters were also showed, including wall shear stress (WSS) and oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much less than that in Model 2. Model 1 shows optimistic hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of sequential graft have better long-term patency than the posterior segments. With rational spatial position of the heart vessels, the last anastomosis of sequential graft should be connected to the main branch.
Effectiveness of leukocyte immunotherapy in primary recurrent spontaneous abortion (RSA).
Gharesi-Fard, Behrouz; Zolghadri, Jaleh; Foroughinia, Leila; Tavazoo, Fahimeh; Samsami Dehaghani, Alamtaj
2007-09-01
Recurrent spontaneous abortion (RSA) is defined as three or more sequential abortions before the twentieth week of gestation. There are evidences to support an allo-immunologic mechanism for RSA. One of the methods for treatment of RSA is leukocyte therapy; however there is still controversy about effectiveness of this method. To evaluate the effectiveness of leukocyte therapy for treatment of RSA. Ninety two non-pregnant women with at least three sequential abortions (60 primary & 32 secondary aborters) recognized as RSA were referred to our Laboratory for immunotherapy. All the cases were immunized by isolated lymphocytes from their husbands. Fifty to 100 million washed and resuspended mononuclear cells were injected by I.V., S.C., and I.D. route. The result of each injection was checked by WBC cross matching between couples after four weeks of injections. Immunization was repeated in fifth week to a maximum of 3 times if needed. Eighty one age-matched non-pregnant RSA women (52 primary and 29 secondary aborters) with at least three sequential abortions were also included in this study as controls. The control group was not immunized. 67 out of 92 (72.8%) immunized cases and 44 out of 81 controls (54.3%) showed a successful outcome of pregnancy (p<0.02). Comparison of primary and secondary aborters indicated a significantly better outcome only in primary (75% vs. 42.3%. p<0.001) but not in secondary aborters (68.8% vs. 75.9%, p = 0.7). The present investigation showed the effectiveness of leukocyte therapy in primary but not in secondary RSA patients. Despite the current controversy and limitation of leukocyte therapy in RSA, the results of our investigation provide evidence supporting the use of allo-immunization in improving the outcome of pregnancy in primary RSA patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhide, Shreerang A.; Ahmed, Merina; Head and Neck Unit, Royal Marsden National Health Service Foundation Trust Hospital, London
2009-02-01
Purpose: Sequential treatment (chemotherapy followed by concomitant chemoradiation; CCRT) is increasingly being used for radical treatment of squamous cell cancer of the head and neck (SCCHN), which results in increased myelosuppression. In this study, we review the incidence of anemia and the effect of a policy of hemoglobin (Hb) maintenance by blood transfusion on disease outcomes in these patients. Methods and Materials: Retrospective review of the records of patients with SCCHN treated with sequential CCRT formed the basis of this study. The incidence of anemia and statistics on blood transfusion were documented. For the purpose of outcome analyses, patients weremore » divided into four categories by (1) transfusion status, (2) nadir Hb concentration, (3) number of transfusion episodes, and (4) number of units of blood transfused (NOUT). Data on 3-year locoregional control (LRC), relapse-free survival (RFS), disease-specific survival (DSS), and overall survival (OS) were analyzed. Results: One hundred and sixty-nine patients were identified. The median follow-up was 23.6 months. The RFS (52% vs. 41%, p = 0.03), DSS (71% vs. 66%, p = 0.02), and OS (58% vs. 42% p = 0.005) were significantly better for patients who did not have a transfusion vs. those who did. The LRC, RFS, DSS, and OS were also significantly better for patients with nadir Hb level >12 vs. <12 g/dL and NOUT 1-4 vs. >4. Conclusion: Our study seems to suggest that blood transfusion during radical treatment for SCCHN might be detrimental. Further research should be undertaken into the complex interactions among tumor hypoxia, anemia, and the treatment of anemia before making treatment recommendations.« less
MacIsaac, Rachael L; Khatri, Pooja; Bendszus, Martin; Bracard, Serge; Broderick, Joseph; Campbell, Bruce; Ciccone, Alfonso; Dávalos, Antoni; Davis, Stephen M; Demchuk, Andrew; Diener, Hans-Christoph; Dippel, Diederik; Donnan, Geoffrey A; Fiehler, Jens; Fiorella, David; Goyal, Mayank; Hacke, Werner; Hill, Michael D; Jahan, Reza; Jauch, Edward; Jovin, Tudor; Kidwell, Chelsea S; Liebeskind, David; Majoie, Charles B; Martins, Sheila Cristina Ouriques; Mitchell, Peter; Mocco, J; Muir, Keith W; Nogueira, Raul; Saver, Jeffrey L; Schonewille, Wouter J; Siddiqui, Adnan H; Thomalla, Götz; Tomsick, Thomas A; Turk, Aquilla S; White, Philip; Zaidat, Osama; Lees, Kennedy R
2015-10-01
Endovascular treatment has been shown to restore blood flow effectively. Second-generation medical devices such as stent retrievers are now showing overwhelming efficacy in clinical trials, particularly in conjunction with intravenous recombinant tissue plasminogen activator. This statistical analysis plan utilizing a novel, sequential approach describes a prospective, individual patient data analysis of endovascular therapy in conjunction with intravenous recombinant tissue plasminogen activator agreed upon by the Thrombectomy and Tissue Plasminogen Activator Collaborative Group. This protocol will specify the primary outcome for efficacy, as 'favorable' outcome defined by the ordinal distribution of the modified Rankin Scale measured at three-months poststroke, but with modified Rankin Scales 5 and 6 collapsed into a single category. The primary analysis will aim to answer the questions: 'what is the treatment effect of endovascular therapy with intravenous recombinant tissue plasminogen activator compared to intravenous tissue plasminogen activator alone on full scale modified Rankin Scale at 3 months?' and 'to what extent do key patient characteristics influence the treatment effect of endovascular therapy?'. Key secondary outcomes include effect of endovascular therapy on death within 90 days; analyses of modified Rankin Scale using dichotomized methods; and effects of endovascular therapy on symptomatic intracranial hemorrhage. Several secondary analyses will be considered as well as expanding patient cohorts to intravenous recombinant tissue plasminogen activator-ineligible patients, should data allow. This collaborative meta-analysis of individual participant data from randomized trials of endovascular therapy vs. control in conjunction with intravenous thrombolysis will demonstrate the efficacy and generalizability of endovascular therapy with intravenous thrombolysis as a concomitant medication. © 2015 World Stroke Organization.
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analyzing the blood oxygenation level dependent (BOLD) effect in the functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ the low-dimensional sufficient statistics for efficiency and avoiding potential degeneration of the parameters. The performance of the proposed method is validated using both the simulated data and real BOLD fMRI data.
Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time
NASA Astrophysics Data System (ADS)
Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.
2017-12-01
We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
Stamatakos, Georgios S; Dionysiou, Dimitra D
2009-10-21
The tremendous rate of accumulation of experimental and clinical knowledge pertaining to cancer dictates the development of a theoretical framework for the meaningful integration of such knowledge at all levels of biocomplexity. In this context our research group has developed and partly validated a number of spatiotemporal simulation models of in vivo tumour growth and in particular tumour response to several therapeutic schemes. Most of the modeling modules have been based on discrete mathematics and therefore have been formulated in terms of rather complex algorithms (e.g. in pseudocode and actual computer code). However, such lengthy algorithmic descriptions, although sufficient from the mathematical point of view, may render it difficult for an interested reader to readily identify the sequence of the very basic simulation operations that lie at the heart of the entire model. In order to both alleviate this problem and at the same time provide a bridge to symbolic mathematics, we propose the introduction of the notion of hypermatrix in conjunction with that of a discrete operator into the already developed models. Using a radiotherapy response simulation example we demonstrate how the entire model can be considered as the sequential application of a number of discrete operators to a hypermatrix corresponding to the dynamics of the anatomic area of interest. Subsequently, we investigate the operators' commutativity and outline the "summarize and jump" strategy aiming at efficiently and realistically address multilevel biological problems such as cancer. In order to clarify the actual effect of the composite discrete operator we present further simulation results which are in agreement with the outcome of the clinical study RTOG 83-02, thus strengthening the reliability of the model developed.
Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.
Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty
2011-10-01
The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis
Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B
2011-01-01
Objective The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method Eye gaze behaviors of clinicians and patients in 110-videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. Lag-sequential analysis included both event-based lag and time-based lag. Results Results from event-based lag analysis showed that the patients’ gaze followed that of clinicians, while clinicians did not follow patients. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds after the initial behavior of the clinician. Conclusion Our data suggest that the clinician’s gaze significantly affects the medical encounter but not the converse. Application Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723
Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.
Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D; Jeong, Jaeseung
2014-08-01
A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.
Influence of Multidimensionality on Convergence of Sampling in Protein Simulation
NASA Astrophysics Data System (ADS)
Metsugi, Shoichi
2005-06-01
We study the problem of convergence of sampling in protein simulation originating in the multidimensionality of protein’s conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the time of simulation necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitude of variations are identified and their amplitudes are equilibrated before identifying and equilibrating vectors with larger amplitude of variations. This sequential identification and equilibration mechanism makes protein simulation a useful method although it has an intrinsic multidimensional nature.
Aksoy, Ozan; Weesie, Jeroen
2014-05-01
In this paper, using a within-subjects design, we estimate the utility weights that subjects attach to the outcome of their interaction partners in four decision situations: (1) binary Dictator Games (DG), second player's role in the sequential Prisoner's Dilemma (PD) after the first player (2) cooperated and (3) defected, and (4) first player's role in the sequential Prisoner's Dilemma game. We find that the average weights in these four decision situations have the following order: (1)>(2)>(4)>(3). Moreover, the average weight is positive in (1) but negative in (2), (3), and (4). Our findings indicate the existence of strong negative and small positive reciprocity for the average subject, but there is also high interpersonal variation in the weights in these four nodes. We conclude that the PD frame makes subjects more competitive than the DG frame. Using hierarchical Bayesian modeling, we simultaneously analyze beliefs of subjects about others' utility weights in the same four decision situations. We compare several alternative theoretical models on beliefs, e.g., rational beliefs (Bayesian-Nash equilibrium) and a consensus model. Our results on beliefs strongly support the consensus effect and refute rational beliefs: there is a strong relationship between own preferences and beliefs and this relationship is relatively stable across the four decision situations. Copyright © 2014 Elsevier Inc. All rights reserved.
Sequential episodes of ethylene glycol poisoning in the same person.
Sugunaraj, Jaya Prakash; Thakur, Lokendra Kumar; Jha, Kunal Kishor; Bucaloiu, Ion Dan
2017-05-27
Ethylene glycol is a common alcohol found in many household products such as household hard surface cleaner, paints, varnish, auto glass cleaner and antifreeze. While extremely toxic and often fatal on ingestion, few cases with early presentation by the patient have resulted in death; thus, rapid diagnosis is paramount to effectively treating ethylene glycol poisoning. In this study, we compare two sequential cases of ethylene glycol poisoning in a single individual, which resulted in strikingly different outcomes. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Shi, Zongbo; Krom, Michael D; Bonneville, Steeve; Baker, Alex R; Jickells, Timothy D; Benning, Liane G
2009-09-01
The formation of iron (Fe) nanoperticles and increase in Fe reactivity in mineral dust during simulated cloud processing was investigated using high-resolution microscopy and chemical extraction methods. Cloud processing of dust was experimentally simulated via an alternation of acidic (pH 2) and circumneutral conditions (pH 5-6) over periods of 24 h each on presieved (<20 microm) Saharan soil and goethite suspensions. Microscopic analyses of the processed soil and goethite samples reveal the neo-formation of Fe-rich nanoparticle aggregates, which were not found initially. Similar Fe-rich nanoparticles were also observed in wet-deposited Saharen dusts from the western Mediterranean but not in dry-deposited dust from the eastern Mediterranean. Sequential Fe extraction of the soil samples indicated an increase in the proportion of chemically reactive Fe extractable by an ascorbate solution after simulated cloud processing. In addition, the sequential extractions on the Mediterranean dust samples revealed a higher content of reactive Fe in the wet-deposited dust compared to that of the dry-deposited dust These results suggestthat large variations of pH commonly reported in aerosol and cloud waters can trigger neo-formation of nanosize Fe particles and an increase in Fe reactivity in the dust
Meng, Zhenyu; Kubar, Tomas; Mu, Yuguang; Shao, Fangwei
2018-05-08
Charge transport (CT) through biomolecules is of high significance in the research fields of biology, nanotechnology, and molecular devices. Inspired by our previous work that showed the binding of ionic liquid (IL) facilitated charge transport in duplex DNA, in silico simulation is a useful means to understand the microscopic mechanism of the facilitation phenomenon. Here molecular dynamics simulations (MD) of duplex DNA in water and hydrated ionic liquids were employed to explore the helical parameters. Principal component analysis was further applied to capture the subtle conformational changes of helical DNA upon different environmental impacts. Sequentially, CT rates were calculated by a QM/MM simulation of the flickering resonance model based upon MD trajectories. Herein, MD simulation illustrated that the binding of ionic liquids can restrain dynamic conformation and lower the on-site energy of the DNA base. Confined movement among the adjacent base pairs was highly related to the increase of electronic coupling among base pairs, which may lead DNA to a CT facilitated state. Sequentially combining MD and QM/MM analysis, the rational correlations among the binding modes, the conformational changes, and CT rates illustrated the facilitation effects from hydrated IL on DNA CT and supported a conformational-gating mechanism.
Mechanisms of electron acceptor utilization: Implications for simulating anaerobic biodegradation
Schreiber, M.E.; Carey, G.R.; Feinstein, D.T.; Bahr, J.M.
2004-01-01
Simulation of biodegradation reactions within a reactive transport framework requires information on mechanisms of terminal electron acceptor processes (TEAPs). In initial modeling efforts, TEAPs were approximated as occurring sequentially, with the highest energy-yielding electron acceptors (e.g. oxygen) consumed before those that yield less energy (e.g., sulfate). Within this framework in a steady state plume, sequential electron acceptor utilization would theoretically produce methane at an organic-rich source and Fe(II) further downgradient, resulting in a limited zone of Fe(II) and methane overlap. However, contaminant plumes often display much more extensive zones of overlapping Fe(II) and methane. The extensive overlap could be caused by several abiotic and biotic processes including vertical mixing of byproducts in long-screened monitoring wells, adsorption of Fe(II) onto aquifer solids, or microscale heterogeneity in Fe(III) concentrations. Alternatively, the overlap could be due to simultaneous utilization of terminal electron acceptors. Because biodegradation rates are controlled by TEAPs, evaluating the mechanisms of electron acceptor utilization is critical for improving prediction of contaminant mass losses due to biodegradation. Using BioRedox-MT3DMS, a three-dimensional, multi-species reactive transport code, we simulated the current configurations of a BTEX plume and TEAP zones at a petroleum- contaminated field site in Wisconsin. Simulation results suggest that BTEX mass loss due to biodegradation is greatest under oxygen-reducing conditions, with smaller but similar contributions to mass loss from biodegradation under Fe(III)-reducing, sulfate-reducing, and methanogenic conditions. Results of sensitivity calculations document that BTEX losses due to biodegradation are most sensitive to the age of the plume, while the shape of the BTEX plume is most sensitive to effective porosity and rate constants for biodegradation under Fe(III)-reducing and methanogenic conditions. Using this transport model, we had limited success in simulating overlap of redox products using reasonable ranges of parameters within a strictly sequential electron acceptor utilization framework. Simulation results indicate that overlap of redox products cannot be accurately simulated using the constructed model, suggesting either that Fe(III) reduction and methanogenesis are occurring simultaneously in the source area, or that heterogeneities in Fe(III) concentration and/or mineral type cause the observed overlap. Additional field, experimental, and modeling studies will be needed to address these questions. ?? 2004 Elsevier B.V. All rights reserved.
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd. Copyright © 2015 John Wiley & Sons, Ltd.
Statistical Discourse Analysis: A Method for Modelling Online Discussion Processes
ERIC Educational Resources Information Center
Chiu, Ming Ming; Fujita, Nobuko
2014-01-01
Online forums (synchronous and asynchronous) offer exciting data opportunities to analyze how people influence one another through their interactions. However, researchers must address several analytic difficulties involving the data (missing values, nested structure [messages within topics], non-sequential messages), outcome variables (discrete…
AEROSOL TRANSPORT AND DEPOSITION IN SEQUENTIALLY BIFURCATING AIRWAYS
Deposition patterns and efficiencies of a dilute suspension of inhaled particles in three-dimensional double bifurcating airway models for both in-plane and 90 deg out-of-plane configurations have been numerically simulated assuming steady, laminar, constant-property air flow wit...
Actively learning human gaze shifting paths for semantics-aware photo cropping.
Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong
2014-05-01
Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons.
Fatigue reduction during aggregated and distributed sequential stimulation.
Bergquist, Austin J; Babbar, Vishvek; Ali, Saima; Popovic, Milos R; Masani, Kei
2017-08-01
Transcutaneous neuromuscular electrical stimulation (NMES) can generate muscle contractions for rehabilitation and exercise. However, NMES-evoked contractions are limited by fatigue when they are delivered "conventionally" (CONV) using a single active electrode. Researchers have developed "sequential" (SEQ) stimulation, involving rotation of pulses between multiple "aggregated" (AGGR-SEQ) or "distributed" (DISTR-SEQ) active electrodes, to reduce fatigue (torque-decline) by reducing motor unit discharge rates. The primary objective was to compare fatigue-related outcomes, "potentiation," "variability," and "efficiency" between CONV, AGGR-SEQ, and DISTR-SEQ stimulation of knee extensors in healthy participants. Torque and current were recorded during testing with fatiguing trains using each NMES type under isometric and isokinetic (180°/s) conditions. Compared with CONV stimulation, SEQ techniques reduced fatigue-related outcomes, increased potentiation, did not affect variability, and reduced efficiency. SEQ techniques hold promise for reducing fatigue during NMES-based rehabilitation and exercise; however, optimization is required to improve efficiency. Muscle Nerve 56: 271-281, 2017. © 2016 Wiley Periodicals, Inc.
Brzosko, Zuzanna; Zannone, Sara; Schultz, Wolfram
2017-01-01
Spike timing-dependent plasticity (STDP) is under neuromodulatory control, which is correlated with distinct behavioral states. Previously, we reported that dopamine, a reward signal, broadens the time window for synaptic potentiation and modulates the outcome of hippocampal STDP even when applied after the plasticity induction protocol (Brzosko et al., 2015). Here, we demonstrate that sequential neuromodulation of STDP by acetylcholine and dopamine offers an efficacious model of reward-based navigation. Specifically, our experimental data in mouse hippocampal slices show that acetylcholine biases STDP toward synaptic depression, whilst subsequent application of dopamine converts this depression into potentiation. Incorporating this bidirectional neuromodulation-enabled correlational synaptic learning rule into a computational model yields effective navigation toward changing reward locations, as in natural foraging behavior. Thus, temporally sequenced neuromodulation of STDP enables associations to be made between actions and outcomes and also provides a possible mechanism for aligning the time scales of cellular and behavioral learning. DOI: http://dx.doi.org/10.7554/eLife.27756.001 PMID:28691903
Hemmingsen, Bianca; Lund, Søren S; Gluud, Christian; Vaag, Allan; Almdal, Thomas; Hemmingsen, Christina; Wetterslev, Jørn
2011-11-24
To assess the effect of targeting intensive glycaemic control versus conventional glycaemic control on all cause mortality and cardiovascular mortality, non-fatal myocardial infarction, microvascular complications, and severe hypoglycaemia in patients with type 2 diabetes. Systematic review with meta-analyses and trial sequential analyses of randomised trials. Cochrane Library, Medline, Embase, Science Citation Index Expanded, LILACS, and CINAHL to December 2010; hand search of reference lists and conference proceedings; contacts with authors, relevant pharmaceutical companies, and the US Food and Drug Administration. Randomised clinical trials comparing targeted intensive glycaemic control with conventional glycaemic control in patients with type 2 diabetes. Published and unpublished trials in all languages were included, irrespective of predefined outcomes. Two reviewers independently assessed studies for inclusion and extracted data related to study methods, interventions, outcomes, risk of bias, and adverse events. Risk ratios with 95% confidence intervals were estimated with fixed and random effects models. Fourteen clinical trials that randomised 28,614 participants with type 2 diabetes (15,269 to intensive control and 13,345 to conventional control) were included. Intensive glycaemic control did not significantly affect the relative risks of all cause (1.02, 95% confidence interval 0.91 to 1.13; 28,359 participants, 12 trials) or cardiovascular mortality (1.11, 0.92 to 1.35; 28,359 participants, 12 trials). Trial sequential analyses rejected a relative risk reduction above 10% for all cause mortality and showed insufficient data on cardiovascular mortality. The risk of non-fatal myocardial infarction may be reduced (relative risk 0.85, 0.76 to 0.95; P=0.004; 28,111 participants, 8 trials), but this finding was not confirmed in trial sequential analysis. Intensive glycaemic control showed a reduction of the relative risks for the composite microvascular outcome (0.88, 0.79 to 0.97; P=0.01; 25,600 participants, 3 trials) and retinopathy (0.80, 0.67 to 0.94; P=0.009; 10,793 participants, 7 trials), but trial sequential analyses showed that sufficient evidence had not yet been reached. The estimate of an effect on the risk of nephropathy (relative risk 0.83, 0.64 to 1.06; 27,769 participants, 8 trials) was not statistically significant. The risk of severe hypoglycaemia was significantly increased when intensive glycaemic control was targeted (relative risk 2.39, 1.71 to 3.34; 27,844 participants, 9 trials); trial sequential analysis supported a 30% increased relative risk of severe hypoglycaemia. Intensive glycaemic control does not seem to reduce all cause mortality in patients with type 2 diabetes. Data available from randomised clinical trials remain insufficient to prove or refute a relative risk reduction for cardiovascular mortality, non-fatal myocardial infarction, composite microvascular complications, or retinopathy at a magnitude of 10%. Intensive glycaemic control increases the relative risk of severe hypoglycaemia by 30%.
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus
2016-04-01
Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type of simulation path at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence with regard to the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent from the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant path techniques introduce a bias to the simulations was explored and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.
Lee, Tae Hoon; Hwang, Soon Oh; Choi, Hyun Jong; Jung, Yunho; Cha, Sang Woo; Chung, Il-Kwun; Moon, Jong Ho; Cho, Young Deok; Park, Sang-Heum; Kim, Sun-Joo
2014-02-17
Numerous clinical trials to improve the success rate of biliary access in difficult biliary cannulation (DBC) during ERCP have been reported. However, standard guidelines or sequential protocol analysis according to different methods are limited in place. We planned to investigate a sequential protocol to facilitate selective biliary access for DBC during ERCP. This prospective clinical study enrolled 711 patients with naïve papillae at a tertiary referral center. If wire-guided cannulation was deemed to have failed due to the DBC criteria, then according to the cannulation algorithm early precut fistulotomy (EPF; cannulation time > 5 min, papillary contacts > 5 times, or hook-nose-shaped papilla), double-guidewire cannulation (DGC; unintentional pancreatic duct cannulation ≥ 3 times), and precut after placement of a pancreatic stent (PPS; if DGC was difficult or failed) were performed sequentially. The main outcome measurements were the technical success, procedure outcomes, and complications. Initially, a total of 140 (19.7%) patients with DBC underwent EPF (n = 71) and DGC (n = 69). Then, in DGC group 36 patients switched to PPS due to difficulty criteria. The successful biliary cannulation rate was 97.1% (136/140; 94.4% [67/71] with EPF, 47.8% [33/69] with DGC, and 100% [36/36] with PPS; P < 0.001). The mean successful cannulation time (standard deviation) was 559.4 (412.8) seconds in EPF, 314.8 (65.2) seconds in DGC, and 706.0 (469.4) seconds in PPS (P < 0.05). The DGC group had a relatively low successful cannulation rate (47.8%) but had a shorter cannulation time compared to the other groups due to early switching to the PPS method in difficult or failed DGC. Post-ERCP pancreatitis developed in 14 (10%) patients (9 mild, 1 moderate), which did not differ significantly among the groups (P = 0.870) or compared with the conventional group (P = 0.125). Based on the sequential protocol analysis, EPF, DGC, and PPS may be safe and feasible for DBC. The use of EPF in selected DBC criteria, DGC in unintentional pancreatic duct cannulations, and PPS in failed or difficult DGC may facilitate successful biliary cannulation.
specsim: A Fortran-77 program for conditional spectral simulation in 3D
NASA Astrophysics Data System (ADS)
Yao, Tingting
1998-12-01
A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least faster than the sequential implementation and faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from the sequence database. Its initial motivation is to discover the laws of customer purchasing in a time section by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field has not been confined to the business database and has extended to new data sources such as Web and advanced science fields such as DNA analysis. The data of sequential pattern mining has characteristics as follows: mass data amount and distributed storage. Most existing sequential pattern mining algorithms haven't considered the above-mentioned characteristics synthetically. According to the traits mentioned above and combining the parallel theory, this paper puts forward a new distributed parallel algorithm SPP(Sequential Pattern Parallel). The algorithm abides by the principal of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets applying frequent concept and search space partition theory and the second task is to structure frequent sequences using the depth-first search method at each processor. The algorithm only needs to access the database twice and doesn't generate the candidated sequences, which abates the access time and improves the mining efficiency. Based on the random data generation procedure and different information structure designed, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that compared with AprioriAll, the SPP algorithm had excellent speedup factor and efficiency.
Parent-Implemented Communication Intervention: Sequential Analysis of Triadic Relationships
ERIC Educational Resources Information Center
Brown, Jennifer A.; Woods, Juliann J.
2016-01-01
Collaboration with parents and caregivers to support young children's communication development is an important component to early intervention services. Coaching parents to implement communication support strategies is increasingly common in parent-implemented interventions, but few studies examine the process as well as the outcomes. We explored…
Preschool Children's Control of Action Outcomes
ERIC Educational Resources Information Center
Freier, Livia; Cooper, Richard P.; Mareschal, Denis
2017-01-01
Naturalistic goal-directed behaviours require the engagement and maintenance of appropriate levels of cognitive control over relatively extended intervals of time. In two experiments, we examined preschool children's abilities to maintain top-down control throughout the course of a sequential task. Both 3- and 5-year-olds demonstrated good…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Fabian F; Damiani, Rick R
This poster summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between two modeling approaches (fully coupled and sequentially coupled) through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
High-Order Multioperator Compact Schemes for Numerical Simulation of Unsteady Subsonic Airfoil Flow
NASA Astrophysics Data System (ADS)
Savel'ev, A. D.
2018-02-01
On the basis of high-order schemes, the viscous gas flow over the NACA2212 airfoil is numerically simulated at a free-stream Mach number of 0.3 and Reynolds numbers ranging from 103 to 107. Flow regimes sequentially varying due to variations in the free-stream viscosity are considered. Vortex structures developing on the airfoil surface are investigated, and a physical interpretation of this phenomenon is given.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of metamodels and minimum points of density function. Afterwards, the more accurate metamodels would be constructed by the procedure above. The validity and effectiveness of proposed sampling method are examined by studying typical numerical examples. PMID:25133206
Sequential capture of CO2 and SO2 in a pressurized TGA simulating FBC conditions.
Sun, Ping; Grace, John R; Lim, C Jim; Anthony, Edward J
2007-04-15
Four FBC-based processes were investigated as possible means of sequentially capturing SO2 and CO2. Sorbent performance is the key to their technical feasibility. Two sorbents (a limestone and a dolomite) were tested in a pressurized thermogravimetric analyzer (PTGA). The sorbent behaviors were explained based on complex interaction between carbonation, sulfation, and direct sulfation. The best option involved using limestone or dolomite as a SO2-sorbent in a FBC combustor following cyclic CO2 capture. Highly sintered limestone is a good sorbent for SO2 because of the generation of macropores during calcination/carbonation cycling.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process prior multivariate normal distributions of the parameters of the models are assumed to be specified, prior probabilities of the models being correct. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment was chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate large and small sample behavior of the sequential adaptive procedure.
Berthias, F; Feketeová, L; Abdoul-Carime, H; Calvo, F; Farizon, B; Farizon, M; Märk, T D
2018-06-22
Velocity distributions of neutral water molecules evaporated after collision induced dissociation of protonated water clusters H+(H2O)n≤10 were measured using the combined correlated ion and neutral fragment time-of-flight (COINTOF) and velocity map imaging (VMI) techniques. As observed previously, all measured velocity distributions exhibit two contributions, with a low velocity part identified by statistical molecular dynamics (SMD) simulations as events obeying the Maxwell-Boltzmann statistics and a high velocity contribution corresponding to non-ergodic events in which energy redistribution is incomplete. In contrast to earlier studies, where the evaporation of a single molecule was probed, the present study is concerned with events involving the evaporation of up to five water molecules. In particular, we discuss here in detail the cases of two and three evaporated molecules. Evaporation of several water molecules after CID can be interpreted in general as a sequential evaporation process. In addition to the SMD calculations, a Monte Carlo (MC) based simulation was developed allowing the reconstruction of the velocity distribution produced by the evaporation of m molecules from H+(H2O)n≤10 cluster ions using the measured velocity distributions for singly evaporated molecules as the input. The observed broadening of the low-velocity part of the distributions for the evaporation of two and three molecules as compared to the width for the evaporation of a single molecule results from the cumulative recoil velocity of the successive ion residues as well as the intrinsically broader distributions for decreasingly smaller parent clusters. Further MC simulations were carried out assuming that a certain proportion of non-ergodic events is responsible for the first evaporation in such a sequential evaporation series, thereby allowing to model the entire velocity distribution.
Comparison of Sequential and Variational Data Assimilation
NASA Astrophysics Data System (ADS)
Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht
2017-04-01
Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter which require no additional features within the modeling process, i.e. it can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. In our believe, this is mainly attributed to the required computation of first order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to lack of comparison between both techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of the simulations at high resolution are often expensive and become impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Montemore » Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.« less
Stephen, Julia M; Ranken, Doug M; Aine, Cheryl J; Weisend, Michael P; Shih, Jerry J
2005-12-01
Previous studies have shown that magnetoencephalography (MEG) can measure hippocampal activity, despite the cylindrical shape and deep location in the brain. The current study extended this work by examining the ability to differentiate the hippocampal subfields, parahippocampal cortex, and neocortical temporal sources using simulated interictal epileptic activity. A model of the hippocampus was generated on the MRIs of five subjects. CA1, CA3, and dentate gyrus of the hippocampus were activated as well as entorhinal cortex, presubiculum, and neocortical temporal cortex. In addition, pairs of sources were activated sequentially to emulate various hypotheses of mesial temporal lobe seizure generation. The simulated MEG activity was added to real background brain activity from the five subjects and modeled using a multidipole spatiotemporal modeling technique. The waveforms and source locations/orientations for hippocampal and parahippocampal sources were differentiable from neocortical temporal sources. In addition, hippocampal and parahippocampal sources were differentiated to varying degrees depending on source. The sequential activation of hippocampal and parahippocampal sources was adequately modeled by a single source; however, these sources were not resolvable when they overlapped in time. These results suggest that MEG has the sensitivity to distinguish parahippocampal and hippocampal spike generators in mesial temporal lobe epilepsy.
The Application of Neutron Transport Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.; Armstrong, Hirotatsu; van der Hoeven, Christopher A.
2015-02-01
Radiation detectors provide deterrence and defense against nuclear smuggling attempts by scanning vehicles, ships, and pedestrians for radioactive material. Understanding detector performance is crucial to developing novel technologies, architectures, and alarm algorithms. Detection can be modeled through radiation transport simulations; however, modeling a spanning set of threat scenarios over the full transport phase-space is computationally challenging. Previous research has demonstrated Green's functions can simulate photon detector signals by decomposing the scenario space into independently simulated submodels. This paper presents decomposition methods for neutron and time-dependent transport. As a result, neutron detector signals produced from full forward transport simulations can be efficiently reconstructed by sequential application of submodel response functions.
Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A
2011-07-01
There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. Copyright © 2011 Elsevier Inc. All rights reserved.
Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.
2014-01-01
Background There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien–Fleming and Lan–DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792
Leff, Daniel R; Aggarwal, Rajesh; Rana, Mariam; Nakhjavani, Batool; Purkayastha, Sanjay; Khullar, Vik; Darzi, Ara W
2008-03-01
Research evaluating fatigue-induced skills decline has focused on acute sleep deprivation rather than the effects of circadian desynchronization associated with multiple shifts. As a result, the number of consecutive night shifts that residents can safely be on duty without detrimental effects to their technical skills remains unknown. A prospective observational cohort study was conducted to assess the impact of 7 successive night shifts on the technical surgical performance of junior residents. The interventional strategy included training 21 residents from surgery and allied disciplines on a virtual reality surgical simulator, towards the achievement of preset benchmark scores, followed by 294 technical skills assessments conducted over 1764 manpower night shift hours. Primary outcomes comprised serial technical skills assessments on 2 tasks of a virtual reality surgical simulator. Secondary outcomes included assessments of introspective fatigue, duration of sleep, and prospective recordings of activity (number of "calls" received, steps walked, and patients evaluated). Maximal deterioration in performance was observed following the first night shift. Residents took significantly longer to complete the first (P = 0.002) and second tasks (P = 0.005) compared with baseline. They also committed significantly greater numbers of errors (P = 0.025) on the first task assessed. Improved performance was observed across subsequent shifts towards baseline levels. Newly acquired technical surgical skills deteriorate maximally after the first night shift, emphasizing the importance of adequate preparation for night rotas. Performance improvements across successive shifts may be due to ongoing learning or adaptation to chronic fatigue. Further research should focus on assessments of both technical procedural skills and cognitive abilities to determine the rotas that best minimize errors and maximize patient safety.
ERIC Educational Resources Information Center
Kidwell, Kelley M.; Hyde, Luke W.
2016-01-01
Heterogeneity between and within people necessitates the need for sequential personalized interventions to optimize individual outcomes. Personalized or adaptive interventions (AIs) are relevant for diseases and maladaptive behavioral trajectories when one intervention is not curative and success of a subsequent intervention may depend on…
The Returns to Community College
ERIC Educational Resources Information Center
Agan, Amanda Yvonne
2013-01-01
Almost half of postsecondary students are currently enrolled in community colleges. These institutions imply that even amongst students with the same degree outcome there is considerable heterogeneity in the path taken to get there. I estimate the life-cycle private and social returns to the different postsecondary paths and sequential decisions…
NASA Astrophysics Data System (ADS)
Li, Xiaokai; Wang, Chuncheng; Yuan, Zongqiang; Ye, Difa; Ma, Pan; Hu, Wenhui; Luo, Sizuo; Fu, Libin; Ding, Dajun
2017-09-01
By combining kinematically complete measurements and a semiclassical Monte Carlo simulation we study the correlated-electron dynamics in the strong-field double ionization of Kr. Interestingly, we find that, as we step into the sequential-ionization regime, there are still signatures of correlation in the two-electron joint momentum spectrum and, more intriguingly, the scaling law of the high-energy tail is completely different from early predictions on the low-Z atom (He). These experimental observations are well reproduced by our generalized semiclassical model adapting a Green-Sellin-Zachor potential. It is revealed that the competition between the screening effect of inner-shell electrons and the Coulomb focusing of nuclei leads to a non-inverse-square central force, which twists the returned electron trajectory at the vicinity of the parent core and thus significantly increases the probability of hard recollisions between two electrons. Our results might have promising applications ranging from accurately retrieving atomic structures to simulating celestial phenomena in the laboratory.
Davies, Jeff K; Hassan, Sandra; Sarker, Shah-Jalal; Besley, Caroline; Oakervee, Heather; Smith, Matthew; Taussig, David; Gribben, John G; Cavenagh, Jamie D
2018-02-01
Allogeneic haematopoietic stem-cell transplantation remains the only curative treatment for relapsed/refractory acute myeloid leukaemia (AML) and high-risk myelodysplasia but has previously been limited to patients who achieve remission before transplant. New sequential approaches employing T-cell depleted transplantation directly after chemotherapy show promise but are burdened by viral infection and require donor lymphocyte infusions (DLI) to augment donor chimerism and graft-versus-leukaemia effects. T-replete transplantation in sequential approaches could reduce both viral infection and DLI usage. We therefore performed a single-arm prospective Phase II clinical trial of sequential chemotherapy and T-replete transplantation using reduced-intensity conditioning without planned DLI. The primary endpoint was overall survival. Forty-seven patients with relapsed/refractory AML or high-risk myelodysplasia were enrolled; 43 proceeded to transplantation. High levels of donor chimerism were achieved spontaneously with no DLI. Overall survival of transplanted patients was 45% and 33% at 1 and 3 years. Only one patient developed cytomegalovirus disease. Cumulative incidences of treatment-related mortality and relapse were 35% and 20% at 1 year. Patients with relapsed AML and myelodysplasia had the most favourable outcomes. Late-onset graft-versus-host disease protected against relapse. In conclusion, a T-replete sequential transplantation using reduced-intensity conditioning is feasible for relapsed/refractory AML and myelodysplasia and can deliver graft-versus-leukaemia effects without DLI. © 2017 John Wiley & Sons Ltd.
Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions
Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung
2014-01-01
A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498
Schiekirka, Sarah; Anders, Sven; Raupach, Tobias
2014-07-21
Estimating learning outcome from comparative student self-ratings is a reliable and valid method to identify specific strengths and shortcomings in undergraduate medical curricula. However, requiring students to complete two evaluation forms (i.e. one before and one after teaching) might adversely affect response rates. Alternatively, students could be asked to rate their initial performance level retrospectively. This approach might threaten the validity of results due to response shift or effort justification bias. Two consecutive cohorts of medical students enrolled in a six-week cardio-respiratory module were enrolled in this study. In both cohorts, performance gain was estimated for 33 specific learning objectives. In the first cohort, outcomes calculated from ratings provided before (pretest) and after (posttest) teaching were compared to outcomes derived from comparative self-ratings collected after teaching only (thentest and posttest). In the second cohort, only thentests and posttests were used to calculate outcomes, but data collection tools differed with regard to item presentation. In one group, thentest and posttest ratings were obtained sequentially on separate forms while in the other, both ratings were obtained simultaneously for each learning objective. Using thentest ratings to calculate performance gain produced slightly higher values than using true pretest ratings. Direct comparison of then- and posttest ratings also yielded slightly higher performance gain than sequential ratings, but this effect was negligibly small. Given the small effect sizes, using thentests appears to be equivalent to using true pretest ratings. Item presentation in the posttest does not significantly impact on results.
2014-01-01
Background Estimating learning outcome from comparative student self-ratings is a reliable and valid method to identify specific strengths and shortcomings in undergraduate medical curricula. However, requiring students to complete two evaluation forms (i.e. one before and one after teaching) might adversely affect response rates. Alternatively, students could be asked to rate their initial performance level retrospectively. This approach might threaten the validity of results due to response shift or effort justification bias. Methods Two consecutive cohorts of medical students enrolled in a six-week cardio-respiratory module were enrolled in this study. In both cohorts, performance gain was estimated for 33 specific learning objectives. In the first cohort, outcomes calculated from ratings provided before (pretest) and after (posttest) teaching were compared to outcomes derived from comparative self-ratings collected after teaching only (thentest and posttest). In the second cohort, only thentests and posttests were used to calculate outcomes, but data collection tools differed with regard to item presentation. In one group, thentest and posttest ratings were obtained sequentially on separate forms while in the other, both ratings were obtained simultaneously for each learning objective. Results Using thentest ratings to calculate performance gain produced slightly higher values than using true pretest ratings. Direct comparison of then- and posttest ratings also yielded slightly higher performance gain than sequential ratings, but this effect was negligibly small. Conclusions Given the small effect sizes, using thentests appears to be equivalent to using true pretest ratings. Item presentation in the posttest does not significantly impact on results. PMID:25043503
Imberger, G; Orr, A; Thorlund, K; Wetterslev, J; Myles, P; Møller, A M
2014-03-01
The role of nitrous oxide in modern anaesthetic practice is contentious. One concern is that exposure to nitrous oxide may increase the risk of cardiovascular complications. ENIGMA II is a large randomized clinical trial currently underway which is investigating nitrous oxide and cardiovascular complications. Before the completion of this trial, we performed a systematic review and meta-analysis, using Cochrane methodology, on the outcomes that make up the composite primary outcome. We used conventional meta-analysis and trial sequential analysis (TSA). We reviewed 8282 abstracts and selected 138 that fulfilled our criteria for study type, population, and intervention. We attempted to contact the authors of all the selected publications to check for unpublished outcome data. Thirteen trials had outcome data eligible for our outcomes. We assessed three of these trials as having a low risk of bias. Using conventional meta-analysis, the relative risk of short-term mortality in the nitrous oxide group was 1.38 [95% confidence interval (CI) 0.22-8.71] and the relative risk of long-term mortality in the nitrous oxide group was 0.94 (95% CI 0.80-1.10). In both cases, TSA demonstrated that the data were far too sparse to make any conclusions. There were insufficient data to perform meta-analysis for stroke, myocardial infarct, pulmonary embolus, or cardiac arrest. This systematic review demonstrated that we currently do not have robust evidence for how nitrous oxide used as part of general anaesthesia affects mortality and cardiovascular complications.
Dueland, S; Ree, A H; Grøholt, K K; Saelen, M G; Folkvord, S; Hole, K H; Seierstad, T; Larsen, S G; Giercksky, K E; Wiig, J N; Boye, K; Flatmark, K
2016-08-01
This non-randomised study was undertaken to examine oxaliplatin as possibly an intensifying component of sequential neoadjuvant therapy in locally advanced rectal cancer for improved local and metastatic outcome. Ninety-seven patients (57 T2-3 cases, 40 T4 cases) received two cycles of the Nordic FLOX regimen (oxaliplatin 85 mg/m(2) day 1 and bolus 5-fluorouracil 500 mg/m(2) and folinic acid 100 mg days 1 and 2) before long-course chemoradiotherapy with concomitant oxaliplatin and capecitabine, followed by pelvic surgery. Treatment toxicity, local tumour response and long-term outcome were recorded. Good histologic tumour regression was obtained in 72% of patients. Implementing protocol-specific dose adjustments, tolerance was acceptable and 95% of patients received the total prescribed radiation dose. Estimated 5 year progression-free and overall survival were 61% and 83%, respectively. T4 stage was associated with an inferior local response rate, which again was highly associated with impaired long-term outcome. In this cohort of rectal cancer patients dominated by T4 and advanced T3 cases given sequential oxaliplatin-containing preoperative therapy with acceptable toxicity, high tumour response rates and overall survival were obtained, consistent with both local and systemic effects. However, tumour response and long-term outcome remained inferior for a significant number of T4 cases, suggesting that the T4 entity is biologically heterogeneous with subgroups of patients eligible for further individualisation of therapy. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Aukema, Sietse M; Theil, Laura; Rohde, Marius; Bauer, Benedikt; Bradtke, Jutta; Burkhardt, Birgit; Bonn, Bettina R; Claviez, Alexander; Gattenlöhner, Stefan; Makarova, Olga; Nagel, Inga; Oschlies, Ilske; Pott, Christiane; Szczepanowski, Monika; Traulsen, Arne; Kluin, Philip M; Klapper, Wolfram; Siebert, Reiner; Murga Penas, Eva M
2015-09-01
Typical Burkitt lymphoma is characterized by an IG-MYC translocation and overall low genomic complexity. Clinically, Burkitt lymphoma has a favourable prognosis with very few relapses. However, the few patients experiencing disease progression and/or relapse have a dismal outcome. Here we report cytogenetic findings of seven cases of Burkitt lymphoma in which sequential karyotyping was performed at time of diagnosis and/or disease progression/relapse(s). After case selection, karyotype re-review and additional molecular analyses were performed in six paediatric cases, treated in Berlin-Frankfurt-Münster-Non-Hodgkin lymphoma study group trials, and one additional adult patient. Moreover, we analysed 18 cases of Burkitt lymphoma from the Mitelman database in which sequential karyotyping was performed. Our findings show secondary karyotypes to have a significant increase in load of cytogenetic aberrations with a mean number of 2, 5 and 8 aberrations for primary, secondary and third investigations. Importantly, this increase in karyotype complexity seemed to result from recurrent secondary chromosomal changes involving mainly trisomy 21, gains of 1q and 7q, losses of 6q, 11q, 13q, and 17p. In addition, our findings indicate a linear clonal evolution to be the predominant manner of cytogenetic evolution. Our data may provide a biological framework for the dismal outcome of progressive and relapsing Burkitt lymphoma. © 2015 John Wiley & Sons Ltd.
Liou, Jyh-Ming; Chen, Chieh-Chang; Fang, Yu-Jen; Chen, Po-Yueh; Chang, Chi-Yang; Chou, Chu-Kuang; Chen, Mei-Jyh; Tseng, Cheng-Hao; Lee, Ji-Yuh; Yang, Tsung-Hua; Chiu, Min-Chin; Yu, Jian-Jyun; Kuo, Chia-Chi; Luo, Jiing-Chyuan; Hsu, Wen-Feng; Hu, Wen-Hao; Tsai, Min-Horn; Lin, Jaw-Town; Shun, Chia-Tung; Twu, Gary; Lee, Yi-Chia; Bair, Ming-Jong; Wu, Ming-Shiang
2018-05-29
Whether extending the treatment length and the use of high-dose esomeprazole may optimize the efficacy of Helicobacter pylori eradication remains unknown. To compare the efficacy and tolerability of optimized 14 day sequential therapy and 10 day bismuth quadruple therapy containing high-dose esomeprazole in first-line therapy. We recruited 620 adult patients (≥20 years of age) with H. pylori infection naive to treatment in this multicentre, open-label, randomized trial. Patients were randomly assigned to receive 14 day sequential therapy or 10 day bismuth quadruple therapy, both containing esomeprazole 40 mg twice daily. Those who failed after 14 day sequential therapy received rescue therapy with 10 day bismuth quadruple therapy and vice versa. Our primary outcome was the eradication rate in the first-line therapy. Antibiotic susceptibility was determined. ClinicalTrials.gov: NCT03156855. The eradication rates of 14 day sequential therapy and 10 day bismuth quadruple therapy were 91.3% (283 of 310, 95% CI 87.4%-94.1%) and 91.6% (284 of 310, 95% CI 87.8%-94.3%) in the ITT analysis, respectively (difference -0.3%, 95% CI -4.7% to 4.4%, P = 0.886). However, the frequencies of adverse effects were significantly higher in patients treated with 10 day bismuth quadruple therapy than those treated with 14 day sequential therapy (74.4% versus 36.7% P < 0.0001). The eradication rate of 14 day sequential therapy in strains with and without 23S ribosomal RNA mutation was 80% (24 of 30) and 99% (193 of 195), respectively (P < 0.0001). Optimized 14 day sequential therapy was non-inferior to, but better tolerated than 10 day bismuth quadruple therapy and both may be used in first-line treatment in populations with low to intermediate clarithromycin resistance.
Enduring Advantages of Early Cochlear Implantation for Spoken Language Development
Geers, Ann E.; Nicholas, Johanna G.
2013-01-01
Purpose To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12–38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates and predictors of outcomes identified. Results Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially-acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4–10 years and longer duration of device use did not provide significant added benefit. PMID:23275406
Galvin, Karyn Louise; Holland, Jennifer Frances; Hughes, Kathryn Clare
2014-01-01
First, to document a broad range of functional outcomes of bilateral implantation for young children through young adults at a postoperative point at which stable outcomes could be expected. Second, to evaluate the relationship between functional outcomes and age at bilateral implantation and time between implants. A study-specific questionnaire was administered to parents in an interview 3.5 years or more after sequential (n = 50) or simultaneous (n = 7) implants were received by their child. Median age at bilateral implantation was 4.1 years (range 0.7 to 19.8) and time between implants was 2.7 years (range 0.0 to 16.7). On the basis of parent report, 72% of the sequentially implanted children and young adults found it easy/only "a bit difficult" to adapt to the second implant, and were "happily wearing both implants together most of the time" by 6 months or before; 26% had not adapted, with both implants not worn most of the time or worn as a parental requirement. Seventy-two percent of sequentially implanted children and young adults had a positive attitude toward the second implant, including 9 whose early postoperative attitude was negative or neutral. The majority of children and young adults preferred bilateral implants (70%) and used the two full time (72%), while around half demonstrated similar performance with each implant alone. The proportion of nonusers or very minimal users of the second implant was just 9%. Eighty-eight percent of parents reported superior performance with bilateral versus a unilateral implant (n = 40), or that only bilateral implants were worn (n = 10) so performance could not be compared. The most commonly identified areas of superiority were localization, less need for repetition, and increased responsiveness. In balancing risks and costs with benefits, most parents (86%) considered the second implant worthwhile. Regarding the relationship between outcomes and demographic factors, the group achieving similar performance with each implant alone was younger at bilateral implantation and had less time between implants, and the group bilaterally implanted before 3.5 years of age (who also had less than 2 years between implants) had a higher proportion of positive outcomes on all functional outcome measures. Overall, the results indicate primarily positive functional outcomes for children and young adults receiving bilateral implants at all ages, including when the delay between implants is long. The results are important for evidence-based preoperative counseling, which helps families to make informed decisions and develop appropriate expectations. The results are also important for the development of clinical management practices that support and encourage the minority of recipients who have difficulty adapting to bilateral implants or achieving full-time use.
Baglivo, Cristina; Congedo, Paolo Maria
2018-04-01
Several technical combinations have been evaluated in order to design high energy performance buildings for the warm climate. The analysis has been developed in several steps, avoiding the use of HVAC systems. The methodological approach of this study is based on a sequential search technique and it is shown on the paper entitled "Envelope Design Optimization by Thermal Modeling of a Building in a Warm Climate" [1]. The Operative Air Temperature trends (TOP), for each combination, have been plotted through a dynamic simulation performed using the software TRNSYS 17 (a transient system simulation program, University of Wisconsin, Solar Energy Laboratory, USA, 2010). Starting from the simplest building configuration consisting of 9 rooms (equal-sized modules of 5 × 5 m 2 ), the different building components are sequentially evaluated until the envelope design is optimized. The aim of this study is to perform a step-by-step simulation, simplifying as much as possible the model without making additional variables that can modify their performances. Walls, slab-on-ground floor, roof, shading and windows are among the simulated building components. The results are shown for each combination and evaluated for Brindisi, a city in southern Italy having 1083 degrees day, belonging to the national climatic zone C. The data show the trends of the TOP for each measure applied in the case study for a total of 17 combinations divided into eight steps.
Geber, Selmo; Bossi, Renata; Guimarães, Fernando; Valle, Marcello; Sampaio, Marcos
2012-10-01
Several culture media are available to be used in ART. However it is uncertain whether embryos would preferably benefit from one type of medium or the association of different media. We performed this study to evaluate the impact of simultaneous transfer of embryos independently cultured in two distinct culture media, on pregnancy outcome. A total of 722 couples who underwent infertility treatment were sequentially allocated into three groups: those who had half of the embryos individually cultured in MEM and the other half cultured in sequential media (MEM + Seq Group) (n = 243); those who had all embryos cultured only in sequential medium (Seq Group) (n = 239); and those who had all embryos cultured only in MEM (MEM Group) (n = 240). The pregnancy rate was higher in the MEM + Seq group (51.8 %) than the Seq group (36.7 %) (p < 0.001). However the pregnancy rate observed in the MEM group was similar to the others (44.2 %). When a logistic regression test was applied it demonstrated that the number of transferred embryos did not interfere in the pregnancy rates. Our results suggests that offering different culture conditions for sibling embryos with subsequent transfer of embryos that were kept in distinct culture media, might increase pregnancy rates in assisted reproduction cycles.
Wenz, Holger; Maros, Máté E; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Groden, Christoph; Henzler, Thomas
2016-01-01
To prospectively evaluate image quality and organ-specific-radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR) in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previous performed tilted non-contrast enhanced sequential cCT aquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinical indicated cCT scan. All spiral cCT examinations were performed on a 3rd generation dual-source CT system using ATCM in z-axis direction. Images were reconstructed using both, FBP and IR (level 1-5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific-radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert-scale and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific-radiation dose in all targets including eye lense. Subjective image quality of spiral cCT datasets with an IR reconstruction level 5 was rated significantly higher compared to the sequential cCT acquisitions (p < 0.0001). Consecutive mean SNR was significantly higher in all spiral datasets (FBP, IR 1-5) when compared to sequential cCT with a mean SNR improvement of 44.77% (p < 0.0001). Spiral cCT combined with ATCM and IR allows for significant-radiation dose reduction including a reduce eye lens organ-dose when compared to a tilted sequential cCT while improving subjective and objective image quality.
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and central unit. All the proposed architectures are captured using VHDL synthesized using Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed up advantages of our architectures.
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line by line) sequential extraction (SPSE) metric is proposed by the one-dimensional speckle intensity level crossing theory. Through the sequential extraction of received speckle information, the speckle metrics for estimating the variation of focusing spot size on a remote diffuse target are obtained. Based on the simulation, we will give some discussions about the SPSE metric range of application under the theoretical conditions, and the aperture size will affect the metric performance of the observation system. The results of the analyses are verified by the experiment. This method is applied to the detection of relative static target (speckled jitter frequency is less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance, moreover, the metric will estimate the spot size under some conditions. Therefore, the monitoring and the feedback of far-field spot will be implemented laser focusing system applications and help the system to optimize the focusing performance.
An Overview of the State of the Art in Atomistic and Multiscale Simulation of Fracture
NASA Technical Reports Server (NTRS)
Saether, Erik; Yamakov, Vesselin; Phillips, Dawn R.; Glaessgen, Edward H.
2009-01-01
The emerging field of nanomechanics is providing a new focus in the study of the mechanics of materials, particularly in simulating fundamental atomic mechanisms involved in the initiation and evolution of damage. Simulating fundamental material processes using first principles in physics strongly motivates the formulation of computational multiscale methods to link macroscopic failure to the underlying atomic processes from which all material behavior originates. This report gives an overview of the state of the art in applying concurrent and sequential multiscale methods to analyze damage and failure mechanisms across length scales.
Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.
Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric
2017-02-01
Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2, the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides respectively) and in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at respectively -5 min and at 0 min. Kappa for lateralization between 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential AVS and simultaneous AVS at respectively -5 min at 0 min. Concordance between simultaneous and sequential AVS was not different than that between 2 repeated simultaneous AVS in the same patient. Therefore, a better diagnostic performance is not a good argument to select the AVS method. © 2017 European Society of Endocrinology.
Chen, Chee Kean; Lau, Francis C S; Lee, Woo Guan; Phui, Vui Eng
2016-09-01
To compare the anesthetic potency and safety of spinal anesthesia with higher dosages of levobupivacaine and bupivacaine in patients for bilateral sequential for total knee arthroplasty (TKA). Retrospective cohort study. Operation theater with postoperative inpatient follow-up. The medical records of 315 patients who underwent sequential bilateral TKA were reviewed. Patients who received intrathecal levobupicavaine 0.5% were compared with patients who received hyperbaric bupivacaine 0.5% with fentanyl 25 μg for spinal anesthesia. The primary outcome was the use of rescue analgesia (systemic opioids, conversion to general anesthesia) during surgery for both groups. Secondary outcomes included adverse effects of local anesthetics (hypotension and bradycardia) during surgery and morbidity related to spinal anesthesia (postoperative nausea, vomiting, and bleeding) during hospital stay. One hundred fifty patients who received intrathecal levobupivacaine 0.5% (group L) were compared with 90 patients given hyperbaric bupivacaine 0.5% with fentanyl 25 μg (group B). The mean volume of levobupivacaine administered was 5.8 mL (range, 5.0-6.0 mL), and that of bupivacaine was 3.8 mL (range, 3.5-4.0 mL). Both groups achieved similar maximal sensory level of block (T6). The time to maximal height of sensory block was significantly shorter in group B than group L, 18.2 ± 4.5 vs 23.9 ± 3.8 minutes (P< .001). The time to motor block of Bromage 3 was also shorter in group B (8.7 ± 4.1 minutes) than group L (16.0 ± 4.5 minutes) (P< .001). Patients in group B required more anesthetic supplement than group L (P< .001). Hypotension and postoperative bleeding were significantly less common in group L than group B. Levobupivacaine at a higher dosage provided longer duration of spinal anesthesia with better safety profile in sequential bilateral TKA. Copyright © 2016 Elsevier Inc. All rights reserved.
Lindqvist, Helena; Forsberg, Lars; Enebrink, Pia; Andersson, Gerhard; Rosendahl, Ingvar
2017-06-01
The technical component of Motivational Interviewing (MI) posits that client language mediates the relationship between counselor techniques and subsequent client behavioral outcomes. The purpose of this study was to examine this hypothesized technical component of MI in smoking cessation treatment in more depth. Secondary analysis of 106 first treatment sessions, derived from the Swedish National Tobacco Quitline, and previously rated using the Motivational Interviewing Sequential Code for Observing Process Exchanges (MI-SCOPE) Coder's Manual and the Motivational Interviewing Treatment Integrity code (MITI) Manual, version 3.1. The outcome measure was self-reported 6-month continuous abstinence at 12-month follow-up. Sequential analyses indicated that clients were significantly more likely than expected by chance to argue for change (change talk) following MI-consistent behaviors and questions and reflections favoring change. Conversely, clients were more likely to argue against change (sustain talk) following questions and reflections favoring status-quo. Parallel mediation analysis revealed that a counselor technique (reflections of client sustain talk) had an indirect effect on smoking outcome at follow-up through client language mediators. The study makes a significant contribution to our understanding of how MI works in smoking cessation treatment and adds further empirical support for the hypothesized technical component in MI. The results emphasize the importance of counselors avoiding unintentional reinforcement of sustain talk and underline the need for a greater emphasis on the direction of questions and reflections in MI trainings and fidelity measures. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, A. J.; Wei, Y. G.
2006-07-24
Fivefold deformation twins were reported recently to be observed in the experiment of the nanocrystalline face-centered-cubic metals and alloys. However, they were not predicted previously based on the molecular dynamics (MD) simulations and the reason was thought to be a uniaxial tension considered in the simulations. In the present investigation, through introducing pretwins in grain regions, using the MD simulations, the authors predict out the fivefold deformation twins in the grain regions of the nanocrystal grain cell, which undergoes a uniaxial tension. It is shown in their simulation results that series of Shockley partial dislocations emitted from grain boundaries providemore » sequential twining mechanism, which results in fivefold deformation twins.« less
On extending parallelism to serial simulators
NASA Technical Reports Server (NTRS)
Nicol, David; Heidelberger, Philip
1994-01-01
This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount off simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.
Lund, Søren S; Gluud, Christian; Vaag, Allan; Almdal, Thomas; Wetterslev, Jørn
2011-01-01
Objective To assess the effect of targeting intensive glycaemic control versus conventional glycaemic control on all cause mortality and cardiovascular mortality, non-fatal myocardial infarction, microvascular complications, and severe hypoglycaemia in patients with type 2 diabetes. Design Systematic review with meta-analyses and trial sequential analyses of randomised trials. Data sources Cochrane Library, Medline, Embase, Science Citation Index Expanded, LILACS, and CINAHL to December 2010; hand search of reference lists and conference proceedings; contacts with authors, relevant pharmaceutical companies, and the US Food and Drug Administration. Study selection Randomised clinical trials comparing targeted intensive glycaemic control with conventional glycaemic control in patients with type 2 diabetes. Published and unpublished trials in all languages were included, irrespective of predefined outcomes. Data extraction Two reviewers independently assessed studies for inclusion and extracted data related to study methods, interventions, outcomes, risk of bias, and adverse events. Risk ratios with 95% confidence intervals were estimated with fixed and random effects models. Results Fourteen clinical trials that randomised 28 614 participants with type 2 diabetes (15 269 to intensive control and 13 345 to conventional control) were included. Intensive glycaemic control did not significantly affect the relative risks of all cause (1.02, 95% confidence interval 0.91 to 1.13; 28 359 participants, 12 trials) or cardiovascular mortality (1.11, 0.92 to 1.35; 28 359 participants, 12 trials). Trial sequential analyses rejected a relative risk reduction above 10% for all cause mortality and showed insufficient data on cardiovascular mortality. The risk of non-fatal myocardial infarction may be reduced (relative risk 0.85, 0.76 to 0.95; P=0.004; 28 111 participants, 8 trials), but this finding was not confirmed in trial sequential analysis. Intensive glycaemic control showed a reduction of the relative risks for the composite microvascular outcome (0.88, 0.79 to 0.97; P=0.01; 25 600 participants, 3 trials) and retinopathy (0.80, 0.67 to 0.94; P=0.009; 10 793 participants, 7 trials), but trial sequential analyses showed that sufficient evidence had not yet been reached. The estimate of an effect on the risk of nephropathy (relative risk 0.83, 0.64 to 1.06; 27 769 participants, 8 trials) was not statistically significant. The risk of severe hypoglycaemia was significantly increased when intensive glycaemic control was targeted (relative risk 2.39, 1.71 to 3.34; 27 844 participants, 9 trials); trial sequential analysis supported a 30% increased relative risk of severe hypoglycaemia. Conclusion Intensive glycaemic control does not seem to reduce all cause mortality in patients with type 2 diabetes. Data available from randomised clinical trials remain insufficient to prove or refute a relative risk reduction for cardiovascular mortality, non-fatal myocardial infarction, composite microvascular complications, or retinopathy at a magnitude of 10%. Intensive glycaemic control increases the relative risk of severe hypoglycaemia by 30%. PMID:22115901
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as forward model. For a high computational efficiency, the gradient of objective function is calculated using an adjoint equation technique. SQP algorithm is employed to solve the inverse problem and the regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posed problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
Discrete filtering techniques applied to sequential GPS range measurements
NASA Technical Reports Server (NTRS)
Vangraas, Frank
1987-01-01
The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques is examined to reduce the white noise distortions on the sequential range measurements. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction, the input noise variance divided by the output noise variance, of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
Hall, Olivia J; Nachbagauer, Raffael; Vermillion, Meghan S; Fink, Ashley L; Phuong, Vanessa; Krammer, Florian; Klein, Sabra L
2017-04-15
In addition to their intended use, progesterone (P4)-based contraceptives promote anti-inflammatory immune responses, yet their effects on the outcome of infectious diseases, including influenza A virus (IAV) infection, are rarely evaluated. To evaluate their impact on immune responses to sequential IAV infections, adult female mice were treated with placebo or one of two progestins, P4 or levonorgestrel (LNG), and infected with a mouse-adapted H1N1 (maH1N1) virus. Treatment with P4 or LNG reduced morbidity but had no effect on pulmonary virus titers during primary H1N1 infection compared to placebo treatment. In serum and bronchoalveolar lavage fluid, total anti-IAV IgG and IgA titers and virus-neutralizing antibody titers but not hemagglutinin stalk antibody titers were lower in progestin-treated mice than placebo-treated mice. Females were challenged 6 weeks later with either an maH1N1 drift variant (maH1N1dv) or maH3N2 IAV. The level of protection following infection with the maH1N1dv was similar among all groups. In contrast, following challenge with maH3N2, progestin treatment reduced survival as well as the numbers and activity of H1N1- and H3N2-specific memory CD8 + T cells, including tissue-resident cells, compared with placebo treatment. In contrast to primary IAV infection, progestin treatment increased the titers of neutralizing and IgG antibodies against both challenge viruses compared with those achieved with placebo treatment. While the immunomodulatory properties of progestins protected immunologically naive female mice from the severe outcomes from IAV infection, it made them more susceptible to secondary challenge with a heterologous IAV, despite improving their antibody responses against a secondary IAV infection. Taken together, the immunomodulatory effects of progestins differentially regulate the outcome of infection depending on exposure history. IMPORTANCE The impact of hormone-based contraceptives on the outcome of infectious diseases outside the reproductive tract is rarely considered. Using a mouse model, we have made the novel observation that treatment with either progesterone or a synthetic analog found in hormonal contraceptives, levonorgestrel, impacts sequential influenza A virus infection by modulating antibody responses and decreasing the numbers and activity of memory CD8 + T cells. Progestins reduced the antibody responses during primary H1N1 virus infection but increased antibody titers following a sequential infection with either an H1N1 drift variant or an H3N2 virus. Following challenge with an H3N2 virus, female mice treated with progestins experienced greater mortality with increased pulmonary inflammation and reduced numbers and activity of CD8 + T cells. This study suggests that progestins significantly affect adaptive immune responses to influenza A virus infection, with their effect on the outcome of infection depending on exposure history. Copyright © 2017 American Society for Microbiology.
Social Studies Scope & Sequence Learner Outcomes: [Grades K-8].
ERIC Educational Resources Information Center
Duluth Public Schools, MN.
The goal of this K-8 social studies curriculum guide is to ensure instructional consistency at each grade level and sequential growth in content and process skills. Within the K-6 grade levels, the curriculum objectives are itemized under the discipline areas of geography, history, cultures/ethnicity, citizenship/government, economics, and…
Health Curriculum Guide. Grade K. Bulletin 1988, No. 48.
ERIC Educational Resources Information Center
Alabama State Dept. of Education, Montgomery.
This curriculum guide supplements the Alabama "Health Education Course of Study," which offers a comprehensive planned sequential curriculum for grades K-12. The largest section of the guide consists of classroom activities which are tied to specific student outcomes. A list of materials needed to carry out the activities is provided.…
We are evaluating methods to screen/prioritize large numbers of chemicals using 6 day old zebrafish (Danio rerio) as an alternative model for detecting neurotoxic effects. Our behavioral testing paradigm simultaneously tests individual larval zebrafish under sequential light and...
Health Curriculum Guide. Grade 5. Bulletin 1988, No. 53.
ERIC Educational Resources Information Center
Alabama State Dept. of Education, Montgomery.
This curriculum guide supplements the Alabama "Health Education Course of Study," which offers a comprehensive planned sequential curriculum for grades K-12. The largest section of the guide consists of classroom activities which are tied to specific student outcomes. A list of materials needed to carry out the activities is provided.…
Modeling Valuations from Experience: A Comment on Ashby and Rakow (2014)
ERIC Educational Resources Information Center
Wulff, Dirk U.; Pachur, Thorsten
2016-01-01
What are the cognitive mechanisms underlying subjective valuations formed on the basis of sequential experiences of an option's possible outcomes? Ashby and Rakow (2014) have proposed a sliding window model (SWIM), according to which people's valuations represent the average of a limited sample of recent experiences (the size of which is estimated…
Balancing the Equation: Do Course Variations in Algebra 1 Provide Equal Student Outcomes?
ERIC Educational Resources Information Center
Kenfield, Danielle M.
2013-01-01
Historically, algebra has served as a gatekeeper that divides students into academic programs with varying opportunities to learn and controls access to higher education and career opportunities. Successful completion of Algebra 1 demonstrates mathematical proficiency and allows access to a sequential and progressive path of advanced study that…
Barta, Stefan K.; Zou, Yiyu; Schindler, John; Shenoy, Niraj; Bhagat, Tushar D.; Steidl, Ulrich; Verma, Amit
2013-01-01
The outcome for patients with refractory or relapsed acute lymphoblastic leukemia (ALL) treated with conventional therapy is poor. Immunoconjugates present a novel approach and have recently been shown to have efficacy in this setting. Combotox is a mixture of two ricin-conjugated monoclonal antibodies (RFB4 and HD37) directed against CD19 and CD22, respectively, and has shown activity in pediatric and adult ALL. We created a murine xenograft model of advanced ALL using the NALM/6 cell line to explore whether the combination of Combotox with the cytotoxic agent cytarabine (Ara-C) results in better outcomes. In our model the combination of both low- and high-dose Combotox and Ara-C resulted in significantly longer median survival. Sequential administration of Ara-C and Combotox, however, was shown to be superior to concurrent administration. These findings have led to a phase I clinical trial exploring this combination in adults with relapsed or refractory B-lineage ALL (ClinicalTrials.gov identifier NCT01408160). PMID:22448921
Chemoradiotherapy for stage III non-small cell lung cancer: have we reached the limit?
Xu, Peng; Le Pechoux, Cecile
2015-12-01
Lung cancer is the leading cause of cancer-related mortality in men and the second leading cause in women. Approximately 85% of lung cancer patients have non-small cell lung cancer (NSCLC), and most present with advanced stage at diagnosis. The current treatment for such patients is chemoradiation (CRT) provided concurrently preferably or sequentially with chemotherapy, using conventionally fractionated radiation doses in the range of 60 to 66 Gy in 30 to 33 fractions. An individual patient data based metaanalysis has shown that in good performance status (PS), concomitant CRT was associated to improved survival by 4.5% compared to sequential combination (5-year survival rate of 15.1% and 10.6% respectively). In the recent years, improvement of modern technique of radiotherapy (RT) and new chemotherapy drugs may be favorable for the patients. Furthermore, the positron emission tomography-computed tomography (PET-CT) contributes to improved delineation of RT especially in terms of nodal involvement. Improving outcomes for patients with stage III disease remains a challenge, this review will address the questions that are considered fundamental to improving outcome in patients with stage III NSCLC.
Fueglistaler, Philipp; Amsler, Felix; Schüepp, Marcel; Fueglistaler-Montali, Ida; Attenberger, Corinna; Pargger, Hans; Jacob, Augustinus Ludwig; Gross, Thomas
2010-08-01
Prospective data regarding the prognostic value of the Sequential Organ Failure Assessment (SOFA) score in comparison with the Simplified Acute Physiology Score (SAPS II) and trauma scores on the outcome of multiple-trauma patients are lacking. Single-center evaluation (n = 237, Injury Severity Score [ISS] >16; mean ISS = 29). Uni- and multivariate analysis of SAPS II, SOFA, revised trauma, polytrauma, and trauma and ISS scores (TRISS) was performed. The 30-day mortality was 22.8% (n = 54). SOFA day 1 was significantly higher in nonsurvivors compared with survivors (P < .001) and correlated well with the length of intensive care unit stay (r = .50, P < .001). Logistic regression revealed SAPS II to have the best predictive value of 30-day mortality (area under the receiver operating characteristic = .86 +/- .03). The SOFA score significantly added prognostic information with regard to mortality to both SAPS II and TRISS. The combination of critically ill and trauma scores may increase the accuracy of mortality prediction in multiple-trauma patients. 2010 Elsevier Inc. All rights reserved.
Observer Training Revisited: A Comparison of in Vivo and Video Instruction
ERIC Educational Resources Information Center
Dempsey, Carrie M.; Iwata, Brian A.; Fritz, Jennifer N.; Rolider, Natalie U.
2012-01-01
We compared the effects of 2 observer-training procedures. In vivo training involved practice during actual treatment sessions. Video training involved practice while watching progressively more complex simulations. Fifty-nine undergraduate students entered 1 of the 2 training conditions sequentially according to an ABABAB design. Results showed…
Say again? How complexity and format of air traffic control instructions affect pilot recall
DOT National Transportation Integrated Search
1999-01-01
This study compared the recall of ATC information presented in cither grouped or sequential format : in a part-task simulation. It also tested the effect of complexity of ATC clearances on recall, that is, : how many pieces of information a single tr...
A method was developed to simulate the human gastrointestinal environment and
to estimate bioavailability of arsenic in contaminated soil and solid media. In
this in vitro gastrointestinal (IVG) method, arsenic is sequentially extracted
from contaminated soil with ...
Persistence of opinion in the Sznajd consensus model: computer simulation
NASA Astrophysics Data System (ADS)
Stauffer, D.; de Oliveira, P. M. C.
2002-12-01
The density of never changed opinions during the Sznajd consensus-finding process decays with time t as 1/t^θ. We find θ simeq 3/8 for a chain, compatible with the exact Ising result of Derrida et al. In higher dimensions, however, the exponent differs from the Ising θ. With simultaneous updating of sublattices instead of the usual random sequential updating, the number of persistent opinions decays roughly exponentially. Some of the simulations used multi-spin coding.
Mazilu, I; Mazilu, D A; Melkerson, R E; Hall-Mejia, E; Beck, G J; Nshimyumukiza, S; da Fonseca, Carlos M
2016-03-01
We present exact and approximate results for a class of cooperative sequential adsorption models using matrix theory, mean-field theory, and computer simulations. We validate our models with two customized experiments using ionically self-assembled nanoparticles on glass slides. We also address the limitations of our models and their range of applicability. The exact results obtained using matrix theory can be applied to a variety of two-state systems with cooperative effects.
He, Pei
2014-07-01
The advancements in biotechnology and genetics lead to an increasing research interest in personalized medicine, where a patient's genetic profile or biological traits contribute to choosing the most effective treatment for the patient. The process starts with finding a specific biomarker among all possible candidates that can best predict the treatment effect. After a biomarker is chosen, identifying a cut point of the biomarker value that splits the patients into treatment effective and non-effective subgroups becomes an important scientific problem. Numerous methods have been proposed to validate the predictive marker and select the appropriate cut points either prospectively or retrospectively using clinical trial data. In trials with survival outcomes, the current practice applies an interaction testing procedure and chooses the cut point that minimizes the p-values for the tests. Such method assumes independence between the baseline hazard and biomarker value. In reality, however, this assumption is often violated, as the chosen biomarker might also be prognostic in addition to its predictive nature for treatment effect. In this paper we propose a block-wise estimation and a sequential testing approach to identify the cut point in biomarkers that can group the patients into subsets based on their distinct treatment outcomes without assuming independence between the biomarker and baseline hazard. Numerical results based on simulated survival data show that the proposed method could pinpoint accurately the cut points in biomarker values that separate the patient subpopulations into subgroups with distinctive treatment outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
Challenges in predicting climate change impacts on pome fruit phenology
NASA Astrophysics Data System (ADS)
Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.
2014-08-01
Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed from the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period was common to most results. The relative dominance of the chill or growth component tended to predict whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and the exclusion of winter chill conditions in this method indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations in using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate perturbed conditions with greater model development needed.
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
Kähler, Pernille; Grevstad, Berit; Almdal, Thomas; Gluud, Christian; Wetterslev, Jørn; Vaag, Allan; Hemmingsen, Bianca
2014-01-01
Objective To assess the benefits and harms of targeting intensive versus conventional glycaemic control in patients with type 1 diabetes mellitus. Design A systematic review with meta-analyses and trial sequential analyses of randomised clinical trials. Data sources The Cochrane Library, MEDLINE, EMBASE, Science Citation Index Expanded and LILACS to January 2013. Study selection Randomised clinical trials that prespecified different targets of glycaemic control in participants at any age with type 1 diabetes mellitus were included. Data extraction Two authors independently assessed studies for inclusion and extracted data. Results 18 randomised clinical trials included 2254 participants with type 1 diabetes mellitus. All trials had high risk of bias. There was no statistically significant effect of targeting intensive glycaemic control on all-cause mortality (risk ratio 1.16, 95% CI 0.65 to 2.08) or cardiovascular mortality (0.49, 0.19 to 1.24). Targeting intensive glycaemic control reduced the relative risks for the composite macrovascular outcome (0.63, 0.41 to 0.96; p=0.03), and nephropathy (0.37, 0.27 to 0.50; p<0.00001. The effect estimates of retinopathy, ketoacidosis and retinal photocoagulation were not consistently statistically significant between random and fixed effects models. The risk of severe hypoglycaemia was significantly increased with intensive glycaemic targets (1.40, 1.01 to 1.94). Trial sequential analyses showed that the amount of data needed to demonstrate a relative risk reduction of 10% were, in general, inadequate. Conclusions There was no significant effect towards improved all-cause mortality when targeting intensive glycaemic control compared with conventional glycaemic control. However, there may be beneficial effects of targeting intensive glycaemic control on the composite macrovascular outcome and on nephropathy, and detrimental effects on severe hypoglycaemia. Notably, the data for retinopathy and ketoacidosis were inconsistent. There was a severe lack of reporting on patient relevant outcomes, and all trials had poor bias control. PMID:25138801
Cost-effectiveness of allopurinol and febuxostat for the management of gout.
Jutkowitz, Eric; Choi, Hyon K; Pizzi, Laura T; Kuntz, Karen M
2014-11-04
Gout is the most common inflammatory arthritis in the United States. To evaluate the cost-effectiveness of urate-lowering treatment strategies for the management of gout. Markov model. Published literature and expert opinion. Patients for whom allopurinol or febuxostat is a suitable initial urate-lowering treatment. Lifetime. Health care payer. 5 urate-lowering treatment strategies were evaluated: no treatment; allopurinol- or febuxostat-only therapy; allopurinol-febuxostat sequential therapy; and febuxostat-allopurinol sequential therapy. Two dosing scenarios were investigated: fixed dose (80 mg of febuxostat daily, 0.80 success rate; 300 mg of allopurinol daily, 0.39 success rate) and dose escalation (≤120 mg of febuxostat daily, 0.82 success rate; ≤800 mg of allopurinol daily, 0.78 success rate). Discounted costs, discounted quality-adjusted life-years, and incremental cost-effectiveness ratios. In both dosing scenarios, allopurinol-only therapy was cost-saving. Dose-escalation allopurinol-febuxostat sequential therapy was more costly but more effective than dose-escalation allopurinol therapy, with an incremental cost-effectiveness ratio of $39 400 per quality-adjusted life-year. The relative rankings of treatments did not change. Our results were relatively sensitive to several potential variations of model assumptions; however, the cost-effectiveness ratios of dose escalation with allopurinol-febuxostat sequential therapy remained lower than the willingness-to-pay threshold of $109 000 per quality-adjusted life-year. Long-term outcome data for patients with gout, including medication adherence, are limited. Allopurinol single therapy is cost-saving compared with no treatment. Dose-escalation allopurinol-febuxostat sequential therapy is cost-effective compared with accepted willingness-to-pay thresholds. Agency for Healthcare Research and Quality.
Lu, Sharon M; Chang-Halpenny, Christine; Hwang-Graziano, Julie
2015-04-01
To compare the efficacy and tolerance of adjuvant chemotherapy and radiotherapy delivered in sequential (chemotherapy followed by radiation) versus "sandwich" fashion (chemotherapy, interval radiation, and remaining chemotherapy) after surgery in patients with FIGO stage III uterine endometrioid adenocarcinoma. From 2004 to 2011, we identified 51 patients treated at our institution fitting the above criteria. All patients received surgical staging followed by adjuvant chemoradiation (external-beam radiation therapy (EBRT) with or without high-dose rate (HDR) vaginal brachytherapy (VB)). Of these, 73% and 27% of patients received their adjuvant therapy in sequential and sandwich fashion, respectively. There were no significant differences in clinical or pathologic factors between patients treated with either regimen. Thirty-nine (76%) patients had stage IIIC disease. The majority of patients received 6 cycles of paclitaxel with carboplatin or cisplatin. Median EBRT dose was 45 Gy and 54% of patients received HDR VB boost (median dose 21 Gy). There were no significant differences in the estimated 5-year overall survival, local progression-free survival, and distant metastasis-free survival between the sequential and sandwich groups: 87% vs. 77% (p=0.37), 89% vs. 100% (p=0.21), and 78% vs. 85% (p=0.79), respectively. No grade 3-4 genitourinary or gastrointestinal toxicities were reported in either group. There was a trend towards higher incidence of grade 3-4 hematologic toxicity in the sandwich group. Adjuvant chemoradiation for FIGO stage III endometrioid uterine cancer given in either sequential or sandwich fashion appears to offer equally excellent early clinical outcomes and acceptably low toxicity. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan
2016-01-01
The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporal decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potentials of the Parareal algorithm. This paper studies a Parareal implementation using reduced generatormore » models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.« less
Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes.
Karacan, C Özgen; Olea, Ricardo A
2013-04-01
Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.
Wong, Ka-Hing; Cheung, Peter C K
2005-11-30
The in vitro mineral binding capacity of three novel dietary fibers (DFs) prepared from mushroom sclerotia, namely, Pleurotus tuber-regium, Polyporous rhinocerus, and Wolfiporia cocos, to Ca, Mg, Cu, Fe, and Zn under sequential simulated physiological conditions of the human stomach, small intestine, and colon was investigated and compared. Apart from releasing most of their endogenous Ca (ranged from 96.9 to 97.9% removal) and Mg (ranged from 95.9 to 96.7% removal), simulated physiological conditions of the stomach also attenuated the possible adverse binding effect of the three sclerotial DFs to the exogenous minerals by lowering their cation-exchange capacity (ranged from 20.8 to 32.3%) and removing a substantial amount of their potential mineral chelators including protein (ranged from 16.2 to 37.8%) and phytate (ranged from 58.5 to 64.2%). The in vitro mineral binding capacity of the three sclerotial DF under simulated physiological conditions of small intestine was found to be low, especially for Ca (ranged from 4.79 to 5.91% binding) and Mg (ranged from 3.16 to 4.18% binding), and was highly correlated (r > 0.97) with their residual protein contents. Under simulated physiological conditions of the colon with slightly acidic pH (5.80), only bound Ca was readily released (ranged from 34.2 to 72.3% releasing) from the three sclerotial DFs, and their potential enhancing effect on passive Ca absorption in the human large intestine was also discussed.
Design of the biosonar simulator for dolphin's clicks waveform reproduction
NASA Astrophysics Data System (ADS)
Ishii, Ken; Akamatsu, Tomonari; Hatakeyama, Yoshimi
1992-03-01
The emitted clicks of Dall's porpoises consist of a pulse train of burst signals with an ultrasonic carrier frequency. The authors have designed a biosonar simulator to reproduce the waveforms associated with a dolphin's clicks underwater. The total reproduction system consists of a click signal acquisition block, a waveform analysis block, a memory unit, a click simulator, and a underwater, ultrasonic wave transmitter. In operation, data stored in an EPROM (Erasable Programmable Read Only Memory) are read out sequentially by a fast clock and converted to analog output signals. Then an ultrasonic power amplifier reproduces these signals through a transmitter. The click signal replaying block is referred to as the BSS (Biosonar Simulator). This is what simulates the clicks. The details of the BSS are described in this report. A unit waveform is defined. The waveform is divided into a burst period and a waiting period. Clicks are a sequence based on a unit waveform, and digital data are sequentially read out from an EPROM of waveform data. The basic parameters of the BSS are as follows: (1) reading clock, 100 ns to 25.4 microseconds; (2) number of reading clock, 34 to 1024 times; (3) counter clock in a waiting period, 100 ns to 25.4 microseconds; (4) number of counter clock, zero to 16,777,215 times; (5) number of burst/waiting repetition cycle, one to 128 times; and (6) transmission level adjustment by a programmable attenuator, zero to 86.5 dB. These basic functions enable the BSS to replay clicks of Dall's porpoise precisely.
Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes
Karacan, C. Özgen; Olea, Ricardo A.
2013-01-01
Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.
Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes
Karacan, C.Özgen; Olea, Ricardo A.
2015-01-01
Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests. PMID:26190930
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Gloaguen, Erwan; Holliger, Klaus
2013-04-01
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches to the regional scale still represents a major challenge, yet is critically important for the development of groundwater flow and contaminant transport models. To address this issue, we have developed a regional-scale hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure. The objective is to simulate the regional-scale distribution of a hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, our approach first involves linking the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. We present the application of this methodology to a pertinent field scenario, where we consider collocated high-resolution measurements of the electrical conductivity, measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, estimated from EM flowmeter and slug test measurements, in combination with low-resolution exhaustive electrical conductivity estimates obtained from dipole-dipole ERT meausurements.
NASA Astrophysics Data System (ADS)
Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.
2010-12-01
Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner, and quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, lithology profile and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over the other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input in stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that it updates the posterior distribution, as a new dataset is included. The sequential assimilation can decrease computational burden significantly. We applied MAD to assimilate different combinations of the datasets, and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of pressure build-up and breakthrough curves, respectively, to reduce the data dimension. A massive parallel flow and transport code PFLOTRAN is used for simulating the tracer test. For comparison, we used different metrics based on the breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison intends to yield the combined data worth, i.e. which combination of the datasets is the most effective for a certain metric, which will be useful for guiding the further characterization effort at the site and also the future characterization projects at the other sites.
NASA Technical Reports Server (NTRS)
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions are detailed, and conclusions drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fakcharoenphol, Perapon; Xiong, Yi; Hu, Litang
TOUGH2-EGS is a numerical simulation program coupling geomechanics and chemical reactions for fluid and heat flows in porous media and fractured reservoirs of enhanced geothermal systems. The simulator includes the fully-coupled geomechanical (THM) module, the fully-coupled geochemical (THC) module, and the sequentially coupled reactive geochemistry (THMC) module. The fully-coupled flow-geomechanics model is developed from the linear elastic theory for the thermo-poro-elastic system and is formulated with the mean normal stress as well as pore pressure and temperature. The chemical reaction is sequentially coupled after solution of flow equations, which provides the flow velocity and phase saturation for the solute transportmore » calculation at each time step. In addition, reservoir rock properties, such as porosity and permeability, are subjected to change due to rock deformation and chemical reactions. The relationships between rock properties and geomechanical and chemical effects from poro-elasticity theories and empirical correlations are incorporated into the simulator. This report provides the user with detailed information on both mathematical models and instructions for using TOUGH2-EGS for THM, THC or THMC simulations. The mathematical models include the fluid and heat flow equations, geomechanical equation, reactive geochemistry equations, and discretization methods. Although TOUGH2-EGS has the capability for simulating fluid and heat flows coupled with both geomechanical and chemical effects, it is up to the users to select the specific coupling process, such as THM, THC, or THMC in a simulation. There are several example problems illustrating the applications of this program. These example problems are described in details and their input data are presented. The results demonstrate that this program can be used for field-scale geothermal reservoir simulation with fluid and heat flow, geomechanical effect, and chemical reaction in porous and fractured media.« less
Spatiotemporal stochastic models for earth science and engineering applications
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1998-12-01
Spatiotemporal processes occur in many areas of earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time daft processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. The space-tune continuity characterization is one of the most important aspects in S/TRF modelling, where the space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance, functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is proposed with the development of the sequential group Gaussian simulation techniques (SGGS), which is actually a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed with different covariance models and simulation grids. The simulated annealing technique honoring experimental variograms, is also proposed, providing a way of conditional simulation without the covariance model fitting which is prerequisite for most simulation algorithms. The proposed techniques were first applied for modelling of the pressure system in a carbonate reservoir, and then applied for modelling of springwater contents in the Dyle watershed. The results of these case studies as well as the theory suggest that these techniques are realistic and feasible.
Kofotolis, Nikolaos D; Vlachopoulos, Symeon P; Kellis, Eleftherios
2008-02-01
To examine the effectiveness of rhythmic stabilization exercises and transcutaneous electrical nerve stimulation (TENS) and their combination in treating women with chronic low back pain. Sequentially allocated, single-blinded and controlled study, with a two-month follow-up. The data were collected in a patient rehabilitation setting. A total of 92 women (34-46 years old) with chronic low back pain were studied. Sequential allocation was undertaken into four groups: ;rhythmic stabilization' (n=23), ;rhythmic stabilization - TENS' (n=23), TENS (n=23), and a placebo group (n = 23). Each programme lasted for four weeks. All outcome measures were assessed prior to, immediately after, four weeks and eight weeks post intervention. Data were obtained on functional disability, pain intensity, trunk extension range of motion, dynamic endurance of trunk flexion and static endurance of trunk extension. A total of 88 patients provided two-month follow-up data. The ;rhythmic stabilization' and the ;rhythmic stabilization - TENS' groups displayed statistically significant (P<0.05) improvements in functional disability and pain intensity (ranging from 21.2 to 42.8%), trunk extension range of motion (ranging from 6.5 to 25.5%), dynamic endurance of trunk flexion and static endurance of trunk extension (ranging from 13.5 to 74.3%) compared with the remaining groups. The rhythmic stabilization programmes resulted in more gains in women with chronic low back pain regarding the present outcome variables compared with the other groups; therefore, its application in female chronic low back pain patients aged 34-46 years is recommended.
Wang, Jia-Zhong; Liu, Yang; Wang, Jin-Long; Lu, Le; Zhang, Ya-Fei; Lu, Hong-Wei; Li, Yi-Ming
2015-06-14
We undertook this meta-analysis to investigate the relationship between revascularization and outcomes after liver transplantation. A literature search was performed using MeSH and key words. The quality of the included studies was assessed using the Jadad Score and the Newcastle-Ottawa Scale. Heterogeneity was evaluated by the χ(2) and I (2) tests. The risk of publication bias was assessed using a funnel plot and Egger's test, and the risk of bias was assessed using a domain-based assessment tool. A sensitivity analysis was conducted by reanalyzing the data using different statistical approaches. Six studies with a total of 467 patients were included. Ischemic-type biliary lesions were significantly reduced in the simultaneous revascularization group compared with the sequential revascularization group (OR = 4.97, 95%CI: 2.45-10.07; P < 0.00001), and intensive care unit (ICU) days were decreased (MD = 2.00, 95%CI: 0.55-3.45; P = 0.007) in the simultaneous revascularization group. Although warm ischemia time was prolonged in simultaneous revascularization group (MD = -25.84, 95%CI: -29.28-22.40; P < 0.00001), there were no significant differences in other outcomes between sequential and simultaneous revascularization groups. Assessment of the risk of bias showed that the methods of random sequence generation and blinding might have been a source of bias. The sensitivity analysis strengthened the reliability of the results of this meta-analysis. The results of this study indicate that simultaneous revascularization in liver transplantation may reduce the incidence of ischemic-type biliary lesions and length of stay of patients in the ICU.
NASA Astrophysics Data System (ADS)
Cansız, Barış; Dal, Hüsnü; Kaliske, Michael
2017-10-01
Working mechanisms of the cardiac defibrillation are still in debate due to the limited experimental facilities and one-third of patients even do not respond to cardiac resynchronization therapy. With an aim to develop a milestone towards reaching the unrevealed mechanisms of the defibrillation phenomenon, we propose a bidomain based finite element formulation of cardiac electromechanics by taking into account the viscous effects that are disregarded by many researchers. To do so, the material is deemed as an electro-visco-active material and described by the modified Hill model (Cansız et al. in Comput Methods Appl Mech Eng 315:434-466, 2017). On the numerical side, we utilize a staggered solution method, where the elliptic and parabolic part of the bidomain equations and the mechanical field are solved sequentially. The comparative simulations designate that the viscoelastic and elastic formulations lead to remarkably different outcomes upon an externally applied electric field to the myocardial tissue. Besides, the achieved framework requires significantly less computational time and memory compared to monolithic schemes without loss of stability for the presented examples.
Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.
Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C
2015-01-01
Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yield notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator and no unmeasured confounders that also serve as a moderator of the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.
ERIC Educational Resources Information Center
Bell Haynes, Janel Elizabeth
2013-01-01
The purpose of this mixed method sequential explanatory case study was to describe the relationship of a student outcomes assessment program, as measured by the Peregrine Academic Leveling Course, (ALC), to the academic performance, determined by scores on the Peregrine Common Professional Component (CPC) examination, of students enrolled during…
ERIC Educational Resources Information Center
Coatsworth, J. Douglas; Conroy, David E.
2009-01-01
This study tested a sequential process model linking youth sport coaching climates (perceived coach behaviors and perceived need satisfaction) to youth self-perceptions (perceived competence and global self-esteem) and youth development outcomes (initiative, identity reflection, identity exploration). A sample of 119 youth between the ages of 10…
ERIC Educational Resources Information Center
Feldman, Ruth
2007-01-01
Synchrony, a construct used across multiple fields to denote the temporal relationship between events, is applied to the study of parent-infant interactions and suggested as a model for intersubjectivity. Three types of timed relationships between the parent and child's affective behavior are assessed: concurrent, sequential, and organized in an…
ERIC Educational Resources Information Center
Kennard, Betsy D.; Emslie, Graham J.; Mayes, Taryn L.; Nightingale-Teresi, Jeanne; Nakonezny, Paul A.; Hughes, Jennifer L.; Jones, Jessica M.; Tao, Rongrong; Stewart, Sunita M.; Jarrett, Robin B.
2008-01-01
The outcome of a sequential treatment strategy that included cognitive behavioral therapy (CBT) in the prevention of major depressive disorder relapse among 46 youths is examined. Results show that youths under the antidepressant medication management plus relapse prevention CBT treatment was at lower risk for relapse than those under the…
ERIC Educational Resources Information Center
Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.
2016-01-01
Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected…
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Landscape analysis software tools
Don Vandendriesche
2008-01-01
Recently, several new computer programs have been developed to assist in landscape analysis. The âSequential Processing Routine for Arraying Yieldsâ (SPRAY) program was designed to run a group of stands with particular treatment activities to produce vegetation yield profiles for forest planning. SPRAY uses existing Forest Vegetation Simulator (FVS) software coupled...
NASA Astrophysics Data System (ADS)
Hosking, Michael Robert
This dissertation improves an analyst's use of simulation by offering improvements in the utilization of kriging metamodels. There are three main contributions. First an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design from which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to the higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights similar to a nugget effect. Our primary focus will be showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our explanation through tests of reduced rank kriging's performance over different situations. In total, reduced rank kriging is a useful tool for simulation metamodeling. For the third contribution we will answer the question of how to find the best rank for reduced rank kriging. We do this by creating an alternative method which does not need to search for a particular rank. Instead it uses all potential ranks; we call this approach omnirank kriging. This modification realizes the potential gains from reduced rank kriging and provides a workable methodology for simulation metamodeling. Finally, we will demonstrate the use and value of these developments on two case studies, a clinic operation problem and a location problem. These cases will validate the value of this research. Simulation metamodeling always attempts to extract maximum information from limited data. Each one of these contributions will allow analysts to make better use of their constrained computational budgets.
NASA Astrophysics Data System (ADS)
Schaerlaekens, J.; Mallants, D.; Imûnek, J.; van Genuchten, M. Th.; Feyen, J.
1999-12-01
Microbiological degradation of perchloroethylene (PCE) under anaerobic conditions follows a series of chain reactions, in which, sequentially, trichloroethylene (TCE), cis-dichloroethylene (c-DCE), vinylchloride (VC) and ethene are generated. First-order degradation rate constants, partitioning coefficients and mass exchange rates for PCE, TCE, c-DCE and VC were compiled from the literature. The parameters were used in a case study of pump-and-treat remediation of a PCE-contaminated site near Tilburg, The Netherlands. Transport, non-equilibrium sorption and biodegradation chain processes at the site were simulated using the CHAIN_2D code without further calibration. The modelled PCE compared reasonably well with observed PCE concentrations in the pumped water. We also performed a scenario analysis by applying several increased reductive dechlorination rates, reflecting different degradation conditions (e.g. addition of yeast extract and citrate). The scenario analysis predicted considerably higher concentrations of the degradation products as a result of enhanced reductive dechlorination of PCE. The predicted levels of the very toxic compound VC were now an order of magnitude above the maximum permissible concentration levels.
Multisensor surveillance data augmentation and prediction with optical multipath signal processing
NASA Astrophysics Data System (ADS)
Bush, G. T., III
1980-12-01
The spatial characteristics of an oil spill on the high seas are examined in the interest of determining whether linear-shift-invariant data processing implemented on an optical computer would be a useful tool in analyzing spill behavior. Simulations were performed on a digital computer using data obtained from a 25,000 gallon spill of soy bean oil in the open ocean. Marked changes occurred in the observed spatial frequencies when the oil spill was encountered. An optical detector may readily be developed to sound an alarm automatically when this happens. The average extent of oil spread between sequential observations was quantified by a simulation of non-holographic optical computation. Because a zero crossover was available in this computation, it may be possible to construct a system to measure automatically the amount of spread. Oil images were subjected to deconvolutional filtering to reveal the force field which acted upon the oil to cause spreading. Some features of spill-size prediction were observed. Calculations based on two sequential photos produced an image which exhibited characteristics of the third photo in that sequence.
Lamont, Scott; Brunero, Scott; Lyons, Sarah; Foster, Karlie; Perry, Lin
2015-11-01
To explore intra-professional collaboration amongst nursing leadership teams at a tertiary referral hospital in Sydney. Effective working within a wide network of alliances is critical to patient outcomes. An understanding of collaboration amongst nursing leadership teams is essential within this context. A sequential explanatory mixed-methods design was used. The Collaborative Behaviour scale was sent to 106 Nurse Unit Managers, Nurse Educators and Clinical Nurse Consultants to measure pairwise collaborative behaviours; two follow-up focus groups with 15 participants were conducted. Data were collected between May 2012 and May 2013. A thematic analysis of focus group data provided a detailed explanation of the questionnaire findings. The findings identified high collaboration between dyad groups. Two themes emerged from the thematic analysis: (1) professional role and expectations; with sub-themes of transparency and clarity of individual roles; and intra/interpersonal aspects of role functioning; and (2) organisational infrastructure and governance. These leadership teams can be effective and powerful vehicles for change and are central to optimum patient outcomes. Organisational strategic planning and evaluation can benefit from understanding how to promote collaborative behaviours in these nurse leaders. To date, little research has explored collaboration amongst nursing leadership teams. Successful collaboration may contribute to the efficient use of nursing resources; improve patient outcomes, and ultimately, nurse satisfaction and retention. © 2014 John Wiley & Sons Ltd.
High-Fidelity Simulation for Advanced Cardiac Life Support Training
Davis, Lindsay E.; Storjohann, Tara D.; Spiegel, Jacqueline J.; Beiber, Kellie M.
2013-01-01
Objective. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. Design. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). Assessment. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students’ knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. Conclusions. College curricula should incorporate simulation to complement but not replace lecture for ACLS education. PMID:23610477
High-fidelity simulation for advanced cardiac life support training.
Davis, Lindsay E; Storjohann, Tara D; Spiegel, Jacqueline J; Beiber, Kellie M; Barletta, Jeffrey F
2013-04-12
OBJECTIVE. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. DESIGN. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). ASSESSMENT. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students' knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. CONCLUSIONS. College curricula should incorporate simulation to complement but not replace lecture for ACLS education.
Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan
2010-01-07
Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anti coagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used and to consider using more than one map to ensure that different aspects of the process are captured.
An extended sequential goodness-of-fit multiple testing method for discrete data.
Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo
2017-10-01
The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
Sequential deconvolution from wave-front sensing using bivariate simplex splines
NASA Astrophysics Data System (ADS)
Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai
2015-05-01
Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence degraded images based on simultaneous recording of short exposure images and wave-front sensor data. This paper employs the multivariate splines method for the sequential DWFS: a bivariate simplex splines based average slopes measurement model is built firstly for Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution processing. Simulated experiments in different turbulence strength show that our method performs superior image restoration results and noise rejection capability especially when extracting the multidirectional phase derivatives.
NASA Astrophysics Data System (ADS)
Tweed, D.; Devriendt, J.; Blaizot, J.; Colombi, S.; Slyz, A.
2009-11-01
Context: In the past decade or so, using numerical N-body simulations to describe the gravitational clustering of dark matter (DM) in an expanding universe has become the tool of choice for tackling the issue of hierarchical galaxy formation. As mass resolution increases with the power of supercomputers, one is able to grasp finer and finer details of this process, resolving more and more of the inner structure of collapsed objects. This begs one to revisit time and again the post-processing tools with which one transforms particles into “invisible” dark matter haloes and from thereon into luminous galaxies. Aims: Although a fair amount of work has been devoted to growing Monte-Carlo merger trees that resemble those built from an N-body simulation, comparatively little effort has been invested in quantifying the caveats one necessarily encounters when one extracts trees directly from such a simulation. To somewhat revert the tide, this paper seeks to provide its reader with a comprehensive study of the problems one faces when following this route. Methods: The first step in building merger histories of dark matter haloes and their subhaloes is to identify these structures in each of the time outputs (snapshots) produced by the simulation. Even though we discuss a particular implementation of such an algorithm (called AdaptaHOP) in this paper, we believe that our results do not depend on the exact details of the implementation but instead extend to most if not all (sub)structure finders. To illustrate this point in the appendix we compare AdaptaHOP's results to the standard friend-of-friend (FOF) algorithm, widely utilised in the astrophysical community. We then highlight different ways of building merger histories from AdaptaHOP haloes and subhaloes, contrasting their various advantages and drawbacks. Results: We find that the best approach to (sub)halo merging histories is through an analysis that goes back and forth between identification and tree building rather than one that conducts a straightforward sequential treatment of these two steps. This is rooted in the complexity of the merging trees that have to depict an inherently dynamical process from the partial temporal information contained in the collection of instantaneous snapshots available from the N-body simulation. However, we also propose a simpler sequential “Most massive Substructure Method” (MSM) whose trees approximate those obtained via the more complicated non sequential method. Appendices are only available in electronic form at: http://www.aanda.org
López-Pelayo, Iratxe; Gutiérrez-Romero, Javier María; Armada, Ana Isabel Mangano; Calero-Ruiz, María Mercedes; Acevedo-Yagüe, Pablo Javier Moreno de
2018-04-26
To compare embryo quality, fertilization, implantation, miscarriage and clinical pregnancy rates for embryos cultured in two different commercial culture media until D-2 or D-3. In this retrospective study, we analyzed 189 cycles performed in 2016. Metaphase II oocytes were microinjected and allocated into single medium (SAGE 1-STEP, Origio) until transferred, frozen or discarded; or, if sequential media were used, the oocytes were cultured in G1-PLUSTM (Vitrolife) up to D-2 or D-3 and in G2-PLUSTM (Vitrolife) to transfer. On the following day, the oocytes were checked for normal fertilization and on D-2 and D-3 for morphological classification. Statistical analysis was performed using the chi-square and Mann-Whitney tests in PASW Statistics 18.0. The fertilization rates were 70.07% for single and 69.11% for sequential media (p=0.736). The mean number of embryos with high morphological quality (class A/B) was higher in the single medium than in the sequential media: D-2 [class A (190 vs. 107, p<0.001), B (133 vs. 118, p=0.018)]; D-3 [class A (40 vs. 19, p=0.048) but without differences in class B (40 vs. 49)]. Consequently, a higher number of embryos cultured in single medium were frozen: 197 (21.00%) vs. sequential: 102 (11.00%), p<0.001. No differences were found in implantation rates (30.16% vs. 25.57%, p=0.520), clinical pregnancy rates (55.88% vs. 41.05%, p=0.213), or miscarriage rates (14.29% vs. 9.52%, p=0.472). Embryo culture in single medium yields greater efficiency per cycle than in sequential media. Higher embryo quality and quantity were achieved, resulting in more frozen embryos. There were no differences in clinical pregnancy rates.
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets,the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
A Sequential Ensemble Prediction System at Convection Permitting Scales
NASA Astrophysics Data System (ADS)
Milan, M.; Simmer, C.
2012-04-01
A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection permitting scales, with focus to precipitation forecast. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and non linear state space evolution due to convectively driven processes. One way to take full account of non linear state developments are particle filter methods, their basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood with the observations. In particular particle filter with resampling abandons ensemble members (particles) with low weights restoring the original number of particles adding multiple copies of the members with high weights. In our SIR-like implementation we substitute the likelihood way to define weights and introduce a metric which quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this goal we propose a combination of resampling taking account of simulated state space clustering and nudging. By keeping cluster representatives during resampling and filtering, the method maintains the potential for non linear system state development. We assume that a particle cluster with initially low likelihood may evolve in a state space with higher likelihood in a subsequent filter time thus mimicking non linear system state developments (e.g. sudden convection initiation) and remedies timing errors for convection due to model errors and/or imperfect initial condition. We apply a simplified version of the resampling, the particles with highest weights in each cluster are duplicated; for the model evolution for each particle pair one particle evolves using the forward model; the second particle, however, is nudged to the radar and satellite observation during its evolution based on the forward model.
Beyond Valence and Magnitude: a Flexible Evaluative Coding System in the Brain
Gu, Ruolei; Lei, Zhihui; Broster, Lucas; Wu, Tingting; Jiang, Yang; Luo, Yue-jia
2013-01-01
Outcome evaluation is a cognitive process that plays an important role in our daily lives. In most paradigms utilized in the field of experimental psychology, outcome valence and outcome magnitude are the two major features investigated. The classical “independent coding model” suggest that outcome valence and outcome magnitude are evaluated by separate neural mechanisms that may be mapped onto discrete event-related potential (ERP) components: feedback-related negativity (FRN) and the P3, respectively. To examine this model, we presented outcome valence and magnitude sequentially rather than simultaneously. The results reveal that when only outcome valence or magnitude is known, both the FRN and the P3 encode that outcome feature; when both aspects of outcome are known, the cognitive functions of the two components dissociate: the FRN responds to the information available in the current context, while the P3 pattern depends on outcome presentation sequence. The current study indicates that the human evaluative system, indexed in part by the FRN and the P3, is more flexible than previous theories suggested. PMID:22019775
Rochau, Ursula; Sroczynski, Gaby; Wolf, Dominik; Schmidt, Stefan; Jahn, Beate; Kluibenschaedl, Martina; Conrads-Frank, Annette; Stenehjem, David; Brixner, Diana; Radich, Jerald; Gastl, Günther; Siebert, Uwe
2015-01-01
Several tyrosine kinase inhibitors (TKIs) are approved for chronic myeloid leukemia (CML) therapy. We evaluated the long-term cost-effectiveness of seven sequential therapy regimens for CML in Austria. A cost-effectiveness analysis was performed using a state-transition Markov model. As model parameters, we used published trial data, clinical, epidemiological and economic data from the Austrian CML registry and national databases. We performed a cohort simulation over a life-long time-horizon from a societal perspective. Nilotinib without second-line TKI yielded an incremental cost-utility ratio of 121,400 €/quality-adjusted life year (QALY) compared to imatinib without second-line TKI after imatinib failure. Imatinib followed by nilotinib after failure resulted in 131,100 €/QALY compared to nilotinib without second-line TKI. Nilotinib followed by dasatinib yielded 152,400 €/QALY compared to imatinib followed by nilotinib after failure. Remaining strategies were dominated. The sequential application of TKIs is standard-of-care, and thus, our analysis points toward imatinib followed by nilotinib as the most cost-effective strategy.
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs' on the other hand can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
Sachan, Prachee; Kumar, Nidhi; Sharma, Jagdish Prasad
2014-01-01
Background: Density of the drugs injected intrathecally is an important factor that influences spread in the cerebrospinal fluid. Mixing adjuvants with local anesthetics (LA) alters their density and hence their spread compared to when given sequentially in seperate syringes. Aims: To evaluate the efficacy of intrathecal administration of hyperbaric bupivacaine (HB) and clonidine as a mixture and sequentially in terms of block characteristics, hemodynamics, neonatal outcome, and postoperative pain. Setting and Design: Prospective randomized single blind study at a tertiary center from 2010 to 2012. Materials and Methods: Ninety full-term parturient scheduled for elective cesarean sections were divided into three groups on the basis of technique of intrathecal drug administration. Group M received mixture of 75 μg clonidine and 10 mg HB 0.5%. Group A received 75 μg clonidine after administration of 10 mg HB 0.5% through separate syringe. Group B received 75 μg clonidine before HB 0.5% (10 mg) through separate syringe. Statistical analysis used: Observational descriptive statistics, analysis of variance with Bonferroni multiple comparison post hoc test, and Chi-square test. Results: Time to achieve complete sensory and motor block was less in group A and B in which drugs were given sequentially. Duration of analgesia lasted longer in group B (474.3 ± 20.79 min) and group A (472.50 ± 22.11 min) than in group M (337 ± 18.22 min) with clinically insignificant influence on hemodynamic parameters and sedation. Conclusion: Sequential technique reduces time to achieve complete sensory and motor block, delays block regression, and significantly prolongs the duration of analgesia. However, it did not matter much whether clonidine was administered before or after HB. PMID:25886098
S Chapman, Jocelyn; Roddy, Erika; Panighetti, Anna; Hwang, Shelley; Crawford, Beth; Powell, Bethan; Chen, Lee-May
2016-12-01
Women with breast cancer who carry BRCA1 or BRCA2 mutations must also consider risk-reducing salpingo-oophorectomy (RRSO) and how to coordinate this procedure with their breast surgery. We report the factors associated with coordinated versus sequential surgery and compare the outcomes of each. Patients in our cancer risk database who had breast cancer and a known deleterious BRCA1/2 mutation before undergoing breast surgery were included. Women who chose concurrent RRSO at the time of breast surgery were compared to those who did not. Sixty-two patients knew their mutation carrier status before undergoing breast cancer surgery. Forty-three patients (69%) opted for coordinated surgeries, and 19 (31%) underwent sequential surgeries at a median follow-up of 4.4 years. Women who underwent coordinated surgery were significantly older than those who chose sequential surgery (median age of 45 vs. 39 years; P = .025). There were no differences in comorbidities between groups. Patients who received neoadjuvant chemotherapy were more likely to undergo coordinated surgery (65% vs. 37%; P = .038). Sequential surgery patients had longer hospital stays (4.79 vs. 3.44 days, P = .01) and longer operating times (8.25 vs. 6.38 hours, P = .006) than patients who elected combined surgery. Postoperative complications were minor and were no more likely in either group (odds ratio, 4.76; 95% confidence interval, 0.56-40.6). Coordinating RRSO with breast surgery is associated with receipt of neoadjuvant chemotherapy, longer operating times, and hospital stays without an observed increase in complications. In the absence of risk, surgical options can be personalized. Copyright © 2016 Elsevier Inc. All rights reserved.
Pellacani, G; Peris, K; Guillen, C; Clonier, F; Larsson, T; Venkata, R; Puig, S
2015-11-01
Actinic keratoses (AKs) are precursors to invasive squamous cell carcinoma and can progress if untreated. Limited data support the use of ingenol mebutate to treat AKs on more than one area of the body simultaneously. To investigate safety, efficacy and treatment satisfaction when treating separate areas simultaneously or sequentially with different concentrations of ingenol mebutate gel. In this phase IIIb study (NCT01787383), patients with clinically visible, non-hyperkeratotic AKs on two separate treatment areas (face/scalp and trunk/extremities) were randomized to simultaneous or sequential treatment with ingenol mebutate gel (0.015% and 0.05%). Endpoints included composite local skin response (LSR) score 3 days after first application, complete AK clearance and percentage reduction in AKs at week 8. There were no statistically significant differences between simultaneous (n = 101) and sequential (n = 98) groups in composite LSR score (10.4 vs. 9.7), complete clearance (52.7% vs. 46.9%) or percentage reduction in AKs (83.4% vs. 79.1%). Mean composite LSR scores on face/scalp and trunk/extremities were similar for both groups. Adverse event (AE) incidence was comparable between groups, the most common treatment-related AEs being pruritus and pain at the application site. Treating AKs with ingenol mebutate simultaneously or sequentially gave similar results in terms of tolerability (LSR score, AEs) and efficacy (complete clearance). Therefore, the physician and patient can select the most convenient treatment regimen, with confidence in achieving a similar outcome. © 2015 LEO Pharma A/S. Journal of the European Academy of Dermatology and Venereology published by John Wiley & Sons, Ltd. on behalf of European Academy of Dermatology and Venereology.
Asao, Tetsuhiko; Fujiwara, Yutaka; Itahashi, Kota; Kitahara, Shinsuke; Goto, Yasushi; Horinouchi, Hidehito; Kanda, Shintaro; Nokihara, Hiroshi; Yamamoto, Noboru; Takahashi, Kazuhisa; Ohe, Yuichiro
2017-07-01
Second-generation anaplastic lymphoma kinase (ALK) inhibitors, such as alectinib and ceritinib, have recently been approved for treatment of ALK-rearranged non-small-cell lung cancer (NSCLC). An optimal strategy for using 2 or more ALK inhibitors has not been established. We sought to investigate the clinical impact of sequential use of ALK inhibitors on these tumors in clinical practice. Patients with ALK-rearranged NSCLC treated from May 2010 to January 2016 at the National Cancer Center Hospital were identified, and their outcomes were evaluated retrospectively. Fifty-nine patients with ALK-rearranged NSCLC had been treated and 37 cases were assessable. Twenty-six received crizotinib, 21 received alectinib, and 13 (35.1%) received crizotinib followed by alectinib. Response rates and median progression-free survival (PFS) on crizotinib and alectinib (after crizotinib failure) were 53.8% (95% confidence interval [CI], 26.7%-80.9%) and 38.4% (95% CI, 12.0%-64.9%), and 10.7 (95% CI, 5.3-14.7) months and 16.6 (95% CI, 2.9-not calculable), respectively. The median PFS of patients on sequential therapy was 35.2 months (95% CI, 12.7 months-not calculable). The 5-year survival rate of ALK-rearranged patients who received 2 sequential ALK inhibitors from diagnosis was 77.8% (95% CI, 36.5%-94.0%). The combined PFS and 5-year survival rates in patients who received sequential ALK inhibitors were encouraging. Making full use of multiple ALK inhibitors might be important to prolonging survival in patients with ALK-rearranged NSCLC. Copyright © 2016 Elsevier Inc. All rights reserved.
Error in telemetry studies: Effects of animal movement on triangulation
Schmutz, Joel A.; White, Gary C.
1990-01-01
We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity
NASA Astrophysics Data System (ADS)
Miah, Md Mamun
This research describes the importance of a hydro-geomechanical coupling in the geologic sub-surface environment from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during wells construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking a flow simulation code TOUGH2 and a geomechanics modeling code PyLith. Numerical formulation of each code is discussed to demonstrate their modeling capabilities. The computational framework involves sequential coupling, and solution of two sub-problems- fluid flow through fractured and porous media and reservoir geomechanics. For each time step of flow calculation, pressure field is passed to the geomechanics code to compute effective stress field and fault slips. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far field tectonic loading and fluid injection by using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and not accounting for pressure on the fault. However, this issue is resolved in the final chapter in simulating a single event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by running a sensitivity analysis that shows an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
A posteriori model validation for the temporal order of directed functional connectivity maps.
Beltz, Adriene M; Molenaar, Peter C M
2015-01-01
A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
NASA Astrophysics Data System (ADS)
Qiao, C. Y.; Wei, H. L.; Ma, C. W.; Zhang, Y. L.; Wang, S. S.
2015-07-01
Background: The isobaric yield ratio difference (IBD) method is found to be sensitive to the density difference of neutron-rich nucleus induced reaction around the Fermi energy. Purpose: An investigation is performed to study the IBD results in the transport model. Methods: The antisymmetric molecular dynamics (AMD) model plus the sequential decay model gemini are adopted to simulate the 140 A MeV 58 ,64Ni +9Be reactions. A relative small coalescence radius Rc= 2.5 fm is used for the phase space at t = 500 fm/c to form the hot fragment. Two limitations on the impact parameter (b 1 =0 -2 fm and b 2 =0 -9 fm) are used to study the effect of central collisions in IBD. Results: The isobaric yield ratios (IYRs) for the large-A fragments are found to be suppressed in the symmetric reaction. The IBD results for fragments with neutron excess I = 0 and 1 are obtained. A small difference is found in the IBDs with the b 1 and b 2 limitations in the AMD simulated reactions. The IBD with b 1 and b 2 are quite similar in the AMD + GEMINI simulated reactions. Conclusions: The IBDs for the I =0 and 1 chains are mainly determined by the central collisions, which reflects the nuclear density in the core region of the reaction system. The increasing part of the IBD distribution is found due to the difference between the densities in the peripheral collisions of the reactions. The sequential decay process influences the IBD results. The AMD + GEMINI simulation can better reproduce the experimental IBDs than the AMD simulation.
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular data assimilation sequential techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also accounting for error distributions. Particularly, the deterministic EnKF is tested to avoid perturbing observations before assimilation (that is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering the entire Australia. To evaluate the filters performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively, improving the model groundwater estimations errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
Constant speed control of four-stroke micro internal combustion swing engine
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Zhu, Honghai; Ni, Jun
2015-09-01
The increasing demands on safety, emission and fuel consumption require more accurate control models of micro internal combustion swing engine (MICSE). The objective of this paper is to investigate the constant speed control models of four-stroke MICSE. The operation principle of the four-stroke MICSE is presented based on the description of MICSE prototype. A two-level Petri net based hybrid model is proposed to model the four-stroke MICSE engine cycle. The Petri net subsystem at the upper level controls and synchronizes the four Petri net subsystems at the lower level. The continuous sub-models, including breathing dynamics of intake manifold, thermodynamics of the chamber and dynamics of the torque generation, are investigated and integrated with the discrete model in MATLAB Simulink. Through the comparison of experimental data and simulated DC voltage output, it is demonstrated that the hybrid model is valid for the four-stroke MICSE system. A nonlinear model is obtained from the cycle average data via the regression method, and it is linearized around a given nominal equilibrium point for the controller design. The feedback controller of the spark timing and valve duration timing is designed with a sequential loop closing design approach. The simulation of the sequential loop closure control design applied to the hybrid model is implemented in MATLAB. The simulation results show that the system is able to reach its desired operating point within 0.2 s, and the designed controller shows good MICSE engine performance with a constant speed. This paper presents the constant speed control models of four-stroke MICSE and carries out the simulation tests, the models and the simulation results can be used for further study on the precision control of four-stroke MICSE.
ERIC Educational Resources Information Center
Scherer, Aaron M.; Windschitl, Paul D.; O'Rourke, Jillian; Smith, Andrew R.
2012-01-01
People must often engage in sequential sampling in order to make predictions about the relative quantities of two options. We investigated how directional motives influence sampling selections and resulting predictions in such cases. We used a paradigm in which participants had limited time to sample items and make predictions about which side of…
ERIC Educational Resources Information Center
Biag, Manuelito; Williams, Imeh
2014-01-01
Research demonstrates that students' success in rigorous middle and high school math courses is positively associated with their admission to college, earnings later in life, and career prospects. The sequential nature of math course-taking, however, can create an opportunity structure that puts certain students at a disadvantage, specifically…
ERIC Educational Resources Information Center
Murray, Laura Carolyn
2017-01-01
This dissertation study employs an exploratory sequential mixed methods design to investigate how emerging adults with psychiatric disabilities plan for and transition to and through college. Special attention is paid to how disclosure of disability status in educational contexts can influence both educational and recovery outcomes. Though more…
USDA-ARS?s Scientific Manuscript database
Bolland and colleagues performed a meta-analysis of randomized, controlled trials and concluded that vitamin D is ineffective in lowering risk of fracture, cancer, vascular disease, and mortality. While we agree that this analysis addresses an important question about the efficacy of vitamin D usin...
ERIC Educational Resources Information Center
Jaakkola, Tomi; Nurmi, Sami; Veermans, Koen
2011-01-01
The aim of this experimental study was to compare learning outcomes of students using a simulation alone (simulation environment) with outcomes of those using a simulation in parallel with real circuits (combination environment) in the domain of electricity, and to explore how learning outcomes in these environments are mediated by implicit (only…
USDA-ARS?s Scientific Manuscript database
For more than three decades, researchers have utilized the Snowmelt Runoff Model (SRM) to test the impacts of climate change on streamflow of snow-fed systems. In this study, the hydrological effects of climate change are modeled over three sequential years using SRM with both typical and recommende...
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processer. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP).more » These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.« less
Antipova, Anna S; Zelikina, Darya V; Shumilina, Elena A; Semenova, Maria G
2016-10-01
The present work is focused on the structural transformation of the complexes, formed between covalent conjugate (sodium caseinate + maltodextrin) and an equimass mixture of the polyunsaturated lipids (PULs): (soy phosphatidylcholine + triglycerides of flaxseed oil) stabilized by a plant antioxidant (an essential oil of clove buds), in the simulated conditions of the gastrointestinal tract. The conjugate was used here as a food-grade delivery vehicle for the PULs. The release of these PULs at each stage of the simulated digestion was estimated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. Particularly, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for the target detection that observers' attention is captured by the onset signal of a suddenly appearing target among the continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation where the global apparent motion onset signal (i.e., signal of the initiation of the apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that overall performance of radiologists is better than novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early than late in the image sequence. We suggest that radiologists have other mechanisms to detect lesions in medical images with little attention which novices do not have. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. Particularly, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for the target detection that observers' attention is captured by the onset signal of a suddenly appearing target among the continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation where the global apparent motion onset signal (i.e., signal of the initiation of the apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that overall performance of radiologists is better than novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early than late in the image sequence. We suggest that radiologists have other mechanisms to detect lesions in medical images with little attention which novices do not have. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks. PMID:27774080
Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang
2016-01-15
The analytical methodologies for evaluation of multi-component system in traditional Chinese medicines (TCMs) have been inadequate or unacceptable. As a result, the unclarity of multi-component hinders the sufficient interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy focused on the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits and mass spectral trees similarity filter (MTSF), and even more isomerism discrimination. It was successfully applied in the HRMS data-acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive revelation of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design could improve the efficiency to trigger MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large information data. Besides, the introduction of diagnostic product ions (DPIs) discrimination, ClogP analysis, and molecular simulation, raised the efficiency and accuracy to characterize sequential constituents especially position and geometric isomers. In conclusion, the results expanded our understanding on CGAs in FLJ, and the strategy could be exemplary for future research on the comprehensive identification of sequential constituents in TCMs. Meanwhile, it may propose a novel idea for analyzing sequential constituents, and is promising for quality control and evaluation of TCMs. Copyright © 2015 Elsevier B.V. All rights reserved.
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifacts-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighed TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
Impact of a Sequential Intervention on Albumin Utilization in Critical Care.
Lyu, Peter F; Hockenberry, Jason M; Gaydos, Laura M; Howard, David H; Buchman, Timothy G; Murphy, David J
2016-07-01
Literature generally finds no advantages in mortality risk for albumin over cheaper alternatives in many settings. Few studies have combined financial and nonfinancial strategies to reduce albumin overuse. We evaluated the effect of a sequential multifaceted intervention on decreasing albumin use in ICU and explore the effects of different strategies. Prospective prepost cohort study. Eight ICUs at two hospitals in an academic healthcare system. Adult patients admitted to study ICUs from September 2011 to August 2014 (n = 22,004). Over 2 years, providers in study ICUs participated in an intervention to reduce albumin use involving monthly feedback and explicit financial incentives in the first year and internal guidelines and order process changes in the second year. Outcomes measured were albumin orders per ICU admission, direct albumin costs, and mortality. Mean (SD) utilization decreased 37% from 2.7 orders (6.8) per admission during the baseline to 1.7 orders (4.6) during the intervention (p < 0.001). Regression analysis revealed that the intervention was independently associated with 0.9 fewer orders per admission, a 42% relative decrease. This adjusted effect consisted of an 18% reduction in the probability of using any albumin (p < 0.001) and a 29% reduction in the number of orders per admission among patients receiving any (p < 0.001). Secondary analysis revealed that probability reductions were concurrent with internal guidelines and order process modification while reductions in quantity occurred largely during the financial incentives and feedback period. Estimated cost savings totaled $2.5M during the 2-year intervention. There was no significant difference in ICU or hospital mortality between baseline and intervention. A sequential intervention achieved significant reductions in ICU albumin use and cost savings without changes in patient outcomes, supporting the combination of financial and nonfinancial strategies to align providers with evidence-based practices.
Wang, Shifei; Li, Hairui; He, Nvqin; Sun, Yili; Guo, Shengcun; Liao, Wangjun; Liao, Yulin; Chen, Yanmei; Bin, Jianping
2017-01-15
The impact of remote ischaemic preconditioning (RIPC) on major clinical outcomes in patients undergoing cardiovascular surgery remains controversial. We systematically reviewed the available evidence to evaluate the potential benefits of RIPC in such patients. PubMed, Embase, and Cochrane Library databases were searched for relevant randomised controlled trials (RCTs) conducted between January 2006 and March 2016. The pooled population of patients who underwent cardiovascular surgery was divided into the RIPC and control groups. Trial sequential analysis was applied to judge data reliability. The pooled relative risks (RRs) with 95% confidence intervals (CIs) between the groups were calculated for all-cause mortality, major adverse cardiovascular and cerebral events (MACCEs), myocardial infarction (MI), and renal failure. RIPC was not associated with improvement in all-cause mortality (RR, 1.04; 95%CI, 0.82-1.31; I 2 =26%; P>0.05) or MACCE incidence (RR, 0.90; 95%CI, 0.71-1.14; I 2 =40%; P>0.05) after cardiovascular surgery, and both results were assessed by trial sequential analysis as sufficient and conclusive. Nevertheless, RIPC was associated with a significantly lower incidence of MI (RR, 0.87; 95%CI, 0.76-1.00; I 2 =13%; P≤0.05). However, after excluding a study that had a high contribution to heterogeneity, RIPC was associated with increased rates of renal failure (RR, 1.53; 95%CI, 1.12-2.10; I 2 =5%; P≤0.05). In patients undergoing cardiovascular surgery, RIPC reduced the risk for postoperative MI, but not that for MACCEs or all-cause mortality, a discrepancy likely related to the higher rate of renal failure associated with RIPC. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Wang, Jia-Zhong; Liu, Yang; Wang, Jin-Long; Lu, Le; Zhang, Ya-Fei; Lu, Hong-Wei; Li, Yi-Ming
2015-01-01
AIM: We undertook this meta-analysis to investigate the relationship between revascularization and outcomes after liver transplantation. METHODS: A literature search was performed using MeSH and key words. The quality of the included studies was assessed using the Jadad Score and the Newcastle-Ottawa Scale. Heterogeneity was evaluated by the χ2 and I2 tests. The risk of publication bias was assessed using a funnel plot and Egger’s test, and the risk of bias was assessed using a domain-based assessment tool. A sensitivity analysis was conducted by reanalyzing the data using different statistical approaches. RESULTS: Six studies with a total of 467 patients were included. Ischemic-type biliary lesions were significantly reduced in the simultaneous revascularization group compared with the sequential revascularization group (OR = 4.97, 95%CI: 2.45-10.07; P < 0.00001), and intensive care unit (ICU) days were decreased (MD = 2.00, 95%CI: 0.55-3.45; P = 0.007) in the simultaneous revascularization group. Although warm ischemia time was prolonged in simultaneous revascularization group (MD = -25.84, 95%CI: -29.28-22.40; P < 0.00001), there were no significant differences in other outcomes between sequential and simultaneous revascularization groups. Assessment of the risk of bias showed that the methods of random sequence generation and blinding might have been a source of bias. The sensitivity analysis strengthened the reliability of the results of this meta-analysis. CONCLUSION: The results of this study indicate that simultaneous revascularization in liver transplantation may reduce the incidence of ischemic-type biliary lesions and length of stay of patients in the ICU. PMID:26078582
Danwanichakul, Panu; Glandt, Eduardo D
2004-11-15
We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.
NASA Astrophysics Data System (ADS)
Danwanichakul, Panu; Glandt, Eduardo D.
2004-11-01
We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.
Vorstenbosch, Joshua; Islur, Avi
2017-06-01
Breast augmentation is among the most frequently performed cosmetic plastic surgeries. Providing patients with "realistic" 3D simulations of breast augmentation outcomes is becoming increasingly common. Until recently, such programs were costly and required significant equipment, training, and office space. New simple user-friendly cloud-based programs have been developed, but to date there remains a paucity of objective evidence comparing these 3D simulations with the post-operative outcomes. To determine the aesthetic similarity between pre-operative 3D simulation generated by Crisalix and real post-operative outcomes. A retrospective review of 20 patients receiving bilateral breast augmentation was conducted comparing 6-month post-operative outcomes with 3D simulation using Crisalix software. Similarities between post-operative and simulated images were measured by three attending plastic surgeons and ten plastic surgery residents using a series of parameters. Assessment reveals similarity between the 3D simulation and 6-month post-operative images for overall appearance, breast height, breast width, breast volume, breast projection, and nipple correction. Crisalix software generated more representative simulations for symmetric breasts than for tuberous or ptotic breasts. Comparison of overall aesthetic outcome to simulation showed that the post-operative outcome was more appealing for the symmetric and tuberous breasts and less appealing for the ptotic breasts. Our data suggest that Crisalix offers a good overall 3D simulated image of post-operative breast augmentation outcomes. Improvements to the simulation of the post-operative outcomes for ptotic and tuberous breasts would result in greater predictive capabilities of Crisalix. Collectively, Crisalix offers good predictive simulations for symmetric breasts. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .
Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2002-01-01
A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuver. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also utilized to compute dynamic stability derivatives. The results show that both the body roll rate and canard dither motion influence the roll-averaged forces and moments on the body. At the relatively, low roll rates analyzed in the current work these dynamic effects are modest, however the dynamic computations are effective in predicting the dynamic stability derivatives which can be significant for highly-maneuverable missiles.
First-principles simulations of heat transport
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
2017-11-01
Advances in understanding heat transport in solids were recently reported by both experiment and theory. However an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close to equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.
Cestário, Elizabeth do Espirito Santo; Fernandes, Letícia Aparecida Barufi; Giollo-Júnior, Luiz Tadeu; Uyemura, Jéssica Rodrigues Roma; Matarucco, Camila Suemi Sato; Landim, Manoel Idelfonso Paz; Cosenso-Martin, Luciana Neves; Tácito, Lúcia Helena Bonalume; Moreno, Heitor; Vilela-Martin, José Fernando; Yugar-Toledo, Juan Carlos
2018-02-12
Resistant hypertension is characterized when the blood pressure (BP) remains above the recommended goal after taking three antihypertensive drugs with synergistic actions at their maximum recommended tolerated doses, preferably including a diuretic. Identifying the contribution of intravascular volume and serum renin in maintaining BP levels could help tailor more effective hypertension treatment, whether acting on the control of intravascular volume or sodium balance, or acting on the effects of the renin-angiotensin-aldosterone system (RAAS) on the kidney. This is a randomized, open-label, clinical trial is designed to compare sequential nephron blockade and its contribution to the intravascular volume component with dual blockade of the RAAS plus bisoprolol and the importance of serum renin in maintaining BP levels. The trial has two arms: sequential nephron blockade versus dual blockade of the RAAS (with an angiotensin converting enzyme (ACE) inhibitor plus a beta-blocker) both added-on to a thiazide diuretic, a calcium-channel blocker and an angiotensin receptor-1 blocker (ARB). Sequential nephron blockade consists in a progressive increase in sodium depletion using a thiazide diuretic, an aldosterone-receptor blocker, furosemide and, finally, amiloride. On the other hand, the dual blockade of the RAAS consists of the progressive addition of an ACE inhibitor until the maximum dose and then the administration of a beta-blocker until the maximum dose. The primary outcomes will be reductions in the systolic BP, diastolic BP, mean BP and pulse pressure (PP) after 20 weeks of treatment. The secondary outcomes will evaluate treatment safety and tolerability, biochemical changes, evaluation of renal function and recognition of hypotension (ambulatory BP monitoring (ABPM)). The sample size was calculated assuming an alpha error of 5% to reject the null hypothesis with a statistical power of 80% giving a total of 40 individuals per group. In recent years, the cost of resistant hypertension (RH) treatment has increased. Thus, identifying the contribution of intravascular volume and serum renin in maintaining BP levels could help tailor more effective hypertension treatment, whether by acting on the control of intravascular volume or sodium balance, or by acting on the effects of the RAAS on the kidney. Sequential Nephron Blockade vs. Dual Blockade Renin-angiotensin System + Bisoprolol in Resistant Arterial Hypertension (ResHypOT). ClinicalTrials.gov, ID: NCT02832973 . Registered on 14 July 2016. First received: 12 June 2016. Last updated: 18 July 2016.
Blunt pancreatic trauma: A persistent diagnostic conundrum?
Kumar, Atin; Panda, Ananya; Gamanagatti, Shivanand
2016-01-01
Blunt pancreatic trauma is an uncommon injury but has high morbidity and mortality. In modern era of trauma care, pancreatic trauma remains a persistent challenge to radiologists and surgeons alike. Early detection of pancreatic trauma is essential to prevent subsequent complications. However early pancreatic injury is often subtle on computed tomography (CT) and can be missed unless specifically looked for. Signs of pancreatic injury on CT include laceration, transection, bulky pancreas, heterogeneous enhancement, peripancreatic fluid and signs of pancreatitis. Pan-creatic ductal injury is a vital decision-making parameter as ductal injury is an indication for laparotomy. While lacerations involving more than half of pancreatic parenchyma are suggestive of ductal injury on CT, ductal injuries can be directly assessed on magnetic resonance imaging (MRI) or encoscopic retrograde cholangio-pancreatography. Pancreatic trauma also shows temporal evolution with increase in extent of injury with time. Hence early CT scans may underestimate the extent of injures and sequential imaging with CT or MRI is important in pancreatic trauma. Sequential imaging is also needed for successful non-operative management of pancreatic injury. Accurate early detection on initial CT and adopting a multimodality and sequential imaging strategy can improve outcome in pancreatic trauma. PMID:26981225
Samuel, V; Gamble, C; Cullington, H; Bathgate, F; Bennett, E; Coop, N; Cropper, J; Emond, A; Kentish, R; Edwards, L
2016-11-01
In contrast to previous clinical practice, current guidelines recommend bilateral cochlear implantation in children, resulting in a cohort of children who initially received one implant, but have subsequently had a second, contralateral implant. This study aimed to explore satisfaction and quality of life in children implanted simultaneously or sequentially. A novel measure of satisfaction and quality of life following paediatric bilateral cochlear implantation (the Brief Assessment of Parental Perception; BAPP) was developed and preliminary validation undertaken as part of a large, national project of bilateral implantation. Children's parents completed the measure yearly for up to three years following implantation. Children from 14 UK implant centres were recruited into the study; data were available for 410 children one year post-implantation. The BAPP was found to have good face and convergent validity, and internal consistency. Results indicated very high levels of satisfaction with the devices, and improvements in quality of life. However there was evidence that children implanted sequentially were less willing to wear their second implant in the first two years than those children receiving simultaneous implants. Simultaneous and sequential cochlear implants have a positive impact on the quality of life of deaf children.
Impact of switching crop type on water and solute fluxes in deep vadose zone
NASA Astrophysics Data System (ADS)
Turkeltaub, T.; Kurtzman, D.; Russak, E. E.; Dahan, O.
2015-12-01
Switching crop type and consequently changing irrigation and fertilization regimes lead to alterations in deep percolation and solute concentrations of pore water. Herein, observations from the deep vadose zone and model simulations demonstrate the changes in water, chloride, and nitrate fluxes under a commercial greenhouse following the change from tomato to lettuce cropping. The site, located above a phreatic aquifer, was monitored for 5 years. A vadose-zone monitoring system was implemented under the greenhouse and provided continuous data on both temporal variations in water content and chemical composition of the pore water at multiple depths in the deep vadose zone (up to 20 m). Following crop switching, a significant reduction in chloride concentration and dramatic increase in nitrate were observed across the unsaturated zone. The changes in chemical composition of the vadose-zone pore water appeared as sequential breakthroughs across the unsaturated zone, initiating at land surface and propagating down toward the water table. Today, 3 years after switching the crops, penetration of the impact exceeds 10 m depth. Variations in the isotopic composition of nitrate (18O and 15N) in water samples obtained from the entire vadose zone clearly support a fast leaching process and mobilization of solutes across the unsaturated zone following the change in crop type. Water flow and chloride transport models were calibrated to observations acquired during an enhanced infiltration experiment. Forward simulation runs were performed with the calibrated models, constrained to tomato and lettuce cultivation regimes as surface boundary conditions. Predicted chloride and nitrate concentrations were in agreement with the observed concentrations. The simulated water drainage and nitrogen leaching implied that the observed changes are an outcome of recommended agricultural management practices.
Sjövall, Fredrik; Perner, Anders; Hylander Møller, Morten
2017-04-01
To assess benefits and harms of empirical mono- vs. combination antibiotic therapy in adult patients with severe sepsis in the intensive care unit (ICU). We performed a systematic review according to the Cochrane Collaboration methodology, including meta-analysis, risk of bias assessment and trial sequential analysis (TSA). We included randomised clinical trials (RCT) assessing empirical mono-antibiotic therapy versus a combination of two or more antibiotics in adult ICU patients with severe sepsis. We exclusively assessed patient-important outcomes, including mortality. Two reviewers independently evaluated studies for inclusion, extracted data, and assessed risk of bias. Risk ratios (RRs) with 95% confidence intervals (CIs) were estimated and the risk of random errors was assessed by TSA. Thirteen RCTs (n = 2633) were included; all were judged as having high risk of bias. Carbapenems were the most frequently used mono-antibiotic (8 of 13 trials). There was no difference in mortality (RR 1.11, 95% CI 0.95-1.29; p = 0.19) or in any other patient-important outcomes between mono- vs. combination therapy. In TSA of mortality, the Z-curve reached the futility area, indicating that a 20% relative risk difference in mortality may be excluded between the two groups. For the other outcomes, TSA indicated lack of data and high risk of random errors. This systematic review of RCTs with meta-analysis and TSA demonstrated no differences in mortality or other patient-important outcomes between empirical mono- vs. combination antibiotic therapy in adult ICU patients with severe sepsis. The quantity and quality of data was low without firm evidence for benefit or harm of combination therapy. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
López-Cano, M; Brandsma, H-T; Bury, K; Hansson, B; Kyle-Leinhase, I; Alamino, J G; Muysoms, F
2017-04-01
Prevention of parastomal hernia (PSH) formation is crucial, given the high prevalence and difficulties in the surgical repair of PSH. To investigate the effect of a preventive mesh in PSH formation after an end colostomy, we aimed to meta-analyze all relevant randomized controlled trials (RCTs). We searched five databases. For each trial, we extracted risk ratios (RRs) of the effects of mesh or no mesh. The primary outcome was incidence of PSH with a minimum follow-up of 12 months with a clinical and/or computed tomography diagnosis. RRs were combined using the random-effect model (Mantel-Haenszel). To control the risk of type I error, we performed a trial sequential analysis (TSA). Seven RCTs with low risk of bias (451 patients) were included. Meta-analysis for primary outcome showed a significant reduction of the incidence of PSH using a mesh (RR 0.43, 95% CI 0.26-0.71; P = 0.0009). Regarding TSA calculation for the primary outcome, the accrued information size (451) was 187.1% of the estimated required information size (RIS) (241). Wound infection showed no statistical differences between groups (RR 0.77, 95% CI 0.39-1.54; P = 0.46). PSH repair rate showed a significant reduction in the mesh group (RR 0.28 (95% CI 0.10-0.78; P = 0.01). PSH prevention with mesh when creating an end colostomy reduces the incidence of PSH, the risk for subsequent PSH repair and does not increase wound infections. TSA shows that the RIS is reached for the primary outcome. Additional RCTs in the previous context are not needed.
Koster, Geert; Bekema, Hanneke J; Wetterslev, Jørn; Gluud, Christian; Keus, Frederik; van der Horst, Iwan C C
2016-09-01
Milrinone is an inotrope widely used for treatment of cardiac failure. Because previous meta-analyses had methodological flaws, we decided to conduct a systematic review of the effect of milrinone in critically ill adult patients with cardiac dysfunction. This systematic review was performed according to The Cochrane Handbook for Systematic Reviews of Interventions. Searches were conducted until November 2015. Patients with cardiac dysfunction were included. The primary outcome was serious adverse events (SAE) including mortality at maximum follow-up. The risk of bias was evaluated and trial sequential analyses were conducted. The quality of evidence was assessed by the Grading of Recommendations Assessment, Development and Evaluation criteria. A total of 31 randomised clinical trials fulfilled the inclusion criteria, of which 16 provided data for our analyses. All trials were at high risk of bias, and none reported the primary composite outcome SAE. Fourteen trials with 1611 randomised patients reported mortality data at maximum follow-up (RR 0.96; 95% confidence interval 0.76-1.21). Milrinone did not significantly affect other patient-centred outcomes. All analyses displayed statistical and/or clinical heterogeneity of patients, interventions, comparators, outcomes, and/or settings and all featured missing data. The current evidence on the use of milrinone in critically ill adult patients with cardiac dysfunction suffers from considerable risks of both bias and random error and demonstrates no benefits. The use of milrinone for the treatment of critically ill patients with cardiac dysfunction can be neither recommended nor refuted. Future randomised clinical trials need to be sufficiently large and designed to have low risk of bias.
Butler, Katherine; Ramphul, Meenakshi; Dunney, Clare; Farren, Maria; McSweeney, Aoife; McNamara, Karen; Murphy, Deirdre J
2014-10-29
To evaluate maternal and neonatal outcomes associated with operative vaginal deliveries (OVDs) performed by day and at night. Prospective cohort study. Urban maternity unit in Ireland with off-site consultant staff at night. All nulliparous women requiring an OVD with a term singleton fetus in a cephalic presentation from February to November 2013. Delivery outcomes were compared for women who delivered by day (08:00-19:59) or at night (20:00-07:59). The main outcomes included postpartum haemorrhage (PPH), anal sphincter tear and neonatal unit admission. Procedural factors included operator grade, sequential use of instruments and caesarean section. Of the 597 women who required an OVD, 296 (50%) delivered at night. Choice of instrument, place of delivery, sequential use of instruments and caesarean section did not differ significantly in relation to time of birth. Mid-grade operators performed less OVDs by day than at night, OR 0.60 (95% CI 0.43 to 0.83), and a consultant supervisor was more frequently present by day, OR 2.26 (95% CI 1.05 to 4.83). Shoulder dystocia occurred more commonly by day, OR 2.57 (95% CI 1.05 to 6.28). The incidence of PPH, anal sphincter tears, neonatal unit admission, fetal acidosis and neonatal trauma was similar by day and at night. The mean decision to delivery intervals were 12.0 and 12.6 min, respectively. There was no evidence of an association between time of OVD and adverse perinatal outcomes despite off-site consultant obstetric support at night. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Meloni, Gregory R; Fisher, Matthew B; Stoeckl, Brendan D; Dodge, George R; Mauck, Robert L
2017-07-01
Cartilage tissue engineering is emerging as a promising treatment for osteoarthritis, and the field has progressed toward utilizing large animal models for proof of concept and preclinical studies. Mechanical testing of the regenerative tissue is an essential outcome for functional evaluation. However, testing modalities and constitutive frameworks used to evaluate in vitro grown samples differ substantially from those used to evaluate in vivo derived samples. To address this, we developed finite element (FE) models (using FEBio) of unconfined compression and indentation testing, modalities commonly used for such samples. We determined the model sensitivity to tissue radius and subchondral bone modulus, as well as its ability to estimate material parameters using the built-in parameter optimization tool in FEBio. We then sequentially tested agarose gels of 4%, 6%, 8%, and 10% weight/weight using a custom indentation platform, followed by unconfined compression. Similarly, we evaluated the ability of the model to generate material parameters for living constructs by evaluating engineered cartilage. Juvenile bovine mesenchymal stem cells were seeded (2 × 10 7 cells/mL) in 1% weight/volume hyaluronic acid hydrogels and cultured in a chondrogenic medium for 3, 6, and 9 weeks. Samples were planed and tested sequentially in indentation and unconfined compression. The model successfully completed parameter optimization routines for each testing modality for both acellular and cell-based constructs. Traditional outcome measures and the FE-derived outcomes showed significant changes in material properties during the maturation of engineered cartilage tissue, capturing dynamic changes in functional tissue mechanics. These outcomes were significantly correlated with one another, establishing this FE modeling approach as a singular method for the evaluation of functional engineered and native tissue regeneration, both in vitro and in vivo.
Leitão, Cristiane B; Gross, Jorge L
2017-01-01
Objective To evaluate the efficacy of coronary artery disease screening in asymptomatic patients with type 2 diabetes and assess the statistical reliability of the findings. Methods Electronic databases (MEDLINE, EMBASE, Cochrane Library and clinicaltrials.org) were reviewed up to July 2016. Randomised controlled trials evaluating coronary artery disease screening in asymptomatic patients with type 2 diabetes and reporting cardiovascular events and/or mortality were included. Data were summarised with Mantel-Haenszel relative risk. Trial sequential analysis (TSA) was used to evaluate the optimal sample size to detect a 40% reduction in outcomes. Main outcomes were all-cause mortality and cardiac events (non-fatal myocardial infarction and cardiovascular death); secondary outcomes were non-fatal myocardial infarction, myocardial revascularisations and heart failure. Results One hundred thirty-five references were identified and 5 studies fulfilled the inclusion criteria and totalised 3315 patients, 117 all-cause deaths and 100 cardiac events. Screening for coronary artery disease was not associated with decrease in risk for all-cause deaths (RR 0.95(95% CI 0.66 to 1.35)) or cardiac events (RR 0.72(95% CI 0.49 to 1.06)). TSA shows that futility boundaries were reached for all-cause mortality and a relative risk reduction of 40% between treatments could be discarded. However, there is not enough information for firm conclusions for cardiac events. For secondary outcomes no benefit or harm was identified; optimal sample sizes were not reached. Conclusion Current available data do not support screening for coronary artery disease in patients with type 2 diabetes for preventing fatal events. Further studies are needed to assess the effects on cardiac events. PROSPERO CRD42015026627. PMID:28490559
Acetylcholine-modulated plasticity in reward-driven navigation: a computational study.
Zannone, Sara; Brzosko, Zuzanna; Paulsen, Ole; Clopath, Claudia
2018-06-21
Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules.
Sequential processing deficits in schizophrenia: relationship to neuropsychology and genetics.
Hill, S Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E; Hochberger, William C; Bishop, Jeffrey R
2013-12-01
Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the Catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries. © 2013.
Sequential Processing Deficits in Schizophrenia: Relationship to Neuropsychology and Genetics
Hill, S. Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E.; Hochberger, William C.; Bishop, Jeffrey R.
2014-01-01
Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the Catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries. PMID:24119464
ERIC Educational Resources Information Center
Osman, Magda; Wilkinson, Leonora; Beigi, Mazda; Castaneda, Cristina Sanchez; Jahanshahi, Marjan
2008-01-01
The striatum is considered to mediate some forms of procedural learning. Complex dynamic control (CDC) tasks involve an individual having to make a series of sequential decisions to achieve a specific outcome (e.g. learning to operate and control a car), and they involve procedural learning. The aim of this study was to test the hypothesis that…
Kähler, Pernille; Grevstad, Berit; Almdal, Thomas; Gluud, Christian; Wetterslev, Jørn; Lund, Søren Søgaard; Vaag, Allan; Hemmingsen, Bianca
2014-08-19
To assess the benefits and harms of targeting intensive versus conventional glycaemic control in patients with type 1 diabetes mellitus. A systematic review with meta-analyses and trial sequential analyses of randomised clinical trials. The Cochrane Library, MEDLINE, EMBASE, Science Citation Index Expanded and LILACS to January 2013. Randomised clinical trials that prespecified different targets of glycaemic control in participants at any age with type 1 diabetes mellitus were included. Two authors independently assessed studies for inclusion and extracted data. 18 randomised clinical trials included 2254 participants with type 1 diabetes mellitus. All trials had high risk of bias. There was no statistically significant effect of targeting intensive glycaemic control on all-cause mortality (risk ratio 1.16, 95% CI 0.65 to 2.08) or cardiovascular mortality (0.49, 0.19 to 1.24). Targeting intensive glycaemic control reduced the relative risks for the composite macrovascular outcome (0.63, 0.41 to 0.96; p=0.03), and nephropathy (0.37, 0.27 to 0.50; p<0.00001. The effect estimates of retinopathy, ketoacidosis and retinal photocoagulation were not consistently statistically significant between random and fixed effects models. The risk of severe hypoglycaemia was significantly increased with intensive glycaemic targets (1.40, 1.01 to 1.94). Trial sequential analyses showed that the amount of data needed to demonstrate a relative risk reduction of 10% were, in general, inadequate. There was no significant effect towards improved all-cause mortality when targeting intensive glycaemic control compared with conventional glycaemic control. However, there may be beneficial effects of targeting intensive glycaemic control on the composite macrovascular outcome and on nephropathy, and detrimental effects on severe hypoglycaemia. Notably, the data for retinopathy and ketoacidosis were inconsistent. There was a severe lack of reporting on patient relevant outcomes, and all trials had poor bias control. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
Sequential Geoacoustic Filtering and Geoacoustic Inversion
2015-09-30
and online algorithms. We show here that CS obtains higher resolution than MVDR, even in scenarios, which favor classical high-resolution methods...windows actually performs better than conventional beamforming and MVDR/ MUSIC (see Figs. 1-2). Compressive geoacoustic inversion Geoacoustic...histograms based on 100 Monte Carlo simulations, and c)(CS, exhaustive-search, CBF, MVDR, and MUSIC performance versus SNR. The true source positions
1998-10-01
The efficient design of a free play , 24 hour per day, operational test (OT) of an ASW search system remains a challenge to the OT community. It will...efficient, realistic, free play , 24 hour per day OT. The basic test control premise described here is to stop the test event if the time without a
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
Spiking neural network model for memorizing sequences with forward and backward recall.
Borisyuk, Roman; Chik, David; Kazanovich, Yakov; da Silva Gomes, João
2013-06-01
We present an oscillatory network of conductance based spiking neurons of Hodgkin-Huxley type as a model of memory storage and retrieval of sequences of events (or objects). The model is inspired by psychological and neurobiological evidence on sequential memories. The building block of the model is an oscillatory module which contains excitatory and inhibitory neurons with all-to-all connections. The connection architecture comprises two layers. A lower layer represents consecutive events during their storage and recall. This layer is composed of oscillatory modules. Plastic excitatory connections between the modules are implemented using an STDP type learning rule for sequential storage. Excitatory neurons in the upper layer project star-like modifiable connections toward the excitatory lower layer neurons. These neurons in the upper layer are used to tag sequences of events represented in the lower layer. Computer simulations demonstrate good performance of the model including difficult cases when different sequences contain overlapping events. We show that the model with STDP type or anti-STDP type learning rules can be applied for the simulation of forward and backward replay of neural spikes respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Lin, Carol Y; Li, Ling
2016-11-07
HPV DNA diagnostic tests for epidemiology monitoring (research purpose) or cervical cancer screening (clinical purpose) have often been considered separately. Women with positive Linear Array (LA) polymerase chain reaction (PCR) research test results typically are neither informed nor referred for colposcopy. Recently, a sequential testing by using Hybrid Capture 2 (HC2) HPV clinical test as a triage before genotype by LA has been adopted for monitoring HPV infections. Also, HC2 has been reported as a more feasible screening approach for cervical cancer in low-resource countries. Thus, knowing the performance of testing strategies incorporating HPV clinical test (i.e., HC2-only or using HC2 as a triage before genotype by LA) compared with LA-only testing in measuring HPV prevalence will be informative for public health practice. We conducted a Monte Carlo simulation study. Data were generated using mathematical algorithms. We designated the reported HPV infection prevalence in the U.S. and Latin America as the "true" underlying type-specific HPV prevalence. Analytical sensitivity of HC2 for detecting 14 high-risk (oncogenic) types was considered to be less than LA. Estimated-to-true prevalence ratios and percentage reductions were calculated. When the "true" HPV prevalence was designated as the reported prevalence in the U.S., with LA genotyping sensitivity and specificity of (0.95, 0.95), estimated-to-true prevalence ratios of 14 high-risk types were 2.132, 1.056, 0.958 for LA-only, HC2-only, and sequential testing, respectively. Estimated-to-true prevalence ratios of two vaccine-associated high-risk types were 2.359 and 1.063 for LA-only and sequential testing, respectively. When designated type-specific prevalence of HPV16 and 18 were reduced by 50 %, using either LA-only or sequential testing, prevalence estimates were reduced by 18 %. Estimated-to-true HPV infection prevalence ratios using LA-only testing strategy are generally higher than using HC2-only or using HC2 as a triage before genotype by LA. HPV clinical testing can be incorporated to monitor HPV prevalence or vaccine effectiveness. Caution is needed when comparing apparent prevalence from different testing strategies.
Bhalodi, Amira A; Hagihara, Mao; Nicolau, David P; Kuti, Joseph L
2014-01-01
The effects of prior vancomycin exposure on ceftaroline and daptomycin therapy against methicillin-resistant Staphylococcus aureus (MRSA) have not been widely studied. Humanized free-drug exposures of vancomycin at 1 g every 12 h (q12h), ceftaroline at 600 mg q12h, and daptomycin at 10 mg/kg of body weight q24h were simulated in a 96-h in vitro pharmacodynamic model against three MRSA isolates, including one heteroresistant vancomycin-intermediate S. aureus (hVISA) isolate and one VISA isolate. A total of five regimens were tested: vancomycin, ceftaroline, and daptomycin alone for the entire 96 h, and then sequential therapy with vancomycin for 48 h followed by ceftaroline or daptomycin for 48 h. Microbiological responses were measured by the changes in log10 CFU during 96 h from baseline. Control isolates grew to 9.16 ± 0.32, 9.13 ± 0.14, and 8.69 ± 0.28 log10 CFU for MRSA, hVISA, and VISA, respectively. Vancomycin initially achieved ≥3 log10 CFU reductions against the MRSA and hVISA isolates, followed by regrowth beginning at 48 h; minimal activity was observed against VISA. The change in 96-h log10 CFU was largest for sequential therapy with vancomycin followed by ceftaroline (-5.22 ± 1.2, P = 0.010 versus ceftaroline) and for sequential therapy with vancomycin followed by ceftaroline (-3.60 ± 0.6, P = 0.037 versus daptomycin), compared with daptomycin (-2.24 ± 1.0), vancomycin (-1.40 ± 1.8), and sequential therapy with vancomycin followed by daptomycin (-1.32 ± 1.0, P > 0.5 for the last three regimens). Prior exposure of vancomycin at 1 g q12h reduced the initial microbiological response of daptomycin, particularly for hVISA and VISA isolates, but did not affect the response of ceftaroline. In the scenario of poor vancomycin response for high-inoculum MRSA infection, a ceftaroline-containing regimen may be preferred.
Sung, Nayoung; Kwak-Kim, Joanne; Koo, H S; Yang, K M
2016-09-01
To investigate hCG-β level on postovulatory day (POD) 12 and its fold increase as predictors for pregnancy outcome after in vitro fertilization (IVF) cycles. A retrospective cohort study was performed in total 1408 fresh and 598 frozen cycles between November 2008 and October 2011, which resulted in biochemical pregnancy, early pregnancy loss, or live birth of singleton pregnancy. The serum hCG-β levels of POD 12 and 14 were compared among biochemical pregnancy, early pregnancy loss, and live birth groups. The cutoff values of POD 12 and 14 hCG-β levels and the degree of hCG-β increase from POD 12 to 14 were determined for each pregnancy outcome. POD 12 and 14 hCG-β levels stratified based on pregnancy outcomes were significantly different among the biochemical pregnancy, early pregnancy loss, and live birth in both fresh and frozen cycles. Serum hCG-β levels of POD 12 and 14 and the fold increase of hCG-β levels from POD 12 to 14 significantly predict pregnancy outcomes after fresh and frozen cycles. Among these, the cutoff value of POD 14 hCG-β had the highest sensitivity and positive predictive value (PPV). In fresh cycles, the cutoff values of POD 12 and 14 serum hCG-β levels for clinical pregnancies were 30.2 mIU/mL (sensitivity 81.3 %, specificity 79.6 %, and PPV 92.3 %) and 70.5 mIU/mL (sensitivity 88.4 %, specificity 85.2 %, and PPV 94.7 %). In pregnancies with POD 12 serum hCG-β levels ≥30.2 mIU/mL, the cutoff level of increase of hCG-β for clinical pregnancy was 2.56 (sensitivity 73.6 %, specificity 72.4 %, and PPV 97.8 %). Sequential application of cutoff values such as POD 12 hCG-β and fold increase of hCG-β improved predictability of pregnancy outcome as compared with that of POD 12 hCG-β alone. The cutoff values of POD 12 and 14 serum hCG-β levels for live birth were 40.5 mIU/mL (sensitivity 75.2 %, specificity 72.6 %, PPV 78.9 %) and 104.5 mIU/mL (sensitivity 80.3 %, specificity 74.1 %, PPV 80.8 %). In the frozen cycles, the cutoff values of POD 12 and 14 serum hCG-β level for clinical pregnancy were 31.5 IU/L (sensitivity 80.4 %, specificity 71.1 % and PPV 90 %) and 43.5 mIU/mL (sensitivity 72.6 %, specificity 71.7 %, PPV 77.2 %). In pregnancies with POD 12 serum hCG-β level ≥31.5 mIU/mL, the cutoff value for fold increase of hCG-β was 2.38 for clinical pregnancy (sensitivity 81.6 %, specificity 71.4 % and PPV 87.9 %). The cutoff values of POD 12 and 14 for live birth were 43.5 mIU/mL (sensitivity 72.6 %, specificity 71.7 %, PPV 77.2 %) and 101.6 mIU/mL (sensitivity 79.6 %, specificity 71.1 %, PPV 78.4 %). Sequential application of cutoff values for POD 12 hCG-β level and fold increase of hCG-β significantly increased PPV for live birth but not clinical pregnancy in frozen cycles. Early prediction of pregnancy outcome by using POD 12 and 14 cutoff levels and sequential application of cutoff value of fold increase could provide appropriate reference to health care providers to initiate earlier management of high-risk pregnancies and precise follow-up of abnormal pregnancies.
A specific PFT and sub-canopy structure for simulating oil palm in the Community Land Model
NASA Astrophysics Data System (ADS)
Fan, Y.; Knohl, A.; Roupsard, O.; Bernoux, M.; LE Maire, G.; Panferov, O.; Kotowska, M.; Meijide, A.
2015-12-01
Towards an effort to quantify the effects of rainforests to oil palm conversion on land-atmosphere carbon, water and energy fluxes, a specific plant functional type (PFT) and sub-canopy structure are developed for simulating oil palm within the Community Land Model (CLM4.5). Current global land surface models only simulate annual crops beside natural vegetation. In this study, a multilayer oil palm subroutine is developed in CLM4.5 for simulating oil palm's phenology and carbon and nitrogen allocation. The oil palm has monopodial morphology and sequential phenology of around 40 stacked phytomers, each carrying a large leaf and a fruit bunch, forming a natural multilayer canopy. A sub-canopy phenological and physiological parameterization is thus introduced, so that multiple phytomer components develop simultaneously but according to their different phenological steps (growth, yield and senescence) at different canopy layers. This specific multilayer structure was proved useful for simulating canopy development in terms of leaf area index (LAI) and fruit yield in terms of carbon and nitrogen outputs in Jambi, Sumatra (Fan et al. 2015). The study supports that species-specific traits, such as palm's monopodial morphology and sequential phenology, are necessary representations in terrestrial biosphere models in order to accurately simulate vegetation dynamics and feedbacks to climate. Further, oil palm's multilayer structure allows adding all canopy-level calculations of radiation, photosynthesis, stomatal conductance and respiration, beside phenology, also to the sub-canopy level, so as to eliminate scale mismatch problem among different processes. A series of adaptations are made to the CLM model. Initial results show that the adapted multilayer radiative transfer scheme and the explicit represention of oil palm's canopy structure improve on simulating photosynthesis-light response curve. The explicit photosynthesis and dynamic leaf nitrogen calculations per canopy layer also enhance simulated CO2 flux when compared to eddy covariance flux data. More investigations on energy and water fluxes and nitrogen balance are being conducted. These new schemes would hopefully promote the understanding of climatic effects of the tropical land use transformation system.
Simulated spaceflight effects on mating and pregnancy of rats
NASA Technical Reports Server (NTRS)
Sabelman, E. E.; Chetirkin, P. V.; Howard, R. M.
1981-01-01
The mating of rats was studied to determine the effects of: simulated reentry stresses at known stages of pregnancy, and full flight simulation, consisting of sequential launch stresses, group housing, mating opportunity, diet, simulated reentry, and postreentry isolation of male and female rats. Uterine contents, adrenal mass and abdominal fat as a proportion of body mass, duration of pregnancy, and number and sex of offspring were studied. It is found that: (1) parturition following full flight simulation was delayed relative to that of controls; (2) litter size was reduced and resorptions increased compared with previous matings in the same group of animals; and (3) abdominal fat was highly elevated in animals that were fed the Soviet paste diet. It is suggested that the combined effects of diet, stress, spacecraft environment, and weightlessness decreased the probability of mating or of viable pregnancies in the Cosmos 1129 flight and control animals.
Data parallel sorting for particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1992-01-01
Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O (N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimun performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
Sequential protein unfolding through a carbon nanotube pore
NASA Astrophysics Data System (ADS)
Xu, Zhonghe; Zhang, Shuang; Weber, Jeffrey K.; Luan, Binquan; Zhou, Ruhong; Li, Jingyuan
2016-06-01
An assortment of biological processes, like protein degradation and the transport of proteins across membranes, depend on protein unfolding events mediated by nanopore interfaces. In this work, we exploit fully atomistic simulations of an artificial, CNT-based nanopore to investigate the nature of ubiquitin unfolding. With one end of the protein subjected to an external force, we observe non-canonical unfolding behaviour as ubiquitin is pulled through the pore opening. Secondary structural elements are sequentially detached from the protein and threaded into the nanotube, interestingly, the remaining part maintains native-like characteristics. The constraints of the nanopore interface thus facilitate the formation of stable ``unfoldon'' motifs above the nanotube aperture that can exist in the absence of specific native contacts with the other secondary structure. Destruction of these unfoldons gives rise to distinct force peaks in our simulations, providing us with a sensitive probe for studying the kinetics of serial unfolding events. Our detailed analysis of nanopore-mediated protein unfolding events not only provides insight into how related processes might proceed in the cell, but also serves to deepen our understanding of structural arrangements which form the basis for protein conformational stability.An assortment of biological processes, like protein degradation and the transport of proteins across membranes, depend on protein unfolding events mediated by nanopore interfaces. In this work, we exploit fully atomistic simulations of an artificial, CNT-based nanopore to investigate the nature of ubiquitin unfolding. With one end of the protein subjected to an external force, we observe non-canonical unfolding behaviour as ubiquitin is pulled through the pore opening. Secondary structural elements are sequentially detached from the protein and threaded into the nanotube, interestingly, the remaining part maintains native-like characteristics. The constraints of the nanopore interface thus facilitate the formation of stable ``unfoldon'' motifs above the nanotube aperture that can exist in the absence of specific native contacts with the other secondary structure. Destruction of these unfoldons gives rise to distinct force peaks in our simulations, providing us with a sensitive probe for studying the kinetics of serial unfolding events. Our detailed analysis of nanopore-mediated protein unfolding events not only provides insight into how related processes might proceed in the cell, but also serves to deepen our understanding of structural arrangements which form the basis for protein conformational stability. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00410e
Ma, Irene W Y; Brindle, Mary E; Ronksley, Paul E; Lorenzetti, Diane L; Sauve, Reg S; Ghali, William A
2011-09-01
Central venous catheterization (CVC) is increasingly taught by simulation. The authors reviewed the literature on the effects of simulation training in CVC on learner and clinical outcomes. The authors searched computerized databases (1950 to May 2010), reference lists, and considered studies with a control group (without simulation education intervention). Two independent assessors reviewed the retrieved citations. Independent data abstraction was performed on study design, study quality score, learner characteristics, sample size, components of interventional curriculum, outcomes assessed, and method of assessment. Learner outcomes included performance measures on simulators, knowledge, and confidence. Patient outcomes included number of needle passes, arterial puncture, pneumothorax, and catheter-related infections. Twenty studies were identified. Simulation-based education was associated with significant improvements in learner outcomes: performance on simulators (standardized mean difference [SMD] 0.60 [95% CI 0.45 to 0.76]), knowledge (SMD 0.60 [95% CI 0.35 to 0.84]), and confidence (SMD 0.41 [95% CI 0.30 to 0.53] for studies with single-group pretest and posttest design; SMD 0.52 (95% CI 0.23 to 0.81) for studies with nonrandomized, two-group design). Furthermore, simulation-based education was associated with improved patient outcomes, including fewer needle passes (SMD -0.58 [95% CI -0.95 to -0.20]), and pneumothorax (relative risk 0.62 [95% CI 0.40 to 0.97]), for studies with nonrandomized, two-group design. However, simulation-based training was not associated with a significant reduction in risk of either arterial puncture or catheter-related infections. Despite some limitations in the literature reviewed, evidence suggests that simulation-based education for CVC provides benefits in learner and select clinical outcomes.
Radio Frequency Ablation Registration, Segmentation, and Fusion Tool
McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.
2008-01-01
The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
A posteriori model validation for the temporal order of directed functional connectivity maps
Beltz, Adriene M.; Molenaar, Peter C. M.
2015-01-01
A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489
Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model
Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...
2015-10-30
We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherentmore » structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.« less
Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.
We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherentmore » structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.« less
NASA Astrophysics Data System (ADS)
Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu
2016-03-01
The determination of in vivo motion of multiple-bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to register three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization of a six degrees of freedom (6DOF) parameter, 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation, 3) a simultaneous approach registering all the bones together (18DOF) and 4) a combination of the sequential and the simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach: with less than 0.68mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation was conducted with 45 trials of a randomly perturbed initialization showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual bone approach (34% success) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).
NASA Astrophysics Data System (ADS)
Tang, Jingshi; Wang, Haihong; Chen, Qiuli; Chen, Zhonggui; Zheng, Jinjun; Cheng, Haowen; Liu, Lin
2018-07-01
Onboard orbit determination (OD) is often used in space missions, with which mission support can be partially accomplished autonomously, with less dependency on ground stations. In major Global Navigation Satellite Systems (GNSS), inter-satellite link is also an essential upgrade in the future generations. To serve for autonomous operation, sequential OD method is crucial to provide real-time or near real-time solutions. The Extended Kalman Filter (EKF) is an effective and convenient sequential estimator that is widely used in onboard application. The filter requires the solutions of state transition matrix (STM) and the process noise transition matrix, which are always obtained by numerical integration. However, numerically integrating the differential equations is a CPU intensive process and consumes a large portion of the time in EKF procedures. In this paper, we present an implementation that uses the analytical solutions of these transition matrices to replace the numerical calculations. This analytical implementation is demonstrated and verified using a fictitious constellation based on selected medium Earth orbit (MEO) and inclined Geosynchronous orbit (IGSO) satellites. We show that this implementation performs effectively and converges quickly, steadily and accurately in the presence of considerable errors in the initial values, measurements and force models. The filter is able to converge within 2-4 h of flight time in our simulation. The observation residual is consistent with simulated measurement error, which is about a few centimeters in our scenarios. Compared to results implemented with numerically integrated STM, the analytical implementation shows results with consistent accuracy, while it takes only about half the CPU time to filter a 10-day measurement series. The future possible extensions are also discussed to fit in various missions.
NASA Astrophysics Data System (ADS)
Cherkashin, N.; Daghbouj, N.; Seine, G.; Claverie, A.
2018-04-01
Sequential He++H+ ion implantation, being more effective than the sole implantation of H+ or He+, is used by many to transfer thin layers of silicon onto different substrates. However, due to the poor understanding of the basic mechanisms involved in such a process, the implantation parameters to be used for the efficient delamination of a superficial layer are still subject to debate. In this work, by using various experimental techniques, we have studied the influence of the He and H relative depth-distributions imposed by the ion energies onto the result of the sequential implantation and annealing of the same fluence of He and H ions. Analyzing the characteristics of the blister populations observed after annealing and deducing the composition of the gas they contain from FEM simulations, we show that the trapping efficiency of He atoms in platelets and blisters during annealing depends on the behavior of the vacancies generated by the two implants within the H-rich region before and after annealing. Maximum efficiency of the sequential ion implantation is obtained when the H-rich region is able to trap all implanted He ions, while the vacancies it generated are not available to favor the formation of V-rich complexes after implantation then He-filled nano-bubbles after annealing. A technological option is to implant He+ ions first at such an energy that the damage it generates is located on the deeper side of the H profile.
Sequential programmable self-assembly: Role of cooperative interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonathan D. Halverson; Tkachenko, Alexei V.
Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenientmore » platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.« less
Sequential programmable self-assembly: Role of cooperative interactions
Jonathan D. Halverson; Tkachenko, Alexei V.
2016-03-04
Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenientmore » platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.« less
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.
2017-01-01
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
Naughton, Peter A; Aggarwal, Rajesh; Wang, Tim T; Van Herzeele, Isabelle; Keeling, Aoife N; Darzi, Ara W; Cheshire, Nicholas J W
2011-03-01
Adoption of residents' working time restrictions potentially undermines surgical training by reduction of operating room exposure. Simulation has been proposed as a way to acquire necessary skills in a laboratory environment but remains difficult to incorporate into training schedules. This study assessed whether residents working successive nights could acquire endovascular skills similar to colleagues working day shifts. This prospective observational cohort study recruited 20 junior residents, divided into day shift and night shift groups by their respective call schedule. After initial cognitive skills training, a validated renal artery stent module on an endovascular simulator was completed over a series of seven sequential shifts during 1 week. The primary outcome measure was serial technical skill assessments. Secondary measures comprised assessments of activity, cognitive performance, introspective fatigue, quality, and quantity of preceding sleep. Both groups demonstrated significant learning curves for total time at the first session median vs seventh session median (181 vs 564 seconds [P < .001]; night, 1399 vs 572 [P < .001]), fluoroscopy time (day, 702 vs 308 seconds, [P < .001]; night, 669 vs 313 [P < .001]), and contrast volume (day, 29 vs 13 mL [P < .001]; night, 40 vs 16 [P < .001]). Residents working day shifts reached plateau 1 day earlier in the above measures vs those on night duty. The night shift group walked more steps (P < .001), reviewed more patients (P < .001), performed worse on all cognitive assessments (P < .05), slept less (P < .05), had poorer quality of sleep (P = .001), and was more fatigued (P < .001) than the day shift group. Acquired skill was retained a week after completion of shifts. Technical skills training after night shift work enables acquisition of endovascular technical skills, although it takes longer than after day shift training. This study provides evidence for program directors to organize simulation-based training schedules for residents on night shift rotations. Copyright © 2011. Published by Mosby, Inc.
Distributional Preferences, Reciprocity-Like Behavior, and Efficiency in Bilateral Exchange
Benjamin, Daniel J.
2014-01-01
Under what conditions do distributional preferences, such as altruism or a concern for fair outcomes, generate efficient trade? I analyze theoretically a simple bilateral exchange game: each player sequentially takes an action that reduces his own material payoff but increases the other player’s. Each player’s preferences may depend on both his/her own material payoff and the other player’s. I identify two key properties of the second-mover’s preferences: indifference curves kinked around “fair” material-payoff distributions, and materials payoffs entering preferences as “normal goods.” Either property can drive reciprocity-like behavior and generate a Pareto efficient outcome. PMID:25664144
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
Moving the Needle: Simulation's Impact on Patient Outcomes.
Cox, Tiffany; Seymour, Neal; Stefanidis, Dimitrios
2015-08-01
This review investigates the available literature that addresses the impact simulator training has on patient outcomes. The authors conducted a comprehensive literature search of studies reporting outcomes of simulation training and categorized studies based on the Kirkpatrick model of training evaluation. Kirkpatrick level 4 studies reporting patient outcomes were identified and included in this review. Existing evidence is promising, demonstrating patient benefits as a result of simulation training for central line placement, obstetric emergencies, cataract surgery, laparoscopic inguinal hernia repair, and team training. Copyright © 2015 Elsevier Inc. All rights reserved.
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
Transaction costs and sequential bargaining in transferable discharge permit markets.
Netusil, N R; Braden, J B
2001-03-01
Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transactions costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, gain-ranked, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.
NASA Astrophysics Data System (ADS)
Salehin, Z.; Woobaidullah, A. S. M.; Snigdha, S. S.
2015-12-01
Bengal Basin with its prolific gas rich province provides needed energy to Bangladesh. Present energy situation demands more Hydrocarbon explorations. Only 'Semutang' is discovered in the high amplitude structures, where rest of are in the gentle to moderate structures of western part of Chittagong-Tripura Fold Belt. But it has some major thrust faults which have strongly breached the reservoir zone. The major objectives of this research are interpretation of gas horizons and faults, then to perform velocity model, structural and property modeling to obtain reservoir properties. It is needed to properly identify the faults and reservoir heterogeneities. 3D modeling is widely used to reveal the subsurface structure in faulted zone where planning and development drilling is major challenge. Thirteen 2D seismic and six well logs have been used to identify six gas bearing horizons and a network of faults and to map the structure at reservoir level. Variance attributes were used to identify faults. Velocity model is performed for domain conversion. Synthetics were prepared from two wells where sonic and density logs are available. Well to seismic tie at reservoir zone shows good match with Direct Hydrocarbon Indicator on seismic section. Vsh, porosity, water saturation and permeability have been calculated and various cross plots among porosity logs have been shown. Structural modeling is used to make zone and layering accordance with minimum sand thickness. Fault model shows the possible fault network, those liable for several dry wells. Facies model have been constrained with Sequential Indicator Simulation method to show the facies distribution along the depth surfaces. Petrophysical models have been prepared with Sequential Gaussian Simulation to estimate petrophysical parameters away from the existing wells to other parts of the field and to observe heterogeneities in reservoir. Average porosity map for each gas zone were constructed. The outcomes of the research are an improved subsurface image of the seismic data (model), a porosity prediction for the reservoir, a reservoir quality map and also a fault map. The result shows a complex geologic model which may contribute to the economic potential of the field. For better understanding, 3D seismic survey, uncertainty and attributes analysis are necessary.
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak
1996-01-01
Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J
2011-08-10
To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students and then teams were randomly assigned to either a group that completed either computer-based or mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams in the group that completed the computer-based simulation before completing the mannequin-based simulation achieved the primary outcome for the exercise, which was survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence during the more real-life setting achieved in the mannequin-based simulation.
Ertefaie, Ashkan; Shortreed, Susan; Chakraborty, Bibhas
2016-06-15
Q-learning is a regression-based approach that uses longitudinal data to construct dynamic treatment regimes, which are sequences of decision rules that use patient information to inform future treatment decisions. An optimal dynamic treatment regime is composed of a sequence of decision rules that indicate how to optimally individualize treatment using the patients' baseline and time-varying characteristics to optimize the final outcome. Constructing optimal dynamic regimes using Q-learning depends heavily on the assumption that regression models at each decision point are correctly specified; yet model checking in the context of Q-learning has been largely overlooked in the current literature. In this article, we show that residual plots obtained from standard Q-learning models may fail to adequately check the quality of the model fit. We present a modified Q-learning procedure that accommodates residual analyses using standard tools. We present simulation studies showing the advantage of the proposed modification over standard Q-learning. We illustrate this new Q-learning approach using data collected from a sequential multiple assignment randomized trial of patients with schizophrenia. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Hardway, D; Weatherly, K S; Bonheur, B
1993-01-01
Diabetes education programs remain underdeveloped in the pediatric setting, resulting in increased consumer complaints and financial liability for hospitals. The Diabetes Education on Wheels program was designed to provide comprehensive, outcome-oriented education for patients with juvenile diabetes. The primary goal of the program was to enhance patients' and family members' ability to achieve self-care in the home setting. The program facilitated sequential learning, improved consumer satisfaction, and promoted financial viability for the hospital.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and probability of sampling a unit occupied by the rare species. Efficiency measures the precision of population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species was approximately 1.5 times higher for TSSAV compared to SRS and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend that simulations tailored to the application of interest are highly useful for evaluating designs in preparation for sampling rare and clustered populations.
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a wide-spread tool for representing and simulating biological models. Due to their potential of providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
Night vision goggle stimulation using LCoS and DLP projection technology, which is better?
NASA Astrophysics Data System (ADS)
Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter
2014-06-01
High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.
Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results
NASA Astrophysics Data System (ADS)
Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric
2014-06-01
High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.
Lhakhang, Pempa; Gholami, Maryam; Knoll, Nina; Schwarzer, Ralf
2015-01-01
A sequential intervention to facilitate the adoption and maintenance of dental flossing was conducted among 205 students in India, aged 18-26 years. Two experimental groups received different treatment sequences and were observed at three assessment points, 34 days apart. One group received first a motivational intervention (intention, outcome expectancies, and risk perception, followed by a self-regulatory intervention (planning, self-efficacy, and action control). The second group received the same intervention in the opposite order. Both intervention sequences yielded gains in terms of flossing, planning, self-efficacy, and action control. However, at Time 2, those who had received the self-regulatory intervention first, were superior to their counterparts who had received the motivational intervention first. At Time 3, differences vanished as everyone had then received both interventions. Thus, findings highlight the benefits of a self-regulatory compared to a mere motivational intervention.
Kidwell, Kelley M; Hyde, Luke W
2016-09-01
Heterogeneity between and within people necessitates the need for sequential personalized interventions to optimize individual outcomes. Personalized or adaptive interventions (AIs) are relevant for diseases and maladaptive behavioral trajectories when one intervention is not curative and success of a subsequent intervention may depend on individual characteristics or response. AIs may be applied to medical settings and to investigate best prevention, education, and community-based practices. AIs can begin with low-cost or low-burden interventions and followed with intensified or alternative interventions for those who need it most. AIs that guide practice over the course of a disease, program, or school year can be investigated through sequential multiple assignment randomized trials (SMARTs). To promote the use of SMARTs, we provide a hypothetical SMART in a Head Start program to address child behavior problems. We describe the advantages and limitations of SMARTs, particularly as they may be applied to the field of evaluation.
Prosody and alignment: a sequential perspective
NASA Astrophysics Data System (ADS)
Szczepek Reed, Beatrice
2010-12-01
In their analysis of a corpus of classroom interactions in an inner city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching, and specific prosodic patterns are interpreted as signs of, and contributions to successful interactional outcomes and positive emotions. Lack of prosodic matching, and other specific prosodic patterns are interpreted as features of unsuccessful interactions, and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of `positive', or aligning actions.
Busse, Sebastian; Schwarting, Rainer K. W.
2016-01-01
The present study is part of a series of experiments, where we analyze why and how damage of the rat’s dorsal hippocampus (dHC) can enhance performance in a sequential reaction time task (SRTT). In this task, sequences of distinct visual stimulus presentations are food-rewarded in a fixed-ratio-13-schedule. Our previous study (Busse and Schwarting, 2016) had shown that rats with lesions of the dHC show substantially shorter session times and post-reinforcement pauses (PRPs) than controls, which allows for more practice when daily training is kept constant. Since sequential behavior is based on instrumental performance, a sequential benefit might be secondary to that. In order to test this hypothesis in the present study, we performed two experiments, where pseudorandom rather than sequential stimulus presentation was used in rats with excitotoxic dorsal hippocampal lesions. Again, we found enhanced performance in the lesion-group in terms of shorter session times and PRPs. During the sessions we found that the lesion-group spent less time with non-instrumental behavior (i.e., grooming, sniffing, and rearing) after prolonged instrumental training. Also, such rats showed moderate evidence for an extinction impairment under devalued food reward conditions and significant deficits in a response-outcome (R-O)-discrimination task in comparison to a control-group. These findings suggest that facilitatory effects on instrumental performance after dorsal hippocampal lesions may be primarily a result of complex behavioral changes, i.e., reductions of behavioral flexibility and/or alterations in motivation, which then result in enhanced instrumental learning. PMID:27375453
A Multilab Preregistered Replication of the Ego-Depletion Effect.
Hagger, Martin S; Chatzisarantis, Nikos L D; Alberts, Hugo; Anggono, Calvin Octavianus; Batailler, Cédric; Birt, Angela R; Brand, Ralf; Brandt, Mark J; Brewer, Gene; Bruyneel, Sabrina; Calvillo, Dustin P; Campbell, W Keith; Cannon, Peter R; Carlucci, Marianna; Carruth, Nicholas P; Cheung, Tracy; Crowell, Adrienne; De Ridder, Denise T D; Dewitte, Siegfried; Elson, Malte; Evans, Jacqueline R; Fay, Benjamin A; Fennis, Bob M; Finley, Anna; Francis, Zoë; Heise, Elke; Hoemann, Henrik; Inzlicht, Michael; Koole, Sander L; Koppel, Lina; Kroese, Floor; Lange, Florian; Lau, Kevin; Lynch, Bridget P; Martijn, Carolien; Merckelbach, Harald; Mills, Nicole V; Michirev, Alexej; Miyake, Akira; Mosser, Alexandra E; Muise, Megan; Muller, Dominique; Muzi, Milena; Nalis, Dario; Nurwanti, Ratri; Otgaar, Henry; Philipp, Michael C; Primoceri, Pierpaolo; Rentzsch, Katrin; Ringos, Lara; Schlinkert, Caroline; Schmeichel, Brandon J; Schoch, Sarah F; Schrama, Michel; Schütz, Astrid; Stamos, Angelos; Tinghög, Gustav; Ullrich, Johannes; vanDellen, Michelle; Wimbarti, Supra; Wolff, Wanja; Yusainy, Cleoputri; Zerhouni, Oulmann; Zwienenberg, Maria
2016-07-01
Good self-control has been linked to adaptive outcomes such as better health, cohesive personal relationships, success in the workplace and at school, and less susceptibility to crime and addictions. In contrast, self-control failure is linked to maladaptive outcomes. Understanding the mechanisms by which self-control predicts behavior may assist in promoting better regulation and outcomes. A popular approach to understanding self-control is the strength or resource depletion model. Self-control is conceptualized as a limited resource that becomes depleted after a period of exertion resulting in self-control failure. The model has typically been tested using a sequential-task experimental paradigm, in which people completing an initial self-control task have reduced self-control capacity and poorer performance on a subsequent task, a state known as ego depletion Although a meta-analysis of ego-depletion experiments found a medium-sized effect, subsequent meta-analyses have questioned the size and existence of the effect and identified instances of possible bias. The analyses served as a catalyst for the current Registered Replication Report of the ego-depletion effect. Multiple laboratories (k = 23, total N = 2,141) conducted replications of a standardized ego-depletion protocol based on a sequential-task paradigm by Sripada et al. Meta-analysis of the studies revealed that the size of the ego-depletion effect was small with 95% confidence intervals (CIs) that encompassed zero (d = 0.04, 95% CI [-0.07, 0.15]. We discuss implications of the findings for the ego-depletion effect and the resource depletion model of self-control. © The Author(s) 2016.
Dose density in adjuvant chemotherapy for breast cancer.
Citron, Marc L
2004-01-01
Dose-dense chemotherapy increases the dose intensity of the regimen by delivering standard-dose chemotherapy with shorter intervals between the cycles. This article discusses the rationale for dose-dense therapy and reviews the results with dose-dense adjuvant regimens in recent clinical trials in breast cancer. The papers for this review covered evidence of a dose-response relation in cancer chemotherapy; the rationale for dose-intense (and specifically dose-dense) therapy; and clinical experience with dose-dense regimens in adjuvant chemotherapy for breast cancer, with particular attention to outcomes and toxicity. Evidence supports maintaining the dose intensity of adjuvant chemotherapy within the conventional dose range. Disease-free and overall survival with combination cyclophosphamide, methotrexate, and fluorouracil are significantly improved when patients receive within 85% of the planned dose. Moderate and high dose cyclophosphamide, doxorubicin, and fluorouracil within the standard range results in greater disease-free and overall survival than the low dose regimen. The sequential addition of paclitaxel after concurrent doxorubicin and cyclophosphamide also significantly improves survival. Disease-free and overall survival with dose-dense sequential or concurrent doxorubicin, cyclophosphamide, and paclitaxel with filgrastim (rhG-CSF; NEUPOGEN) support are significantly greater than with conventional schedules (q21d). The delivered dose intensity of adjuvant chemotherapy within the standard dose range is an important predictor of the clinical outcome. Prospective trials of high-dose chemotherapy have shown no improvement over standard regimens, and toxicity was greater. Dose-dense adjuvant chemotherapy improves the clinical outcomes with doxorubicin-containing regimens. Filgrastim support enables the delivery of dose-dense chemotherapy and reduces the risk of neutropenia and its complications.
Group motivational interviewing for adolescents: Change talk and alcohol and marijuana outcomes
D’Amico, Elizabeth J.; Houck, Jon M.; Hunter, Sarah B.; Miles, Jeremy N.V.; Osilla, Karen Chan; Ewing, Brett A.
2014-01-01
Objective Little is known about what may distinguish effective and ineffective group interventions. Group motivational interviewing (MI) is a promising intervention for adolescent alcohol and other drug (AOD) use; however, the mechanisms of change for group MI are unknown. One potential mechanism is change talk, which is client speech arguing for change. The present study describes the group process in adolescent group MI and effects of group-level change talk on individual alcohol and marijuana outcomes. Method We analyzed 129 group session audio recordings from a randomized clinical trial of adolescent group MI. Sequential coding was performed using the Motivational Interviewing Skill Code (MISC) and the CASAA Application for Coding Treatment Interactions (CACTI) software application. Outcomes included past-month intentions, frequency, and consequences of alcohol and marijuana use, motivation to change, and positive expectancies. Results Sequential analysis indicated that facilitator open-ended questions and reflections of change talk (CT) increased group CT. Group CT was then followed by more CT. Multilevel models accounting for rolling group enrollment revealed group CT was associated with decreased alcohol intentions, alcohol use and heavy drinking three months later; group sustain talk was associated with decreased motivation to change, increased intentions to use marijuana, and increased positive alcohol and marijuana expectancies. Conclusions Facilitator speech and peer responses each had effects on change and sustain talk in the group setting, which was then associated with individual changes. Selective reflection of CT in adolescent group MI is suggested as a strategy to manage group dynamics and increase behavioral change. PMID:25365779
Who is most affected by prenatal alcohol exposure: Boys or girls?
May, Philip A; Tabachnick, Barbara; Hasken, Julie M; Marais, Anna-Susan; de Vries, Marlene M; Barnard, Ronel; Joubert, Belinda; Cloete, Marise; Botha, Isobel; Kalberg, Wendy O; Buckley, David; Burroughs, Zachary R; Bezuidenhout, Heidre; Robinson, Luther K; Manning, Melanie A; Adnams, Colleen M; Seedat, Soraya; Parry, Charles D H; Hoyme, H Eugene
2017-08-01
To examine outcomes among boys and girls that are associated with prenatal alcohol exposure. Boys and girls with fetal alcohol spectrum disorders (FASD) and randomly-selected controls were compared on a variety of physical and neurobehavioral traits. Sex ratios indicated that heavy maternal binge drinking may have significantly diminished viability to birth and survival of boys postpartum more than girls by age seven. Case control comparisons of a variety of physical and neurobehavioral traits at age seven indicate that both sexes were affected similarly for a majority of variables. However, alcohol-exposed girls had significantly more dysmorphology overall than boys and performed significantly worse on non-verbal IQ tests than males. A three-step sequential regression analysis, controlling for multiple covariates, further indicated that dysmorphology among girls was significantly more associated with five maternal drinking variables and three distal maternal risk factors. However, the overall model, which included five associated neurobehavioral measures at step three, was not significant (p=0.09, two-tailed test). A separate sequential logistic regression analysis of predictors of a FASD diagnosis, however, indicated significantly more negative outcomes overall for girls than boys (Nagelkerke R 2 =0.42 for boys and 0.54 for girls, z=-2.9, p=0.004). Boys and girls had mostly similar outcomes when prenatal alcohol exposure was linked to poor physical and neurocognitive development. Nevertheless, sex ratios implicate lower viability and survival of males by first grade, and girls have more dysmorphology and neurocognitive impairment than boys resulting in a higher probability of a FASD diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.
Racial Discrimination, Cultural Resilience, and Stress.
Spence, Nicholas D; Wells, Samantha; Graham, Kathryn; George, Julie
2016-05-01
Racial discrimination is a social determinant of health for First Nations people. Cultural resilience has been regarded as a potentially positive resource for social outcomes. Using a compensatory model of resilience, this study sought to determine if cultural resilience (compensatory factor) neutralized or offset the detrimental effect of racial discrimination (social risk factor) on stress (outcome). Data were collected from October 2012 to February 2013 (N = 340) from adult members of the Kettle and Stony Point First Nation community in Ontario, Canada. The outcome was perceived stress; risk factor, racial discrimination; and compensatory factor, cultural resilience. Control variables included individual (education, sociability) and family (marital status, socioeconomic status) resilience resources and demographics (age and gender). The model was tested using sequential regression. The risk factor, racial discrimination, increased stress across steps of the sequential model, while cultural resilience had an opposite modest effect on stress levels. In the final model with all variables, age and gender were significant, with the former having a negative effect on stress and women reporting higher levels of stress than males. Education, marital status, and socioeconomic status (household income) were not significant in the model. The model had R(2) = 0.21 and adjusted R(2) = 0.18 and semipartial correlation (squared) of 0.04 and 0.01 for racial discrimination and cultural resilience, respectively. In this study, cultural resilience compensated for the detrimental effect of racial discrimination on stress in a modest manner. These findings may support the development of programs and services fostering First Nations culture, pending further study. © The Author(s) 2016.
Racial Discrimination, Cultural Resilience, and Stress
Wells, Samantha; Graham, Kathryn; George, Julie
2016-01-01
Objective: Racial discrimination is a social determinant of health for First Nations people. Cultural resilience has been regarded as a potentially positive resource for social outcomes. Using a compensatory model of resilience, this study sought to determine if cultural resilience (compensatory factor) neutralized or offset the detrimental effect of racial discrimination (social risk factor) on stress (outcome). Methods: Data were collected from October 2012 to February 2013 (N = 340) from adult members of the Kettle and Stony Point First Nation community in Ontario, Canada. The outcome was perceived stress; risk factor, racial discrimination; and compensatory factor, cultural resilience. Control variables included individual (education, sociability) and family (marital status, socioeconomic status) resilience resources and demographics (age and gender). The model was tested using sequential regression. Results: The risk factor, racial discrimination, increased stress across steps of the sequential model, while cultural resilience had an opposite modest effect on stress levels. In the final model with all variables, age and gender were significant, with the former having a negative effect on stress and women reporting higher levels of stress than males. Education, marital status, and socioeconomic status (household income) were not significant in the model. The model had R2 = 0.21 and adjusted R2 = 0.18 and semipartial correlation (squared) of 0.04 and 0.01 for racial discrimination and cultural resilience, respectively. Conclusions: In this study, cultural resilience compensated for the detrimental effect of racial discrimination on stress in a modest manner. These findings may support the development of programs and services fostering First Nations culture, pending further study. PMID:27254805
Ellis, Charles; Peach, Richard K
2017-04-01
To examine aphasia outcomes and to determine whether the observed language profiles vary by race-ethnicity. Retrospective cross-sectional study using a convenience sample of persons of with aphasia (PWA) obtained from AphasiaBank, a database designed for the study of aphasia outcomes. Aphasia research laboratories. PWA (N=381; 339 white and 42 black individuals). Not applicable. Western Aphasia Battery-Revised (WAB-R) total scale score (Aphasia Quotient) and subtest scores were analyzed for racial-ethnic differences. The WAB-R is a comprehensive assessment of communication function designed to evaluate PWA in the areas of spontaneous speech, auditory comprehension, repetition, and naming in addition to reading, writing, apraxia, and constructional, visuospatial, and calculation skills. In univariate comparisons, black PWA exhibited lower word fluency (5.7 vs 7.6; P=.004), auditory word comprehension (49.0 vs 53.0; P=.021), and comprehension of sequential commands (44.2 vs 52.2; P=.012) when compared with white PWA. In multivariate comparisons, adjusted for age and years of education, black PWA exhibited lower word fluency (5.5 vs 7.6; P=.015), auditory word recognition (49.3 vs 53.3; P=.02), and comprehension of sequential commands (43.7 vs 53.2; P=.017) when compared with white PWA. This study identified racial-ethnic differences in word fluency and auditory comprehension ability among PWA. Both skills are critical to effective communication, and racial-ethnic differences in outcomes must be considered in treatment approaches designed to improve overall communication ability. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Craig, C L; Bauman, A; Reger-Nash, B
2010-03-01
The hierarchy of effects (HOE) model is often used in planning mass-reach communication campaigns to promote health, but has rarely been empirically tested. This paper examines Canada's 30 year ParticipACTION campaign to promote physical activity (PA). A cohort from the nationally representative 1981 Canada Fitness Survey was followed up in 1988 and 2002-2004. Modelling of these data tested whether the mechanisms of campaign effects followed the theoretical framework proposed in the HOE. Campaign awareness was measured in 1981. Outcome expectancy, attitudes, decision balance and future intention were asked in 1988. PA was assessed at all time points. Logistic regression was used to sequentially test mediating and moderating variables adjusting for age, sex and education. No selection bias was observed; however, relatively fewer respondents than non-respondents smoked or were underweight at baseline. Among those inactive at baseline, campaign awareness predicted outcome expectancy which in turn predicted positive attitude to PA. Positive attitudes predicted high decision balance, which predicted future intention. Future intention mediated the relationship between decision balance and sufficient activity. Among those sufficiently active at baseline, awareness was unrelated to outcome expectancy and inversely related to positive attitude. These results lend support to the HOE model, in that the effects of ParticipACTION's serial mass media campaigns were consistent with the sequential rollout of its messages, which in turn was associated with achieving an active lifestyle among those initially insufficiently active. This provides support to an often-used theoretical framework for designing health promotion media campaigns.
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1995-01-01
A major difficulty in designing aeropropulsion systems is that of identifying and understanding the interactions between the separate engine components and disciplines (e.g., fluid mechanics, structural mechanics, heat transfer, material properties, etc.). The traditional analysis approach is to decompose the system into separate components with the interaction between components being evaluated by the application of each of the single disciplines in a sequential manner. Here, one discipline uses information from the calculation of another discipline to determine the effects of component coupling. This approach, however, may not properly identify the consequences of these effects during the design phase, leaving the interactions to be discovered and evaluated during engine testing. This contributes to the time and cost of developing new propulsion systems as, typically, several design-build-test cycles are needed to fully identify multidisciplinary effects and reach the desired system performance. The alternative to sequential isolated component analysis is to use multidisciplinary coupling at a more fundamental level. This approach has been made more plausible due to recent advancements in computation simulation along with application of concurrent engineering concepts. Computer simulation systems designed to provide an environment which is capable of integrating the various disciplines into a single simulation system have been proposed and are currently being developed. One such system is being developed by the Numerical Propulsion System Simulation (NPSS) project. The NPSS project, being developed at the Interdisciplinary Technology Office at the NASA Lewis Research Center is a 'numerical test cell' designed to provide for comprehensive computational design and analysis of aerospace propulsion systems. It will provide multi-disciplinary analyses on a variety of computational platforms, and a user-interface consisting of expert systems, data base management and visualization tools, to allow the designer to investigate the complex interactions inherent in these systems. An interactive programming software system, known as the Application Visualization System (AVS), was utilized for the development of the propulsion system simulation. The modularity of this system provides the ability to couple propulsion system components, as well as disciplines, and provides for the ability to integrate existing, well established analysis codes into the overall system simulation. This feature allows the user to customize the simulation model by inserting desired analysis codes. The prototypical simulation environment for multidisciplinary analysis, called Turbofan Engine System Simulation (TESS), which incorporates many of the characteristics of the simulation environment proposed herein, is detailed.
El Emam, Dalia Sabry; Farag, Rania Kamel; Abouelkheir, Hossam Youssef
2016-01-01
Aim. To compare objective and subjective outcome after simultaneous wave front guided (WFG) PRK and accelerated corneal cross-linking (CXL) in patients with progressive keratoconus versus sequential WFG PRK 6 months after CXL. Methods. 62 eyes with progressive keratoconus were divided into two groups; the first including 30 eyes underwent simultaneous WFG PRK with accelerated CXL. The second including 32 eyes underwent subsequent WFG PRK performed 6 months later after accelerated CXL. Visual, refractive, topographic, and aberrometric data were determined preoperatively and during 1-year follow-up period and the results compared in between the 2 studied groups. Results. All evaluated visual, refractive, and aberrometric parameters demonstrated highly significant improvement in both studied groups (all P < 0.001). A significant improvement was observed in keratometric and Q values. The improvement in all parameters was stable till the end of follow-up. Likewise, no significant difference was determined in between the 2 groups in any of recorded parameters. Subjective data revealed similarly significant improvement in both groups. Conclusions. WFG PRK and accelerated CXL is an effective and safe option to improve the vision in mild to moderate keratoconus. In one-year follow-up, there is no statistically significant difference between the simultaneous and sequential procedure. PMID:28127465
Abou Samra, Waleed Ali; El Emam, Dalia Sabry; Farag, Rania Kamel; Abouelkheir, Hossam Youssef
2016-01-01
Aim . To compare objective and subjective outcome after simultaneous wave front guided (WFG) PRK and accelerated corneal cross-linking (CXL) in patients with progressive keratoconus versus sequential WFG PRK 6 months after CXL. Methods . 62 eyes with progressive keratoconus were divided into two groups; the first including 30 eyes underwent simultaneous WFG PRK with accelerated CXL. The second including 32 eyes underwent subsequent WFG PRK performed 6 months later after accelerated CXL. Visual, refractive, topographic, and aberrometric data were determined preoperatively and during 1-year follow-up period and the results compared in between the 2 studied groups. Results . All evaluated visual, refractive, and aberrometric parameters demonstrated highly significant improvement in both studied groups (all P < 0.001). A significant improvement was observed in keratometric and Q values. The improvement in all parameters was stable till the end of follow-up. Likewise, no significant difference was determined in between the 2 groups in any of recorded parameters. Subjective data revealed similarly significant improvement in both groups. Conclusions . WFG PRK and accelerated CXL is an effective and safe option to improve the vision in mild to moderate keratoconus. In one-year follow-up, there is no statistically significant difference between the simultaneous and sequential procedure.
Reeder, Ruth M; Firszt, Jill B; Cadieux, Jamie H; Strube, Michael J
2017-01-01
Whether, and if so when, a second-ear cochlear implant should be provided to older, unilaterally implanted children is an ongoing clinical question. This study evaluated rate of speech recognition progress for the second implanted ear and with bilateral cochlear implants in older sequentially implanted children and evaluated localization abilities. A prospective longitudinal study included 24 bilaterally implanted children (mean ear surgeries at 5.11 and 14.25 years). Test intervals were every 3-6 months through 24 months postbilateral. Test conditions were each ear and bilaterally for speech recognition and localization. Overall, the rate of progress for the second implanted ear was gradual. Improvements in quiet continued through the second year of bilateral use. Improvements in noise were more modest and leveled off during the second year. On all measures, results from the second ear were poorer than the first. Bilateral scores were better than either ear alone for all measures except sentences in quiet and localization. Older sequentially implanted children with several years between surgeries may obtain speech understanding in the second implanted ear; however, performance may be limited and rate of progress gradual. Continued contralateral ear hearing aid use and reduced time between surgeries may enhance outcomes.
Chen, Huachao; Wang, Yurong; Yao, Yongrong; Qiao, Shenglin; Wang, Hao; Tan, Ninghua
2017-01-01
A programmed drug delivery system that can achieve sequential release of multiple therapeutics under different stimulus holds great promise to enhance the treatment efficacy and overcome multi-drug resistance (MDR) in tumor. Herein, multi-organelle-targeted and pH/ cytochrome c (Cyt c) dual-responsive nanoparticles were designed for combination therapy on resistant tumor. In this system (designated DGLipo NPs), doxorubicin (Dox) was intercalated into the DNA duplex containing a Cyt c aptamer, which subsequently loaded in the dendrigraftpoly-L-lysines (DGL) cores of DGLipo NPs, while cyclopeptide RA-V was doped into the pH-sensitive liposomal shells. After dual modification with c(RGDfK) and mitochondria-penetrating peptide (MPP), DGLipo NPs could successively deliver the two drugs into lysosome and mitochondria of cancer cells, and achieve sequential drug release in virtue of the unique characteristic of these two organelles. The organelle-specific and spatiotemporally controlled release of Dox and RA-V led to enhanced therapeutic outcomes in MDR tumor. More significantly, the DGLipo NPs were successfully applied to monitor Cyt c release during mitochondria-mediated apoptotic process. This work represents a versatile strategy for precise combination therapy against resistant tumor with spatiotemporal control, and provides a potential tool for Cyt c-related apoptotic studies. PMID:29109776
Role of genetic mutations in folate-related enzyme genes on Male Infertility
Liu, Kang; Zhao, Ruizhe; Shen, Min; Ye, Jiaxin; Li, Xiao; Huang, Yuan; Hua, Lixin; Wang, Zengjun; Li, Jie
2015-01-01
Several studies showed that the genetic mutations in the folate-related enzyme genes might be associated with male infertility; however, the results were still inconsistent. We performed a meta-analysis with trial sequential analysis to investigate the associations between the MTHFR C677T, MTHFR A1298C, MTR A2756G, MTRR A66G mutations and the MTHFR haplotype with the risk of male infertility. Overall, a total of 37 studies were selected. Our meta-analysis showed that the MTHFR C677T mutation was a risk factor for male infertility in both azoospermia and oligoasthenoteratozoospermia patients, especially in Asian population. Men carrying the MTHFR TC haplotype were most liable to suffer infertility while those with CC haplotype had lowest risk. On the other hand, the MTHFR A1298C mutation was not related to male infertility. MTR A2756G and MTRR A66G were potential candidates in the pathogenesis of male infertility, but more case-control studies were required to avoid false-positive outcomes. All of these results were confirmed by the trial sequential analysis. Finally, our meta-analysis with trial sequential analysis proved that the genetic mutations in the folate-related enzyme genes played a significant role in male infertility. PMID:26549413
Tsang, William W. N.; Gao, Kelly L.; Chan, K. M.; Purves, Sheila; Macfarlane, Duncan J.; Fong, Shirley S. M.
2015-01-01
Objective. To investigate the effects of sitting Tai Chi on muscle strength, balance control, and quality of life (QOL) among survivors with spinal cord injuries (SCI). Methods. Eleven SCI survivors participated in the sitting Tai Chi training (90 minutes/session, 2 times/week for 12 weeks) and eight SCI survivors acted as controls. Dynamic sitting balance was evaluated using limits of stability test and a sequential weight shifting test in sitting. Handgrip strength was also tested using a hand-held dynamometer. QOL was measured using the World Health Organization's Quality of Life Scale. Results. Tai Chi practitioners achieved significant improvements in their reaction time (P = 0.042); maximum excursion (P = 0.016); and directional control (P = 0.025) in the limits of stability test after training. In the sequential weight shifting test, they significantly improved their total time to sequentially hit the 12 targets (P = 0.035). Significant improvement in handgrip strength was also found among the Tai Chi practitioners (P = 0.049). However, no significant within and between-group differences were found in the QOL outcomes (P > 0.05). Conclusions. Twelve weeks of sitting Tai Chi training could improve the dynamic sitting balance and handgrip strength, but not QOL, of the SCI survivors. PMID:25688276
Towards Data-Driven Simulations of Wildfire Spread using Ensemble-based Data Assimilation
NASA Astrophysics Data System (ADS)
Rochoux, M. C.; Bart, J.; Ricci, S. M.; Cuenot, B.; Trouvé, A.; Duchaine, F.; Morel, T.
2012-12-01
Real-time predictions of a propagating wildfire remain a challenging task because the problem involves both multi-physics and multi-scales. The propagation speed of wildfires, also called the rate of spread (ROS), is indeed determined by complex interactions between pyrolysis, combustion and flow dynamics, atmospheric dynamics occurring at vegetation, topographical and meteorological scales. Current operational fire spread models are mainly based on a semi-empirical parameterization of the ROS in terms of vegetation, topographical and meteorological properties. For the fire spread simulation to be predictive and compatible with operational applications, the uncertainty on the ROS model should be reduced. As recent progress made in remote sensing technology provides new ways to monitor the fire front position, a promising approach to overcome the difficulties found in wildfire spread simulations is to integrate fire modeling and fire sensing technologies using data assimilation (DA). For this purpose we have developed a prototype data-driven wildfire spread simulator in order to provide optimal estimates of poorly known model parameters [*]. The data-driven simulation capability is adapted for more realistic wildfire spread : it considers a regional-scale fire spread model that is informed by observations of the fire front location. An Ensemble Kalman Filter algorithm (EnKF) based on a parallel computing platform (OpenPALM) was implemented in order to perform a multi-parameter sequential estimation where wind magnitude and direction are in addition to vegetation properties (see attached figure). The EnKF algorithm shows its good ability to track a small-scale grassland fire experiment and ensures a good accounting for the sensitivity of the simulation outcomes to the control parameters. As a conclusion, it was shown that data assimilation is a promising approach to more accurately forecast time-varying wildfire spread conditions as new airborne-like observations of the fire front location get available. [*] Rochoux, M.C., Delmotte, B., Cuenot, B., Ricci, S., and Trouvé, A. (2012) "Regional-scale simulations of wildland fire spread informed by real-time flame front observations", Proc. Combust. Inst., 34, in press http://dx.doi.org/10.1016/j.proci.2012.06.090 EnKF-based tracking of small-scale grassland fire experiment, with estimation of wind and fuel parameters.
Scott, Anthony; Jeon, Sung-Hee; Joyce, Catherine M; Humphreys, John S; Kalb, Guyonne; Witt, Julia; Leahy, Anne
2011-09-05
Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. The online mode had a response rate 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than simultaneous mixed mode and 22% lower than sequential mixed mode. The cost of the sequential mixed mode was 14% lower than simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.
2011-01-01
Background Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. Methods A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. Results The online mode had a response rate 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes. The total cost of the online survey was 38% lower than simultaneous mixed mode and 22% lower than sequential mixed mode. The cost of the sequential mixed mode was 14% lower than simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Conclusions Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population. PMID:21888678
Jakobsen, Janus Christian; Katakam, Kiran Kumar; Schou, Anne; Hellmuth, Signe Gade; Stallknecht, Sandra Elkjær; Leth-Møller, Katja; Iversen, Maria; Banke, Marianne Bjørnø; Petersen, Iggiannguaq Juhl; Klingenberg, Sarah Louise; Krogh, Jesper; Ebert, Sebastian Elgaard; Timm, Anne; Lindschou, Jane; Gluud, Christian
2017-02-08
The evidence on selective serotonin reuptake inhibitors (SSRIs) for major depressive disorder is unclear. Our objective was to conduct a systematic review assessing the effects of SSRIs versus placebo, 'active' placebo, or no intervention in adult participants with major depressive disorder. We searched for eligible randomised clinical trials in The Cochrane Library's CENTRAL, PubMed, EMBASE, PsycLIT, PsycINFO, Science Citation Index Expanded, clinical trial registers of Europe and USA, websites of pharmaceutical companies, the U.S. Food and Drug Administration (FDA), and the European Medicines Agency until January 2016. All data were extracted by at least two independent investigators. We used Cochrane systematic review methodology, Trial Sequential Analysis, and calculation of Bayes factor. An eight-step procedure was followed to assess if thresholds for statistical and clinical significance were crossed. Primary outcomes were reduction of depressive symptoms, remission, and adverse events. Secondary outcomes were suicides, suicide attempts, suicide ideation, and quality of life. A total of 131 randomised placebo-controlled trials enrolling a total of 27,422 participants were included. None of the trials used 'active' placebo or no intervention as control intervention. All trials had high risk of bias. SSRIs significantly reduced the Hamilton Depression Rating Scale (HDRS) at end of treatment (mean difference -1.94 HDRS points; 95% CI -2.50 to -1.37; P < 0.00001; 49 trials; Trial Sequential Analysis-adjusted CI -2.70 to -1.18); Bayes factor below predefined threshold (2.01*10 -23 ). The effect estimate, however, was below our predefined threshold for clinical significance of 3 HDRS points. SSRIs significantly decreased the risk of no remission (RR 0.88; 95% CI 0.84 to 0.91; P < 0.00001; 34 trials; Trial Sequential Analysis adjusted CI 0.83 to 0.92); Bayes factor (1426.81) did not confirm the effect). SSRIs significantly increased the risks of serious adverse events (OR 1.37; 95% CI 1.08 to 1.75; P = 0.009; 44 trials; Trial Sequential Analysis-adjusted CI 1.03 to 1.89). This corresponds to 31/1000 SSRI participants will experience a serious adverse event compared with 22/1000 control participants. SSRIs also significantly increased the number of non-serious adverse events. There were almost no data on suicidal behaviour, quality of life, and long-term effects. SSRIs might have statistically significant effects on depressive symptoms, but all trials were at high risk of bias and the clinical significance seems questionable. SSRIs significantly increase the risk of both serious and non-serious adverse events. The potential small beneficial effects seem to be outweighed by harmful effects. PROSPERO CRD42013004420.
Clinical results of computerized tomography-based simulation with laser patient marking.
Ragan, D P; Forman, J D; He, T; Mesina, C F
1996-02-01
Accuracy of a patient treatment portal marking device and computerized tomography (CT) simulation have been clinically tested. A CT-based simulator has been assembled based on a commercial CT scanner. This includes visualization software and a computer-controlled laser drawing device. This laser drawing device is used to transfer the setup, central axis, and/or radiation portals from the CT simulator to the patient for appropriate patient skin marking. A protocol for clinical testing is reported. Twenty-five prospectively, sequentially accessioned patients have been analyzed. The simulation process can be completed in an average time of 62 min. Under many cases, the treatment portals can be designed and the patient marked in one session. Mechanical accuracy of the system was found to be within +/- 1mm. The portal projection accuracy in clinical cases is observed to be better than +/- 1.2 mm. Operating costs are equivalent to the conventional simulation process it replaces. Computed tomography simulation is a clinical accurate substitute for conventional simulation when used with an appropriate patient marking system and digitally reconstructed radiographs. Personnel time spent in CT simulation is equivalent to time in conventional simulation.
Realistic page-turning of electronic books
NASA Astrophysics Data System (ADS)
Fan, Chaoran; Li, Haisheng; Bai, Yannan
2014-01-01
The booming electronic books (e-books), as an extension to the paper book, are popular with readers. Recently, many efforts are put into the realistic page-turning simulation o f e-book to improve its reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs smooth transition method between time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page into the cylinders with different radii and positions. Compared to the previous approaches, our method is able to imitate various effects efficiently and obtains more natural animation of turning page.
A formal language for the specification and verification of synchronous and asynchronous circuits
NASA Technical Reports Server (NTRS)
Russinoff, David M.
1993-01-01
A formal hardware description language for the intended application of verifiable asynchronous communication is described. The language is developed within the logical framework of the Nqthm system of Boyer and Moore and is based on the event-driven behavioral model of VHDL, including the basic VHDL signal propagation mechanisms, the notion of simulation deltas, and the VHDL simulation cycle. A core subset of the language corresponds closely with a subset of VHDL and is adequate for the realistic gate-level modeling of both combinational and sequential circuits. Various extensions to this subset provide means for convenient expression of behavioral circuit specifications.
Singh, Sonal
2013-01-01
Background: Regulatory decision-making involves assessment of risks and benefits of medications at the time of approval or when relevant safety concerns arise with a medication. The Analytic Hierarchy Process (AHP) facilitates decision-making in complex situations involving tradeoffs by considering risks and benefits of alternatives. The AHP allows a more structured method of synthesizing and understanding evidence in the context of importance assigned to outcomes. Our objective is to evaluate the use of an AHP in a simulated committee setting selecting oral medications for type 2 diabetes. Methods: This study protocol describes the AHP in five sequential steps using a small group of diabetes experts representing various clinical disciplines. The first step will involve defining the goal of the decision and developing the AHP model. In the next step, we will collect information about how well alternatives are expected to fulfill the decision criteria. In the third step, we will compare the ability of the alternatives to fulfill the criteria and judge the importance of eight criteria relative to the decision goal of the optimal medication choice for type 2 diabetes. We will use pairwise comparisons to sequentially compare the pairs of alternative options regarding their ability to fulfill the criteria. In the fourth step, the scales created in the third step will be combined to create a summary score indicating how well the alternatives met the decision goal. The resulting scores will be expressed as percentages and will indicate the alternative medications' relative abilities to fulfill the decision goal. The fifth step will consist of sensitivity analyses to explore the effects of changing the estimates. We will also conduct a cognitive interview and process evaluation. Discussion: Multi-criteria decision analysis using the AHP will aid, support and enhance the ability of decision makers to make evidence-based informed decisions consistent with their values and preferences. PMID:24555077
Maruthur, Nisa M; Joy, Susan; Dolan, James; Segal, Jodi B; Shihab, Hasan M; Singh, Sonal
2013-01-01
Regulatory decision-making involves assessment of risks and benefits of medications at the time of approval or when relevant safety concerns arise with a medication. The Analytic Hierarchy Process (AHP) facilitates decision-making in complex situations involving tradeoffs by considering risks and benefits of alternatives. The AHP allows a more structured method of synthesizing and understanding evidence in the context of importance assigned to outcomes. Our objective is to evaluate the use of an AHP in a simulated committee setting selecting oral medications for type 2 diabetes. This study protocol describes the AHP in five sequential steps using a small group of diabetes experts representing various clinical disciplines. The first step will involve defining the goal of the decision and developing the AHP model. In the next step, we will collect information about how well alternatives are expected to fulfill the decision criteria. In the third step, we will compare the ability of the alternatives to fulfill the criteria and judge the importance of eight criteria relative to the decision goal of the optimal medication choice for type 2 diabetes. We will use pairwise comparisons to sequentially compare the pairs of alternative options regarding their ability to fulfill the criteria. In the fourth step, the scales created in the third step will be combined to create a summary score indicating how well the alternatives met the decision goal. The resulting scores will be expressed as percentages and will indicate the alternative medications' relative abilities to fulfill the decision goal. The fifth step will consist of sensitivity analyses to explore the effects of changing the estimates. We will also conduct a cognitive interview and process evaluation. Multi-criteria decision analysis using the AHP will aid, support and enhance the ability of decision makers to make evidence-based informed decisions consistent with their values and preferences.
NASA Astrophysics Data System (ADS)
Paloma, Cynthia S.
The plasma electron temperature (Te) plays a critical role in a tokamak nu- clear fusion reactor since temperatures on the order of 108K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDE's) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based model have been proposed for Xe, there is a lack of a simple mathematical model for the thermal diffusivity that could be used for control design. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model so that the predicted electron temperature and magnetic flux profiles match predefined target profiles in the best possible way. A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the proposed modeling method.
NASA Astrophysics Data System (ADS)
Neves de Campos, Thiago
This research examines the distortionary effects of a discovered and undeveloped sequential modular offshore project under five different designs for a production-sharing agreement (PSA). The model differs from previous research by looking at the effect of taxation from the perspective of a host government, where the objective is to maximize government utility over government revenue generated by the project and the non-pecuniary benefits to society. This research uses Modern Asset Pricing (MAP) theory, which is able to provide a good measure of the asset value accruing to various stakeholders in the project combined with the optimal decision rule for the development of the investment opportunity. Monte Carlo simulation was also applied to incorporate into the model the most important sources of risk associated with the project and to account for non-linearity in the cash flows. For a complete evaluation of how the fiscal system affects the project development, an investor's behavioral model was constructed, incorporating three operational decisions: investment timing, capacity size and early abandonment. The model considers four sources of uncertainty that affect the project value and the firm's optimal decision: the long run oil price and short-run deviations from that price, cost escalation and the reservoir recovery rate. The optimizations outcomes show that all fiscal systems evaluated produce distortion over the companies' optimal decisions, and companies adjust their choices to avoid taxation in different ways according to the fiscal system characteristics. Moreover, it is revealed that fiscal systems with tax provisions that try to capture additional project profits based on production profitability measures leads to stronger distortions in the project investment and output profile. It is also shown that a model based on a fixed percentage rate is the system that creates the least distortion. This is because companies will be subjected to the same government share of profit oil independently of any operational decision which they can make to change the production profile to evade taxation.
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
2009-09-01
problems, to better model the problem solving of computer systems. This research brought about the intertwining of AI and cognitive psychology . Much of...where symbol sequences are sequential intelligent states of the network, and must be classified as normal, abnormal , or unknown. These symbols...is associated with abnormal behavior; and abcbc is associated with unknown behavior, as it fits no known behavior. Predicted outcomes from
Yin, Jie; Yagüe, Jose Luis; Boyce, Mary C; Gleason, Karen K
2014-02-26
Controlled buckling is a facile means of structuring surfaces. The resulting ordered wrinkling topologies provide surface properties and features desired for multifunctional applications. Here, we study the biaxially dynamic tuning of two-dimensional wrinkled micropatterns under cyclic mechanical stretching/releasing/restretching simultaneously or sequentially. A biaxially prestretched PDMS substrate is coated with a stiff polymer deposited by initiated chemical vapor deposition (iCVD). Applying a mechanical release/restretch cycle in two directions loaded simultaneously or sequentially to the wrinkled system results in a variety of dynamic and tunable wrinkled geometries, the evolution of which is investigated using in situ optical profilometry, numerical simulations, and theoretical modeling. Results show that restretching ordered herringbone micropatterns, created through sequential release of biaxial prestrain, leads to reversible and repeatable surface topography. The initial flat surface and the same wrinkled herringbone pattern are obtained alternatively after cyclic release/restretch processes, owing to the highly ordered structure leaving no avenue for trapping irregular topological regions during cycling as further evidenced by the uniformity of strains distributions and negligible residual strain. Conversely, restretching disordered labyrinth micropatterns created through simultaneous release shows an irreversible surface topology whether after sequential or simultaneous restretching due to creation of irregular surface topologies with regions of highly concentrated strain upon formation of the labyrinth which then lead to residual strains and trapped topologies upon cycling; furthermore, these trapped topologies depend upon the subsequent strain histories as well as the cycle. The disordered labyrinth pattern varies after each cyclic release/restretch process, presenting residual shallow patterns instead of achieving a flat state. The ability to dynamically tune the highly ordered herringbone patterning through mechanical stretching or other actuation makes these wrinkles excellent candidates for tunable multifunctional surfaces properties such as reflectivity, friction, anisotropic liquid flow or boundary layer control.
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
High energy protons generation by two sequential laser pulses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaofeng; Shen, Baifei, E-mail: bfshen@mail.shcnc.ac.cn, E-mail: zhxm@siom.ac.cn; Zhang, Xiaomei, E-mail: bfshen@mail.shcnc.ac.cn, E-mail: zhxm@siom.ac.cn
2015-04-15
The sequential proton acceleration by two laser pulses of relativistic intensity is proposed to produce high energy protons. In the scheme, a relativistic super-Gaussian (SG) laser pulse followed by a Laguerre-Gaussian (LG) pulse irradiates dense plasma attached by underdense plasma. A proton beam is produced from the target and accelerated in the radiation pressure regime by the short SG pulse and then trapped and re-accelerated in a special bubble driven by the LG pulse in the underdense plasma. The advantages of radiation pressure acceleration and LG transverse structure are combined to achieve the effective trapping and acceleration of protons. Inmore » a two-dimensional particle-in-cell simulation, protons of 6.7 GeV are obtained from a 2 × 10{sup 22 }W/cm{sup 2} SG laser pulse and a LG pulse at a lower peak intensity.« less
Virtual reality laparoscopic simulator for assessment in gynaecology.
Gor, Mounna; McCloy, Rory; Stone, Robert; Smith, Anthony
2003-02-01
A validated virtual reality laparoscopic simulator minimally invasive surgical trainer (MIST) 2 was used to assess the psychomotor skills of 21 gynaecologists (2 consultants, 8 registrars and 11 senior house officers). Nine gynaecologists failed to complete the VR tasks at the first attempt and were excluded for sequential evaluation. Each of the remaining 12 gynaecologists were tested on MIST 2 on four occasions within four weeks. The MIST 2 simulator provided quantitative data on time to complete tasks, errors, economy of movement and economy of diathermy use--for both right and left hand performance. The results show a significant early learning curve for the majority of tasks which plateaued by the third session. This suggests a high quality surgeon-computer interface. MIST 2 provides objective assessment of laparoscopic skills in gynaecologists.
Simulation of Peptides at Aqueous Interfaces
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Wilson, M.; Chipot, C.; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
Behavior of peptides at water-membrane interfaces is of great interest in studies on cellular transport and signaling, membrane fusion, and the action of toxins and antibiotics. Many peptides, which exist in water only as random coils, can form sequence-dependent, ordered structures at aqueous interfaces, incorporate into membranes and self-assembly into functional units, such as simple ion channels. Multi -nanosecond molecular dynamics simulations have been carried out to study the mechanism and energetics of interfacial folding of both non-polar and amphiphilic peptides, their insertion into membranes and association into higher-order structures. The simulations indicate that peptides fold non-sequentially, often through a series of amphiphilic intermediates. They further incorporate into the membrane in a preferred direction as folded monomers, and only then aggregate into dimers and, possibly, further into "dimers of dimers".
Advanced Numerical Techniques of Performance Evaluation. Volume 2
1990-06-01
multiprocessor environment. This factor is determined by the overhead of the primitives available in the system ( semaphore , monitor , or message... semaphore , monitor , or message passing primitives ) and U the programming ability of the user who implements the simulation. " t,: the sequential...Warp Operating System . i Pro" lftevcnth ACM Symposum on Operating Systems Princlplcs, pages 77 9:3, Auslin, TX, Nov wicr 1987. ACM. [121 D.R. Jefferson
2006-09-30
allocated to intangible assets. With Proctor & Gamble’s $53.5 billion acquisition of Gillette , $31.5 billion or 59% of the total purchase price was... outsourcing , alliances, joint ventures) • Compound Option (platform options) • Sequential Options (stage-gate development, R&D, phased...Comparisons • RO/KVA could enhance outsourcing comparisons between the Government’s Most Efficient Organization (MEO) and private-sector
Optical architecture design for detection of absorbers embedded in visceral fat.
Francis, Robert; Florence, James; MacFarlane, Duncan
2014-05-01
Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector.
Optical architecture design for detection of absorbers embedded in visceral fat
Francis, Robert; Florence, James; MacFarlane, Duncan
2014-01-01
Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector. PMID:24877008
A Sequential Monte Carlo Approach for Streamflow Forecasting
NASA Astrophysics Data System (ADS)
Hsu, K.; Sorooshian, S.
2008-12-01
As alternatives to traditional physically-based models, Artificial Neural Network (ANN) models offer some advantages with respect to the flexibility of not requiring the precise quantitative mechanism of the process and the ability to train themselves from the data directly. In this study, an ANN model was used to generate one-day-ahead streamflow forecasts from the precipitation input over a catchment. Meanwhile, the ANN model parameters were trained using a Sequential Monte Carlo (SMC) approach, namely Regularized Particle Filter (RPF). The SMC approaches are known for their capabilities in tracking the states and parameters of a nonlinear dynamic process based on the Baye's rule and the proposed effective sampling and resampling strategies. In this study, five years of daily rainfall and streamflow measurement were used for model training. Variable sample sizes of RPF, from 200 to 2000, were tested. The results show that, after 1000 RPF samples, the simulation statistics, in terms of correlation coefficient, root mean square error, and bias, were stabilized. It is also shown that the forecasted daily flows fit the observations very well, with the correlation coefficient of higher than 0.95. The results of RPF simulations were also compared with those from the popular back-propagation ANN training approach. The pros and cons of using SMC approach and the traditional back-propagation approach will be discussed.
Ng, Ding-Quan; Lin, Yi-Pin
2016-01-01
In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the “lead-free” system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg·L−1, persisted for as long as five months in the system. “Lead-free” brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg·L−1, but caused “blue water” problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments. PMID:26927154
Ng, Ding-Quan; Lin, Yi-Pin
2016-02-27
In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the "lead-free" system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg · L(-1), persisted for as long as five months in the system. "Lead-free" brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg · L(-1), but caused "blue water" problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments.