Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
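As a rough illustration of the group-sequential noninferiority machinery this abstract describes, the Python sketch below simulates a two-look design with a fixed margin, reduced to a single experimental-versus-control contrast. The margin, the Pocock-style critical value, and the per-look sample size are illustrative assumptions, not values from the paper.

```python
# Monte Carlo sketch of a two-look group-sequential noninferiority
# test with a fixed margin. All design constants are assumptions for
# illustration (Pocock bound for K=2 looks, one-sided alpha=0.025).
import numpy as np

rng = np.random.default_rng(1)

def gs_noninferiority(mu_e, mu_c, sigma=1.0, margin=0.3,
                      n_per_look=100, crit=2.178):
    """Return True if noninferiority is declared at either look."""
    xe = rng.normal(mu_e, sigma, 2 * n_per_look)
    xc = rng.normal(mu_c, sigma, 2 * n_per_look)
    for k in (1, 2):
        n = k * n_per_look
        se = sigma * np.sqrt(2.0 / n)
        z = (xe[:n].mean() - xc[:n].mean() + margin) / se
        if z > crit:          # early (or final) declaration of noninferiority
            return True
    return False

# Empirical power when the two arms are truly equal (i.e., noninferior):
power = np.mean([gs_noninferiority(0.0, 0.0) for _ in range(2000)])
print(f"empirical power at equal means: {power:.3f}")
```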
Sequential causal inference: Application to randomized trials of adaptive treatment strategies
Dawson, Ree; Lavori, Philip W.
2009-01-01
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick; Wendt, Fabian; Musial, Walter
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
Mixed methods designs: an innovative methodological approach for nursing research
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
The mixed method research designs (MM) combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to spread this approach in the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can be an added value to improve clinical practices as, through the integration of qualitative and quantitative methods, researchers can better assess complex phenomena typical of nursing.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who enjoy using the schematic approach. The proposed visual hardware behavioral language has the ability to graphically express design information using visual parallel models (blocks), visual sequential models (processes) and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify hardware concurrent and sequential functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) for the hardware designers. This helps hardware designers capture design ideas easily and explicitly.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
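The optimality idea above, choosing among candidate designs by how power behaves across a range of plausible effect sizes, can be sketched as follows. The candidate sample sizes, effect-size grid, and summary criteria are assumptions for illustration, not the authors' optimality criterion.

```python
# Score fixed-sample candidate designs by their power profile over a
# grid of plausible standardized effect sizes (one-sided alpha=0.025).
import numpy as np
from scipy.stats import norm

alpha = 0.025
deltas = np.linspace(0.2, 0.5, 7)        # assumed range of plausible effects

def power(n_per_arm, delta):
    se = np.sqrt(2.0 / n_per_arm)        # SE of the mean difference, sigma=1
    return 1.0 - norm.cdf(norm.ppf(1.0 - alpha) - delta / se)

for n in (150, 200, 250, 300):           # hypothetical candidate designs
    p = power(n, deltas)
    print(f"n/arm={n}: min power {p.min():.2f}, mean power {p.mean():.2f}")
```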
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
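The 'sequential decision bias' described above is easy to reproduce in a toy simulation: if a follow-up study is run only when the current estimate looks promising, the final fixed-effect pooled estimate is biased away from the true effect. All numbers below are illustrative assumptions.

```python
# Toy demonstration of sequential decision bias in a two-study
# fixed-effect meta-analysis (all parameter values are assumptions).
import numpy as np

rng = np.random.default_rng(7)
true_effect, se1, se2 = 0.1, 0.15, 0.15
final = []
for _ in range(20000):
    y1 = rng.normal(true_effect, se1)
    if y1 / se1 > 1.0:                   # run study 2 only if study 1 looks good
        y2 = rng.normal(true_effect, se2)
        w1, w2 = se1**-2, se2**-2
        final.append((w1 * y1 + w2 * y2) / (w1 + w2))
    else:
        final.append(y1)                 # stop after study 1
print(f"mean pooled estimate {np.mean(final):.3f} vs true {true_effect}")
```

The mean pooled estimate exceeds the true effect because the decision to continue is correlated with the interim estimate, which is exactly the mechanism the abstract identifies.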
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi
2016-01-01
In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS) considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features including part due date, material handling time, operation sequence, processing time, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved by the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, two approaches, sequential and concurrent, are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. As a consequence, computational results of this study indicate that the best solutions found by the GA are better than those found by B&B in much less time for both the sequential and concurrent approaches. Moreover, comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate that the OFV improvement is on average around 17% by GA and 14% by B&B.
Dose finding with the sequential parallel comparison design.
Wang, Jessie J; Ivanova, Anastasia
2014-01-01
The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.
Nonlinear interferometry approach to photonic sequential logic
NASA Astrophysics Data System (ADS)
Mabuchi, Hideo
2011-10-01
Motivated by rapidly advancing capabilities for extensive nanoscale patterning of optical materials, I propose an approach to implementing photonic sequential logic that exploits circuit-scale phase coherence for efficient realizations of fundamental components such as a NAND-gate-with-fanout and a bistable latch. Kerr-nonlinear optical resonators are utilized in combination with interference effects to drive the binary logic. Quantum-optical input-output models are characterized numerically using design parameters that yield attojoule-scale energy separation between the latch states.
ERIC Educational Resources Information Center
Ayalon, Michal; Watson, Anne; Lerman, Steve
2015-01-01
This study investigates students' ways of attending to linear sequential data in two tasks, and conjectures possible relationships between those ways and elements of the task design. Drawing on the substantial literature about such situations, we focus for this paper on linear rate of change, and on covariation and correspondence approaches to…
ERIC Educational Resources Information Center
Karademir, Yavuz; Demir, Selcuk Besir
2015-01-01
The aim of this study is to ascertain the problems social studies teachers face in the teaching of topics covered in the 8th grade TRHRK Course. The study was conducted in line with the explanatory sequential mixed methods design, one of the mixed research methods. The study involves three phases. In the first step, exploratory process…
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling techniques have gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.
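A generic sequential experimental design loop of the kind this work builds on can be sketched with an off-the-shelf Gaussian-process surrogate, adding each new training point where the predictive uncertainty is largest. The plain variance criterion here is a stand-in for the paper's distribution-adaptive criterion.

```python
# One-dimensional sequential design sketch: refit a GP surrogate and
# sample where its predictive standard deviation is largest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for an expensive model
X = np.array([[0.0], [1.0], [2.0]])            # small initial design
y = f(X).ravel()
cand = np.linspace(0.0, 2.0, 201).reshape(-1, 1)

gp = GaussianProcessRegressor()
for _ in range(10):                            # sequential infill iterations
    gp.fit(X, y)
    _, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(sd)].reshape(1, -1) # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())
print(f"final design size: {len(X)}")
```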
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
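The sequential OED/PE strategy, fitting two parameters at a time with the remaining parameters held at their current values and updated between experiments, can be sketched as below. The four-parameter model, the noise level, and the fixed sampling schedule standing in for each optimal design are assumptions, not the CTMI implementation.

```python
# Sketch of a sequential two-parameter estimation strategy with
# intermediate updates (model and 'designs' are illustrative stand-ins).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
true = dict(a=2.0, b=0.5, c=1.0, d=0.3)

def model(t, a, b, c, d):
    return a * np.exp(-b * t) + c * np.exp(-d * t)

def run_experiment(t):                       # simulated noisy measurements
    return model(t, **true) + rng.normal(0, 0.05, t.size)

est = dict(a=1.0, b=1.0, c=0.5, d=0.5)       # initial parameter estimates
for pair in [("a", "b"), ("c", "d")]:        # one experiment per parameter pair
    t = np.linspace(0.1, 10, 25)             # placeholder for the optimal design
    y = run_experiment(t)
    others = {k: v for k, v in est.items() if k not in pair}
    fit_fn = lambda t, p1, p2: model(t, **{pair[0]: p1, pair[1]: p2, **others})
    (est[pair[0]], est[pair[1]]), _ = curve_fit(
        fit_fn, t, y, p0=[est[pair[0]], est[pair[1]]])
    # intermediate update: the next pair is fitted with these new values
print(est)
```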
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
Integrated Controls-Structures Design Methodology for Flexible Spacecraft
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Joshi, S. M.; Price, D. B.
1995-01-01
This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform, and to a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to the conventional approach, wherein the structural design and control design are performed sequentially.
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a more optimum solution for the multidisciplinary design problems found in today's aerospace industries.
ERIC Educational Resources Information Center
Polat, Ahmet; Dogan, Soner; Demir, Selçuk Besir
2016-01-01
The present study was undertaken to investigate the quality of education based on the views of the students attending social studies education departments at the Faculties of Education and to determine the existing problems and present suggestions for their solutions. The study was conducted according to exploratory sequential mixed method. In…
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1989-01-01
Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve the coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a more optimal design from an interdisciplinary standpoint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Bayesian design of decision rules for failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.
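A minimal sequential rule in this spirit is Wald's SPRT, which accumulates a log-likelihood ratio over monitored residuals and stops at the first threshold crossing. The Gaussian failure/no-failure models and the error-rate thresholds below are illustrative assumptions, not the suboptimal Bayes rules designed in the paper.

```python
# Wald SPRT sketch for failure detection from residual samples:
# decide between 'no failure' (mean mu0) and 'failure' (mean mu1).
import numpy as np

rng = np.random.default_rng(5)
mu0, mu1, sigma = 0.0, 1.0, 1.0
A, B = np.log(99), np.log(1 / 99)        # roughly 1% error rates both ways

def sprt(samples):
    llr = 0.0
    for k, x in enumerate(samples, 1):
        # log-likelihood ratio increment for Gaussian observations
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= A:
            return "failure", k
        if llr <= B:
            return "no failure", k
    return "undecided", len(samples)

print(sprt(rng.normal(mu1, sigma, 200)))  # data generated from the failed mode
```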
Learning Sequential Composition Control.
Najafi, Esmaeil; Babuska, Robert; Lopes, Gabriel A D
2016-11-01
Sequential composition is an effective supervisory control method for addressing control problems in nonlinear dynamical systems. It executes a set of controllers sequentially to achieve a control specification that cannot be realized by a single controller. As these controllers are designed offline, sequential composition cannot address unmodeled situations that might occur during runtime. This paper proposes a learning approach to augment the standard sequential composition framework by using online learning to handle unforeseen situations. New controllers are acquired via learning and added to the existing supervisory control structure. In the proposed setting, learning experiments are restricted to take place within the domain of attraction (DOA) of the existing controllers. This guarantees that the learning process is safe (i.e., the closed loop system is always stable). In addition, the DOA of the new learned controller is approximated after each learning trial. This keeps the learning process short as learning is terminated as soon as the DOA of the learned controller is sufficiently large. The proposed approach has been implemented on two nonlinear systems: 1) a nonlinear mass-damper system and 2) an inverted pendulum. The results show that in both cases a new controller can be rapidly learned and added to the supervisory control structure.
A Proposed Conceptual Framework for Curriculum Design in Physical Fitness.
ERIC Educational Resources Information Center
Miller, Peter V.; Beauchamp, Larry S.
A physical fitness curriculum, designed to provide cumulative benefits in a sequential pattern, is based upon a framework of a conceptual structure. The curriculum's ultimate goal is the achievement of greater physiological efficiency through a holistic approach that would strengthen circulatory-respiratory, mechanical, and neuro-muscular…
Placebo non-response measure in sequential parallel comparison design studies.
Rybin, Denis; Doros, Gheorghe; Pencina, Michael J; Fava, Maurizio
2015-07-10
The Sequential Parallel Comparison Design (SPCD) is one of the novel approaches addressing placebo response. The analysis of SPCD data typically classifies subjects as 'placebo responders' or 'placebo non-responders'. Most current methods employed for analysis of SPCD data utilize only a part of the data collected during the trial. A repeated measures model was proposed for analysis of continuous outcomes that permitted the inclusion of information from all subjects into the treatment effect estimation. We describe here a new approach using a weighted repeated measures model that further improves the utilization of data collected during the trial, allowing the incorporation of information that is relevant to the placebo response, and dealing with the problem of possible misclassification of subjects. Our simulations show that when compared to the unweighted repeated measures model method, our approach performs as well or, under certain conditions, better, in preserving the type I error, achieving adequate power and minimizing the mean squared error. Copyright © 2015 John Wiley & Sons, Ltd.
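For orientation, the basic SPCD effect estimate pools a stage-1 contrast over all subjects with a stage-2 contrast among re-randomized placebo non-responders, as sketched below. The pooling weight and the simulated response rates are assumptions, and the paper's weighted repeated measures model is considerably richer than this binary-outcome toy.

```python
# Toy SPCD pooled effect: weighted combination of stage-1 and stage-2
# response-rate differences (all rates and the weight w are assumptions).
import numpy as np

rng = np.random.default_rng(11)
n, w = 200, 0.6                                  # per-arm stage-1 size, weight
drug1 = rng.binomial(1, 0.45, n)                 # stage-1 drug responses
plac1 = rng.binomial(1, 0.30, n)                 # stage-1 placebo responses
m = int(n - plac1.sum())                         # placebo non-responders
drug2 = rng.binomial(1, 0.35, m // 2)            # stage-2 re-randomized to drug
plac2 = rng.binomial(1, 0.20, m - m // 2)        # stage-2 stay on placebo

d1 = drug1.mean() - plac1.mean()
d2 = drug2.mean() - plac2.mean()
print(f"pooled SPCD effect: {w * d1 + (1 - w) * d2:.3f}")
```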
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
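The two-stage drop-the-losers mechanics described above, selecting the best-performing arm at the interim and carrying it forward with control, can be sketched as follows. Arm means and stage sizes are illustrative, and the naive stage-2 contrast shown ignores the selection adjustment a real analysis requires.

```python
# Toy two-stage drop-the-losers trial: several experimental arms plus
# control at stage 1, best arm and control continue to stage 2.
import numpy as np

rng = np.random.default_rng(13)
mus = [0.0, 0.1, 0.2, 0.3]                       # control first, then arms
n1, n2 = 50, 100                                 # per-arm stage sizes

stage1 = [rng.normal(m, 1.0, n1) for m in mus]
best = 1 + int(np.argmax([x.mean() for x in stage1[1:]]))  # selected arm
ctrl2 = rng.normal(mus[0], 1.0, n2)
best2 = rng.normal(mus[best], 1.0, n2)

diff = best2.mean() - ctrl2.mean()               # naive stage-2-only contrast
print(f"selected arm {best}, stage-2 estimate {diff:.3f}")
```

Note the total sample size here is fixed in advance, which is the funding advantage the abstract emphasizes over group-sequential alternatives.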
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed methods designs (exploratory sequential, explanatory sequential, and convergent) and through four advanced frameworks (multistage, intervention, case study, and participatory). Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.
ERIC Educational Resources Information Center
Soltero-González, Lucinda; Sparrow, Wendy; Butvilofsky, Sandra; Escamilla, Kathy; Hopewell, Susan
2016-01-01
This longitudinal study examined whether the implementation of a Spanish-English paired literacy approach provides an academic advantage to emerging bilingual students over a sequential literacy approach. The study employed a quasi-experimental design. It compared the biliteracy outcomes of third-grade emerging bilingual learners participating in…
Peterson, Kathryn M; Piazza, Cathleen C; Volkert, Valerie M
2016-09-01
Treatments of pediatric feeding disorders based on applied behavior analysis (ABA) have the most empirical support in the research literature (Volkert & Piazza, 2012); however, professionals often recommend, and caregivers often use, treatments that have limited empirical support. In the current investigation, we compared a modified sequential oral sensory approach (M-SOS; Benson, Parke, Gannon, & Muñoz, 2013) to an ABA approach for the treatment of the food selectivity of 6 children with autism. We randomly assigned 3 children to ABA and 3 children to M-SOS and compared the effects of treatment in a multiple baseline design across novel, healthy target foods. We used a multielement design to assess treatment generalization. Consumption of target foods increased for children who received ABA, but not for children who received M-SOS. We subsequently implemented ABA with the children for whom M-SOS was not effective and observed a potential treatment generalization effect during ABA when M-SOS preceded ABA. © 2016 Society for the Experimental Analysis of Behavior.
ERIC Educational Resources Information Center
Phan, Huy P.; Ngu, Bing H.
2017-01-01
In social sciences, the use of stringent methodological approaches is gaining increasing emphasis. Researchers have recognized the limitations of cross-sectional, non-manipulative data in the study of causality. True experimental designs, in contrast, are preferred as they represent rigorous standards for achieving causal flows between variables.…
Makrakis, Vassilios; Kostoulas-Makrakis, Nelly
2016-02-01
Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development. Copyright © 2015 Elsevier Ltd. All rights reserved.
Teachers' Adoptation Level of Student Centered Education Approach
ERIC Educational Resources Information Center
Arseven, Zeynep; Sahin, Seyma; Kiliç, Abdurrahman
2016-01-01
The aim of this study is to identify how far the student centered education approach is applied in the primary, middle and high schools in Düzce. The explanatory design, which is one type of mixed methods research, and "sequential mixed methods sampling" were used in the study. 685 teachers constitute the research sample of the quantitative…
Chen, Chao-Jung; Li, Fu-An; Her, Guor-Rong
2008-05-01
A multiplexed CE-MS interface using four low-flow sheath liquid ESI sprayers has been developed. Because of the limited space between the low-flow sprayers and the entrance aperture of the ESI source, multichannel analysis is difficult using conventional rotating plate approaches. Instead, a multiplexed low-flow system was achieved by applying an ESI potential sequentially to the four low-flow sprayers, resulting in only one sprayer being sprayed at any given time. The synchronization of the scan event and the voltage relays was accomplished by using the data acquisition signal from the IT mass spectrometer. This synchronization resulted in the ESI voltage being sequentially applied to each of the four sprayers according to the corresponding scan event. With this design, a four-fold increase in analytical throughput was achieved. Because of the use of low-flow interfaces, this multiplexed system has superior sensitivity than a rotating plate design using conventional sheath liquid interfaces. The multiplexed design presented has the potential to be applied to other low-flow multiplexed systems, such as multiplexed capillary LC and multiplexed CEC.
James, Erica; Freund, Megan; Booth, Angela; Duncan, Mitch J; Johnson, Natalie; Short, Camille E; Wolfenden, Luke; Stacey, Fiona G; Kay-Lambkin, Frances; Vandelanotte, Corneel
2016-08-01
Growing evidence points to the benefits of addressing multiple health behaviors rather than single behaviors. This review evaluates the relative effectiveness of simultaneous and sequentially delivered multiple health behavior change (MHBC) interventions. Secondary aims were to identify: a) the most effective spacing of sequentially delivered components; b) differences in efficacy of MHBC interventions for adoption/cessation behaviors and lifestyle/addictive behaviors; and c) differences in trial retention between simultaneously and sequentially delivered interventions. MHBC intervention trials published up to October 2015 were identified through a systematic search. Eligible trials were randomised controlled trials that directly compared simultaneous and sequential delivery of a MHBC intervention. A narrative synthesis was undertaken. Six trials met the inclusion criteria and across these trials the behaviors targeted were smoking, diet, physical activity, and alcohol consumption. Three trials reported a difference in intervention effect between a sequential and simultaneous approach in at least one behavioral outcome. Of these, two trials favoured a sequential approach on smoking. One trial favoured a simultaneous approach on fat intake. There was no difference in retention between sequential and simultaneous approaches. There is limited evidence regarding the relative effectiveness of sequential and simultaneous approaches. Given only three of the six trials observed a difference in intervention effectiveness for one health behavior outcome, and the relatively consistent finding that the sequential and simultaneous approaches were more effective than a usual/minimal care control condition, it appears that both approaches should be considered equally efficacious. PROSPERO registration number: CRD42015027876. Copyright © 2016 Elsevier Inc. All rights reserved.
Sequential programmable self-assembly: Role of cooperative interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonathan D. Halverson; Tkachenko, Alexei V.
2016-03-04
Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenient platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.
Shin, Seung-Hwa; Lee, Jangwook; Lim, Kwang Suk; Rhim, Taiyoun; Lee, Sang Kyung; Kim, Yong-Hee; Lee, Kuen Yong
2013-02-28
Ischemic disease is associated with high mortality and morbidity rates, and therapeutic angiogenesis via systemic or local delivery of protein drugs is one potential approach to treat the disease. In this study, we hypothesized that combined delivery of TAT-HSP27 (HSP27 fused with transcriptional activator) and VEGF could enhance the therapeutic efficacy in an ischemic mouse model, and that sequential release could be critical in therapeutic angiogenesis. Alginate hydrogels containing TAT-HSP27 as an anti-apoptotic agent were prepared, and porous PLGA microspheres loaded with VEGF as an angiogenic agent were incorporated into the hydrogels to prepare microsphere/hydrogel hybrid delivery systems. Sequential in vitro release of TAT-HSP27 and VEGF was achieved by the hybrid systems. TAT-HSP27 was depleted from alginate gels in 7 days, while VEGF was continually released for 28 days. The release rate of VEGF was attenuated by varying the porous structures of PLGA microspheres. Sequential delivery of TAT-HSP27 and VEGF was critical to protect against muscle degeneration and fibrosis, as well as to promote new blood vessel formation in the ischemic site of a mouse model. This approach to controlling the sequential release behaviors of multiple drugs could be useful in the design of novel drug delivery systems for therapeutic angiogenesis. Copyright © 2012 Elsevier B.V. All rights reserved.
Large-area copper indium diselenide (CIS) process, control and manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillespie, T.J.; Lanning, B.R.; Marshall, C.H.
1997-12-31
Lockheed Martin Astronautics (LMA) has developed a large-area (30x30cm) sequential CIS manufacturing approach amenable to low-cost photovoltaics (PV) production. A prototype CIS manufacturing system has been designed and built with compositional uniformity (Cu/In ratio) verified within ±4 atomic percent over the 30x30cm area. CIS device efficiencies have been measured by the National Renewable Energy Laboratory (NREL) at 7% on a flexible non-sodium-containing substrate and 10% on a soda-lime-silica (SLS) glass substrate. Critical elements of the manufacturing capability include the CIS sequential process selection, uniform large-area material deposition, and in-situ process control. Details of the process and large-area manufacturing approach are discussed and results presented.
ERIC Educational Resources Information Center
Wauters, E.; Mathijs, E.
2013-01-01
Purpose: The aim of this article is to present and apply a method to investigate farmers' socio-psychological determinants of conservation practice adoption, as an aid in extension, policy and conservation practice design. Design/methodology/approach: We use a sequential mixed method, starting with qualitative semi-structured interviews (n = 24),…
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Joshi, Suresh M.; Armstrong, Ernest S.
1993-01-01
An approach for an optimization-based integrated controls-structures design is presented for a class of flexible spacecraft that require fine attitude pointing and vibration suppression. The integrated design problem is posed in the form of simultaneous optimization of both structural and control design variables. The approach is demonstrated by application to the integrated design of a generic space platform and to a model of a ground-based flexible structure. The numerical results obtained indicate that the integrated design approach can yield spacecraft designs that have substantially superior performance over a conventional design wherein the structural and control designs are performed sequentially. For example, a 40-percent reduction in the pointing error is observed along with a slight reduction in mass, or an almost twofold increase in the controlled performance is indicated with more than a 5-percent reduction in the overall mass of the spacecraft (a reduction of hundreds of kilograms).
Group sequential designs for stepped-wedge cluster randomised trials
Grayling, Michael J; Wason, James MS; Mander, Adrian P
2017-01-01
Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
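The error spending approach referenced in this abstract can be illustrated for two equally spaced analyses: spend part of the one-sided alpha at the interim, then solve for the final boundary so the cumulative rejection probability equals alpha. This is standard group-sequential machinery with an O'Brien-Fleming-type spending function assumed, not the stepped-wedge-specific derivation of the paper.

```python
# Two-look efficacy boundaries from an O'Brien-Fleming-type error
# spending function (one-sided alpha = 0.025, equal information).
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

alpha = 0.025
t1, t2 = 0.5, 1.0                                # information fractions

def spend(t):                                    # cumulative alpha spent by time t
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

c1 = norm.ppf(1.0 - spend(t1))                   # interim efficacy boundary

rho = np.sqrt(t1 / t2)                           # correlation of the two Z-statistics
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def excess(c2):
    p2 = norm.cdf(c1) - joint.cdf([c1, c2])      # P(Z1 < c1, Z2 >= c2)
    return spend(t1) + p2 - alpha                # cumulative spend minus target

c2 = brentq(excess, 1.5, 4.0)
print(f"efficacy boundaries: c1 = {c1:.3f}, c2 = {c2:.3f}")
```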
Abraham, Joanna; Kannampallil, Thomas; Brenner, Corinne; Lopez, Karen D; Almoosa, Khalid F; Patel, Bela; Patel, Vimla L
2016-02-01
Effective communication during nurse handoffs is instrumental in ensuring safe and quality patient care. Much of the prior research on nurse handoffs has utilized retrospective methods such as interviews, surveys and questionnaires. While such methods are extremely useful, an in-depth understanding of the structure and content of conversations, and of the inherent relationships within the content, is paramount to designing effective nurse handoff interventions. In this paper, we present a methodological framework, Sequential Conversational Analysis (SCA), a mixed-method approach that integrates qualitative conversational analysis with quantitative sequential pattern analysis. We describe the SCA approach and provide a detailed example as a proof of concept of its use for the analysis of nurse handoff communication in a medical intensive care unit. This novel approach allows us to characterize the conversational structure, clinical content, disruptions in the conversation, and the inherently phasic nature of nurse handoff communication. The characterization of communication patterns highlights the relationships underlying the verbal content of nurse handoffs, with specific emphasis on the interactive nature of conversation, the relevance of role-based (incoming, outgoing) communication requirements, the clinical content focus on critical patient-related events, and the discussion of pending patient management tasks. We also discuss the applicability of the SCA approach as a method for providing in-depth understanding of the dynamics of communication in other settings and domains.
Re-animation of muscle flaps for improved function in dynamic myoplasty.
Stremel, R W; Zonnevijlle, E D
2001-01-01
The authors report on a series of experiments designed to produce a skeletal muscle contraction functional for dynamic myoplasties. Conventional stimulation techniques recruit all or most of the muscle fibers simultaneously and with maximal strength. This approach has limitations in free dynamic muscle flap transfers that require the muscle to contract immediately after transfer and before re-innervation. Sequential stimulation of segments of the transferred muscle provides a means of producing non-fatiguing contractions of the muscle in the presence or absence of innervation. The muscle studied was the canine gracilis, and all experiments were acute studies in anesthetized animals. Comparison of conventional and sequential segmental neuromuscular stimulation revealed an increase in muscle fatigue resistance and muscle blood flow with the new approach. This approach offers the opportunity to develop physiologically animated tissue and to broaden the abilities of reconstructive surgeons in the repair of functional defects.
Bayesian randomized clinical trials: From fixed to adaptive design.
Yin, Guosheng; Lam, Chi Kin; Shi, Haolun
2017-08-01
Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have dominated phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complement to, rather than a competitor of, the frequentist methods. For the fixed Bayesian design, hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which lead to a more comprehensive utilization of information from both historical and longitudinal data. Moving from fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies.
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695
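The bi-level formulations above ultimately rest on a standard MIP device: a binary knockout variable that zeroes a flux through big-M bounds. The toy below (PuLP) shows only that device on a three-reaction chain with a hypothetical regulatory constraint that makes a knockout profitable; it is not SimOptStrain or BiMOMA, which embed an inner FBA/MOMA problem via duality.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

rxns = ["uptake", "to_product", "to_byproduct"]
UB = {"uptake": 10, "to_product": 10, "to_byproduct": 10}

m = LpProblem("toy_knockout", LpMaximize)
v = {r: LpVariable(f"v_{r}", 0, UB[r]) for r in rxns}        # fluxes
y = {r: LpVariable(f"y_{r}", cat=LpBinary) for r in rxns}    # 1 = knocked out

m += v["to_product"]                                     # objective: product flux
m += v["uptake"] == v["to_product"] + v["to_byproduct"]  # steady-state balance
# hypothetical regulatory constraint that makes the byproduct branch compete
m += v["to_byproduct"] >= 0.8 * v["uptake"] - UB["to_byproduct"] * y["to_byproduct"]
for r in rxns:
    m += v[r] <= UB[r] * (1 - y[r])                      # big-M knockout switch
m += lpSum(y.values()) <= 1                              # at most one deletion

m.solve(PULP_CBC_CMD(msg=False))
print({r: y[r].value() for r in rxns}, v["to_product"].value())
# solver knocks out the byproduct branch, routing all uptake to the product
```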
ERIC Educational Resources Information Center
Vaidyanathan, V. V.; Varanasi, M. R.; Kougianos, E.; Wang, Shuping; Raman, H.
2009-01-01
This paper describes radio frequency identification (RFID) projects, designed and implemented by students in the College of Engineering at the University of North Texas, as part of their senior-design project requirement. The paper also describes an RFID-based project implemented at Rice Middle School in Plano, TX, which went on to win multiple…
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Among the alpha spending functions compared, the O'Brien-Fleming function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a futility stopping rule to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
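A hedged sketch of the core idea: monitor a Bayesian posterior probability at each interim look, but pick its critical values by simulation under the null so that the cumulative rejection rate tracks an alpha spending function (here a Pocock-type function). The normal model, the vague prior, and all constants are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
K, n_look, alpha, nsim = 4, 25, 0.025, 100_000
t = np.arange(1, K + 1) / K
spend = alpha * np.log(1 + (np.e - 1) * t)     # Pocock-type spending function
inc = np.diff(spend, prepend=0.0)

prior_var = 100.0                              # vague N(0, 100) prior; sigma = 1
def post_prob_positive(xbar, n):
    pv = 1.0 / (1.0 / prior_var + n)           # posterior variance of the mean
    pm = pv * n * xbar                         # posterior mean
    return 1 - norm.cdf(0.0, loc=pm, scale=np.sqrt(pv))

x = rng.normal(0.0, 1.0, (nsim, K * n_look))   # trials simulated under H0
alive, cuts = np.ones(nsim, dtype=bool), []
for k in range(K):
    n = (k + 1) * n_look
    pp = post_prob_positive(x[:, :n].mean(axis=1), n)
    surv = np.where(alive)[0]
    c = np.quantile(pp[surv], 1 - inc[k] * nsim / surv.size)
    cuts.append(round(float(c), 4))
    alive[surv[pp[surv] >= c]] = False
print(cuts)   # posterior-probability critical values that spend alpha as planned
```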
Mixed-Methods Research Methodologies
ERIC Educational Resources Information Center
Terrell, Steven R.
2012-01-01
Mixed-Method studies have emerged from the paradigm wars between qualitative and quantitative research approaches to become a widely used mode of inquiry. Depending on choices made across four dimensions, mixed-methods can provide an investigator with many design choices which involve a range of sequential and concurrent strategies. Defining…
Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design
ERIC Educational Resources Information Center
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff
2016-01-01
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…
The Role of Principals in Professional Learning Communities
ERIC Educational Resources Information Center
Buttram, Joan L.; Farley-Ripple, Elizabeth N.
2016-01-01
The purpose of this article is to identify how principals shape the adoption and implementation of professional learning communities. The study employed a sequential mixed-methods approach in which interviews, observations, and document analysis informed survey design. Teachers were surveyed in four elementary schools about the practices and…
Readiness and Reading for the Retarded Child.
ERIC Educational Resources Information Center
Bernstein, Bebe
This teacher's book and manual, designed to accompany two workbooks, presents a functional approach to readiness and reading for young educable retarded children. The workbooks themselves offer preparatory activities for children at the readiness level and sequential activities and materials for those at the beginning reading stage. The teacher's…
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than is required for their sequential counterparts. It creates a new approach of evolving a feasible problem solution in parallel program form and then serializing it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Health risk behaviours amongst school adolescents: protocol for a mixed methods study.
El Achhab, Youness; El Ammari, Abdelghaffar; El Kazdouh, Hicham; Najdi, Adil; Berraho, Mohamed; Tachfouti, Nabil; Lamri, Driss; El Fakir, Samira; Nejjari, Chakib
2016-11-29
Determining the risky behaviours of adolescents provides valuable information for designing appropriate intervention programmes to advance adolescents' health. However, researchers have not fully addressed these behaviours in a comprehensive approach. We report the protocol of a mixed methods study designed to investigate the health risk behaviours of Moroccan adolescents, with the goal of identifying suitable strategies to address their health concerns. We used a sequential two-phase explanatory mixed methods study design. The approach begins with the collection of quantitative data, followed by the collection of qualitative data to explain and enrich the quantitative findings. In the first phase, the global school-based student health survey (GSHS) was administered to 800 students who were between 14 and 19 years of age. The second phase engaged adolescents, parents and teachers in focus groups and assessed education documents to explore the level of coverage of health education in the curriculum taught in middle school. To obtain opinions about strategies to reduce Moroccan adolescents' health risk behaviours, a nominal group technique will be used. The findings of this mixed methods sequential explanatory study provide insights into the risk behaviours that need to be considered if intervention programmes and preventive strategies are to be designed to promote adolescents' health in Moroccan schools.
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weights of state quantities so that the matrix dimension remains constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed method is demonstrated and validated on a modified IEEE 14-bus system.
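The numerical core here is weighted least squares state estimation, x_hat = argmin (z - Hx)' W (z - Hx). The dimension-preserving trick described in the abstract can be mimicked by assigning a negligible weight to a measurement instead of deleting its row, as in this linear toy (all numbers made up):

```python
import numpy as np

H = np.array([[1.0,  0.0],
              [1.0, -1.0],
              [0.0,  1.0],
              [2.0,  1.0]])        # measurement Jacobian (4 meas., 2 states)
z = np.array([1.02, 0.48, 0.53, 2.55])
w = np.array([100.0, 100.0, 100.0, 100.0])   # precisions (1 / sigma^2)

def wls(H, z, w):
    W = np.diag(w)
    G = H.T @ W @ H               # gain matrix; dimension set by the states
    return np.linalg.solve(G, H.T @ W @ z)

print("all measurements :", wls(H, z, w))
w_mod = w.copy(); w_mod[3] = 1e-8  # "remove" meas. 4 by weight modification
print("weight-modified  :", wls(H, z, w_mod))  # same shapes, no re-dimensioning
```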
Robust parameter design for automatically controlled systems and nanostructure synthesis
NASA Astrophysics Data System (ADS)
Dasgupta, Tirthankar
2007-12-01
This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, in large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
GOST: A generic ordinal sequential trial design for a treatment trial in an emerging pandemic.
Whitehead, John; Horby, Peter
2017-03-01
Conducting clinical trials to assess experimental treatments for potentially pandemic infectious diseases is challenging. Since many outbreaks of infectious diseases last only six to eight weeks, there is a need for trial designs that can be implemented rapidly in the face of uncertainty. Outbreaks are sudden and unpredictable and so it is essential that as much planning as possible takes place in advance. Statistical aspects of such trial designs should be evaluated and discussed in readiness for implementation. This paper proposes a generic ordinal sequential trial design (GOST) for a randomised clinical trial comparing an experimental treatment for an emerging infectious disease with standard care. The design is intended as an off-the-shelf, ready-to-use robust and flexible option. The primary endpoint is a categorisation of patient outcome according to an ordinal scale. A sequential approach is adopted, stopping as soon as it is clear that the experimental treatment has an advantage or that sufficient advantage is unlikely to be detected. The properties of the design are evaluated using large-sample theory and verified for moderate sized samples using simulation. The trial is powered to detect a generic clinically relevant difference: namely an odds ratio of 2 for better rather than worse outcomes. Total sample sizes (across both treatments) of between 150 and 300 patients prove to be adequate in many cases, but the precise value depends on both the magnitude of the treatment advantage and the nature of the ordinal scale. An advantage of the approach is that any erroneous assumptions made at the design stage about the proportion of patients falling into each outcome category have little effect on the error probabilities of the study, although they can lead to inaccurate forecasts of sample size. It is important and feasible to pre-determine many of the statistical aspects of an efficient trial design in advance of a disease outbreak. The design can then be tailored to the specific disease under study once its nature is better understood.
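A quick way to see why 150-300 patients is the right ballpark for detecting an odds ratio of 2 on an ordinal scale is the standard fixed-sample approximation for a proportional-odds comparison (Whitehead-style); a sequential design such as GOST then needs somewhat more at the design stage. The category probabilities below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def total_n(p_bar, odds_ratio=2.0, alpha=0.05, power=0.9):
    """Approximate total sample size for a proportional-odds comparison."""
    theta = np.log(odds_ratio)
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 6 * (za + zb) ** 2 / (theta ** 2 * (1 - np.sum(np.array(p_bar) ** 3)))

print(total_n([0.25, 0.25, 0.25, 0.25]))  # ~140 for a uniform 4-category scale
print(total_n([0.6, 0.2, 0.1, 0.1]))      # coarser scales need more patients
```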
Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths.
Liu, Zhicheng; Wang, Yang; Dontcheva, Mira; Hoffman, Matthew; Walker, Seth; Wilson, Alan
2017-01-01
Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.
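The first pipeline stage (pattern mining) can be sketched with a brute-force support counter over subsequence patterns, plus a maximality filter that drops any frequent pattern contained in a longer frequent one. A real system would use a proper sequential-pattern miner; the event names below are invented.

```python
from itertools import product

streams = [("home", "search", "item", "cart", "buy"),
           ("home", "item", "cart", "buy"),
           ("home", "search", "item", "exit"),
           ("home", "search", "exit")]

def occurs(pattern, seq):
    """True if pattern appears as an (order-preserving) subsequence of seq."""
    it = iter(seq)
    return all(event in it for event in pattern)

def maximal_frequent(streams, min_support=2, max_len=3):
    events = sorted({e for s in streams for e in s})
    freq = {}
    for length in range(2, max_len + 1):          # brute-force candidates
        for pat in product(events, repeat=length):
            support = sum(occurs(pat, s) for s in streams)
            if support >= min_support:
                freq[pat] = support
    # maximality: drop any pattern contained in a longer frequent pattern
    return {p: s for p, s in freq.items()
            if not any(len(q) > len(p) and occurs(p, q) for q in freq)}

for pattern, support in sorted(maximal_frequent(streams).items()):
    print(pattern, support)
```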
Sequential Processing Deficits in Schizophrenia: Relationship to Neuropsychology and Genetics
Hill, S. Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E.; Hochberger, William C.; Bishop, Jeffrey R.
2014-01-01
Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the Catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries. PMID:24119464
The Equity Consequences of School-Based Management
ERIC Educational Resources Information Center
Nir, Adam E.; Miran, Meir
2006-01-01
Purpose: The purpose of this paper is to examine the extent to which the introduction of school-based management (SBM) affects schools' incomes and educational equity. Design/methodology/approach: An analysis of financial reports from 31 SBM schools over a period of four sequential years reveals that the overall inequity among schools has…
Unlocking Hospitality Managers Career Transitions through Applying Schein's Career Anchors Theory
ERIC Educational Resources Information Center
McGuire, David; Polla, Giovana; Heidl, Britta
2017-01-01
Purpose: This paper seeks to unlock the career transitions of hospitality managers through applying Schein's career anchors theory. It seeks to understand how Schein's Career Anchors help explain the career transitions of managers in the Scottish hospitality industry. Design/methodology/approach: The paper adopts a non-sequential multi-method…
ERIC Educational Resources Information Center
Lester, Stan
2018-01-01
Purpose: The purpose of this paper is to review three international frameworks, including the International Standard Classification of Education (ISCED), in relation to one country's higher professional and vocational education system. Design/methodology/approach: The frameworks were examined in the context of English higher work-related…
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of the sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described that can be used to design control inputs which maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state variable models from the flight data, and consequently computing related stability and control design models.
Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations
ERIC Educational Resources Information Center
Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad
2016-01-01
In a sequential OSCE, which has been suggested as a way to reduce testing costs, candidates take a short screening test, and those who fail are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…
Koopmeiners, Joseph S.; Feng, Ziding
2015-01-01
Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curve. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
NASA Technical Reports Server (NTRS)
Sakata, I. F.; Davis, G. W.
1975-01-01
The structural approach best suited for the design of a Mach 2.7 arrow-wing supersonic cruise aircraft was investigated. Results, procedures, and principal justification of results are presented. Detailed substantiation data are given. In general, each major analysis is presented sequentially in separate sections to provide continuity in the flow of the design concepts analysis effort. In addition to the design concepts evaluation and the detailed engineering design analyses, supporting tasks encompassing (1) control system development, (2) the propulsion-airframe integration study, and (3) the advanced technology assessment are presented.
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371
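A simulation in the spirit of the authors' demonstration: a two-look group sequential comparison with a Pocock-type constant boundary versus a fixed design, tracking the number of animals actually used. Effect size d = 1 and n = 18 per group follow the article; the two-look layout and boundary value are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
d, n_group, nsim = 1.0, 18, 20_000
n1 = n_group // 2              # half the animals analysed at the interim look
crit = 2.18                    # two-look Pocock constant boundary, alpha = 0.05

used, reject = np.empty(nsim), 0
for i in range(nsim):
    a, b = rng.normal(d, 1, n_group), rng.normal(0, 1, n_group)
    z1 = (a[:n1].mean() - b[:n1].mean()) / np.sqrt(2 / n1)
    if abs(z1) >= crit:        # stop early for efficacy
        used[i], reject = 2 * n1, reject + 1
    else:
        z2 = (a.mean() - b.mean()) / np.sqrt(2 / n_group)
        used[i] = 2 * n_group
        reject += abs(z2) >= crit

print("power ~", reject / nsim)
print("mean animals used:", used.mean(), "of", 2 * n_group)  # roughly 75-80%
```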
2017-01-01
Objective: Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research, concerning representative task design and contextual (non-kinematic) information, suggest this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods: In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a non-sequential (non-sequential group) random order. Results: In the video-based retention test, the sequential group was significantly more accurate in its anticipatory judgments when the retention condition replicated the sequential structure, compared to the non-sequential group. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group than in the non-sequential group. Conclusion: Findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263
Devaluation and sequential decisions: linking goal-directed and model-based behavior
Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian
2014-01-01
In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, the different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of the construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans. PMID:25136310
Minimising back reflections from the common path objective in a fundus camera
NASA Astrophysics Data System (ADS)
Swat, A.
2016-11-01
Eliminating back reflections is critical in the design of a fundus camera with an internal illuminating system. As there is very little light reflected from the retina, even excellent antireflective coatings do not provide sufficient suppression of ghost reflections; therefore, the number of surfaces in the optics common to the illuminating and imaging paths must be minimised. Typically a single aspheric objective is used. In this paper an alternative approach, an objective with all spherical surfaces, is presented. As more surfaces are required, a more sophisticated method is needed to get rid of back reflections. Typically, back-reflection analysis comprises treating successive objective surfaces as mirrors and tracing the reflections from the objective surfaces back through the imaging path. This approach can be applied in both sequential and non-sequential ray tracing. It is good enough for a system check but not well suited to the early stages of optimisation in the optical system design phase. Standard ghost-control merit function operands are also available for sequential ray tracing, for example in the Zemax system, but these do not allow a back ray-trace along an alternative optical path (illumination vs. imaging). What is proposed in this paper is a complete method for incorporating ghost-reflected energy into the ray-tracing merit function in sequential mode, which is more efficient in the optimisation process. Although developed for the specific case of a fundus camera, the method might be utilised in a wider range of applications where ghost control is critical.
Automated ILA design for synchronous sequential circuits
NASA Technical Reports Server (NTRS)
Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.
1991-01-01
An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. This technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1 micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W.; Tremlett, Helen
2017-01-01
In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models (MSCMs) are frequently used to deal with such confounding. To avoid some of the problems of fitting MSCMs, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as MSCM in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008). PMID:27659168
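Schematically, the sequential Cox ("mini-trials") device being assessed compares, at each interval, patients just starting treatment with those still untreated, then fits one Cox model stratified by mini-trial to the stacked data. The lifelines-based sketch below uses invented column names and simulated data; the real construction also lets a patient contribute to several mini-trials and uses a robust variance.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "mini_trial": rng.integers(1, 5, n),    # which mini-trial the row enters
    "new_user":   rng.integers(0, 2, n),    # started treatment at trial entry?
    "baseline_x": rng.normal(size=n),       # confounder measured at entry
})
# survival times with a true hazard ratio of exp(0.5) for new users
df["time"] = rng.exponential(np.exp(-0.5 * df["new_user"]), n)
df["event"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        strata=["mini_trial"])              # separate baseline hazard per trial
cph.print_summary()                         # hazard ratio for new_user
```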
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification system consists of designing a robust residual generation process and a high-performance decision-making process. The designs of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high-performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
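The generalized parity space mentioned here can be made concrete in a few lines: with measurements y = Cx + fault, any matrix V whose rows span the left null space of C yields residuals r = Vy that vanish for every fault-free state, whatever x is. Toy numbers follow:

```python
import numpy as np
from scipy.linalg import null_space

C = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])            # 4 sensors measuring 2 states
V = null_space(C.T).T                  # rows span the left null space of C

x = np.array([0.7, -0.2])              # true (unknown) state
y_ok = C @ x
y_fault = y_ok + np.array([0, 0, 0.5, 0])   # bias fault on sensor 3

print(np.round(V @ y_ok, 6))           # ~0: redundancy relations hold
print(np.round(V @ y_fault, 6))        # nonzero: fault detected
```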
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive material detection. Compared with commonly adopted detection methods built on classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analysing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were generated with an event-sequence generator based on Monte Carlo sampling, in order to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
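A simplified stand-in for the sequential processor studied here: counts arrive bin by bin, and the posterior probability of "source present" (rate b+s) versus "background only" (rate b) is updated recursively until it clears a decision threshold. The rates, prior, and thresholds are assumptions for illustration, not the paper's LaBr3(Ce) model.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)
b, s = 5.0, 2.0                        # background / source rates per bin
prior = 0.5
counts = rng.poisson(b + s, size=50)   # simulated truth: source present

post = prior
for k, n in enumerate(counts, start=1):
    l1 = poisson.pmf(n, b + s)         # likelihood under "source present"
    l0 = poisson.pmf(n, b)             # likelihood under "background only"
    post = post * l1 / (post * l1 + (1 - post) * l0)  # recursive Bayes update
    if post > 0.99 or post < 0.01:     # sequential decision thresholds
        break

print(f"decided after {k} bins, P(source | data) = {post:.3f}")
```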
Exploring Group Cohesion in a Higher Education Field Experience
ERIC Educational Resources Information Center
Malcarne, Brian Keith
2012-01-01
The purpose of this study was to gain understanding into the experience of group cohesion for university students participating in an academic field experience. A mixed methods approach was used following a two-phase, sequential research design to help provide a more complete explanation of how group cohesion was impacted by the field experience.…
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
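For a two-arm mean comparison, the "randomization rate that achieves minimum variance for the test statistic" is the classical Neyman allocation, proportional to the arm standard deviations; updating it from accruing data gives the adaptive flavor. A sketch under those assumptions (the paper's Bayesian machinery is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
arm1 = rng.normal(0.0, 1.0, 40)     # interim responses, arm 1
arm2 = rng.normal(0.3, 2.0, 40)     # interim responses, arm 2 (more variable)

s1, s2 = arm1.std(ddof=1), arm2.std(ddof=1)
r1 = s1 / (s1 + s2)                 # Neyman allocation from estimated SDs
print(f"randomize to arm 1 with prob {r1:.2f}, arm 2 with prob {1 - r1:.2f}")
```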
Automatic exposure control for space sequential camera
NASA Technical Reports Server (NTRS)
Mcatee, G. E., Jr.; Stoap, L. J.; Solheim, C. D.; Sharpsteen, J. T.
1975-01-01
The final report for the automatic exposure control study for space sequential cameras, prepared for the NASA Johnson Space Center, is presented. The material is shown in the same sequence in which the work was performed. The purpose of the automatic exposure control is to automatically control the lens iris as well as the camera shutter so that the subject is properly exposed on the film. A study of design approaches is presented. Analysis of the light range covered indicates that the practical range would be from approximately 20 to 6,000 foot-lamberts, or about nine f-stops. Observation of film available from space flights shows that optimum scene illumination is apparently not present in vehicle interior photography or in vehicle-to-vehicle situations. The evaluation test procedure for a breadboard, and the results, which provided information for the design of a brassboard, are given.
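The quoted dynamic range is easy to check: each f-stop halves the light, so the number of stops between 20 and 6,000 foot-lamberts is log2(6000/20).

```python
from math import log2
print(log2(6000 / 20))   # ~8.2 stops, consistent with "about nine f-stops"
```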
Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia
2018-01-01
Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617) two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost- and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.
Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Brandon, Jay M.
2017-01-01
Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study, efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications, are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.
Bluemel, Christina; Krebs, Markus; Polat, Bülent; Linke, Fränze; Eiber, Matthias; Samnick, Samuel; Lapa, Constantin; Lassmann, Michael; Riedmiller, Hubertus; Czernin, Johannes; Rubello, Domenico; Bley, Thorsten; Kropf, Saskia; Wester, Hans-Juergen; Buck, Andreas K.; Herrmann, Ken
2016-01-01
Purpose: Investigating the value of 68Ga-PSMA-PET/CT in biochemically recurring prostate cancer patients with negative 18F-choline-PET/CT. Patients and Methods: One hundred thirty-nine consecutive patients with biochemical recurrence after curative (surgery and/or radiotherapy) therapy were offered participation in this sequential clinical imaging approach. Patients first underwent an 18F-choline-PET/CT. If negative, an additional 68Ga-PSMA-PET/CT was offered. One hundred twenty-five of 139 eligible patients were included in the study; 32 patients underwent additional 68Ga-PSMA-PET/CT. Patients with equivocal findings (n = 5) on 18F-choline-PET/CT and those who declined the additional 68Ga-PSMA-PET/CT (n = 9) were excluded. Images were analyzed visually for the presence of suspicious lesions. Findings on PET/CT were correlated with PSA level, PSA doubling time (dt), and PSA velocity (vel). Results: The overall detection rates were 85.6% (107/125) for the sequential imaging approach and 74.4% (93/125) for 18F-choline-PET/CT alone. 68Ga-PSMA-PET/CT detected sites of recurrence in 43.8% (14/32) of the choline-negative patients. Detection rates of the sequential imaging approach and 18F-choline-PET/CT alone increased with higher serum PSA levels and PSA vel. Subgroup analysis of 68Ga-PSMA-PET/CT in 18F-choline negative patients revealed detection rates of 28.6%, 45.5%, and 71.4% for PSA levels of 0.2 or greater to less than 1 ng/mL, 1 to 2 ng/mL, and greater than 2 ng/mL, respectively. Conclusions: The sequential imaging approach designed to limit 68Ga-PSMA imaging to patients with negative choline scans resulted in high detection rates. 68Ga-PSMA-PET/CT identified sites of recurrent disease in 43.8% of the patients with negative 18F-choline PET/CT scans. PMID:26975008
Designing Robust and Resilient Tactical MANETs
2014-09-25
Publications reported under this project (citations truncated in the source record): "Bounds on the Throughput Efficiency of Greedy Maximal Scheduling in Wireless Networks," IEEE/ACM Transactions on Networking (06 2011); "... Wireless Sensor Networks and Effects of Long Range Dependent Data," Special IWSM Issue of Sequential Analysis (11 2012); A. D. Dominguez..., Bushnell, R. Poovendran, "A Convex Optimization Approach for Clone Detection in Wireless Sensor Networks," Pervasive and Mobile Computing (01 2012).
Language and Society. Course HP06a: Part Time BA Degree Programme.
ERIC Educational Resources Information Center
Griffith Univ., Brisbane (Australia). School of Humanities.
This course, one of 16 sequential courses comprising phase one of a part-time Bachelor of Arts degree program in Australian Studies, examines a number of theoretical approaches to the study of language, particularly those which place language in a social context. It is designed for independent study combined with tutorial sessions. Chapter 1 is an…
ERIC Educational Resources Information Center
Cheung, Wai Ming
2011-01-01
This research employed the Learning Study approach, which refers to a blend of Japanese "lesson study" and design-based research, to support teachers in teaching creatively in Chinese writing. It reports a serendipitous finding that remarkable differences in the creativity scores among these classes were noted even though they had the same…
ERIC Educational Resources Information Center
Ültay, Eser; Alev, Nedim
2017-01-01
The purpose of this study was to investigate the effect of an explanation-assisted REACT strategy, based on the context-based learning approach, on prospective science teachers' (PSTs) learning of the impulse, momentum, and collisions topics. The sequential explanatory strategy within a mixed-methods design was employed in this study. The first phase of…
Doros, Gheorghe; Pencina, Michael; Rybin, Denis; Meisner, Allison; Fava, Maurizio
2013-07-20
Previous authors have proposed the sequential parallel comparison design (SPCD) to address the issue of high placebo response rates in clinical trials. The original use of SPCD focused on binary outcomes, but recent use has since been extended to continuous outcomes that arise more naturally in many fields, including psychiatry. Analytic methods proposed to date for the analysis of continuous SPCD trial data include methods based on seemingly unrelated regression and ordinary least squares. Here, we propose a repeated measures linear model that uses all outcome data collected in the trial and accounts for data that are missing at random. An appropriate contrast formulated after the model has been fit can be used to test the primary hypothesis of no difference in treatment effects between study arms. Our extensive simulations show that when compared with the other methods, our approach preserves the type I error even for small sample sizes and offers adequate power and the smallest mean squared error under a wide variety of assumptions. We recommend consideration of our approach for the analysis of data coming from SPCD trials. Copyright © 2013 John Wiley & Sons, Ltd.
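To make the flavor of an SPCD analysis concrete, the following is a minimal simulation sketch: hypothetical effect sizes, a crude non-responder rule, and a simple weighted combination of phase-wise z statistics rather than the authors' repeated-measures model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Phase 1: randomize placebo-heavy, as is typical for SPCD.
placebo1 = rng.normal(0.0, 1.0, 200)
drug1 = rng.normal(0.3, 1.0, 100)

# Phase 2: placebo non-responders are re-randomized 1:1 to placebo or drug.
n_nonresp = int((placebo1 < 0.0).sum())            # crude non-responder rule
placebo2 = rng.normal(0.0, 1.0, n_nonresp // 2)
drug2 = rng.normal(0.3, 1.0, n_nonresp - n_nonresp // 2)

def z_stat(a, b):
    """Large-sample two-sample z statistic for a difference in means."""
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

w = 0.6                                            # weight on the phase-1 contrast
z1, z2 = z_stat(drug1, placebo1), z_stat(drug2, placebo2)
z = (w * z1 + (1 - w) * z2) / np.hypot(w, 1 - w)   # phases treated as approx. independent
print(f"combined z = {z:.2f}, two-sided p = {2 * stats.norm.sf(abs(z)):.3f}")
```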
A Unified Approach to the Synthesis of Fully Testable Sequential Machines
1989-10-01
A Unified Approach to the Synthesis of Fully Testable Sequential Machines. Srinivas Devadas and Kurt Keutzer. Abstract: In this paper we attempt to... This research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-87-K-0825. Author information: Devadas: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
Koopmeiners, Joseph S.; Feng, Ziding
2013-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313
Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR
NASA Astrophysics Data System (ADS)
Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.
2016-05-01
The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.
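The temporal analysis that the model adds to the ray trace amounts to turning each multi-segment path into an arrival time. A minimal sketch, assuming a hypothetical three-bounce geometry (the actual model reconstructs paths exported from the Zemax trace):

```python
import numpy as np

C = 0.299792458  # speed of light in metres per nanosecond

def arrival_time_ns(path_points):
    """Total time of flight along a multi-segment ray path (points in metres)."""
    pts = np.asarray(path_points, dtype=float)
    seg_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return seg_lengths.sum() / C

# Hypothetical path: laser -> relay wall -> hidden target -> wall -> detector.
path = [(0.0, 0.0, 0.0),   # laser aperture
        (2.0, 0.0, 1.0),   # first scattering spot on the relay wall
        (2.5, 1.8, 1.0),   # hidden target around the corner
        (2.0, 0.2, 1.0),   # second wall scatter
        (0.0, 0.1, 0.0)]   # SPAD detector
print(f"arrival time: {arrival_time_ns(path):.2f} ns")

# Histogramming many such arrival times over repeated pulses yields the
# time-resolved return the SPAD records; target motion shifts the histogram.
```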
Filter design for cancellation of baseline-fluctuation in needle EMG recordings.
Rodríguez-Carreño, I; Malanda-Trigueros, A; Gila-Useros, L; Navallas-Irujo, J; Rodríguez-Falces, J
2006-01-01
Appropriate cancellation of the baseline fluctuation (BLF) is an important issue when recording EMG signals, as it may degrade signal quality and distort qualitative and quantitative analysis. We present a novel filter-design approach for automatic cancellation of the BLF based on several signal processing techniques used sequentially. The methodology is to estimate the spectral content of the BLF and then to use this estimate to design a high-pass FIR filter that cancels the BLF present in the signal. Two merit figures are devised for measuring the degree of BLF present in an EMG record. These figures are used to compare our method with the conventional approach, which naively treats the baseline as a constant potential shift without any fluctuation. Applications of the technique to real and simulated EMG signals show the superior performance of our approach in terms of both visual inspection and the merit figures.
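The recipe, estimate where the baseline's energy sits and place a high-pass corner above it, can be sketched with standard tools. This is a generic illustration with a synthetic signal and an assumed 20 Hz corner, not the authors' BLF spectral estimator:

```python
import numpy as np
from scipy import signal

fs = 10_000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
drift = 0.5 * np.sin(2 * np.pi * 1.5 * t)     # synthetic slow baseline fluctuation
emg = drift + 0.2 * np.random.default_rng(0).standard_normal(t.size)

# Stand-in for the BLF spectral estimate: inspect the low end of the PSD.
freqs, psd = signal.welch(emg, fs=fs, nperseg=4096)
cutoff_hz = 20.0   # illustrative corner above the drift band, below EMG content

# Linear-phase high-pass FIR that passes EMG and rejects the drift.
taps = signal.firwin(1001, cutoff_hz, fs=fs, pass_zero=False)  # odd tap count
emg_clean = signal.filtfilt(taps, [1.0], emg)  # zero-phase application
```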
Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J.
2015-01-01
Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take out a sample from the bioreactor to be analyzed off-line at the laboratory. The disadvantage of this approach is the long time required to recover the information from the process, hindering the possibility to develop control systems. New strategies to monitor lipase/esterase activity are necessary. In this context and in the first approach, we proposed a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure the hydrolytic activity from a sample without dilution in a linear range from 0.05–1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, with a kinetic reaction of 5 min and a relative standard deviation of 8.75%. The results are promising to monitor lipase/esterase activity in real time, in which optimization and control strategies can be designed. PMID:25633600
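Translating such a system's raw optical signal into activity units ultimately rests on a calibration curve over the reported linear range; a minimal sketch with made-up absorbance readings shows the fit-and-invert step:

```python
import numpy as np

# Hypothetical calibration standards across the reported 0.05-1.60 U/mL range.
activity = np.array([0.05, 0.2, 0.4, 0.8, 1.2, 1.6])          # U/mL
absorbance = np.array([0.03, 0.11, 0.22, 0.45, 0.66, 0.90])   # p-nitrophenol signal

slope, intercept = np.polyfit(activity, absorbance, 1)        # linear fit

def activity_from_signal(a405):
    """Invert the calibration line to estimate lipase/esterase activity."""
    return (a405 - intercept) / slope

print(f"sample at A405=0.30 -> {activity_from_signal(0.30):.2f} U/mL")
```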
Efficient partitioning and assignment on programs for multiprocessor execution
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1993-01-01
The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and the assignment of program elements are of great importance since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of the parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics, developed for similar applications are examined. Finally, a model for queueing networks with finite queues is developed which may be used to analyze multiprocessor system architectures with a shared memory approach to the problem of partitioning. The properties of sequentially written programs form obstacles to large scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in the parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for the approximation of the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product form solutions.
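As a toy illustration of the partitioning/assignment problem, the greedy list-scheduling sketch below assigns dependent tasks to processors with a fixed communication penalty; the task graph and costs are hypothetical, and the study itself concerns FORTRAN programs and queueing-network models rather than this simplified heuristic.

```python
# Greedy assignment: each task goes to the processor where it can start
# earliest, given predecessors' finish times plus a fixed communication
# penalty for edges that cross processors.
COMM = 2.0                                      # hypothetical comm cost

tasks = {"A": 3, "B": 2, "C": 4, "D": 1}        # task -> compute time
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

n_proc = 2
proc_free = [0.0] * n_proc
finish, where = {}, {}

for t in ["A", "B", "C", "D"]:                  # topological order
    best = None
    for p in range(n_proc):
        ready = max([finish[d] + (COMM if where[d] != p else 0.0)
                     for d in deps[t]], default=0.0)
        start = max(ready, proc_free[p])
        if best is None or start < best[1]:
            best = (p, start)
    p, start = best
    finish[t], where[t] = start + tasks[t], p
    proc_free[p] = finish[t]

print(where, "makespan:", max(finish.values()))
```

With these particular numbers the two-processor makespan equals the serial execution time, which is exactly the paragraph's warning: overhead can eliminate the gains from parallelism.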
ERIC Educational Resources Information Center
Heil, Leila
2017-01-01
This article describes a sequential approach to improvisation teaching that can be used with students at various age and ability levels by any educator, regardless of improvisation experience. The 2014 National Core Music Standards include improvisation as a central component in musical learning and promote instructional approaches that are…
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
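The nested structure, an inner design optimization followed by derivatives of that optimum with respect to fixed parameters, can be sketched generically. The toy below uses finite differences of the inner optimum instead of the Lagrange-multiplier-based optimum sensitivity derivatives described in the abstract:

```python
import numpy as np
from scipy.optimize import minimize

def inner_optimum(p):
    """Minimize a toy weight-like objective subject to a p-dependent constraint."""
    res = minimize(lambda x: x[0]**2 + x[1]**2,          # surrogate "weight"
                   x0=[1.0, 1.0],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: x[0] + p * x[1] - 1.0}],
                   method="SLSQP")
    return res.fun

def optimum_sensitivity(p, h=1e-4):
    """Finite-difference d(optimal objective)/dp, the quantity to be minimized."""
    return (inner_optimum(p + h) - inner_optimum(p - h)) / (2 * h)

# Outer loop: choose the fixed design parameter p for minimum sensitivity.
outer = minimize(lambda q: optimum_sensitivity(q[0])**2, x0=[0.5],
                 method="Nelder-Mead")
print("least-sensitive parameter value:", outer.x[0])
```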
ERIC Educational Resources Information Center
Yuvaci, Ibrahim; Demir, Selçuk Besir
2016-01-01
This paper aims to determine the relation between reading comprehension skill and TEOG success. In this research, a mixed-methods approach, the sequential explanatory mixed design, is utilized to thoroughly examine the relation between the reading comprehension skills and TEOG success of 8th-grade students. In explanatory sequential mixed design…
Reporting Guidelines: Optimal Use in Preventive Medicine and Public Health
Popham, Karyn; Calo, William A.; Carpentier, Melissa Y.; Chen, Naomi E.; Kamrudin, Samira A.; Le, Yen-Chi L.; Skala, Katherine A.; Thornton, Logan R.; Mullen, Patricia Dolan
2012-01-01
Numerous reporting guidelines are available to help authors write higher quality manuscripts more efficiently. Almost 200 are listed on the EQUATOR (Enhancing the Quality and Transparency of Health Research) Network's website and they vary in authority, usability, and breadth, making it difficult to decide which one(s) to use. This paper provides consistent information about guidelines for preventive medicine and public health and a framework and sequential approach for selecting them. EQUATOR guidelines were reviewed for relevance to target audiences; selected guidelines were classified as "core" (frequently recommended) or specialized, and the latter were grouped by their focus. Core and specialized guidelines were coded for indicators of authority (simultaneous publication in multiple journals, rationale, scientific background supporting each element, expertise of designers, permanent website/named group), usability (presence of checklists and examples of good reporting), and breadth (manuscript sections covered). Discrepancies were resolved by consensus. Selected guidelines are presented in four tables arranged to facilitate selection: core guidelines, all of which pertain to major research designs; guidelines for additional study designs, topical guidelines, and guidelines for particular manuscript sections. A flow diagram provides an overview. The framework and sequential approach will enable authors as well as editors, peer reviewers, researchers, and systematic reviewers to make optimal use of available guidelines to improve the transparency, clarity, and rigor of manuscripts and research protocols and the efficiency of conducting systematic reviews and meta-analyses. PMID:22992369
NASA Astrophysics Data System (ADS)
Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon
2015-05-01
There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field of view imaging, and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms; two which are gradient-based, concurrent methods using first- and second-order derivatives, respectively; and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach as computational speed, experimental robustness, and image quality is key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.
Chakraborty, Bibhas; Davidson, Karina W.
2015-01-01
Summary Implementation study is an important tool for deploying state-of-the-art treatments from clinical efficacy studies into a treatment program, with the dual goals of learning about effectiveness of the treatments and improving the quality of care for patients enrolled into the program. In this article, we deal with the design of a treatment program of dynamic treatment regimens (DTRs) for patients with depression post acute coronary syndrome. We introduce a novel adaptive randomization scheme for a sequential multiple assignment randomized trial of DTRs. Our approach adapts the randomization probabilities to favor treatment sequences having comparatively superior Q-functions used in Q-learning. The proposed approach addresses three main concerns of an implementation study: it allows incorporation of historical data or opinions, it includes randomization for learning purposes, and it aims to improve care via adaptation throughout the program. We demonstrate how to apply our method to design a depression treatment program using data from a previous study. By simulation, we illustrate that the inputs from historical data are important for the program performance measured by the expected outcomes of the enrollees, but also show that the adaptive randomization scheme is able to compensate poorly specified historical inputs by improving patient outcomes within a reasonable horizon. The simulation results also confirm that the proposed design allows efficient learning of the treatments by alleviating the curse of dimensionality. PMID:25354029
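The core mechanism, tilting randomization probabilities toward treatment options with larger estimated Q-functions, can be sketched in a few lines. The softmax-style tilt, the Q-values, and the temperature below are all hypothetical; the paper's actual scheme and Q-learning updates are more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Q-function estimates for four first-stage treatment options,
# initialized from historical data and re-estimated as outcomes accrue.
q_hat = np.array([0.20, 0.35, 0.10, 0.30])
temperature = 0.15   # smaller -> more aggressive adaptation (a design choice)

def randomization_probs(q, temp):
    """Softmax tilt of allocation toward options with larger Q-estimates."""
    z = (q - q.max()) / temp
    p = np.exp(z)
    return p / p.sum()

for enrollee in range(5):
    probs = randomization_probs(q_hat, temperature)
    arm = rng.choice(len(q_hat), p=probs)
    # ...observe the outcome, update q_hat via Q-learning, then continue...
    print(f"enrollee {enrollee}: probs={np.round(probs, 2)}, assigned arm {arm}")
```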
Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom
2013-05-01
Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km² for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
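A minimal version of surrogate modelling with sequential design, always measuring next where the surrogate is least certain, can be written with standard tools. Coordinates, field values, and the kernel below are hypothetical; the study's surrogate and selection criterion are its own:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

# Hypothetical measured locations (x, y in metres) and E-field readings.
X = rng.uniform(0, 200, size=(15, 2))
y = np.sin(X[:, 0] / 40.0) + 0.1 * rng.standard_normal(15)

# Candidate grid over the study area.
gx, gy = np.meshgrid(np.linspace(0, 200, 40), np.linspace(0, 200, 40))
candidates = np.column_stack([gx.ravel(), gy.ravel()])

for step in range(5):                       # sequential design loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0),
                                  normalize_y=True).fit(X, y)
    pred, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(std)]        # most uncertain location next
    # In the field one would measure at `nxt`; here we fake the reading.
    y = np.append(y, np.sin(nxt[0] / 40.0))
    X = np.vstack([X, nxt])
```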
The Application of Concurrent Engineering Tools and Design Structure Matrix in Designing Tire
NASA Astrophysics Data System (ADS)
Ginting, Rosnani; Fachrozi Fitra Ramadhan, T.
2016-02-01
The automobile industry in Indonesia is growing rapidly. This growth compels companies in related sectors, such as the tire industry, to develop products based on customers' needs while considering the timeliness of delivering the product to the customer. This can be achieved by applying strategic planning to develop an integrated concept of product development. This research was held in PT. XYZ, which applied a sequential approach to designing and developing products. The need for improvement in one stage of product development could force re-design, which lengthens the development of a new product. This research is intended to obtain an integrated tire design concept pertaining to the customer's needs using Concurrent Engineering tools, implementing the two phases of product development. The implementation of the Concurrent Engineering approach results in applying the stages of project planning, conceptual design, and product modules. The product modules consist of four modules that use the Product Architecture - Design Structure Matrix to ease the design process of new product development.
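A Design Structure Matrix records which development tasks feed which others; reordering it so that dependencies fall below the diagonal yields a workable sequence, and any strongly coupled block that remains is a candidate for concurrent engineering. A toy sketch with hypothetical tasks (not the paper's tire modules):

```python
import networkx as nx

# Directed edge u -> v: task u feeds information to task v.
G = nx.DiGraph()
G.add_edges_from([
    ("spec", "tread design"), ("spec", "carcass design"),
    ("tread design", "carcass design"), ("carcass design", "tread design"),
    ("tread design", "mold design"), ("carcass design", "mold design"),
    ("mold design", "prototype"),
])

# Collapse strongly connected components: each multi-task component is a
# coupled block that must be engineered concurrently; the condensed graph
# is acyclic, so the blocks themselves can be ordered sequentially.
cond = nx.condensation(G)
order = [sorted(cond.nodes[n]["members"]) for n in nx.topological_sort(cond)]
print(order)
# [['spec'], ['carcass design', 'tread design'], ['mold design'], ['prototype']]
```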
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835
Modern Gemini-Approach to Technology Development for Human Space Exploration
NASA Technical Reports Server (NTRS)
White, Harold
2010-01-01
In NASA's plan to put men on the moon, there were three sequential programs: Mercury, Gemini, and Apollo. The Gemini program was used to develop and integrate the technologies that would be necessary for the Apollo program to successfully put men on the moon. We present an analogous modern approach that leverages legacy ISS hardware designs and integrates newly developing technologies into a flexible architecture. This new architecture is scalable, sustainable, and can be used to establish human exploration infrastructure beyond low Earth orbit and into deep space.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation, and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
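A static penalty formulation is easy to show in miniature. The sketch below, with a toy constrained problem, a hypothetical penalty weight, and a mutation-only real-coded GA, illustrates converting a constrained problem into an unconstrained penalized one; it is not COMETBOARDS or any specific penalty function from the study:

```python
import numpy as np

rng = np.random.default_rng(3)
R = 100.0                      # static penalty weight (a tuning choice)

def objective(x):              # minimize f(x) = x0^2 + x1^2
    return x[0]**2 + x[1]**2

def violation(x):              # constraint g(x): x0 + x1 >= 1
    return max(0.0, 1.0 - (x[0] + x[1]))

def fitness(x):                # penalized objective; feasible points unpenalized
    return objective(x) + R * violation(x)**2

pop = rng.uniform(-2, 2, size=(40, 2))
for gen in range(200):
    scores = np.apply_along_axis(fitness, 1, pop)
    parents = pop[np.argsort(scores)[:20]]             # truncation selection
    pop = parents[rng.integers(0, 20, 40)] \
          + rng.normal(0, 0.05, (40, 2))               # mutation-only offspring

best = pop[np.argmin(np.apply_along_axis(fitness, 1, pop))]
print("best point:", best, "objective:", objective(best))
# The true constrained optimum is (0.5, 0.5) with objective 0.5.
```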
An exploratory sequential design to validate measures of moral emotions.
Márquez, Margarita G; Delgado, Ana R
2017-05-01
This paper presents an exploratory and sequential mixed methods approach in validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
Making Career Decisions--A Sequential Elimination Approach.
ERIC Educational Resources Information Center
Gati, Itamar
1986-01-01
Presents a model for career decision making based on the sequential elimination of occupational alternatives, an adaptation for career decisions of Tversky's (1972) elimination-by-aspects theory of choice. The expected utility approach is reviewed as a representative compensatory model for career decisions. Advantages, disadvantages, and…
A Node Linkage Approach for Sequential Pattern Mining
Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel
2014-01-01
Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state of the art algorithms. PMID:24933123
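The pseudo-projection idea, representing each projected database as (sequence id, offset) pointers instead of copied suffixes, is what keeps memory low in pattern-growth mining. A compact PrefixSpan-style sketch of that mechanism (not the NLDFT algorithm itself):

```python
from collections import defaultdict

def prefixspan(db, min_support):
    """Depth-first pattern growth over pseudo-projections (seq_id, offset)."""
    results = []

    def grow(prefix, projections):
        # Count, per symbol, the sequences whose projected suffix contains it.
        support = defaultdict(set)
        for sid, off in projections:
            for sym in set(db[sid][off:]):
                support[sym].add(sid)
        for sym, sids in support.items():
            if len(sids) < min_support:
                continue
            pattern = prefix + [sym]
            results.append((pattern, len(sids)))
            # New pseudo-projection: a pointer just past the first match.
            nxt = []
            for sid, off in projections:
                seq = db[sid]
                for k in range(off, len(seq)):
                    if seq[k] == sym:
                        nxt.append((sid, k + 1))
                        break
            grow(pattern, nxt)

    grow([], [(sid, 0) for sid in range(len(db))])
    return results

db = [list("abcb"), list("abbca"), list("bca")]
for pat, sup in prefixspan(db, min_support=2):
    print("".join(pat), sup)
```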
Sequential analysis in neonatal research-systematic review.
Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne
2018-05-01
As more new drugs are discovered, traditional designs come at their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica database of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed to non-significantly reduce the number of enrolled neonates by a median of 24 (31%) patients (IQR - 4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in Neonatology. They might potentially be able to reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs come at their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed to non-significantly reduce the number of enrolled neonates by a median of 24 (31%) patients (IQR - 4.75 to 136.5, p = 0.0674).
Optimality, sample size, and power calculations for the sequential parallel comparison design.
Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A
2011-10-15
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where high-placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage and second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop 1 and 2 degree of freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine from a theoretical viewpoint whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
A Systematic Approach to Subgroup Classification in Intellectual Disability
ERIC Educational Resources Information Center
Schalock, Robert L.; Luckasson, Ruth
2015-01-01
This article describes a systematic approach to subgroup classification based on a classification framework and sequential steps involved in the subgrouping process. The sequential steps are stating the purpose of the classification, identifying the classification elements, using relevant information, and using clearly stated and purposeful…
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
ERIC Educational Resources Information Center
Ebadi, Saman; Rahimi, Masoud
2017-01-01
This article reports the results of a sequential explanatory mixed-methods approach to explore the impact of online peer-editing using Google Docs and peer-editing in a face-to-face classroom on EFL learners' academic writing skills. As the study adopted a quasi-experimental design, two intact classes, each with ten EFL learners, attending an…
ERIC Educational Resources Information Center
Sharp, Lanette
Developed specifically for classroom teachers with a limited background in music, oral music lessons are designed to be taught in short, daily instruction segments to help students gain the most from music and transfer that knowledge to other parts of the curriculum. The lessons, a master degree project, were developed to support the Utah music…
NASA Astrophysics Data System (ADS)
Hamadeh, Linda
In order for science-based inquiry instruction to happen on a large scale in elementary classrooms across the country, evidence must be provided that implementing this reform can be realistic and practical, despite the challenges and obstacles teachers may face. This study sought to examine elementary teachers' knowledge and understanding of, attitudes toward, and overall perceptions of inquiry-based science instruction, and how these beliefs influenced their inquiry practice in the classroom. It offered a description and analysis of the approaches elementary science teachers in Islamic schools reported using to promote inquiry within the context of their science classrooms, and addressed the challenges the participating teachers faced when implementing scientific inquiry strategies in their instruction. The research followed a mixed method approach, best described as a sequential two-strand design (Teddlie & Tashakkori, 2006). Sequential mixed designs develop two methodological strands that occur chronologically, and in the case of this research, QUAN→QUAL. Findings from the study supported the notion that the school and/or classroom environment could be a contextual factor that influenced some teachers' classroom beliefs about the feasibility of implementing science inquiry. Moreover, although teacher beliefs are influential, they are malleable and adaptable and influenced primarily by their own personal direct experiences with inquiry instruction or lack of.
A Bayesian sequential processor approach to spectroscopic portal system decisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sale, K; Candy, J; Breitfeller, E
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
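The sequential update is easy to sketch for a drastically simplified version of the problem: Poisson counts under two hypotheses (background only versus background plus source), stopping as soon as the posterior is decisive. All rates and thresholds below are hypothetical; the paper's physics and decision functions are far richer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

BG_RATE, SRC_RATE = 5.0, 8.0      # hypothetical counts per time bin
posterior = np.array([0.5, 0.5])  # P(background), P(background + source)
THRESH = 0.99                     # declare a decision at this posterior mass

truth_rate = SRC_RATE             # simulate a vehicle carrying a source
for t in range(1, 200):
    k = rng.poisson(truth_rate)   # one datum per time bin
    like = np.array([stats.poisson.pmf(k, BG_RATE),
                     stats.poisson.pmf(k, SRC_RATE)])
    posterior = posterior * like
    posterior /= posterior.sum()  # Bayes update after each datum
    if posterior.max() >= THRESH: # stop as soon as the data justify it
        break

label = ["background only", "source present"][int(posterior.argmax())]
print(f"decision after {t} bins: {label} (P = {posterior.max():.3f})")
```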
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)
NASA Technical Reports Server (NTRS)
Rhew, Ray D.; Parker, Peter A.
2007-01-01
Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank primary contributors to integrated drag over the vehicle's ascent trajectory in an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. SMEs were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes a sequential approach to learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.
Simultaneous sequential monitoring of efficacy and safety led to masking of effects.
van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg
2016-08-01
Usually, sequential designs for clinical trials are applied on the primary (=efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. Implications of simultaneous monitoring on trial decision making are yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
The Relevance of Visual Sequential Memory to Reading.
ERIC Educational Resources Information Center
Crispin, Lisa; And Others
1984-01-01
Results of three visual sequential memory tests and a group reading test given to 19 elementary students are discussed in terms of task analysis and structuralist approaches to the analysis of reading skills. The relation of visual sequential memory to other reading subskills is considered in light of current research. (CMG)
Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang
2014-01-01
Structural information related to protein-peptide complexes can be very useful for novel drug discovery and design. Computational docking of proteins and peptides can supplement the structural information available on protein-peptide interactions explored by experimental means. The protein-peptide docking described in this paper comprises three processes that occur in parallel: ab initio peptide folding, docking of the peptide with its receptor, and refinement of some flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped the construction of energy funnels for the predictions.
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-01-01
Background Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
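A two-way static LQAS classification reduces to a decision rule on a fixed sample, and candidate designs are judged by their binomial error rates. A sketch using the paper's 2%/10% thresholds but a hypothetical sample size and decision threshold:

```python
from scipy import stats

# Classify an area as "low" (p <= 2%) vs "high" (p >= 10%) MDR TB burden.
P_LOW, P_HIGH = 0.02, 0.10
n, d = 50, 3   # candidate design: sample 50 new cases, decide at >3 MDR cases

# Error rates of the rule "declare high burden if more than d of n are MDR":
alpha = 1 - stats.binom.cdf(d, n, P_LOW)   # false "high" when truly low
beta = stats.binom.cdf(d, n, P_HIGH)       # false "low" when truly high
print(f"n={n}, d={d}: P(false high)={alpha:.3f}, P(false low)={beta:.3f}")

# Scanning (n, d) pairs for acceptable alpha/beta is how a design is chosen.
```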
Decroocq, Justine; Itzykson, Raphaël; Vigouroux, Stéphane; Michallet, Mauricette; Yakoub-Agha, Ibrahim; Huynh, Anne; Beckerich, Florence; Suarez, Felipe; Chevallier, Patrice; Nguyen-Quoc, Stéphanie; Ledoux, Marie-Pierre; Clement, Laurence; Hicheri, Yosr; Guillerm, Gaëlle; Cornillon, Jérôme; Contentin, Nathalie; Carre, Martin; Maillard, Natacha; Mercier, Mélanie; Mohty, Mohamad; Beguin, Yves; Bourhis, Jean-Henri; Charbonnier, Amandine; Dauriac, Charles; Bay, Jacques-Olivier; Blaise, Didier; Deconinck, Eric; Jubert, Charlotte; Raus, Nicole; Peffault de Latour, Regis; Dhedin, Nathalie
2018-03-01
Patients with acute myeloid leukemia (AML) in relapse or refractory to induction therapy have a dismal prognosis. Allogeneic hematopoietic stem cell transplantation is the only curative option. In these patients, we aimed to compare the results of a myeloablative transplant versus a sequential approach consisting in a cytoreductive chemotherapy followed by a reduced intensity conditioning regimen and prophylactic donor lymphocytes infusions. We retrospectively analyzed 99 patients aged 18-50 years, transplanted for a refractory (52%) or a relapsed AML not in remission (48%). Fifty-eight patients received a sequential approach and 41 patients a myeloablative conditioning regimen. Only 6 patients received prophylactic donor lymphocytes infusions. With a median follow-up of 48 months, 2-year overall survival was 39%, 95% confidence interval (CI) (24-53) in the myeloablative group versus 33%, 95% CI (21-45) in the sequential groups (P = .39), and 2-year cumulative incidence of relapse (CIR) was 57% versus 50% respectively (P = .99). Nonrelapse mortality was not higher in the myeloablative group (17% versus 15%, P = .44). In multivariate analysis, overall survival, CIR and nonrelapse mortality remained similar between the two groups. However, in multivariate analysis, sequential conditioning led to fewer acute grade II-IV graft versus host disease (GVHD) (HR for sequential approach = 0.37; 95% CI: 0.21-0.65; P < .001) without a significant impact on chronic GVHD (all grades and extensive). In young patients with refractory or relapsed AML, myeloablative transplant and sequential approach offer similar outcomes except for a lower incidence of acute GvHD after a sequential transplant. © 2018 Wiley Periodicals, Inc.
Hoomans, Ties; Severens, Johan L; Evers, Silvia M A A; Ament, Andre J H A
2009-01-01
Decisions about clinical practice change, that is, which guidelines to adopt and how to implement them, can be made sequentially or simultaneously. Decision makers adopting a sequential approach first compare the costs and effects of alternative guidelines to select the best set of guideline recommendations for patient management and subsequently examine the implementation costs and effects to choose the best strategy to implement the selected guideline. In an integral approach, decision makers simultaneously decide about the guideline and the implementation strategy on the basis of the overall value for money in changing clinical practice. This article demonstrates that the decision to use a sequential v. an integral approach affects the need for detailed information and the complexity of the decision analytic process. More importantly, it may lead to different choices of guidelines and implementation strategies for clinical practice change. The differences in decision making and decision analysis between the alternative approaches are comprehensively illustrated using 2 hypothetical examples. We argue that, in most cases, an integral approach to deciding about change in clinical practice is preferred, as this provides more efficient use of scarce health-care resources.
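A tiny worked example shows how the two approaches can part ways. With the hypothetical net benefits and implementation costs below, the sequential route locks in the "better" guideline before seeing that it is expensive to implement, while the integral route evaluates guideline-strategy pairs jointly:

```python
# Hypothetical net monetary benefit of each guideline (vs. current practice)
# and cost of each implementation strategy, per 1000 patients.
guideline_benefit = {"guideline A": 100.0, "guideline B": 90.0}
implementation_cost = {("guideline A", "outreach"): 40.0,
                       ("guideline A", "reminders"): 55.0,
                       ("guideline B", "outreach"): 10.0,
                       ("guideline B", "reminders"): 25.0}

# Sequential: choose the guideline first, then its cheapest implementation.
g_seq = max(guideline_benefit, key=guideline_benefit.get)
s_seq = min((s for g, s in implementation_cost if g == g_seq),
            key=lambda s: implementation_cost[(g_seq, s)])
seq_value = guideline_benefit[g_seq] - implementation_cost[(g_seq, s_seq)]

# Integral: choose the guideline-strategy pair with the best overall value.
pairs = {(g, s): guideline_benefit[g] - c
         for (g, s), c in implementation_cost.items()}
(g_int, s_int), int_value = max(pairs.items(), key=lambda kv: kv[1])

print(f"sequential: {g_seq} + {s_seq} -> {seq_value}")   # A + outreach -> 60
print(f"integral:   {g_int} + {s_int} -> {int_value}")   # B + outreach -> 80
```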
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, which is compared against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods, as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
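The abstract does not spell out the algorithm, so the following is only a generic sequential linear approximation loop in the same spirit: a toy cost model and failure probability (both invented) are linearized at the current design via finite-difference sensitivities, and each move-limited linear subproblem is solved with SciPy's linprog. Linearizing the log of the failure probability, which happens to be exactly linear in this toy, keeps every iterate feasible.

    import numpy as np
    from scipy.optimize import linprog

    # toy models (hypothetical): manufacturing cost and failure probability
    cost = lambda x: x[0] ** 2 + 2.0 * x[1]
    pf = lambda x: np.exp(-(x[0] + 0.5 * x[1]))   # falls as the part grows
    P_MAX = 1e-3                                  # reliability target

    def grad(f, x, h=1e-6):
        # forward-difference sensitivities
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x)) / h
        return g

    x = np.array([4.0, 4.0])
    for k in range(40):
        ml = 2.0 * 0.85 ** k                      # shrinking move limits
        gc = grad(cost, x)
        glog = grad(lambda z: np.log(pf(z)), x)   # linearize log p_f
        # minimize gc.d  s.t.  log pf(x) + glog.d <= log P_MAX, |d_i| <= ml
        res = linprog(gc, A_ub=[glog],
                      b_ub=[np.log(P_MAX) - np.log(pf(x))],
                      bounds=[(-ml, ml)] * 2)
        if not res.success:
            break
        x = x + res.x

    print(x, cost(x), pf(x))   # approaches x ~ (2.0, 9.8), cost ~ 23.6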
1987-12-01
DTIC report fragment (title and authors not recovered): "... not know that it has any real value or credibility. This situation suggests the following. Before the community at large can evaluate the SDG ... design approaches, such as the SDG method, are needed to handle the response of nonlinear subsystems, such as gust load alleviation. I wish to ... observed in his SDG approach. Thus his result obtained from examining sequential discrete-type gusts is not too surprising, since it apparently can ..."
Analysis of SET pulses propagation probabilities in sequential circuits
NASA Astrophysics Data System (ADS)
Cai, Shuo; Yu, Fei; Yang, Yiqun
2018-05-01
As the feature size of CMOS transistors scales down, the single event transient (SET) has become an important consideration in designing logic circuits. Much research has been done on analyzing the impact of SETs; however, it is difficult for existing methods to account for the numerous contributing factors. We present a new approach for analyzing SET pulse propagation probabilities (SPPs). It considers all masking effects and uses SET pulse propagation probability matrices (SPPMs) to represent the SPPs in the current cycle. Based on matrix union operations, the SPPs in consecutive cycles can be calculated. Experimental results show that our approach is practicable and efficient.
Cardarelli, Roberto; Reese, David; Roper, Karen L.; Cardarelli, Kathryn; Feltner, Frances J.; Studts, Jamie L.; Knight, Jennifer R.; Armstrong, Debra; Weaver, Anthony; Shaffer, Dana
2017-01-01
For low dose CT lung cancer screening to be effective in curbing disease mortality, efforts are needed to overcome barriers to awareness and facilitate uptake of the current evidence-based screening guidelines. A sequential mixed-methods approach was employed to design a screening campaign utilizing messages developed from community focus groups, followed by implementation of the outreach campaign intervention in two high-risk Kentucky regions. This study reports on rates of awareness and screening in intervention regions, as compared to a control region. PMID:27866066
Description and effects of sequential behavior practice in teacher education.
Sharpe, T; Lounsbery, M; Bahls, V
1997-09-01
This study examined the effects of a sequential behavior feedback protocol on the practice-teaching experiences of undergraduate teacher trainees. The performance competencies of teacher trainees were analyzed using an alternative opportunities for appropriate action measure. Data support the added utility of sequential (Sharpe, 1997a, 1997b) behavior analysis information in systematic observation approaches to teacher education. One field-based undergraduate practicum using sequential behavior (i.e., field systems analysis) principles was monitored. Summarized are the key elements of the (a) classroom instruction provided as a precursor to the practice teaching experience, (b) practice teaching experience, and (c) field systems observation tool used for evaluation and feedback, including multiple-baseline data (N = 4) to support this approach to teacher education. Results point to (a) the strong relationship between sequential behavior feedback and the positive change in four preservice teachers' day-to-day teaching practices in challenging situational contexts, and (b) the relationship between changes in teacher practices and positive changes in the behavioral practices of gymnasium pupils. Sequential behavior feedback was also socially validated by the undergraduate participants and Professional Development School teacher supervisors in the study.
NASA Astrophysics Data System (ADS)
Gao, J.; Lythe, M. B.
1996-06-01
This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
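In the same spirit as the C program the abstract mentions, a minimal NumPy rendering of the MCC idea (not the authors' code) is sketched below: a template window from the first image is correlated against displaced windows in the second image, and the displacement with the highest normalized cross-correlation is taken as the motion vector. The synthetic field and the imposed shift are made up for the self-test.

    import numpy as np

    def ncc(a, b):
        # normalized cross-correlation of two equal-sized windows
        a = a - a.mean(); b = b - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / d if d > 0 else 0.0

    def mcc_vector(img1, img2, i, j, half=8, search=5):
        # displacement of the window centred at (i, j) maximizing correlation
        tpl = img1[i - half:i + half + 1, j - half:j + half + 1]
        best, best_d = -2.0, (0, 0)
        for di in range(-search, search + 1):
            for dj in range(-search, search + 1):
                win = img2[i + di - half:i + di + half + 1,
                           j + dj - half:j + dj + half + 1]
                r = ncc(tpl, win)
                if r > best:
                    best, best_d = r, (di, dj)
        return best_d, best

    rng = np.random.default_rng(0)
    sst1 = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)  # smooth synthetic field
    sst2 = np.roll(sst1, shift=(2, -3), axis=(0, 1))      # known displacement
    print(mcc_vector(sst1, sst2, 32, 32))                 # -> ((2, -3), ~1.0)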
Pre-configured polyhedron based protection against multi-link failures in optical mesh networks.
Huang, Shanguo; Guo, Bingli; Li, Xin; Zhang, Jie; Zhao, Yongli; Gu, Wanyi
2014-02-10
This paper focuses on protection against random multi-link failures in optical mesh networks, instead of the single, dual, or sequential failures considered in previous studies. Spare resource efficiency and failure robustness are major concerns in designing link protection strategies, and a k-regular, k-edge-connected structure is proved to be one of the optimal solutions for a link protection network. Based on this, a novel pre-configured polyhedron based protection structure is proposed; it can provide protection for both simultaneous and sequential random link failures with improved spare resource efficiency. Its performance is evaluated in terms of spare resource consumption, recovery rate, and average recovery path length, and compared with ring based and subgraph protection under probabilistic link failure scenarios. Results show that the proposed link protection approach outperforms previous ones.
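The structural claim is easy to check numerically: a k-edge-connected graph stays connected after any k-1 edge removals (Menger's theorem). The sketch below, which uses networkx and illustrative parameters rather than anything from the paper, builds a k-regular, k-edge-connected topology and verifies survivability under random simultaneous failures of k-1 links.

    import random
    import networkx as nx

    k, n = 4, 20
    seed = 0
    while True:   # sample regular graphs until one is k-edge-connected
        G = nx.random_regular_graph(k, n, seed=seed)
        if nx.edge_connectivity(G) == k:
            break
        seed += 1

    rng = random.Random(1)
    for trial in range(1000):   # any k-1 simultaneous link failures survive
        failed = rng.sample(list(G.edges()), k - 1)
        H = G.copy()
        H.remove_edges_from(failed)
        assert nx.is_connected(H)
    print("k-edge connectivity:", nx.edge_connectivity(G))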
Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.
2017-01-09
We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated with a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than with independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.
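For readers new to the multilevel idea, the sketch below is a plain multilevel Monte Carlo estimator, deliberately omitting the SMC machinery and the drift-condition theory of the paper: the level-l "solver" is a truncated Taylor series standing in for a discretized PDE, the telescoping sum couples consecutive levels through shared samples, and sample sizes shrink as levels become more accurate and more expensive.

    import numpy as np
    from scipy.special import factorial

    rng = np.random.default_rng(42)

    def g_level(z, l):
        # level-l approximation of exp(z): Taylor series with 2**(l+1) terms
        # (a cheap stand-in for a solver on a level-l mesh)
        k = np.arange(2 ** (l + 1))
        return (z[:, None] ** k / factorial(k)).sum(axis=1)

    L = 5
    estimate = 0.0
    for l in range(L + 1):
        n_l = max(int(20_000 * 2.0 ** (-2 * l)), 50)  # fewer samples per level
        z = rng.standard_normal(n_l)
        if l == 0:
            estimate += g_level(z, 0).mean()
        else:
            # coupled difference: the same z drives both levels
            estimate += (g_level(z, l) - g_level(z, l - 1)).mean()

    print(estimate, np.exp(0.5))  # estimate of E[exp(Z)] vs exact 1.6487...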
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
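For reference, the required information size mentioned here is conventionally computed like the sample size of a single adequately powered trial, optionally inflated for heterogeneity; the sketch below uses the standard formulas (not the authors' proposal) for a continuous outcome with 1:1 allocation.

    from scipy.stats import norm

    def required_information_size(delta, sigma, alpha=0.05, power=0.8, i2=0.0):
        # total participants for a two-arm, 1:1 fixed-effect meta-analysis,
        # inflated by 1/(1 - I^2) as a simple heterogeneity correction
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        n = 4 * sigma ** 2 * (z_a + z_b) ** 2 / delta ** 2
        return n / (1 - i2)

    print(required_information_size(delta=0.3, sigma=1.0))           # ~349
    print(required_information_size(delta=0.3, sigma=1.0, i2=0.25))  # ~465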
... in Sequential Design Optimization with Concurrent Calibration-Based Model Validation (DTIC report fragment; leading words of the title not recovered)
Drignei, Dorin; Mourelatos, Zissimos; Pandey, Vijitashwa
2013-08-01
Approaches for Achieving Broadband Achromatic Phase Shifts for Visible Nulling Coronagraphy
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Lyon, Richard G.
2012-01-01
Visible nulling coronagraphy is one of the few approaches to the direct detection and characterization of Jovian and Terrestrial exoplanets that works with segmented-aperture telescopes. Jovian and Terrestrial planets require at least 10^-9 and 10^-10 image-plane contrasts, respectively, within the spectral bandpass, and thus require a nearly achromatic pi-phase difference between the arms of the interferometer. An achromatic pi-phase shift can be achieved by several techniques, including sequential angled thick glass plates of varying dispersive materials, distributed thin-film multilayer coatings, and techniques that leverage the polarization-dependent phase shift of total internal reflections. Herein we describe two such techniques: sequential thick glass plates and Fresnel rhomb prisms. A viable technique must achieve the achromatic phase shift while simultaneously minimizing the intensity difference, chromatic beam spread, and polarization variation between the arms. In this paper we describe the above techniques and report on efforts to design, model, fabricate, and align each candidate, and on the trades associated with each technique, leading to an implementation of the most promising one in Goddard's Visible Nulling Coronagraph (VNC).
Integrated design of the CSI evolutionary structure: A verification of the design methodology
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Joshi, S. M.; Elliott, Kenny B.; Walz, J. E.
1993-01-01
One of the main objectives of the Controls-Structures Interaction (CSI) program is to develop and evaluate integrated controls-structures design methodology for flexible space structures. Thus far, integrated design methodologies for a class of flexible spacecraft, which require fine attitude pointing and vibration suppression with no payload articulation, have been extensively investigated. Various integrated design optimization approaches, such as single-objective optimization, and multi-objective optimization, have been implemented with an array of different objectives and constraints involving performance and cost measures such as total mass, actuator mass, steady-state pointing performance, transient performance, control power, and many more. These studies have been performed using an integrated design software tool (CSI-DESIGN CODE) which is under development by the CSI-ADM team at the NASA Langley Research Center. To date, all of these studies, irrespective of the type of integrated optimization posed or objectives and constraints used, have indicated that integrated controls-structures design results in an overall spacecraft design which is considerably superior to designs obtained through a conventional sequential approach. Consequently, it is believed that validation of some of these results through fabrication and testing of a structure which is designed through an integrated design approach is warranted. The objective of this paper is to present and discuss the efforts that have been taken thus far for the validation of the integrated design methodology.
A Pocock Approach to Sequential Meta-Analysis of Clinical Trials
ERIC Educational Resources Information Center
Shuster, Jonathan J.; Neu, Josef
2013-01-01
Three recent papers have provided sequential methods for meta-analysis of two-treatment randomized clinical trials. This paper provides an alternate approach that has three desirable features. First, when carried out prospectively (i.e., we only have the results up to the time of our current analysis), we do not require knowledge of the…
Sequential Organization and Room Reverberation for Speech Segregation
2012-02-28
We have proposed two algorithms for sequential organization: an unsupervised clustering algorithm applicable to monaural recordings, and a binaural algorithm that integrates monaural and binaural analyses. In addition, we have conducted speech intelligibility tests that firmly establish the ... A comprehensive version is currently under review for journal publication. A binaural approach in room reverberation: most existing approaches to binaural or ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
Östlund, Ulrika; Kidd, Lisa; Wengström, Yvonne; Rowa-Dewar, Neneh
2011-03-01
It has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches continues to be a topic of much debate, and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses. This review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in the English language between January 1999 and September 2009. In total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified, and an example of developing theory from such data is provided. A trend for conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings and help researchers to clarify their theoretical propositions and the basis of their results. This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heaney, Mike
Statistically designed experiments can save researchers time and money by reducing the number of necessary experimental trials, while resulting in more conclusive experimental results. Surprisingly, many researchers are still not aware of this efficient and effective experimental methodology. As reported in a 2013 article from Chemical & Engineering News, there has been a resurgence of this methodology in recent years (http://cen.acs.org/articles/91/i13/Design-Experiments-Makes-Comeback.html?h=2027056365). This presentation will provide a brief introduction to statistically designed experiments. The main advantages will be reviewed along with some basic concepts such as factorial and fractional factorial designs. The recommended sequential approach to experiments will be introduced and finally a case study will be presented to demonstrate this methodology.
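As a flavour of the basic concepts mentioned, the snippet below enumerates a 2^3 full factorial and the half fraction defined by the relation I = ABC; the factor names are arbitrary placeholders.

    from itertools import product

    factors = ("A", "B", "C")
    full = list(product((-1, 1), repeat=len(factors)))     # 2^3 = 8 runs
    half = [r for r in full if r[0] * r[1] * r[2] == 1]    # 2^(3-1), I = ABC

    for run in half:
        print(dict(zip(factors, run)))   # 4 runs: the principal half fraction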
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
Berman, Diana; Guha, Supratik; Lee, Byeongdu; Elam, Jeffrey W; Darling, Seth B; Shevchenko, Elena V
2017-03-28
Control over refractive index and thickness of surface coatings is central to the design of low-refraction films used in applications ranging from optical computing to antireflective coatings. Here, we introduce gas-phase sequential infiltration synthesis (SIS) as a robust, powerful, and efficient approach to deposit conformal coatings with very low refractive indices. We demonstrate that the refractive indices of inorganic coatings can be efficiently tuned by the number of cycles used in the SIS process, the composition, and selective swelling of the polymer template. We show that the refractive index of Al2O3 can be lowered from 1.76 down to 1.1 using this method. The thickness of the Al2O3 coating can be efficiently controlled by the swelling of the block copolymer template in ethanol at elevated temperature, thereby enabling deposition of both single-layer and graded-index broadband antireflective coatings. Using this technique, Fresnel reflections of glass can be reduced to as low as 0.1% under normal illumination over a broad spectral range.
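The reported index reduction is consistent with simple effective-medium arithmetic. The sketch below uses a basic volume-weighted permittivity mixing rule, which is our choice for illustration and not necessarily the model used in the paper, to back out the alumina fill fraction implied by an effective index of 1.1.

    import numpy as np

    n_al2o3, n_air = 1.76, 1.0

    def n_eff(fill):
        # volume-weighted average of permittivities (simple mixing rule)
        return np.sqrt(fill * n_al2o3 ** 2 + (1 - fill) * n_air ** 2)

    # solve n_eff(fill) = 1.1 for the alumina volume fraction
    fill = (1.1 ** 2 - n_air ** 2) / (n_al2o3 ** 2 - n_air ** 2)
    print(fill, n_eff(fill))   # ~0.10: roughly 10% alumina, 90% porosity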
Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues
2014-06-01
The aim of our study was to systematically compare different methodologies, based on tendon thickness and structure, to establish an evidence-based approach for the sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for both maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and using different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess sensitivity, specificity, and accuracy of the different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P < .05). When used as an isolated criterion, MSTT > 6.0 mm provided the best accuracy (93.7%) when compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT > 6.0 mm or ATS as diagnostic criteria in no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided the best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1 s to assess diagnostic criteria and reach the diagnosis (vs. 43.3 s for sequential testing first evaluating tendon thickness and 47.4 s for parallel testing). We found that using either MSTT > 6.0 mm or ATS as diagnostic criteria for both parallel and sequential testing provides the best overall yield for the sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. Among these strategies, a two-step sequential approach first assessing tendon structure was advantageous because it required a lower number of criteria to be assessed and demanded less time to assess diagnostic criteria and reach the diagnosis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
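The parallel/sequential bookkeeping generalizes neatly. Under an either-positive decision rule, a sequential strategy in which the second criterion is assessed only after a negative first test reaches the same sensitivity and specificity as parallel testing, but assesses fewer criteria on average. The sketch below works this out for two hypothetical criteria assumed conditionally independent given disease status; both the accuracy figures and the independence assumption are illustrative, not taken from the study.

    def either_positive(se1, sp1, se2, sp2, prev):
        # operating characteristics of the either-positive rule, assuming
        # the two criteria are conditionally independent given disease status
        se = 1 - (1 - se1) * (1 - se2)
        sp = sp1 * sp2
        # expected criteria assessed if criterion 2 is read only after a
        # negative first test (sequential); parallel always reads both (2.0)
        p_first_neg = prev * (1 - se1) + (1 - prev) * sp1
        return se, sp, 1 + p_first_neg

    se, sp, n_seq = either_positive(0.90, 0.95, 0.92, 0.96, prev=0.8)
    print(f"sens={se:.3f} spec={sp:.3f} "
          f"criteria/patient: sequential={n_seq:.2f} parallel=2.00")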
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives providing a systematic foundation for advancements in structural design.
NASA Astrophysics Data System (ADS)
Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.
2017-05-01
We employ an adaptive measurement system, based on a sequential hypothesis testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquired multiple-view projection data for 200 bags. We consider an adaptive measurement design in which the X-ray projection measurements are acquired sequentially and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline, which corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared with a system that relies on static measurements.
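The sequential hypothesis testing framework invoked here is, in its textbook form, Wald's SPRT. The sketch below is far simpler than the X-ray testbed (a Gaussian measurement model with invented parameters): it accumulates log-likelihood ratios view by view and stops at the Wald thresholds, which illustrates why an adaptive stopping rule needs far fewer views than a static design that always uses all of them.

    import numpy as np

    rng = np.random.default_rng(7)

    def sprt(measurements, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        upper = np.log((1 - beta) / alpha)   # accept H1 (threat)
        lower = np.log(beta / (1 - alpha))   # accept H0 (benign)
        llr = 0.0
        for n, x in enumerate(measurements, start=1):
            llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
            if llr >= upper:
                return "threat", n
            if llr <= lower:
                return "benign", n
        return "undecided", len(measurements)

    # average number of views used when a threat is actually present
    views = [sprt(rng.normal(1.0, 1.0, size=200))[1] for _ in range(2000)]
    print(np.mean(views))   # roughly 9-10 views on average, not a fixed 200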
Chatterji, Madhabi
2016-12-01
This paper explores avenues for navigating evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences on the intervention's effectiveness. A definition is provided of a CSP drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven (7) major sources of complexity that typify CSPs, threatening assumptions of textbook-recommended experimental designs for performing impact evaluations. Theoretically-supported, alternative methodological strategies are discussed to navigate assumptions and counter the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include: sequential refinement of the evaluation design through systems thinking, systems-informed logic modeling; and use of extended term, mixed methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question-framing, and selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.
Networked Workstations and Parallel Processing Utilizing Functional Languages
1993-03-01
... program. This frees the programmer to concentrate on what the program is to do, not how the program is ... The traditional 'von Neumann' architecture uses a timer-based (e.g., the program counter), sequentially programmed, single-processor approach to problem ...
A Rejection Principle for Sequential Tests of Multiple Hypotheses Controlling Familywise Error Rates
BARTROFF, JAY; SONG, JINLIN
2015-01-01
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together we call these conditions a “rejection principle for sequential tests,” which we then apply to some existing sequential multiple testing procedures to give simplified understanding of their FWER control. Next the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and to finding the maximum safe dose of a treatment. PMID:26985125
ERIC Educational Resources Information Center
Boekkooi-Timminga, Ellen
Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
Topics in the Sequential Design of Experiments
1992-03-01
DTIC report documentation fragment. Subject terms: Design of Experiments, Renewal Theory, Sequential Testing, Limit Theory, Local ... . Cited works include "... distributions for one parameter exponential families," by Michael Woodroofe, Statistica Sinica 2 (1991), 91-112, and "A non linear renewal theory for a functional of ..."
MSFC Skylab airlock module, volume 1. [systems design and performance
NASA Technical Reports Server (NTRS)
1974-01-01
The history and development of the Skylab Airlock Module and Payload Shroud is presented from initial concept through final design. A summary is given of the Airlock features and systems. System design and performance are presented for the Spent Stage Experiment Support Module, structure and mechanical systems, mass properties, thermal and environmental control systems, the EVA/IVA suit system, the electrical power system, the sequential system, and the instrumentation system.
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time-to-event endpoint. In fact, it is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but few R functions have been implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, which is an add-on function to the package gsDesign and permits, in one run of the program, calculation of the number of events and required sample size, but also the boundaries and corresponding p-values for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
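The first computation mentioned, the required number of events, is conventionally obtained from Schoenfeld's formula. A Python rendering of that standard formula is below (it is not the plansurvct.func code, which is in R); the group-sequential inflation factor is left as a user input because its value depends on the chosen boundary design.

    import numpy as np
    from scipy.stats import norm

    def required_events(hr, alpha=0.05, power=0.9, inflation=1.0):
        # Schoenfeld's formula for a 1:1 randomized time-to-event comparison;
        # `inflation` is the design-dependent group-sequential factor (~1.0-1.1)
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        d = 4 * (z_a + z_b) ** 2 / np.log(hr) ** 2
        return d * inflation

    d = required_events(hr=0.7)
    print(round(d))         # ~330 events for a fixed design
    print(round(d / 0.6))   # ~551 patients if 60% are expected to have an event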
Tractable Experiment Design via Mathematical Surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
Design and protocol of a randomized multiple behavior change trial: Make Better Choices 2 (MBC2).
Pellegrini, Christine A; Steglitz, Jeremy; Johnston, Winter; Warnick, Jennifer; Adams, Tiara; McFadden, H G; Siddique, Juned; Hedeker, Donald; Spring, Bonnie
2015-03-01
Suboptimal diet and inactive lifestyle are among the most prevalent preventable causes of premature death. Interventions that target multiple behaviors are potentially efficient; however, the optimal way to initiate and maintain multiple health behavior changes is unknown. The Make Better Choices 2 (MBC2) trial aims to examine whether sustained healthful diet and activity change are best achieved by targeting diet and activity behaviors simultaneously or sequentially. Study design: approximately 250 inactive adults with poor-quality diet will be randomized to 3 conditions examining the best way to prescribe healthy diet and activity change. The 3 intervention conditions prescribe: 1) an increase in fruit and vegetable consumption (F/V+), a decrease in sedentary leisure screen time (Sed-), and an increase in physical activity (PA+) simultaneously (Simultaneous); 2) F/V+ and Sed- first, then sequentially adding PA+ (Sequential); or 3) a Stress Management Control that addresses stress, relaxation, and sleep. All participants will receive a smartphone application to self-monitor behaviors and regular coaching calls to help facilitate behavior change during the 9-month intervention. Healthy lifestyle change in fruit/vegetable and saturated fat intakes, sedentary leisure screen time, and physical activity will be assessed at 3, 6, and 9 months. MBC2 is a randomized m-Health intervention examining methods to maximize initiation and maintenance of multiple healthful behavior changes. Results from this trial will provide insight about an optimal technology-supported approach to promote improvement in diet and physical activity. Copyright © 2015 Elsevier Inc. All rights reserved.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least-squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allows for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
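The sequential accumulation step can be illustrated with the covariance form of recursive least squares; the paper accumulates via Householder transformations, which is numerically sturdier, so this is only the simpler textbook stand-in. The two-mode signal model in the demo is invented.

    import numpy as np

    def rls_update(theta, P, x, y, lam=1.0):
        # one recursive least-squares step: new regressor row x, observation y
        Px = P @ x
        k = Px / (lam + x @ Px)              # gain vector
        theta = theta + k * (y - x @ theta)  # parameter update
        P = (P - np.outer(k, Px)) / lam      # covariance update
        return theta, P

    rng = np.random.default_rng(5)
    true = np.array([0.7, -0.3])             # modal amplitudes to recover
    theta = np.zeros(2)
    P = np.eye(2) * 1e3                      # large initial covariance
    for t in np.linspace(0, 10, 500):
        x = np.array([np.sin(1.3 * t), np.cos(1.3 * t)])  # two-mode regressor
        y = true @ x + 0.05 * rng.standard_normal()
        theta, P = rls_update(theta, P, x, y)
    print(theta)                             # -> approximately [0.7, -0.3]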
Lineup Composition, Suspect Position, and the Sequential Lineup Advantage
ERIC Educational Resources Information Center
Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.
2008-01-01
N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…
Evaluating Bias of Sequential Mixed-Mode Designs against Benchmark Surveys
ERIC Educational Resources Information Center
Klausch, Thomas; Schouten, Barry; Hox, Joop J.
2017-01-01
This study evaluated three types of bias--total, measurement, and selection bias (SB)--in three sequential mixed-mode designs of the Dutch Crime Victimization Survey: telephone, mail, and web, where nonrespondents were followed up face-to-face (F2F). In the absence of true scores, all biases were estimated as mode effects against two different…
C-quence: a tool for analyzing qualitative sequential data.
Duncan, Starkey; Collier, Nicholson T
2002-02-01
C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
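A stripped-down version of what such a tool computes, matching a consecutive categorical pattern and reporting its rate of occurrence, fits in a few lines; the event codes here are invented for illustration.

    def pattern_rate(events, pattern):
        # count occurrences of a consecutive categorical pattern and its rate
        hits = sum(1 for i in range(len(events) - len(pattern) + 1)
                   if events[i:i + len(pattern)] == pattern)
        return hits, hits / max(len(events), 1)

    events = ["gaze", "smile", "speak", "gaze", "smile", "nod",
              "gaze", "smile", "speak"]
    print(pattern_rate(events, ["gaze", "smile", "speak"]))  # (2, 0.222...)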
Overcoming limitations of model-based diagnostic reasoning systems
NASA Technical Reports Server (NTRS)
Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.
1989-01-01
The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the design process itself. One goal of current research is the development of a diagnostic algorithm which can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.
Sequence-invariant state machines
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R.; Manjunath, Shamanna K.; Maki, Gary K.
1991-01-01
A synthesis method and an MOS VLSI architecture are presented to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. The design method utilizes binary tree structured (BTS) logic to implement regular and dense circuits. The desired state sequence can be hardwired with power supply connections or can be dynamically reallocated if stored in a register. This allows programmable VLSI controllers to be designed with a compact size and performance approaching that of dedicated logic. Results of ICV implementations are reported and an example sequence-invariant state machine is contrasted with implementations based on traditional methods.
Solving the infeasible trust-region problem using approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott
2004-07-01
The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations for both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region, however, may have no feasible solution when explicit constraints are present. To remedy this problem the mathematical community has developed different versions of a composite-step approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the two-step normal-tangential algorithms. In this paper, a description of the similarities is presented and an expansion of the two-step algorithm is presented for the case of approximations.
Helin-Salmivaara, Arja; Lavikainen, Piia; Aarnio, Emma; Huupponen, Risto; Korhonen, Maarit Jaana
2014-01-01
Sequential cohort design (SCD) applying matching for propensity scores (PS) in accrual periods has been proposed to mitigate bias caused by channeling when calendar time is a proxy for strong confounders. We studied the channeling of patients according to atorvastatin and simvastatin initiation in Finland, starting from the market introduction of atorvastatin in 1998, and explored the SCD PS approach to analyzing the comparative effectiveness of atorvastatin versus simvastatin in the prevention of cardiovascular events (CVE). Initiators of atorvastatin or simvastatin use in the 45-75-year age range in 1998-2006 were characterized by their propensity of receiving atorvastatin over simvastatin, as estimated for 17 six-month periods. Atorvastatin (10 mg) and simvastatin (20 mg) initiators were matched 1:1 on the PS, as estimated for the whole cohort and within each period. Cox regression models were fitted conventionally, and also for the PS matched cohort and the periodically PS matched cohort, to estimate the hazard ratios (HR) for CVEs. Atorvastatin (10 mg) was associated with an 11%-12% lower incidence of CVE in comparison with simvastatin (20 mg). The HR estimates were the same for a conventional Cox model (0.88, 95% confidence interval 0.85-0.91), for the analysis in which the PS was used to match across all periods and the Cox model was adjusted for strong confounders (0.89, 0.85-0.92), and for the analysis in which PS matching was applied within sequential periods (0.88, 0.84-0.92). The HR from a traditional PS matched analysis was 0.80 (0.77-0.83). The SCD PS approach produced effect estimates similar to those obtained in matching for PS within the whole cohort and adjusting the outcome model for strong confounders, but at the cost of efficiency. A traditional PS matched analysis without further adjustment in the outcome model produced estimates further away from unity.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Iterative non-sequential protein structural alignment.
Salem, Saeed; Zaki, Mohammed J; Bystroff, Christopher
2009-06-01
Structural similarity between proteins gives us insights into their evolutionary relationships when there is low sequence similarity. In this paper, we present a novel approach called SNAP for non-sequential pair-wise structural alignment. Starting from an initial alignment, our approach iterates over a two-step process consisting of a superposition step and an alignment step, until convergence. We propose a novel greedy algorithm to construct both sequential and non-sequential alignments. The quality of SNAP alignments were assessed by comparing against the manually curated reference alignments in the challenging SISY and RIPC datasets. Moreover, when applied to a dataset of 4410 protein pairs selected from the CATH database, SNAP produced longer alignments with lower rmsd than several state-of-the-art alignment methods. Classification of folds using SNAP alignments was both highly sensitive and highly selective. The SNAP software along with the datasets are available online at http://www.cs.rpi.edu/~zaki/software/SNAP.
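The superposition step of such an iterative alignment is classically the Kabsch algorithm. A minimal NumPy version of that standard building block is below (SNAP's actual superposition and greedy alignment steps are not detailed in the abstract, so this is only the generic ingredient), with a random-rotation round trip as a check.

    import numpy as np

    def kabsch_rotation(P, Q):
        # optimal rotation R (det +1) aligning centred P onto centred Q; (n, 3)
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # reflection-safe

    rng = np.random.default_rng(2)
    P = rng.normal(size=(30, 3))
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal
    R_true *= np.sign(np.linalg.det(R_true))            # force det = +1
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])       # rotate + translate

    R = kabsch_rotation(P, Q)
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    rmsd = np.sqrt(((Pc @ R.T - Qc) ** 2).sum(axis=1).mean())
    print(rmsd)   # ~0: the rotation is recovered exactly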
Is moral elevation an approach-oriented emotion?
Van de Vyver, Julie; Abrams, Dominic
2017-01-01
Two studies were designed to test whether moral elevation should be conceptualized as an approach-oriented emotion. The studies examined the relationship between moral elevation and the behavioral activation and inhibition systems. Study 1 (N = 80) showed that individual differences in moral elevation were associated with individual differences in behavioral activation but not inhibition. Study 2 (N = 78) showed that an elevation-inducing video promoted equally high levels of approach orientation as an anger-inducing video and significantly higher levels of approach orientation than a control video. Furthermore, the elevation-inducing stimulus (vs. the control condition) significantly promoted prosocial motivation and this effect was sequentially mediated by feelings of moral elevation followed by an approach-oriented state. Overall the results show unambiguous support for the proposal that moral elevation is an approach-oriented emotion. Applied and theoretical implications are discussed. PMID:28191027
Sequential, progressive, equal-power, reflective beam-splitter arrays
NASA Astrophysics Data System (ADS)
Manhart, Paul K.
2017-11-01
The equations to calculate the equal-power reflectivities of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces can reflect light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
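The equal-power condition has a well-known closed form, which appears to be what the abstract refers to: the k-th of N sequential splitters must reflect 1/(N - k + 1) of the power reaching it. A quick numerical check:

    N = 5
    power, taps = 1.0, []
    for k in range(1, N + 1):
        r = 1.0 / (N - k + 1)   # reflectivity of the k-th splitter
        taps.append(power * r)  # power tapped out of the chain
        power *= 1.0 - r        # power transmitted onward
    print(taps)   # five equal 0.2 taps; the final element acts as a mirror (r = 1)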
NASA Technical Reports Server (NTRS)
Klarer, P.
1994-01-01
An alternative methodology for designing an autonomous navigation and control system is discussed. This generalized hybrid system is based on a less sequential and less anthropomorphic approach than that used in the more traditional artificial intelligence (AI) technique. The architecture is designed to allow both synchronous and asynchronous operations between various behavior modules. This is accomplished by intertask communications channels which implement each behavior module and each interconnection node as a stand-alone task. The proposed design architecture allows for construction of hybrid systems which employ both subsumption and traditional AI techniques as well as providing for a teleoperator's interface. Implementation of the architecture is planned for the prototype Robotic All Terrain Lunar Explorer Rover (RATLER) which is described briefly.
Design of dissipative low-authority controllers using an eigensystem assignment technique
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Gupta, S.; Joshi, S. M.
1992-01-01
A novel method for the design of dissipative, low-authority controllers has been developed. The method uses a sequential approach along with eigensystem assignment to compute rate and position gain matrices that assign a number of closed-loop poles of the system to desired locations. Because the feedback gain matrices are symmetric and nonnegative definite, the closed-loop stability is always guaranteed regardless of the model order or parameter inaccuracies. The resulting (nominal) closed-loop system can have specified damping ratios for m modes, which makes the plant amenable to high-authority controller design, using methods such as LQG/LTR or H-infinity. A numerical example is worked out for a flexible structure in order to demonstrate the proposed technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galli, M.R.; Cerda, J.
1998-06-01
A mathematical representation of a heat-exchanger network structure that explicitly accounts for the relative location of heat-transfer units, splitters, and mixers is presented. It is the basis of a mixed-integer linear programming sequential approach to the synthesis of heat-exchanger networks that allows the designer to specify beforehand some desired topology features as further design targets. Such structural information stands for additional problem data to be considered in the problem formulation, thus enhancing the involvement of the design engineer in the synthesis task. The topology constraints are expressed in terms of (1) the equipment items (heat exchangers, splitters, and mixers) that could be incorporated into the network, (2) the feasible neighbors for every potential unit, and (3) the heat matches, if any, with which a heat exchanger can be accomplished in parallel over any process stream. Moreover, the number and types of splitters being arranged over either a particular stream or the whole network can also be restrained. The new approach has been successfully applied to the solution of five example problems, in each of which a wide variety of structural design restrictions were specified.
Lin, Carol Y; Li, Ling
2016-11-07
HPV DNA diagnostic tests for epidemiology monitoring (research purpose) or cervical cancer screening (clinical purpose) have often been considered separately. Women with positive Linear Array (LA) polymerase chain reaction (PCR) research test results typically are neither informed nor referred for colposcopy. Recently, sequential testing using the Hybrid Capture 2 (HC2) HPV clinical test as a triage before genotyping by LA has been adopted for monitoring HPV infections. Also, HC2 has been reported as a more feasible screening approach for cervical cancer in low-resource countries. Thus, knowing the performance of testing strategies incorporating an HPV clinical test (i.e., HC2-only or using HC2 as a triage before genotyping by LA) compared with LA-only testing in measuring HPV prevalence will be informative for public health practice. We conducted a Monte Carlo simulation study. Data were generated using mathematical algorithms. We designated the reported HPV infection prevalence in the U.S. and Latin America as the "true" underlying type-specific HPV prevalence. Analytical sensitivity of HC2 for detecting 14 high-risk (oncogenic) types was considered to be less than that of LA. Estimated-to-true prevalence ratios and percentage reductions were calculated. When the "true" HPV prevalence was designated as the reported prevalence in the U.S., with LA genotyping sensitivity and specificity of (0.95, 0.95), estimated-to-true prevalence ratios of 14 high-risk types were 2.132, 1.056, and 0.958 for LA-only, HC2-only, and sequential testing, respectively. Estimated-to-true prevalence ratios of two vaccine-associated high-risk types were 2.359 and 1.063 for LA-only and sequential testing, respectively. When the designated type-specific prevalence of HPV16 and 18 was reduced by 50%, using either LA-only or sequential testing, prevalence estimates were reduced by 18%. Estimated-to-true HPV infection prevalence ratios using the LA-only testing strategy are generally higher than those using HC2-only or using HC2 as a triage before genotyping by LA. HPV clinical testing can be incorporated to monitor HPV prevalence or vaccine effectiveness. Caution is needed when comparing apparent prevalence from different testing strategies.
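For a single type, the mechanism behind these estimated-to-true ratios can be illustrated with the usual apparent-prevalence formula, p̂ = p·Se + (1−p)(1−Sp). The sketch below is illustrative only, using hypothetical true prevalences; it does not reproduce the paper's multi-type Monte Carlo:

```python
# Illustrative sketch (not the paper's Monte Carlo): for a single HPV type,
# the apparent prevalence under imperfect sensitivity/specificity is
#   p_hat = p * Se + (1 - p) * (1 - Sp),
# so the estimated-to-true ratio is p_hat / p.

def estimated_to_true_ratio(p: float, se: float, sp: float) -> float:
    p_hat = p * se + (1.0 - p) * (1.0 - sp)
    return p_hat / p

if __name__ == "__main__":
    # Hypothetical true prevalences with the (0.95, 0.95) accuracy used for
    # LA genotyping in the study.
    for p in (0.01, 0.05, 0.10):
        print(p, round(estimated_to_true_ratio(p, 0.95, 0.95), 3))
    # False positives dominate at low prevalence, inflating the ratio --
    # the mechanism behind the >1 ratios reported for LA-only testing.
```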
ERIC Educational Resources Information Center
Peters, Richard
A model for Continuous-Integrated-Sequential (C/I/S) curricula for social studies education is presented. The design advocated involves ensuring continuity of instruction from grades K-12, an integration of social studies disciplines, and a sequential process of refining and reinforcing concept and skills from grade-to-grade along the K-12…
NASA Astrophysics Data System (ADS)
Hosking, Michael Robert
This dissertation improves an analyst's use of simulation through better utilization of kriging metamodels. There are three main contributions. First is an analysis of what constitutes good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally, these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design on which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information, we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights, similar to a nugget effect. Our primary focus is to show empirically how the reduced rank decomposition propagates through kriging. In addition, we show further evidence for our explanation through tests of reduced rank kriging's performance over different situations. In total, reduced rank kriging is a useful tool for simulation metamodeling. For the third contribution, we answer the question of how to find the best rank for reduced rank kriging. We do this by creating an alternative method which does not need to search for a particular rank. Instead, it uses all potential ranks; we call this approach omni-rank kriging. This modification realizes the potential gains from reduced rank kriging and provides a workable methodology for simulation metamodeling. Finally, we demonstrate the use and value of these developments on two case studies: a clinic operation problem and a location problem. These cases validate the value of this research. Simulation metamodeling always attempts to extract maximum information from limited data. Each of these contributions will allow analysts to make better use of their constrained computational budgets.
Lalor, Joan G; Casey, Dympna; Elliott, Naomi; Coyne, Imelda; Comiskey, Catherine; Higgins, Agnes; Murphy, Kathy; Devane, Declan; Begley, Cecily
2013-04-08
The role of the clinical nurse/midwife specialist and advanced nurse/midwife practitioner is complex, not least because of the diversity in how the roles are operationalised across health settings and within multidisciplinary teams. The aim of this paper is to use The SCAPE Study: Specialist Clinical and Advanced Practitioner Evaluation in Ireland to illustrate how case study was used to strengthen a Sequential Explanatory Design. In Phase 1, clinicians identified indicators of specialist and advanced practice which were then used to guide the instrumental case study design which formed the second phase of the larger study. Phase 2 used matched case studies to evaluate the effectiveness of specialist and advanced practitioners on clinical outcomes for service users. Data were collected through observation, documentary analysis, and interviews. Observations were made of 23 Clinical Specialists or Advanced Practitioners, and 23 matched clinicians in similar matched non-postholding sites, while they delivered care. Forty-one service users, 41 clinicians, and 23 Directors of Nursing or Midwifery were interviewed, and 279 service users completed a survey based on the components of CS and AP practice identified in Phase 1. A coding framework, and the generation of cross-tabulation matrices in NVivo, was used to make explicit how the outcome measures were confirmed and validated from multiple sources. This strengthened the potential to examine single cases that seemed 'different', and allowed for cases to be redefined. Phase 3 involved interviews with policy-makers to set the findings in context. Case study is a powerful research strategy to use within sequential explanatory mixed method designs, and adds completeness to the exploration of complex issues in clinical practice. The design is flexible, allowing the use of multiple data collection methods from both qualitative and quantitative paradigms. Multiple approaches to data collection are needed to evaluate the impact of complex roles and interventions in health care outcomes and service delivery. Case study design is an appropriate methodology to use when study outcomes relate to clinical practice.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
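For orientation, Youden's index is J = Se + Sp − 1, one instance of the linear combination of sensitivity and specificity that the design tests against a minimal level of acceptance. A minimal sketch with hypothetical marker performance and threshold (the paper's exact two-stage rules are not reproduced here):

```python
# Minimal sketch of the accuracy indices mentioned above. Youden's index is
# J = Se + Sp - 1; a generic linear accuracy index weights the two components.
# The acceptance threshold below is hypothetical, not the paper's design.

def youden_index(se: float, sp: float) -> float:
    return se + sp - 1.0

def linear_accuracy(se: float, sp: float, w: float = 0.5) -> float:
    # w weights sensitivity against specificity, 0 <= w <= 1.
    return w * se + (1.0 - w) * sp

if __name__ == "__main__":
    se, sp = 0.82, 0.88          # hypothetical marker performance
    j_min = 0.50                 # hypothetical minimal level of acceptance
    j = youden_index(se, sp)
    print(f"J = {j:.2f}; meets minimal acceptance: {j >= j_min}")
    print(f"weighted accuracy = {linear_accuracy(se, sp, w=0.6):.2f}")
```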
Salter-Venzon, Dawna; Kazlova, Valentina; Izzy Ford, Samantha; Intra, Janjira; Klosner, Allison E; Gellenbeck, Kevin W
2017-05-01
Despite the notable benefits of carotenoids for human health, the majority of human diets worldwide are repeatedly shown to be inadequate in intake of carotenoid-rich fruits and vegetables, according to current health recommendations. To address this deficit, strategies designed to increase dietary intakes and subsequent plasma levels of carotenoids are warranted. When mixed carotenoids are delivered into the intestinal tract simultaneously, competition occurs for micelle formation and absorption, affecting carotenoid bioavailability. Previously, we tested the in vitro viability of a carotenoid mix designed to deliver individual carotenoids sequentially, spaced from one another over the 6 hr transit time of the human upper gastrointestinal system. We hypothesized that temporally and spatially separating the individual carotenoids would reduce competition for micelle formation, improve uptake, and maximize efficacy. Here, we test this hypothesis in a double-blind, repeated-measure, cross-over human study with 12 subjects by comparing the change of plasma carotenoid levels for 8 hr after oral doses of a sequentially spaced carotenoid mix to a matched mix without sequential spacing. We find that the carotenoid change from baseline, measured as area under the curve, is increased following consumption of the sequentially spaced mix compared to concomitant carotenoid delivery. These results demonstrate reduced interaction and regulation between the sequentially spaced carotenoids, suggesting improved bioavailability from a novel sequentially spaced carotenoid mix.
Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions
Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Greg; Waxmonsky, Jim; Yu, Jihnhee; Murphy, Susan
2013-01-01
In recent years, research in the area of intervention development is shifting from the traditional fixed-intervention approach to adaptive interventions, which allow greater individualization and adaptation of intervention options (i.e., intervention type and/or dosage) over time. Adaptive interventions are operationalized via a sequence of decision rules that specify how intervention options should be adapted to an individual’s characteristics and changing needs, with the general aim to optimize the long-term effectiveness of the intervention. Here, we review adaptive interventions, discussing the potential contribution of this concept to research in the behavioral and social sciences. We then propose the sequential multiple assignment randomized trial (SMART), an experimental design useful for addressing research questions that inform the construction of high-quality adaptive interventions. To clarify the SMART approach and its advantages, we compare SMART with other experimental approaches. We also provide methods for analyzing data from SMART to address primary research questions that inform the construction of a high-quality adaptive intervention. PMID:23025433
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on a subset selection in a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to accurately estimate the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e., transposable to most models with climatic data inputs), and can be combined with most “off-the-shelf” optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies. PMID:28542198
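The subset-selection idea can be sketched generically: summarize each climatic series by a few features, cluster, and keep one representative per cluster. The features, cluster count, and distance below are placeholders, not the paper's ad-hoc similarity function:

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

# Generic sketch of subset selection in a basis of climatic series. Feature
# choice and similarity are placeholders for the paper's ad-hoc function.
rng = np.random.default_rng(0)
series = rng.normal(size=(200, 365))           # 200 synthetic climatic years

# Summarize each series by simple features, then cluster and keep the member
# closest to each centroid as the representative subset.
features = np.column_stack([series.mean(1), series.std(1), series.max(1)])
fw = whiten(features)                          # unit variance per feature
centroids, labels = kmeans2(fw, k=10, minit="++", seed=0)

subset = []
for c in range(10):
    members = np.flatnonzero(labels == c)
    if members.size:
        d = np.linalg.norm(fw[members] - centroids[c], axis=1)
        subset.append(int(members[np.argmin(d)]))
print("representative series:", sorted(subset))
```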
Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.
1992-01-01
This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.
Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.
1992-01-01
A fully integrated aerodynamic/dynamic optimization procedure is described for helicopter rotor blades. The procedure combines performance and dynamic analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuvers; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case, the objective function involves power required (in hover, forward flight and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.
Two time scale output feedback regulation for ill-conditioned systems
NASA Technical Reports Server (NTRS)
Calise, A. J.; Moerder, D. D.
1986-01-01
Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.
Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P
2018-03-01
Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with AKT inhibitor AZD5363 [4-amino- N -[(1 S )-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7 H -pyrrolo[2,3- d ]pyrimidin-4-yl)piperidine-4-carboxamide] and WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1 H -pyrazolo[3,4- d ]pyrimidin-3(2 H )-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess the drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. Copyright © 2018 by The American Society for Pharmacology and Experimental Therapeutics.
ERIC Educational Resources Information Center
Ivankova, Nataliya V.
2014-01-01
In spite of recent methodological developments related to quality assurance in mixed methods research, practical examples of how to implement quality criteria in designing and conducting sequential QUAN [right arrow] QUAL mixed methods studies to ensure the process is systematic and rigorous remain scarce. This article discusses a three-step…
NASA Astrophysics Data System (ADS)
Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.
2017-06-01
This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of different numbers of sequential images on the accuracy of 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and conclusions are drawn from the analysis for the prototype imaging platform.
Lineup composition, suspect position, and the sequential lineup advantage.
Carlson, Curt A; Gronlund, Scott D; Clark, Steven E
2008-06-01
N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved
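Diagnosticity here follows the usual definition in the eyewitness literature: the rate of guilty-suspect identifications divided by the rate of innocent-suspect identifications. A one-line illustration with made-up rates (not data from the experiments above):

```python
# Diagnosticity of suspect identifications, as conventionally defined in the
# eyewitness literature: correct-ID rate divided by false-ID rate. The rates
# below are illustrative, not data from the experiments above.

def diagnosticity(correct_id_rate: float, false_id_rate: float) -> float:
    return correct_id_rate / false_id_rate

print(diagnosticity(0.50, 0.10))  # 5.0: a suspect ID is 5x more likely to
                                  # come from a guilty than an innocent suspect
```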
A field test of three LQAS designs to assess the prevalence of acute malnutrition.
Deitchler, Megan; Valadez, Joseph J; Egge, Kari; Fernandez, Soledad; Hennigan, Mary
2007-08-01
The conventional method for assessing the prevalence of Global Acute Malnutrition (GAM) in emergency settings is the 30 x 30 cluster-survey. This study describes alternative approaches: three Lot Quality Assurance Sampling (LQAS) designs to assess GAM. The LQAS designs were field-tested and their results compared with those from a 30 x 30 cluster-survey. Computer simulations confirmed that small clusters instead of a simple random sample could be used for LQAS assessments of GAM. Three LQAS designs were developed (33 x 6, 67 x 3, Sequential design) to assess GAM thresholds of 10, 15 and 20%. The designs were field-tested simultaneously with a 30 x 30 cluster-survey in Siraro, Ethiopia during June 2003. Using a nested study design, anthropometric, morbidity and vaccination data were collected on all children 6-59 months in sampled households. Hypothesis tests about GAM thresholds were conducted for each LQAS design. Point estimates were obtained for the 30 x 30 cluster-survey and the 33 x 6 and 67 x 3 LQAS designs. Hypothesis tests showed GAM as <10% for the 33 x 6 design and GAM as ≥10% for the 67 x 3 and Sequential designs. Point estimates for the 33 x 6 and 67 x 3 designs were similar to those of the 30 x 30 cluster-survey for GAM (6.7%, CI = 3.2-10.2%; 8.2%, CI = 4.3-12.1%; 7.4%, CI = 4.8-9.9%) and all other indicators. The CIs for the LQAS designs were only slightly wider than the CIs for the 30 x 30 cluster-survey; yet the LQAS designs required substantially less time to administer. The LQAS designs provide statistically appropriate alternatives to the more time-consuming 30 x 30 cluster-survey. However, additional field-testing is needed using independent samples rather than a nested study design.
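Once clustering is ignored, an LQAS design of this kind reduces to a binomial decision rule: with n sampled children and decision rule d, classify GAM as at or above the threshold when at least d cases are observed. The sketch below uses a hypothetical d and treats observations as independent (no design effect), so it only illustrates how the misclassification probabilities trade off:

```python
from scipy.stats import binom

# Sketch of LQAS operating characteristics for a design with n sampled
# children and decision rule d ("classify GAM >= 10% if at least d cases").
# Independence is assumed (no design effect from the small clusters); the
# values of d are hypothetical, not the field-test rules.

def classification_errors(n: int, d: int, p_low: float, p_high: float):
    alpha = 1.0 - binom.cdf(d - 1, n, p_low)   # P(classify high | true = p_low)
    beta = binom.cdf(d - 1, n, p_high)         # P(classify low  | true = p_high)
    return alpha, beta

if __name__ == "__main__":
    n = 33 * 6                                 # the 33 x 6 design: 198 children
    for d in (14, 16, 18):
        a, b = classification_errors(n, d, p_low=0.05, p_high=0.10)
        print(f"d = {d}: alpha = {a:.3f}, beta = {b:.3f}")
```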
Rodrigues, Eunice R G O; Lapa, Rui A S
2009-03-01
An alternative process for the design and construction of fluidic devices is presented. Several sealing processes were studied, as well as the hydrodynamic characteristics of the proposed fluidic devices. Manifolds were imprinted on polymeric substrates by direct-write milling, according to Computer Assisted Design (CAD) data. Poly(methyl methacrylate) (PMMA) was used as substrate due to its physical and chemical properties. Different bonding approaches for the imprinted channels were evaluated and UV-photopolymerization of acrylic acid (AA) was selected. The hydrodynamic characteristics of the proposed flow devices were assessed and compared to those obtained in similar flow systems using PTFE reactors and micro-pumps as propulsion units (multi-pumping approach). The applicability of the imprinted reactors was evaluated in the sequential determination of calcium and magnesium in water samples. Results obtained were in good agreement with those obtained by the reference procedure.
User-Defined Data Distributions in High-Level Programming Languages
NASA Technical Reports Server (NTRS)
Diaconescu, Roxana E.; Zima, Hans P.
2006-01-01
One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
NASA Astrophysics Data System (ADS)
Pfefferkorn, T.; Oxynos, C.; Greff, P.; Gerlach, L.
2008-09-01
After the successful series of Eurostar 3000 and Spacebus 4000 satellites, and due to the demand of satellite operators for even larger and more powerful satellites, ESA decided to co-fund the development of a new satellite platform which covers the market segment beyond the upper limits of both satellite families. The new satellite bus family Alphabus is developed in the frame of the ARTES 8 project by a joint project team of ASTRIUM and TAS, whereas the solar array is developed by ASTRIUM GmbH. The main approaches in this design phase for the Alphabus solar array were to find a standardized and scalable design for production and to reuse qualification heritage from former projects, especially Eurostar 3000, as far as possible. The main challenges for the solar array design and test philosophy were the use of lateral deployment and related sequential deployment, and the bus voltage of 102.5 V and related ESD precautions. This paper provides an overview of the different configurations, their main design features, and performance parameters. In addition, it summarizes the development and verification approach and shows the actual qualification status.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, J. V.; Goheen, Steven C.
The formation of peptide and protein conjugates of cellulose on cotton fabrics provides promising leads for the development of wound healing, antibacterial, and decontaminating textiles. An approach to the design, synthesis, and analysis of bioconjugates containing cellulose peptide and protein conjugates includes: 1) computer graphic modeling for a rationally designed structure; 2) attachment of the peptide or protein to cotton cellulose through a linker amino acid; and 3) characterization of the resulting bioconjugate. Computer graphic simulation of protein and peptide cellulose conjugates gives a rationally designed biopolymer to target synthetic modifications to the cotton cellulose. Techniques for preparing these types of conjugates involve both sequential assembly of the peptide on the fabric and direct crosslinking of the peptide or protein as cellulose-bound esters or carboxymethylcellulose amides.
Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Anderson, W. Kyle
2000-01-01
Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.
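For orientation, the discrete adjoint machinery referenced above has the standard textbook form (not copied from the paper): with discretized flow residual R(Q, D) = 0 and cost f(Q, D), a single adjoint solve yields sensitivities with respect to all design variables D.

```latex
% Standard discrete adjoint formulation (textbook form, for orientation):
\begin{align}
  R(Q, D) &= 0 && \text{discretized flow equations} \\
  \Bigl[\frac{\partial R}{\partial Q}\Bigr]^{T}\Lambda
    &= -\Bigl[\frac{\partial f}{\partial Q}\Bigr]^{T}
    && \text{adjoint system for the costate } \Lambda \\
  \frac{\mathrm{d}f}{\mathrm{d}D}
    &= \frac{\partial f}{\partial D}
     + \Lambda^{T}\,\frac{\partial R}{\partial D}
    && \text{sensitivities w.r.t.\ all design variables}
\end{align}
```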
Extended target recognition in cognitive radar networks.
Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin
2010-01-01
We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.
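The stopping logic underlying such a sequential hypothesis testing framework can be illustrated with the binary Wald SPRT; the radar setting above is M-ary with adaptive waveforms, so this is only the skeleton, with all numbers illustrative:

```python
import math
import random

# Minimal sequential hypothesis test (Wald SPRT) between two Gaussian means,
# to illustrate the stopping logic behind a GLR-based SHT framework. The
# thresholds come from Wald's approximations for error rates alpha and beta.

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)      # accept H1 when crossed
    lower = math.log(beta / (1 - alpha))      # accept H0 when crossed
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for Gaussian observations
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

random.seed(1)
obs = (random.gauss(1.0, 1.0) for _ in range(1000))  # data generated under H1
print(sprt(obs))   # typically decides H1 after only a handful of samples
```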
An efficiency study of the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw
1995-01-01
The efficiency of the Simultaneous Analysis and Design (SAND) approach in the minimum weight optimization of structural systems subject to strength and displacement constraints as well as size side constraints is investigated. SAND allows for an optimization to take place in one single operation as opposed to the more traditional and sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present a possible cure for any inherent deficiencies.
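Schematically, the distinction between the two formulations is standard (not specific to this paper): NAND nests a full analysis inside each optimizer iterate, while SAND treats the analysis equations as equality constraints over both the sizing variables x and the displacements u, which is why the stiffness matrix K(x) is never factored.

```latex
% NAND vs. SAND in schematic form (W = weight, g = strength/displacement
% constraints, K(x) u = f = discretized equilibrium equations):
\begin{align}
  \text{NAND:}\quad & \min_{x}\; W(x)
    \quad \text{s.t.}\quad g\bigl(x, u(x)\bigr) \le 0,
    \quad u(x) \text{ from solving } K(x)\,u = f \text{ at each iterate},\\
  \text{SAND:}\quad & \min_{x,\,u}\; W(x)
    \quad \text{s.t.}\quad K(x)\,u = f,\quad g(x, u) \le 0 .
\end{align}
```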
Guidance for using mixed methods design in nursing practice research.
Chiang-Hanisko, Lenny; Newman, David; Dyess, Susan; Piyakong, Duangporn; Liehr, Patricia
2016-08-01
The mixed methods approach purposefully combines both quantitative and qualitative techniques, enabling a multi-faceted understanding of nursing phenomena. The purpose of this article is to introduce three mixed methods designs (parallel; sequential; conversion) and highlight interpretive processes that occur with the synthesis of qualitative and quantitative findings. Real-world examples of research studies conducted by the authors will demonstrate the processes leading to the merger of data. The examples include: research questions; data collection procedures and analysis with a focus on synthesizing findings. Based on experience with mixed methods studies, the authors introduce two synthesis patterns (complementary; contrasting), considering application for practice and implications for research. Copyright © 2015 Elsevier Inc. All rights reserved.
System reliability approaches for advanced propulsion system structures
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mahadevan, S.
1991-01-01
This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.
Engineering design: A cognitive process approach
NASA Astrophysics Data System (ADS)
Strimel, Greg Joseph
The intent of this dissertation was to identify the cognitive processes used by advanced pre-engineering students to solve complex engineering design problems. Students in technology and engineering education classrooms are often taught to use an ideal engineering design process that has been generated mostly by educators and curriculum developers. However, the review of literature showed that it is unclear as to how advanced pre-engineering students cognitively navigate solving a complex and multifaceted problem from beginning to end. Additionally, it was unclear how a student thinks and acts throughout their design process and how this affects the viability of their solution. Therefore, Research Objective 1 was to identify the fundamental cognitive processes students use to design, construct, and evaluate operational solutions to engineering design problems. Research Objective 2 was to determine identifiers within student cognitive processes for monitoring aptitude to successfully design, construct, and evaluate technological solutions. Lastly, Research Objective 3 was to create a conceptual technological and engineering problem-solving model integrating student cognitive processes for the improved development of problem-solving abilities. The methodology of this study included multiple forms of data collection. The participants were first given a survey to determine their prior experience with engineering and to provide a description of the subjects being studied. The participants were then presented an engineering design challenge to solve individually. While they completed the challenge, the participants verbalized their thoughts using an established "think aloud" method. These verbalizations were captured along with participant observational recordings using point-of-view camera technology. Additionally, the participant design journals, design artifacts, solution effectiveness data, and teacher evaluations were collected for analysis to help achieve the research objectives of this study. Two independent coders then coded the video/audio recordings and the additional design data using Halfin's (1973) 17 mental processes for technological problem-solving. The results of this study indicated that the participants employed a wide array of mental processes when solving engineering design challenges. However, the findings provide a general analysis of the number of times participants employed each mental process, as well as the amount of time consumed employing the various mental processes through the different stages of the engineering design process. The results indicated many similarities between the students solving the problem, which may highlight voids in current technology and engineering education curricula. Additionally, the findings showed differences between the processes employed by participants that created the most successful solutions and the participants who developed the least effective solutions. Upon comparing and contrasting these processes, recommendations for instructional strategies to enhance a student's capability for solving engineering design problems were developed. The results also indicated that students, when left without teacher intervention, use a simplified and more natural process to solve design challenges than the 12-step engineering design process reported in much of the literature. Lastly, these data indicated that students followed two different approaches to solving the design problem. 
Some students employed a sequential and logical approach, while others employed a nebulous, solution-centered, trial-and-error approach to solving the problem. In this study, the participants who were more sequential produced better-performing solutions. Examining these two approaches and the student cognition data enabled the researcher to generate a conceptual engineering design model for the improved teaching and development of engineering design problem solving.
Sequential circuit design for radiation hardened multiple voltage integrated circuits
Clark, Lawrence T [Phoenix, AZ; McIver, III, John K.
2009-11-24
The present invention includes a radiation hardened sequential circuit, such as a bistable circuit, flip-flop or other suitable design that presents substantial immunity to ionizing radiation while simultaneously maintaining a low operating voltage. In one embodiment, the circuit includes a plurality of logic elements that operate on relatively low voltage, and a master and slave latches each having storage elements that operate on a relatively high voltage.
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
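A toy version of the selection rule makes the two thresholds concrete: a candidate pattern is kept only if its accumulated utility and accumulated existence probability both meet the minimums. The utility model below is deliberately simplified relative to formal HUSPM (which maximizes utility over occurrence matchings), and the database is invented:

```python
from itertools import combinations

# Toy sketch of the HUPSPM selection rule: keep a candidate only if both its
# accumulated utility and accumulated existence probability meet the minimum
# thresholds. Utilities and probabilities below are invented.

db = [
    # (sequence of items, item utilities, sequence existence probability)
    (("a", "b", "c"), {"a": 4, "b": 1, "c": 6}, 0.9),
    (("a", "c"),      {"a": 5, "c": 2},         0.6),
    (("b", "c"),      {"b": 3, "c": 4},         0.8),
]

def supports(seq, pattern):
    it = iter(seq)                  # order-preserving subsequence test
    return all(item in it for item in pattern)

def evaluate(pattern):
    utility = probability = 0.0
    for seq, utils, prob in db:
        if supports(seq, pattern):
            utility += sum(utils[i] for i in pattern)
            probability += prob
    return utility, probability

min_util, min_prob = 10.0, 1.0
items = sorted({i for seq, _, _ in db for i in seq})
for r in (1, 2):                    # candidates in lexicographic order, for brevity
    for pattern in combinations(items, r):
        u, p = evaluate(pattern)
        if u >= min_util and p >= min_prob:
            print(pattern, u, p)
```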
Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.
Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty
2011-10-01
The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis
Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B
2011-01-01
Objective The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method Eye gaze behaviors of clinicians and patients in 110-videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. Lag-sequential analysis included both event-based lag and time-based lag. Results Results from event-based lag analysis showed that the patients’ gaze followed that of clinicians, while clinicians did not follow patients. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds after the initial behavior of the clinician. Conclusion Our data suggest that the clinician’s gaze significantly affects the medical encounter but not the converse. Application Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723
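The event-based lag-1 computation described in both records above reduces to a transition table plus cell-wise adjusted residuals (Allison-Liker style z scores). A sketch on a synthetic coded stream; the behaviour codes are hypothetical labels, not the study's coding scheme:

```python
import numpy as np

# Sketch of event-based lag-1 sequential analysis: count A -> B transitions in
# a coded behaviour stream and compute adjusted residuals (z scores) per cell.
# The coded stream and labels below are synthetic.

codes = ["C_gaze_pt", "P_gaze_cl", "C_gaze_chart", "P_gaze_other"]
rng = np.random.default_rng(42)
stream = rng.choice(codes, size=500).tolist()

k = len(codes)
idx = {c: i for i, c in enumerate(codes)}
obs = np.zeros((k, k))
for a, b in zip(stream, stream[1:]):          # lag-1 pairs
    obs[idx[a], idx[b]] += 1

n = obs.sum()
row = obs.sum(1, keepdims=True)
col = obs.sum(0, keepdims=True)
exp = row @ col / n                           # expected counts under independence
z = (obs - exp) / np.sqrt(exp * (1 - row / n) * (1 - col / n))
print(np.round(z, 2))                         # |z| > 1.96: significant sequence
```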
Accurately controlled sequential self-folding structures by polystyrene film
NASA Astrophysics Data System (ADS)
Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse
2017-08-01
Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials to enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.
Artificial heart for humanoid robot using coiled SMA actuators
NASA Astrophysics Data System (ADS)
Potnuru, Akshay; Tadesse, Yonas
2015-03-01
Previously, we presented the design and characterization of an artificial heart using cylindrical shape memory alloy (SMA) actuators for humanoids [1]. The robotic heart was primarily designed to pump a blood-like fluid to parts of the robot, such as the face, to simulate blushing or anger through the use of elastomeric substrates for the transport of fluids. It can also be used for other applications. In this paper, we present an improved design using high-strain coiled SMAs and a novel pumping mechanism that uses sequential actuation to create peristalsis-like motions and hence pump the fluid. Various placements of the actuators are investigated with respect to the silicone elastomeric body. This new approach provides better performance in terms of the fluid volume pumped.
Correlated sequential tunneling through a double barrier for interacting one-dimensional electrons
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-07-01
The problem of resonant tunneling through a quantum dot weakly coupled to spinless Tomonaga-Luttinger liquids has been studied. We compute the linear conductance due to sequential tunneling processes upon employing a master equation approach. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects can be important. Focusing mainly on the temperature dependence of the peak conductance, we discuss the relation of these findings to previous theoretical and experimental results.
Correlated sequential tunneling in Tomonaga-Luttinger liquid quantum dots
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-02-01
We investigate tunneling through a quantum dot formed by two strong impurities in a spinless Tomonaga-Luttinger liquid. Upon employing a Markovian master equation approach, we compute the linear conductance due to sequential tunneling processes. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling (UST) processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects are shown to dominate over UST. Focusing mainly on the temperature dependence of the conductance maximum, we discuss the relation of our results to previous theoretical and experimental results.
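As a reference point, the lowest-order (UST) description that both papers extend is the textbook two-state master equation with golden-rule rates Γ_l± for each lead l; the CST diagrams add corrections beyond these rates. This skeleton is standard, not the papers' CST calculation:

```latex
% Textbook sequential-tunneling skeleton (dot fluctuating between N and N+1
% electrons, occupation probabilities p_0 + p_1 = 1, rates per lead l = L, R):
\begin{align}
  \dot p_{1} &= \Gamma^{+}\, p_{0} - \Gamma^{-}\, p_{1},
  \qquad \Gamma^{\pm} = \Gamma_{L}^{\pm} + \Gamma_{R}^{\pm},\\
  p_{1}^{\mathrm{st}} &= \frac{\Gamma^{+}}{\Gamma^{+} + \Gamma^{-}},
  \qquad
  I = e\,\frac{\Gamma_{L}^{+}\Gamma_{R}^{-} - \Gamma_{L}^{-}\Gamma_{R}^{+}}
             {\Gamma^{+} + \Gamma^{-}},
\end{align}
```

with the linear conductance obtained by expanding the rates to first order in the bias voltage.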
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
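A minimal weighted goal-programming sketch shows the flavor of the second mode: penalize only deviations above each goal and minimize their weighted sum, here with a derivative-free Powell search echoing the report's use of Powell's conjugate directions method. The toy model, goals, and weights are hypothetical, not the cover-plate problem:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal weighted goal-programming sketch (toy problem, not the cover-plate
# model of the report): two design variables, goals on weight and stress, and
# an objective that penalizes only unwanted deviations above each goal.

def weight(x):
    return 3.0 * x[0] + 2.0 * x[1]

def stress(x):
    return 10.0 / (x[0] * x[1])            # toy surrogate for load-induced stress

GOALS = {"weight": 6.0, "stress": 4.0}     # hypothetical goals
W = {"weight": 1.0, "stress": 5.0}         # higher weight ~ higher priority

def gp_objective(x):
    d_weight = max(0.0, weight(x) - GOALS["weight"])   # overweight deviation
    d_stress = max(0.0, stress(x) - GOALS["stress"])   # overstress deviation
    return W["weight"] * d_weight + W["stress"] * d_stress

res = minimize(gp_objective, x0=np.array([1.0, 1.0]),
               bounds=[(0.2, 5.0), (0.2, 5.0)], method="Powell")
print(res.x, weight(res.x), stress(res.x))
```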
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, Steven; Valera-Leon, Carlos; Dechev, Damian
The vector is a fundamental data structure, which provides constant-time access to a dynamically-resizable range of elements. Currently, there exist no wait-free vectors. The only non-blocking version supports only a subset of the sequential vector API and exhibits significant synchronization overhead caused by supporting opposing operations. Since many applications operate in phases of execution, wherein each phase only a subset of operations are used, this overhead is unnecessary for the majority of the application. To address the limitations of the non-blocking version, we present a new design that is wait-free, supports more of the operations provided by the sequential vector, and provides alternative implementations of key operations. These alternatives allow the developer to balance the performance and functionality of the vector as requirements change throughout execution. Compared to the known non-blocking version and the concurrent vector found in Intel’s TBB library, our design outperforms or provides comparable performance in the majority of tested scenarios. Over all tested scenarios, the presented design performs an average of 4.97 times more operations per second than the non-blocking vector and 1.54 more than the TBB vector. In a scenario designed to simulate the filling of a vector, performance improvement increases to 13.38 and 1.16 times. This work presents the first ABA-free non-blocking vector. Finally, unlike the other non-blocking approach, all operations are wait-free and bounds-checked, and elements are stored contiguously in memory.
Microcomputer Applications in Interaction Analysis.
ERIC Educational Resources Information Center
Wadham, Rex A.
The Timed Interval Categorical Observation Recorder (TICOR), a portable, battery powered microcomputer designed to automate the collection of sequential and simultaneous behavioral observations and their associated durations, was developed to overcome problems in gathering subtle interaction analysis data characterized by sequential flow of…
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amount of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
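All three algorithms share the skeleton that the parallelization targets: follow a random path, krige each node from previously informed nodes, and draw from the resulting local distribution. A minimal 1D sequential Gaussian simulation sketch, illustrative only and not the GSLIB-derived C code, with covariance parameters chosen arbitrarily:

```python
import numpy as np

# Minimal 1D sequential Gaussian simulation sketch: random path, simple
# kriging with zero mean under an exponential covariance, then a draw from
# the conditional Gaussian at each node. Parameters are arbitrary.

def cov(h, sill=1.0, corr_len=10.0):
    return sill * np.exp(-np.abs(h) / corr_len)

def sgs_1d(n=100, n_neigh=8, seed=0):
    rng = np.random.default_rng(seed)
    x = np.arange(n, dtype=float)
    sim = np.full(n, np.nan)
    for node in rng.permutation(n):            # random visiting path
        known = np.flatnonzero(~np.isnan(sim))
        if known.size == 0:
            sim[node] = rng.normal(0.0, 1.0)   # first node: marginal draw
            continue
        # nearest previously simulated nodes as conditioning data
        neigh = known[np.argsort(np.abs(x[known] - x[node]))][:n_neigh]
        K = cov(x[neigh, None] - x[None, neigh])
        k = cov(x[neigh] - x[node])
        w = np.linalg.solve(K, k)              # simple kriging weights
        mean = w @ sim[neigh]
        var = max(cov(0.0) - w @ k, 1e-12)     # kriging variance, clamped
        sim[node] = rng.normal(mean, np.sqrt(var))
    return sim

print(np.round(sgs_1d()[:10], 2))
```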
1984-06-01
[Conference program fragment] A session on sequential testing, including the contributed paper "A Truncated Sequential Probability Ratio Test"; report keywords include operational testing, reliability, random numbers, bootstrap methods, missing data, fire support, computer models, and carcinogenesis studies.
Mathematical Problem Solving through Sequential Process Analysis
ERIC Educational Resources Information Center
Codina, A.; Cañadas, M. C.; Castro, E.
2015-01-01
Introduction: The macroscopic perspective is one of the frameworks for research on problem solving in mathematics education. Coming from this perspective, our study addresses the stages of thought in mathematical problem solving, offering an innovative approach because we apply sequential relations and global interrelations between the different…
Baglivo, Cristina; Congedo, Paolo Maria
2018-04-01
Several technical combinations have been evaluated in order to design high energy performance buildings for a warm climate. The analysis has been developed in several steps, avoiding the use of HVAC systems. The methodological approach of this study is based on a sequential search technique and is presented in the paper entitled "Envelope Design Optimization by Thermal Modeling of a Building in a Warm Climate" [1]. The Operative Air Temperature (TOP) trends for each combination have been plotted through a dynamic simulation performed using the software TRNSYS 17 (a transient system simulation program, University of Wisconsin, Solar Energy Laboratory, USA, 2010). Starting from the simplest building configuration, consisting of 9 rooms (equal-sized modules of 5 × 5 m²), the different building components are sequentially evaluated until the envelope design is optimized. The aim of this study is to perform a step-by-step simulation, simplifying the model as much as possible without introducing additional variables that could modify its performance. Walls, slab-on-ground floor, roof, shading and windows are among the simulated building components. The results are shown for each combination and evaluated for Brindisi, a city in southern Italy with 1083 degree days, belonging to the national climatic zone C. The data show the trends of the TOP for each measure applied in the case study, for a total of 17 combinations divided into eight steps.
Schmoll, Hans-Joachim; Arnold, Dirk; de Gramont, Aimery; Ducreux, Michel; Grothey, Axel; O'Dwyer, Peter J; Van Cutsem, Eric; Hermann, Frank; Bosanac, Ivan; Bendahmane, Belguendouz; Mancao, Christoph; Tabernero, Josep
2018-06-01
The old approach of one therapeutic for all patients with mCRC is evolving with a need to target specific molecular aberrations or cell-signalling pathways. Molecular screening approaches and new biomarkers are required to fully characterize tumours, identify patients most likely to benefit, and predict treatment response. MODUL is a signal-seeking trial with a design that is highly adaptable, permitting modification of different treatment cohorts and inclusion of further additional cohorts based on novel evidence on new compounds/combinations that emerge during the study. MODUL is ongoing and its adaptable nature permits timely and efficient recruitment of patients into the most appropriate cohort. Recruitment will take place over approximately 5 years in Europe, Asia, Africa, and South America. The design of MODUL with ongoing parallel/sequential treatment cohorts means that the overall size and duration of the trial can be modified/prolonged based on accumulation of new data. The early success of the current trial suggests that the design may provide definitive leads in a patient-friendly and relatively economical trial structure. Along with other biomarker-driven trials that are currently underway, it is hoped that MODUL will contribute to the continuing evolution of clinical trial design and permit a more 'tailored' approach to the treatment of patients with mCRC.
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use RACE/A to model data from two variants of a picture-word interference task in a psychological refractory period design. These models will demonstrate how RACE/A enables interactions between sequential sampling and long-term declarative learning, and between sequential sampling and task control. In a traditional sequential sampling model, the onset of the process within the task is unclear, as is the number of sampling processes. RACE/A provides a theoretical basis for estimating the onset of sequential sampling processes during task execution and allows for easy modeling of multiple sequential sampling processes within a task. Copyright © 2011 Cognitive Science Society, Inc.
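As background on the model class RACE/A builds on, the following minimal Python sketch implements a race between evidence accumulators: each response alternative integrates noisy evidence until one crosses a threshold, jointly producing a choice and a response time. The drift rates, noise level, and threshold are illustrative assumptions, not RACE/A parameters.

    import numpy as np

    def race_trial(drifts, threshold=1.0, dt=0.01, noise=0.3, rng=None):
        """One trial of a sequential sampling race model.
        Returns (index of winning accumulator, response time)."""
        rng = rng or np.random.default_rng()
        x = np.zeros(len(drifts))
        t = 0.0
        while x.max() < threshold:
            x += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.normal(size=len(drifts))
            x = np.maximum(x, 0.0)                  # evidence floored at zero
            t += dt
        return int(x.argmax()), t

    rng = np.random.default_rng(1)
    trials = [race_trial([0.8, 0.5], rng=rng) for _ in range(1000)]
    p_first = np.mean([w == 0 for w, _ in trials])  # choice share of the stronger alternative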
High data rate coding for the space station telemetry links.
NASA Technical Reports Server (NTRS)
Lumb, D. R.; Viterbi, A. J.
1971-01-01
Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.
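For reference on the codes being compared, the sketch below implements a rate-1/2, constraint-length-3 convolutional encoder with the textbook generators 7 and 5 (octal), plus a hard-decision Viterbi decoder over its 4-state trellis. The generators are a standard classroom choice, not taken from the study, and the Fano sequential decoder the study ultimately selected is a different, backtracking tree search over the same code.

    import numpy as np

    G = [0b111, 0b101]   # rate-1/2, constraint length 3 (generators 7, 5 octal)

    def conv_encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state                   # newest bit in the high position
            out += [bin(reg & g).count("1") & 1 for g in G]
            state = reg >> 1
        return out

    def viterbi_decode(rx, n_bits):
        """Hard-decision Viterbi decoding (Hamming-metric survivor search)."""
        INF = 10**9
        metric = [0] + [INF] * 3                     # encoder starts in state 0
        paths = [[] for _ in range(4)]
        for t in range(n_bits):
            r = rx[2 * t:2 * t + 2]
            new_m, new_p = [INF] * 4, [None] * 4
            for s in range(4):
                if metric[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << 2) | s
                    exp = [bin(reg & g).count("1") & 1 for g in G]
                    m = metric[s] + sum(x != y for x, y in zip(exp, r))
                    ns = reg >> 1
                    if m < new_m[ns]:
                        new_m[ns], new_p[ns] = m, paths[s] + [b]
            metric, paths = new_m, new_p
        return paths[int(np.argmin(metric))]

    msg = [1, 0, 1, 1, 0]
    assert viterbi_decode(conv_encode(msg), len(msg)) == msg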
Evaluation Using Sequential Trials Methods.
ERIC Educational Resources Information Center
Cohen, Mark E.; Ralls, Stephen A.
1986-01-01
Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)
Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.
Candy, J V
2015-09-01
The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
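The workhorse behind such a processor is the sequential Bayesian update itself; a generic bootstrap particle filter fits in a few lines of Python. The scalar random-walk state and the noise levels below are illustrative stand-ins, not the paper's modal-function model.

    import numpy as np

    def bootstrap_pf(y, f, h, q, r, x0, n=500, rng=None):
        """Bootstrap particle filter: propagate particles through the state
        model f, weight by a Gaussian likelihood around h(x), resample.
        q and r are process/measurement noise standard deviations."""
        rng = rng or np.random.default_rng()
        x = x0 + q * rng.normal(size=n)
        est = []
        for yk in y:
            x = f(x) + q * rng.normal(size=n)              # predict
            w = np.exp(-0.5 * ((yk - h(x)) / r) ** 2)      # weight by likelihood
            w /= w.sum()
            est.append(w @ x)                              # posterior-mean estimate
            x = x[rng.choice(n, size=n, p=w)]              # resample
        return np.array(est)

    rng = np.random.default_rng(2)
    truth = 1.0 + np.cumsum(0.05 * rng.normal(size=100))   # slowly drifting coefficient
    obs = truth + 0.2 * rng.normal(size=100)
    track = bootstrap_pf(obs, f=lambda x: x, h=lambda x: x, q=0.05, r=0.2, x0=1.0, rng=rng)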
A sequential extraction approach was utilized to estimate the distribution of arsenite [As(III)] and arsenate [As(V)] on iron oxide/hydroxide solids obtained from drinking water distribution systems. The arsenic (As) associated with these solids can be segregated into three oper...
Hack, Daniel; Chauhan, Pankaj; Deckers, Kristina; Mizutani, Yusuke; Raabe, Gerhard; Enders, Dieter
2015-02-11
A one-pot asymmetric Michael addition/hydroalkoxylation sequence, catalyzed by a sequential catalytic system consisting of a squaramide and a silver salt, provides a new series of chiral pyrano-annulated pyrazole derivatives in excellent yields (up to 95%) and high enantioselectivities (up to 97% ee).
Zheng, Jiafu; Zhao, Fujian; Zhang, Wen; Mo, Yunfei; Zeng, Lei; Li, Xian; Chen, Xiaofeng
2018-08-01
In recent years, gelatin-based composite hydrogels have been intensively investigated because of their inherent bioactivity, biocompatibility and biodegradability. Herein, we fabricated photocrosslinkable biomimetic composite hydrogels from bioactive glass (BG) and gelatin methacryloyl (GelMA) by a sequential physical and chemical crosslinking (gelation + UV) approach. The results showed that the compressive modulus of the composite hydrogels increased significantly through the sequential crosslinking approach. The addition of BG resulted in a significant increase in physiological stability and apatite-forming ability. In vitro data indicated that BG/GelMA composite hydrogels promoted cell attachment, proliferation and differentiation. Overall, the BG/GelMA composite hydrogels combined the advantages of good biocompatibility and bioactivity, and have potential applications in bone regeneration. Copyright © 2018. Published by Elsevier B.V.
El Afifi, E M; Awwad, N S; Hilal, M A
2009-01-30
This paper addresses the treatment of sludge produced from Egyptian oil and gas production. The activity levels of three radium isotopes, Ra-226 (of the U-series) and Ra-228 and Ra-224 (of the Th-series), in the solid TENORM waste (sludge) were first evaluated, followed by a sequential treatment for all radium species (fractions) present in the TENORM. The sequential treatment was carried out based on two approaches, 'A' and 'B', using different chemical solutions. The results obtained indicate that the activity levels of all radium isotopes (Ra-226, Ra-228 and Ra-224) of environmental interest in the TENORM waste sludge were elevated with regard to the exemption levels established by the IAEA [International Atomic Energy Agency (IAEA), International basic safety standards for the protection against ionizing radiation and for the safety of radiation sources. GOV/2715/Vienna, 1994]. Each approach of the sequential treatment was performed in four steps using different chemical solutions to reduce the activity concentration of radium to a large extent. Most of the leached radium was found as an oxidizable Ra species. Removal by leaching was relatively more efficient with approach B than with approach A: the actual removal percentages of Ra-226, Ra-228 and Ra-224 using approach A were 78±2.8%, 64.8±4.1% and 76.4±5.2%, respectively, whereas with approach B the overall removal increased to approximately 91±3.5%, 87±4.1% and 90±6.2%, respectively.
Zeelenberg, René; Pecher, Diane
2015-03-01
Counterbalanced designs are frequently used in the behavioral sciences. Studies often counterbalance either the order in which conditions are presented in the experiment or the assignment of stimulus materials to conditions. Occasionally, researchers need to simultaneously counterbalance both condition order and stimulus assignment to conditions. Lewis (1989; 1993, Behavior Research Methods, Instruments, & Computers, 25, 414-415) presented a method for constructing Latin squares that fulfill these requirements. The resulting Latin squares counterbalance immediate sequential effects, but not remote sequential effects. Here, we present a new method for generating Latin squares that simultaneously counterbalance both immediate and remote sequential effects and assignment of stimuli to conditions. An Appendix is provided to facilitate implementation of these Latin square designs.
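For contrast with the proposed method, the classic Williams construction below (sketched in Python for an even number of conditions) balances immediate sequential effects only, which is precisely the limitation for remote effects that the abstract addresses.

    def williams_square(n):
        """Williams Latin square for even n: every condition immediately
        follows every other condition exactly once across rows."""
        first, k, m, take_low = [0], 1, n - 1, True
        while len(first) < n:
            first.append(k if take_low else m)
            if take_low:
                k += 1
            else:
                m -= 1
            take_low = not take_low
        return [[(c + r) % n for c in first] for r in range(n)]

    square = williams_square(4)   # each row is one participant's condition order
    # square == [[0, 1, 3, 2], [1, 2, 0, 3], [2, 3, 1, 0], [3, 0, 2, 1]]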
Palmer, Matthew A; Brewer, Neil
2012-06-01
When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.
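The discriminability/bias decomposition at issue is the standard signal-detection one; a basic equal-variance version (not the compound model fitted in the paper) can be computed as below, with purely illustrative counts.

    from scipy.stats import norm

    def sdt_measures(hits, misses, fas, crs):
        """Equal-variance signal detection: d' (discriminability) and
        c (criterion), with a small correction to avoid 0 or 1 rates."""
        H = (hits + 0.5) / (hits + misses + 1.0)
        F = (fas + 0.5) / (fas + crs + 1.0)
        zH, zF = norm.ppf(H), norm.ppf(F)
        return zH - zF, -0.5 * (zH + zF)

    d_seq, c_seq = sdt_measures(hits=60, misses=40, fas=15, crs=85)   # invented counts
    d_sim, c_sim = sdt_measures(hits=70, misses=30, fas=30, crs=70)
    # similar d' with a larger c indicates a conservative shift rather than better discriminability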
Sequential segmental classification of feline congenital heart disease.
Scansen, Brian A; Schneider, Matthias; Bonagura, John D
2015-12-01
Feline congenital heart disease is less commonly encountered in veterinary medicine than acquired feline heart diseases such as cardiomyopathy. Understanding the wide spectrum of congenital cardiovascular disease demands a familiarity with a variety of lesions, occurring both in isolation and in combination, along with an appreciation of complex nomenclature and variable classification schemes. This review begins with an overview of congenital heart disease in the cat, including proposed etiologies and prevalence, examination approaches, and principles of therapy. Specific congenital defects are presented and organized by a sequential segmental classification with respect to their morphologic lesions. Highlights of diagnosis, treatment options, and prognosis are offered. It is hoped that this review will provide a framework for approaching congenital heart disease in the cat, and more broadly in other animal species based on the sequential segmental approach, which represents an adaptation of the common methodology used in children and adults with congenital heart disease. Copyright © 2015 Elsevier B.V. All rights reserved.
Aeroelastic Modeling of Offshore Turbines and Support Structures in Hurricane-Prone Regions (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, R.
US offshore wind turbines (OWTs) will likely have to contend with hurricanes and the associated loading conditions. Current industry standards do not account for these design load cases (DLCs), thus a new approach is required to guarantee that the OWTs achieve an appropriate level of reliability. In this study, a sequentially coupled aero-hydro-servo-elastic modeling technique was used to address two design approaches: 1) the ABS (American Bureau of Shipping) approach; and 2) the Hazard Curve or API (American Petroleum Institute) approach. The former employs IEC partial load factors (PSFs) and 100-yr return-period (RP) metocean events. The latter allows setting PSFs and RP to a prescribed level of system reliability. The 500-yr RP robustness check (appearing in [2] and [3] upcoming editions) is a good indicator of the target reliability for L2 structures. CAE tools such as NREL's FAST and Bentley's SACS (offshore analysis and design software) can be efficiently coupled to simulate system loads under hurricane DLCs. For this task, we augmented the latest FAST version (v. 8) to include tower aerodynamic drag, which cannot be ignored in hurricane DLCs. In this project, a 6 MW turbine was simulated on a typical 4-legged jacket for a mid-Atlantic site. FAST-calculated tower base loads were fed to SACS at the interface level (transition piece); SACS added hydrodynamic and wind loads on the exposed substructure, and calculated mudline overturning moments, and member and joint utilization. Results show that CAE tools can be effectively used to compare design approaches for the design of OWTs in hurricane regions and to achieve a well-balanced design, where reliability levels and costs are optimized.
Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid
2016-01-01
In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedure is used to transform the TFE differential equation into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean square sense and is formulated as a minimization problem. Optimization of parameters for the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices calculated for a sufficiently large number of independent runs to establish its significance.
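The discretization step can be sketched as follows: central differences turn the Thomas-Fermi equation y'' = y^(3/2)/sqrt(x) into a residual vector over interior grid nodes, whose mean-square norm is the fitness a GA/SQP hybrid would minimize. The grid, the truncation of the semi-infinite domain at x = 10 with y = 0, and the straight-line initial guess are illustrative assumptions.

    import numpy as np

    def tf_residuals(y, x, y0=1.0, yN=0.0):
        """Central-difference residuals of y'' = y**1.5 / sqrt(x) on the
        interior nodes, with Dirichlet values pinned at both ends."""
        h = x[1] - x[0]
        full = np.concatenate(([y0], y, [yN]))
        ypp = (full[2:] - 2 * full[1:-1] + full[:-2]) / h**2
        # abs() guards against negative trial values during optimization
        return ypp - np.abs(full[1:-1])**1.5 / np.sqrt(x[1:-1])

    x = np.linspace(0.1, 10.0, 42)                  # start at 0.1 to avoid the x = 0 singularity
    y_guess = np.linspace(1.0, 0.0, 40)             # interior unknowns
    fitness = np.mean(tf_residuals(y_guess, x)**2)  # mean-square residual to be minimized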
Capote, F Priego; Jiménez, J Ruiz; de Castro, M D Luque
2007-08-01
An analytical method for the sequential detection, identification and quantitation of extra virgin olive oil adulteration with four edible vegetable oils--sunflower, corn, peanut and coconut oils--is proposed. The only data required for this method are the results obtained from an analysis of the lipid fraction by gas chromatography-mass spectrometry. A total of 566 samples (pure oils and samples of adulterated olive oil) were used to develop the chemometric models, which were designed to accomplish, step-by-step, the three aims of the method: to detect whether an olive oil sample is adulterated, to identify the type of adulterant used in the fraud, and to determine how much adulterant is in the sample. Qualitative analysis was carried out via two chemometric approaches--soft independent modelling of class analogy (SIMCA) and K nearest neighbours (KNN)--both of which exhibited prediction abilities that were always higher than 91% for adulterant detection and 88% for identification of the type of adulterant. Quantitative analysis was based on partial least squares regression (PLSR), which yielded R² values of >0.90 for calibration and validation sets and thus made it possible to determine adulteration with excellent precision according to the Shenk criteria.
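A rough scikit-learn sketch of the identification and quantitation stages follows; KNN stands in for the classification step (scikit-learn has no SIMCA implementation), PLSR matches the quantitation step, and the feature matrix and labels are synthetic stand-ins for the GC-MS lipid-fraction data.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 30))              # stand-in GC-MS features
    y_kind = rng.integers(0, 4, size=200)       # 4 hypothetical adulterant classes
    y_pct = rng.uniform(0, 30, size=200)        # % adulterant (invented)

    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y_kind)   # identify the adulterant
    pls = PLSRegression(n_components=8).fit(X, y_pct)          # quantify the level
    kind_hat = knn.predict(X[:5])
    pct_hat = pls.predict(X[:5]).ravel()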
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1995-01-01
A major difficulty in designing aeropropulsion systems is that of identifying and understanding the interactions between the separate engine components and disciplines (e.g., fluid mechanics, structural mechanics, heat transfer, material properties, etc.). The traditional analysis approach is to decompose the system into separate components, with the interaction between components being evaluated by the application of each of the single disciplines in a sequential manner. Here, one discipline uses information from the calculation of another discipline to determine the effects of component coupling. This approach, however, may not properly identify the consequences of these effects during the design phase, leaving the interactions to be discovered and evaluated during engine testing. This contributes to the time and cost of developing new propulsion systems as, typically, several design-build-test cycles are needed to fully identify multidisciplinary effects and reach the desired system performance. The alternative to sequential isolated component analysis is to use multidisciplinary coupling at a more fundamental level. This approach has been made more plausible by recent advancements in computational simulation along with the application of concurrent engineering concepts. Computer simulation systems designed to provide an environment capable of integrating the various disciplines into a single simulation system have been proposed and are currently being developed. One such system is being developed by the Numerical Propulsion System Simulation (NPSS) project. The NPSS project, being developed at the Interdisciplinary Technology Office at the NASA Lewis Research Center, is a 'numerical test cell' designed to provide for comprehensive computational design and analysis of aerospace propulsion systems. It will provide multi-disciplinary analyses on a variety of computational platforms, and a user-interface consisting of expert systems, data base management and visualization tools, to allow the designer to investigate the complex interactions inherent in these systems. An interactive programming software system, known as the Application Visualization System (AVS), was utilized for the development of the propulsion system simulation. The modularity of this system provides the ability to couple propulsion system components, as well as disciplines, and provides for the ability to integrate existing, well established analysis codes into the overall system simulation. This feature allows the user to customize the simulation model by inserting desired analysis codes. The prototypical simulation environment for multidisciplinary analysis, called Turbofan Engine System Simulation (TESS), which incorporates many of the characteristics of the simulation environment proposed herein, is detailed.
SEQUENTIAL EXTRACTIONS FOR PARTITIONING OF ARSENIC ON HYDROUS IRON OXIDES AND IRON SULFIDES
The objective of this study was to use model solids to test solutions designed to extract arsenic from relatively labile solid phase fractions. The use of sequential extractions provides analytical constraints on the identification of mineral phases that control arsenic mobility...
THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.
The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors ... that realize a given switching function in multi-threshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple
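As a reminder of the basic building block, a threshold element fires when the weighted sum of its inputs reaches the threshold; the minimal sketch below realizes 3-input majority with a single weight-threshold vector (a standard textbook example, not one of the report's constructions).

    def threshold_gate(weights, t):
        """A threshold element: output 1 iff the weighted input sum reaches t."""
        return lambda *x: int(sum(w * xi for w, xi in zip(weights, x)) >= t)

    maj3 = threshold_gate((1, 1, 1), 2)   # weight-threshold vector (1, 1, 1; 2)
    assert all(maj3(a, b, c) == int(a + b + c >= 2)
               for a in (0, 1) for b in (0, 1) for c in (0, 1))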
Karst, Daniel J; Scibona, Ernesto; Serra, Elisa; Bielser, Jean-Marc; Souquet, Jonathan; Stettler, Matthieu; Broly, Hervé; Soos, Miroslav; Morbidelli, Massimo; Villiger, Thomas K
2017-09-01
Mammalian cell perfusion cultures are gaining renewed interest as an alternative to traditional fed-batch processes for the production of therapeutic proteins, such as monoclonal antibodies (mAb). The steady state operation at high viable cell density allows the continuous delivery of antibody product with increased space-time yield and reduced in-process variability of critical product quality attributes (CQA). In particular, the production of a confined mAb N-linked glycosylation pattern has the potential to increase therapeutic efficacy and bioactivity. In this study, we show that accurate control of flow rates, media composition and cell density of a Chinese hamster ovary (CHO) cell perfusion bioreactor allowed the production of a constant glycosylation profile for over 20 days. Steady state was reached after an initial transition phase of 6 days required for the stabilization of extra- and intracellular processes. The possibility to modulate the glycosylation profile was further investigated in a Design of Experiment (DoE), at different viable cell density and media supplement concentrations. This strategy was implemented in a sequential screening approach, where various steady states were achieved sequentially during one culture. It was found that, whereas high ammonia levels reached at high viable cell density (VCD) values inhibited the processing to complex glycan structures, the supplementation of either galactose or manganese, as well as their synergy, significantly increased the proportion of complex forms. The obtained experimental data set was used to compare the reliability of a statistical response surface model (RSM) to a mechanistic model of N-linked glycosylation. The latter outperformed the response surface predictions with respect to its capability and reliability in predicting the system behavior (i.e., glycosylation pattern) outside the experimental space covered by the DoE design used for the model parameter estimation. Therefore, we can conclude that the modulation of glycosylation in a sequential steady state approach in combination with a mechanistic model represents an efficient and rational strategy to develop continuous processes with desired N-linked glycosylation patterns. Biotechnol. Bioeng. 2017;114: 1978-1990. © 2017 Wiley Periodicals, Inc.
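A response surface model of the kind compared here is essentially a quadratic regression over the DoE factors. The sketch below fits one with scikit-learn; the factor settings (VCD, galactose, manganese) and the complex-glycan responses are invented for illustration.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    # DoE factors per steady state: VCD (1e6 cells/mL), galactose (mM), Mn (uM)
    X = np.array([[20, 0, 0], [20, 10, 0], [20, 0, 0.1], [20, 10, 0.1],
                  [40, 0, 0], [40, 10, 0.1], [40, 5, 0.05], [60, 0, 0],
                  [60, 10, 0], [60, 10, 0.1]], dtype=float)
    y = np.array([42., 55., 51., 58., 38., 60., 50., 30., 35., 52.])  # % complex glycans (invented)

    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
    pred = rsm.predict([[40, 5, 0.05]])   # interpolate within the design space only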
A Sequential Analysis of Parent-Child Interactions in Anxious and Nonanxious Families
ERIC Educational Resources Information Center
Williams, Sarah R.; Kertz, Sarah J.; Schrock, Matthew D.; Woodruff-Borden, Janet
2012-01-01
Although theoretical work has suggested that reciprocal behavior patterns between parent and child may be important in the development of childhood anxiety, most empirical work has failed to consider the bidirectional nature of interactions. The current study sought to address this limitation by utilizing a sequential approach to exploring…
Sequential Online Wellness Programming Is an Effective Strategy to Promote Behavior Change
ERIC Educational Resources Information Center
MacNab, Lindsay R.; Francis, Sarah L.
2015-01-01
The growing number of United States youth and adults categorized as overweight or obese illustrates a need for research-based family wellness interventions. Sequential, online, Extension-delivered family wellness interventions offer a time- and cost-effective approach for both participants and Extension educators. The 6-week, online Healthy…
O'Brien, Nicola; Heaven, Ben; Teal, Gemma; Evans, Elizabeth H; Cleland, Claire; Moffatt, Suzanne; Sniehotta, Falko F; White, Martin; Mathers, John C
2016-01-01
Background Integrating stakeholder involvement in complex health intervention design maximizes acceptability and potential effectiveness. However, there is little methodological guidance about how to integrate evidence systematically from various sources in this process. Scientific evidence derived from different approaches can be difficult to integrate and the problem is compounded when attempting to include diverse, subjective input from stakeholders. Objective The intent of the study was to describe and appraise a systematic, sequential approach to integrate scientific evidence, expert knowledge and experience, and stakeholder involvement in the co-design and development of a complex health intervention. The development of a Web-based lifestyle intervention for people in retirement is used as an example. Methods Evidence from three systematic reviews, qualitative research findings, and expert knowledge was compiled to produce evidence statements (stage 1). Face validity of these statements was assessed by key stakeholders in a co-design workshop resulting in a set of intervention principles (stage 2). These principles were assessed for face validity in a second workshop, resulting in core intervention concepts and hand-drawn prototypes (stage 3). The outputs from stages 1-3 were translated into a design brief and specification (stage 4), which guided the building of a functioning prototype, Web-based intervention (stage 5). This prototype was de-risked resulting in an optimized functioning prototype (stage 6), which was subject to iterative testing and optimization (stage 7), prior to formal pilot evaluation. Results The evidence statements (stage 1) highlighted the effectiveness of physical activity, dietary and social role interventions in retirement; the idiosyncratic nature of retirement and well-being; the value of using specific behavior change techniques including those derived from the Health Action Process Approach; and the need for signposting to local resources. The intervention principles (stage 2) included the need to facilitate self-reflection on available resources, personalization, and promotion of links between key lifestyle behaviors. The core concepts and hand-drawn prototypes (stage 3) had embedded in them the importance of time use and work exit planning, personalized goal setting, and acceptance of a Web-based intervention. The design brief detailed the features and modules required (stage 4), guiding the development of wireframes, module content and functionality, virtual mentors, and intervention branding (stage 5). Following an iterative process of intervention testing and optimization (stage 6), the final Web-based intervention prototype of LEAP (Living, Eating, Activity, and Planning in retirement) was produced (stage 7). The approach was resource intensive and required a multidisciplinary team. The design expert made an invaluable contribution throughout the process. Conclusions Our sequential approach fills an important methodological gap in the literature, describing the stages and techniques useful in developing an evidence-based complex health intervention. The systematic and rigorous integration of scientific evidence, expert knowledge and experience, and stakeholder input has resulted in an intervention likely to be acceptable and feasible. PMID:27489143
On mining complex sequential data by means of FCA and pattern structures
NASA Astrophysics Data System (ADS)
Buzmakov, Aleksey; Egho, Elias; Jay, Nicolas; Kuznetsov, Sergei O.; Napoli, Amedeo; Raïssi, Chedy
2016-02-01
Nowadays data-sets are available in very complex and heterogeneous ways. Mining of such data collections is essential to support many real-world applications ranging from healthcare to marketing. In this work, we focus on the analysis of "complex" sequential data by means of interesting sequential patterns. We approach the problem using the elegant mathematical framework of formal concept analysis and its extension based on "pattern structures". Pattern structures are used for mining complex data (such as sequences or graphs) and are based on a subsumption operation, which in our case is defined with respect to the partial order on sequences. We show how pattern structures along with projections (i.e. a data reduction of sequential structures) are able to enumerate more meaningful patterns and increase the computing efficiency of the approach. Finally, we show the applicability of the presented method for discovering and analysing interesting patient patterns from a French healthcare data-set on cancer. The quantitative and qualitative results (with annotations and analysis from a physician) are reported in this use-case which is the main motivation for this work.
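The subsumption order on sequences can be made concrete in a few lines: one pattern subsumes a sequence when it occurs as a (not necessarily contiguous) subsequence. The hypothetical patient events below are illustrative; in the paper's setting the sequence items are themselves structured, so comparison also uses the partial order on items.

    def subsumes(pattern, sequence):
        """True iff `pattern` is a (non-contiguous) subsequence of `sequence`."""
        it = iter(sequence)
        return all(item in it for item in pattern)

    assert subsumes(["diagnosis", "chemo"], ["admit", "diagnosis", "scan", "chemo"])
    assert not subsumes(["chemo", "diagnosis"], ["diagnosis", "chemo"])   # order matters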
Sedano-Portillo, Ismael; Ochoa-León, Gastón; Fuentes-Orozco, Clotilde; Irusteta-Jiménez, Leire; Michel-Espinoza, Luis Rodrigo; Salazar-Parra, Marcela; Cuesta-Márquez, Lizbeth; González-Ojeda, Alejandro
2017-01-01
Percutaneous nephrolithotomy is an efficient approach for the treatment of different types of kidney stones. Various access techniques have been described, such as sequential dilatation and the one-shot procedure. The objective was to determine differences in X-ray exposure time and hemoglobin levels between the two techniques in a controlled clinical trial. Patients older than 18 years with complex or uncomplicated kidney stones and without urinary infection were included and assigned randomly to one of the two techniques. Response variables were determined before and 24 h after the procedures. 59 patients were included: 30 underwent the one-shot procedure (study group) and 29 sequential dilatation (control group). Baseline characteristics were similar. The study group had a lower postoperative hemoglobin decline than the control group (0.81 vs. 2.03 g/dl; p < 0.001), shorter X-ray exposure time (69.6 vs. 100.62 s; p < 0.001), and lower postoperative serum creatinine levels (0.93 ± 0.29 vs. 1.13 ± 0.4 mg/dl; p = 0.039). No significant differences in postoperative morbidity were found. The one-shot technique demonstrated better results compared with sequential dilatation.
NASA Astrophysics Data System (ADS)
Li, Shuang; Zhu, Yongsheng; Wang, Yukai
2014-02-01
Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered as one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and genetic algorithm (GA) are adopted to perform preliminary design, which provides a reasonable initial guess for subsequent accurate optimization. Secondly, Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.
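The transcribe-then-solve pattern in steps two and three can be illustrated on a toy problem: the sketch below transcribes a minimum-effort double-integrator transfer using simple Euler collocation (a stand-in for the Radau pseudospectral transcription) and hands the resulting NLP to SciPy's SQP solver. All numbers are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    N, T = 40, 1.0            # segments, fixed transfer time
    dt = T / N

    def endpoint(u):
        """Propagate x'' = u from rest at x = 0 and return (x(T), v(T))."""
        x = v = 0.0
        for uk in u:
            x, v = x + v * dt, v + uk * dt
        return np.array([x, v])

    obj = lambda u: dt * np.sum(u**2)                        # control effort
    cons = ({"type": "eq", "fun": lambda u: endpoint(u) - np.array([1.0, 0.0])},)
    sol = minimize(obj, np.zeros(N), method="SLSQP", constraints=cons)
    u_opt = sol.x   # approximates the analytic minimum-effort profile u(t) = 6 - 12t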
Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela
2016-01-01
In this study, bio-inspired computing is exploited for solving systems of nonlinear equations using variants of genetic algorithms (GAs) as a tool for global search, hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for systems of nonlinear equations in the mean square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic approach GA-SQP are designed by taking different sets of reproduction routines in the optimization process. Performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion and chemical equilibrium. Comparative studies of the proposed results in terms of accuracy, convergence and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. Accuracy and convergence of the memetic computing GA-SQP are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through results of statistics based on different performance indices for accuracy and complexity.
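The global-search-plus-SQP pattern can be sketched with SciPy, with differential evolution standing in for the GA stage; the two-equation system below is a toy example, not one of the six benchmark problems.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def F(v):
        """A small nonlinear system F(v) = 0 in two unknowns."""
        x, y = v
        return np.array([x**2 + y**2 - 4.0, np.exp(x) + y - 1.0])

    fitness = lambda v: np.mean(F(v)**2)       # mean-square residual

    coarse = differential_evolution(fitness, bounds=[(-3, 3), (-3, 3)], seed=0)  # global stage
    fine = minimize(fitness, coarse.x, method="SLSQP")                           # SQP refinement
    # fine.x solves F(x) ~ 0 to high precision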
Lau, Kai Lin; Sleiman, Hanadi F
2016-07-26
Given its highly predictable self-assembly properties, DNA has proven to be an excellent template toward the design of functional materials. Prominent examples include the remarkable complexity provided by DNA origami and single-stranded tile (SST) assemblies, which require hundreds of unique component strands. However, in many cases, the majority of the DNA assembly is purely structural, and only a small "working area" needs to be aperiodic. On the other hand, extended lattices formed by DNA tile motifs require only a few strands; but they suffer from lack of size control and limited periodic patterning. To overcome these limitations, we adopt a templation strategy, where an input strand of DNA dictates the size and patterning of resultant DNA tile structures. To prepare these templating input strands, a sequential growth technique developed in our lab is used, whereby extended DNA strands of defined sequence and length may be generated simply by controlling their order of addition. With these, we demonstrate the periodic patterning of size-controlled double-crossover (DX) and triple-crossover (TX) tile structures, as well as intentionally designed aperiodicity of a DX tile structure. As such, we are able to prepare size-controlled DNA structures featuring aperiodicity only where necessary with exceptional economy and efficiency.
Deformation behavior and mechanical analysis of vertically aligned carbon nanotube (VACNT) bundles
NASA Astrophysics Data System (ADS)
Hutchens, Shelby B.
Vertically aligned carbon nanotubes (VACNTs) serve as integral components in a variety of applications including MEMS devices, energy absorbing materials, dry adhesives, light absorbing coatings, and electron emitters, all of which require structural robustness. It is only through an understanding of VACNT's structural mechanical response and local constitutive stress-strain relationship that future advancements through rational design may take place. Even for applications in which the structural response is not central to device performance, VACNTs must be sufficiently robust and therefore knowledge of their microstructure-property relationship is essential. This thesis first describes the results of in situ uniaxial compression experiments of 50 micron diameter cylindrical bundles of these complex, hierarchical materials as they undergo unusual deformation behavior. Most notably they deform via a series of localized folding events, originating near the bundle base, which propagate laterally and collapse sequentially from bottom to top. This deformation mechanism accompanies an overall foam-like stress-strain response having elastic, plateau, and densification regimes with the addition of undulations in the stress throughout the plateau regime that correspond to the sequential folding events. Microstructural observations indicate the presence of a strength gradient, due to a gradient in both tube density and alignment along the bundle height, which is found to play a key role in both the sequential deformation process and the overall stress-strain response. Using the complicated structural response as both motivation and confirmation, a finite element model based on a viscoplastic solid is proposed. This model is characterized by a flow stress relation that contains an initial peak followed by strong softening and successive hardening. Analysis of this constitutive relation results in capture of the sequential buckling phenomenon and a strength gradient effect. This combination of experimental and modeling approaches motivates discussion of the particular microstructural mechanisms and local material behavior that govern the non-trivial energy absorption via sequential, localized buckle formation in the VACNT bundles.
Using mixed methods effectively in prevention science: designs, procedures, and examples.
Zhang, Wanqing; Watanabe-Galloway, Shinobu
2014-10-01
There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focused on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for, and advantages of, using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis on methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires open-mindedness and reflection from the involved researchers.
Schneider, Francine; de Vries, Hein; van Osch, Liesbeth ADM; van Nierop, Peter WM; Kremers, Stef PJ
2012-01-01
Background Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). Objectives The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Methods Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Results Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Conclusion Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Trial Registration Dutch Trial Register NTR2168 PMID:22403770
Finding False Paths in Sequential Circuits
NASA Astrophysics Data System (ADS)
Matrosova, A. Yu.; Andreeva, V. V.; Chernyshov, S. V.; Rozhkova, S. V.; Kudin, D. V.
2018-02-01
A method for finding false paths in sequential circuits is developed. In contrast to the heuristic approaches in current use, an exact method is suggested, based on applying operations to Reduced Ordered Binary Decision Diagrams (ROBDDs) extracted from the combinational part of a sequential controlling logic circuit. The method finds false paths when the transfer sequence length is not more than a given value and obviates the need to investigate combinational circuit equivalents of the given lengths. The possibility of applying the developed method to more complicated circuits is discussed.
Giobbie-Hurder, Anita; Price, Karen N; Gelber, Richard D
2009-06-01
Aromatase inhibitors provide superior disease control when compared with tamoxifen as adjuvant therapy for postmenopausal women with endocrine-responsive early breast cancer. To present the design, history, and analytic challenges of the Breast International Group (BIG) 1-98 trial: an international, multicenter, randomized, double-blind, phase-III study comparing the aromatase inhibitor letrozole with tamoxifen in this clinical setting. From 1998-2003, BIG 1-98 enrolled 8028 women to receive monotherapy with either tamoxifen or letrozole for 5 years, or sequential therapy of 2 years of one agent followed by 3 years of the other. Randomization to one of four treatment groups permitted two complementary analyses to be conducted several years apart. The first, reported in 2005, provided a head-to-head comparison of letrozole versus tamoxifen. Statistical power was increased by an enriched design, which included patients who were assigned sequential treatments until the time of the treatment switch. The second, reported in late 2008, used a conditional landmark approach to test the hypothesis that switching endocrine agents at approximately 2 years from randomization for patients who are disease-free is superior to continuing with the original agent. The 2005 analysis showed the superiority of letrozole compared with tamoxifen. The patients who were assigned tamoxifen alone were unblinded and offered the opportunity to switch to letrozole. Results from other trials increased the clinical relevance about whether or not to start treatment with letrozole or tamoxifen, and analysis plans were expanded to evaluate sequential versus single-agent strategies from randomization. Due to the unblinding of patients assigned tamoxifen alone, analysis of updated data will require ascertainment of the influence of selective crossover from tamoxifen to letrozole. BIG 1-98 is an example of an enriched design, involving complementary analyses addressing different questions several years apart, and subject to evolving analytic plans influenced by new data that emerge over time.
Design of Neural Networks for Fast Convergence and Accuracy
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1998-01-01
A novel procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed to provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component spacecraft design changes and measures of its performance. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistically based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each step in the sequence a new network is trained to minimize the error of the previous network. The design algorithm attempts to avoid the local minima phenomenon that hampers traditional network training. A numerical example is performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
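The sequential element can be viewed as a residual cascade: each new network is fitted to the error left by the sum of its predecessors. The sketch below shows the mechanism with scikit-learn on an invented one-dimensional design-to-performance map; the paper's statistical sampling criterion for certifying accuracy is not reproduced.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def sequential_fit(X, y, stages=3, tol=1e-4):
        """Train a cascade of small two-layer networks, each one fitted to
        the residual error of the networks trained so far."""
        nets, residual = [], y.copy()
        for _ in range(stages):
            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000).fit(X, residual)
            nets.append(net)
            residual = residual - net.predict(X)
            if np.mean(residual**2) < tol:          # network goal reached
                break
        return lambda Xq: np.sum([n.predict(Xq) for n in nets], axis=0)

    X = np.linspace(0, 1, 200).reshape(-1, 1)
    y = np.sin(6 * X).ravel() + 0.3 * X.ravel()**2   # stand-in performance measure
    model = sequential_fit(X, y)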
Color Breakup In Sequentially-Scanned LC Displays
NASA Technical Reports Server (NTRS)
Arend, L.; Lubin, J.; Gille, J.; Larimer, J.; Statler, Irving C. (Technical Monitor)
1994-01-01
In sequentially-scanned liquid-crystal displays the chromatic components of color pixels are distributed in time. For such displays eye, head, display, and image-object movements can cause the individual color elements to be visible. We analyze conditions (scan designs, types of eye movement) likely to produce color breakup.
Sequential Requests and the Problem of Message Sampling.
ERIC Educational Resources Information Center
Cantrill, James Gerard
S. Jackson and S. Jacobs's criticism of "single message" designs in communication research served as a framework for a study that examined the differences between various sequential request paradigms. The study sought to answer the following questions: (1) What were the most naturalistic request sequences assured to replicate…
The Motivating Language of Principals: A Sequential Transformative Strategy
ERIC Educational Resources Information Center
Holmes, William Tobias
2012-01-01
This study implemented a Sequential Transformative Mixed Methods design with teachers (as recipients) and principals (to give voice) in the examination of principal talk in two different school accountability contexts (Continuously Improving and Continuously Zigzag) using the conceptual framework of Motivating Language Theory. In phase one,…
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
Dafflon, B.; Barrash, W.
2012-05-30
[Abstract fragment] Comparison of annealing-based and Bayesian sequential simulation approaches for porosity estimation, evaluated by withholding porosity logs at wells with locally variable stratigraphy; comparisons reference stratigraphic contacts between Units 1 to 4 at the BHRS.
Unsteady, one-dimensional gas dynamics computations using a TVD type sequential solver
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1992-01-01
The efficacy of high resolution convection schemes to resolve sharp gradients in unsteady, 1D flows is examined using the TVD concept based on a sequential solution algorithm. Two unsteady flow problems are considered: the interaction of the various waves in a shock tube with closed reflecting ends, and the unsteady gas dynamics in a tube with closed ends subject to an initial pressure perturbation. It is concluded that high-accuracy convection schemes in a sequential solution framework are capable of resolving discontinuities in unsteady flows involving complex gas dynamics. However, a sufficient amount of dissipation is required to suppress oscillations near discontinuities in the sequential approach, which leads to smearing of the solution profiles.
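To isolate the TVD ingredient, the sketch below advects a profile using a MUSCL reconstruction with a minmod limiter, which keeps the update total-variation diminishing for linear advection at a suitably small CFL number. It is a one-equation stand-in, not the paper's coupled gas-dynamics solver.

    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def tvd_advect(u, a, dx, dt, steps):
        """MUSCL/minmod upwind scheme for u_t + a u_x = 0 (a > 0, periodic),
        forward Euler in time; keep a*dt/dx <= 0.5 for safety."""
        for _ in range(steps):
            slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
            flux = a * (u + 0.5 * slope)                            # upwind interface flux at i+1/2
            u = u - (dt / dx) * (flux - np.roll(flux, 1))
        return u

    x = np.linspace(0, 1, 200, endpoint=False)
    u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse with sharp gradients
    u1 = tvd_advect(u0, a=1.0, dx=x[1] - x[0], dt=0.5 * (x[1] - x[0]), steps=100)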
Ivanova, Anastasia; Zhang, Zhiwei; Thompson, Laura; Yang, Ying; Kotz, Richard M; Fang, Xin
2016-01-01
Sequential parallel comparison design (SPCD) was proposed for trials with high placebo response. In the first stage of SPCD, subjects are randomized between placebo and active treatment. In the second stage, placebo nonresponders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the subpopulation of placebo nonresponders are then combined to yield a single p-value for the treatment comparison. The two-way enriched design (TED) is an extension of SPCD in which active treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates the potential uses of SPCD and TED in medical device trials.
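For illustration, the combined analysis can be sketched as a pre-weighted sum of stage-wise z-statistics; the weight w, the one-sided testing direction, and the numerical values below are assumptions for the sketch, not taken from the article.

```python
import numpy as np
from scipy.stats import norm

def spcd_combined_test(z1, z2, w=0.6):
    """Sketch of an SPCD-style combined test: z1 is the stage 1
    (all-comers) z-statistic, z2 the stage 2 (placebo nonresponders)
    z-statistic. Assuming both are ~N(0,1) and approximately
    independent under H0, the sqrt-weighted sum is again ~N(0,1)."""
    z = np.sqrt(w) * z1 + np.sqrt(1.0 - w) * z2
    return z, norm.sf(z)  # one-sided p-value

z, p = spcd_combined_test(z1=1.8, z2=1.2)
print(f"combined z = {z:.2f}, one-sided p = {p:.3f}")
```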
Sequential and prosodic design of English and Greek non-valenced news receipts.
Kaimaki, Marianna
2012-03-01
Results arising from a prosodic and interactional study of the organization of everyday talk in English suggest that news receipts can be grouped into two categories: valenced (e.g., oh good) and non-valenced (e.g., oh really). In-depth investigation of both valenced and non-valenced news receipts shows that differences in their prosodic design do not seem to affect the sequential structure of the news informing sequence. News receipts with falling and rising pitch may have the same uptake and are treated in the same way by co-participants. A preliminary study of a Greek telephone corpus yielded the following receipts of news announcements: a malista, a(h) orea, a ne, a, oh. These are news markers composed of a standalone particle or a particle followed by an adverb or a response token (ne). Analysis of the sequential and prosodic design of Greek news announcement sequences is made to determine any interactional patterns and/or prosodic constraints. By examining the way in which co-participants display their interpretation of these turns I show that the phonological systems of contrast are different depending on the sequential environment, in much the same way that consonantal systems of contrast are not the same syllable initially and finally.
Constant speed control of four-stroke micro internal combustion swing engine
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Zhu, Honghai; Ni, Jun
2015-09-01
The increasing demands on safety, emission and fuel consumption require more accurate control models of the micro internal combustion swing engine (MICSE). The objective of this paper is to investigate constant speed control models of the four-stroke MICSE. The operation principle of the four-stroke MICSE is presented based on a description of the MICSE prototype. A two-level Petri net based hybrid model is proposed to model the four-stroke MICSE engine cycle. The Petri net subsystem at the upper level controls and synchronizes the four Petri net subsystems at the lower level. The continuous sub-models, including the breathing dynamics of the intake manifold, the thermodynamics of the chamber and the dynamics of torque generation, are investigated and integrated with the discrete model in MATLAB Simulink. Through comparison of experimental data and the simulated DC voltage output, it is demonstrated that the hybrid model is valid for the four-stroke MICSE system. A nonlinear model is obtained from the cycle-averaged data via regression, and it is linearized around a given nominal equilibrium point for the controller design. The feedback controller for the spark timing and valve duration timing is designed with a sequential loop closing approach. Simulation of the sequential loop closure control design applied to the hybrid model is implemented in MATLAB. The simulation results show that the system is able to reach its desired operating point within 0.2 s, and the designed controller delivers good MICSE engine performance at constant speed. The models and simulation results presented here can be used for further study on the precision control of the four-stroke MICSE.
Sequential Pattern Analysis: Method and Application in Exploring How Students Develop Concept Maps
ERIC Educational Resources Information Center
Chiu, Chiung-Hui; Lin, Chien-Liang
2012-01-01
Concept mapping is a technique that represents knowledge in graphs. It has been widely adopted in science education and cognitive psychology to aid learning and assessment. To examine the sequential manner in which students develop concept maps, most research relies upon human-dependent, qualitative approaches. This article proposes a method for…
ERIC Educational Resources Information Center
Raymond, Chase Wesley
2014-01-01
This dissertation takes an ethnomethodologically-grounded, conversation-analytic approach in investigating the sequential deployment of linguistic resources in Spanish-language talk-in-interaction. Three sets of resources are examined: 2nd-person singular reference forms (tú, vos, usted), indicative/subjunctive verbal mood selection, and…
ERIC Educational Resources Information Center
Economou, A.; Tzanavaras, P. D.; Themelis, D. G.
2005-01-01
Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis and especially in the section of Automatic Methods of analysis provided by chemistry…
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
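As a minimal sketch of the 'predictor - corrector' cycle described above (the toy growth model, likelihood, and resampling threshold are illustrative assumptions, not Gove's specification):

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, y, predict, likelihood):
    """One predictor-corrector cycle of the SIR particle filter:
    propagate particles through the state model, reweight by the
    likelihood of observation y, and resample on weight degeneracy."""
    particles = predict(particles, rng)            # predictor
    weights = weights * likelihood(y, particles)   # corrector
    weights = weights / weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:  # low ESS
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# Toy stand-variable model: random-walk growth, noisy measurement.
predict = lambda x, rng: x + rng.normal(1.0, 0.5, x.size)
likelihood = lambda y, x: np.exp(-0.5 * ((y - x) / 2.0) ** 2)
particles = rng.normal(100.0, 5.0, 1000)
weights = np.full(1000, 1.0 / 1000)
particles, weights = sir_step(particles, weights, 103.0, predict, likelihood)
print(np.sum(weights * particles))  # filtered estimate
```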
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
Harmonizing and Improvising in the Choral Rehearsal: A Sequential Approach
ERIC Educational Resources Information Center
Bell, Cindy L.
2004-01-01
This article challenges choral teachers to motivate their choirs to a new level of choral singing and harmonic creativity and outlines a sequential process for introducing improvisation into the daily warm-up. It argues that students can learn to harmonize and improvise by ear as part of each day's warm-up period. Sections include: (1) Chord…
ERIC Educational Resources Information Center
Spring, Bonnie; Pagoto, Sherry; Pingitore, Regina; Doran, Neal; Schneider, Kristin; Hedeker, Don
2004-01-01
The authors compared simultaneous versus sequential approaches to multiple health behavior change in diet, exercise, and cigarette smoking. Female regular smokers (N = 315) randomized to 3 conditions received 16 weeks of behavioral smoking treatment, quit smoking at Week 5, and were followed for 9 months after quit date. Weight management was…
Prototype color field sequential television lens assembly
NASA Technical Reports Server (NTRS)
1974-01-01
The design, development, and evaluation of a prototype modular lens assembly with a self-contained field sequential color wheel is presented. The design of a color wheel of maximum efficiency, the selection of spectral filters, and the design of a quiet, efficient wheel drive system are included. Design tradeoffs considered for each aspect of the modular assembly are discussed. Emphasis is placed on achieving a design which can be attached directly to an unmodified camera, thus permitting use of the assembly in evaluating various candidate camera and sensor designs. A technique is described which permits maintaining high optical efficiency with an unmodified camera. A motor synchronization system is developed which requires only the vertical synchronization signal as a reference frequency input. Equations and tradeoff curves are developed to permit optimizing the filter wheel aperture shapes for a variety of different design conditions.
Synthesizing a novel genetic sequential logic circuit: a push-on push-off switch
Lou, Chunbo; Liu, Xili; Ni, Ming; Huang, Yiqi; Huang, Qiushi; Huang, Longwen; Jiang, Lingli; Lu, Dan; Wang, Mingcong; Liu, Chang; Chen, Daizhuo; Chen, Chongyi; Chen, Xiaoyue; Yang, Le; Ma, Haisu; Chen, Jianguo; Ouyang, Qi
2010-01-01
Design and synthesis of basic functional circuits are the fundamental tasks of synthetic biologists. Before it is possible to engineer higher-order genetic networks that can perform complex functions, a toolkit of basic devices must be developed. Among those devices, sequential logic circuits are expected to be the foundation of the genetic information-processing systems. In this study, we report the design and construction of a genetic sequential logic circuit in Escherichia coli. It can generate different outputs in response to the same input signal on the basis of its internal state, and ‘memorize' the output. The circuit is composed of two parts: (1) a bistable switch memory module and (2) a double-repressed promoter NOR gate module. The two modules were individually rationally designed, and they were coupled together by fine-tuning the interconnecting parts through directed evolution. After fine-tuning, the circuit could be repeatedly, alternatively triggered by the same input signal; it functions as a push-on push-off switch. PMID:20212522
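Functionally, the circuit behaves like a toggle (T-type) flip-flop: the same input pulse alternately sets and resets the memorized output. A behavioral sketch of that logic, ignoring all of the biochemistry (the class and method names are invented for illustration):

```python
class PushOnPushOffSwitch:
    """Behavioral model of the push-on push-off logic: a bistable
    memory module whose stored state flips on every input pulse."""

    def __init__(self):
        self.state = 0  # bistable memory module

    def pulse(self, signal=1):
        if signal:           # the same input signal each time...
            self.state ^= 1  # ...toggles the stored state
        return self.state    # 'memorized' output

sw = PushOnPushOffSwitch()
print([sw.pulse() for _ in range(4)])  # [1, 0, 1, 0]: push-on, push-off
```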
Kidgell, Joel T.; de Nys, Rocky; Paul, Nicholas A.; Roberts, David A.
2014-01-01
Fe-treated biochar and raw biochar produced from macroalgae are effective biosorbents of metalloids and metals, respectively. However, the treatment of complex effluents that contain both metalloid and metal contaminants presents a challenging scenario. We test a multiple-biosorbent approach to bioremediation using Fe-biochar and biochar to remediate both metalloids and metals from the effluent from a coal-fired power station. First, a model was derived from published data for this effluent to predict the biosorption of 21 elements by Fe-biochar and biochar. The modelled outputs were then used to design biosorption experiments using Fe-biochar and biochar, both simultaneously and in sequence, to treat effluent containing multiple contaminants in excess of water quality criteria. The waste water was produced during ash disposal at an Australian coal-fired power station. The application of Fe-biochar and biochar, either simultaneously or sequentially, resulted in a more comprehensive remediation of metalloids and metals compared to either biosorbent used individually. The most effective treatment was the sequential use of Fe-biochar to remove metalloids from the waste water, followed by biochar to remove metals. Al, Cd, Cr, Cu, Mn, Ni, Pb, Zn were reduced to the lowest concentration following the sequential application of the two biosorbents, and their final concentrations were predicted by the model. Overall, 17 of the 21 elements measured were remediated to, or below, the concentrations that were predicted by the model. Both metalloids and metals can be remediated from complex effluent using biosorbents with different characteristics but derived from a single feedstock. Furthermore, the extent of remediation can be predicted for similar effluents using additive models. PMID:25061756
Robust inference for group sequential trials.
Ganju, Jitendra; Lin, Yunzhi; Zhou, Kefei
2017-03-01
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of a statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time-to-event trials although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of results suggests that inference from group sequential trials can be strengthened with the use of combined tests. Copyright © 2017 John Wiley & Sons, Ltd.
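The article's combining methods are built to handle the correlated statistics that arise in group sequential analyses; as a simplified illustration of the general idea only, Fisher's classic combination rule for independent P values looks like this (the numbers are made up):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combination(p_values):
    """Fisher's rule: under H0, -2*sum(log p_i) follows a chi-square
    distribution with 2k degrees of freedom. Valid for independent
    tests; correlated statistics, as in group sequential trials, need
    an adjusted null distribution, e.g. from permutation or simulation."""
    p = np.asarray(p_values, dtype=float)
    stat = -2.0 * np.log(p).sum()
    return chi2.sf(stat, df=2 * p.size)

print(fisher_combination([0.04, 0.09, 0.12]))  # single combined p-value
```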
Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy
Wadduwage, Dushan N.; Singh, Vijay Raj; Choi, Heejin; Yaqoob, Zahid; Heemskerk, Hans; Matsudaira, Paul; So, Peter T. C.
2017-01-01
Imaging Fourier-transform spectroscopy (IFTS) is a powerful method for biological hyperspectral analysis based on various imaging modalities, such as fluorescence or Raman. Since the measurements are taken in the Fourier space of the spectrum, it can also take advantage of compressed sensing strategies. IFTS has been readily implemented in high-throughput, high-content microscope systems based on wide-field imaging modalities. However, there are limitations in existing wide-field IFTS designs. Non-common-path approaches are less phase-stable. Alternatively, designs based on the common-path Sagnac interferometer are stable, but incompatible with high-throughput imaging. They require exhaustive sequential scanning over large interferometric path delays, making compressive strategic data acquisition impossible. In this paper, we present a novel phase-stable, near-common-path interferometer enabling high-throughput hyperspectral imaging based on strategic data acquisition. Our results suggest that this approach can improve throughput over those of many other wide-field spectral techniques by more than an order of magnitude without compromising phase stability. PMID:29392168
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations so as to minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the fault characteristics are estimated simultaneously with the determination of the optimal retry policy, was also carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second approach. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions; these are standard distributions commonly used for fault modeling and analysis.
Porous microwells for geometry-selective, large-scale microparticle arrays
NASA Astrophysics Data System (ADS)
Kim, Jae Jung; Bong, Ki Wan; Reátegui, Eduardo; Irimia, Daniel; Doyle, Patrick S.
2017-01-01
Large-scale microparticle arrays (LSMAs) are key for material science and bioengineering applications. However, previous approaches suffer from trade-offs between scalability, precision, specificity and versatility. Here, we present a porous microwell-based approach to create large-scale microparticle arrays with complex motifs. Microparticles are guided to and pushed into microwells by fluid flow through small open pores at the bottom of the porous well arrays. A scaling theory allows for the rational design of LSMAs to sort and array particles on the basis of their size, shape, or modulus. Sequential particle assembly allows for proximal and nested particle arrangements, as well as particle recollection and pattern transfer. We demonstrate the capabilities of the approach by means of three applications: high-throughput single-cell arrays; microenvironment fabrication for neutrophil chemotaxis; and complex, covert tags by the transfer of an upconversion nanocrystal-laden LSMA.
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from a field-sequential color-television camera on a color video monitor. Designed for incorporation into the color-television monitor on the Space Shuttle, the circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. It incorporates state-of-the-art memory devices and is also usable in terrestrial stationary or portable closed-circuit television systems.
Apollo experience report: Command and service module sequential events control subsystem
NASA Technical Reports Server (NTRS)
Johnson, G. W.
1975-01-01
The Apollo command and service module sequential events control subsystem is described, with particular emphasis on the major systems and component problems and solutions. The subsystem requirements, design, and development and the test and flight history of the hardware are discussed. Recommendations to avoid similar problems on future programs are outlined.
An Undergraduate Survey Course on Asynchronous Sequential Logic, Ladder Logic, and Fuzzy Logic
ERIC Educational Resources Information Center
Foster, D. L.
2012-01-01
For a basic foundation in computer engineering, universities traditionally teach synchronous sequential circuit design, using discrete gates or field programmable gate arrays, and a microcomputers course that includes basic I/O processing. These courses, though critical, expose students to only a small subset of tools. At co-op schools like…
Terminating Sequential Delphi Survey Data Collection
ERIC Educational Resources Information Center
Kalaian, Sema A.; Kasim, Rafa M.
2012-01-01
The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through well-designed, systematic, multiple sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…
NASA Astrophysics Data System (ADS)
Dassekpo, Jean-Baptiste Mawulé; Zha, Xiaoxiong; Zhan, Jiapeng; Ning, Jiaqian
Geopolymer is an energy-efficient and sustainable material that is currently used in the construction industry as an alternative to Portland cement. As a new material, it requires a specific mix design method, and efforts have been made to develop a mix design procedure focused on achieving better compressive strength and economy. In this paper, the sequential addition of synthesis parameters such as fly ash-sand, alkaline liquids, plasticizer and additional water at well-defined time intervals was investigated. A total of 4 mix procedures were used to study the compressive performance of fly ash-based geopolymer mortar, and the results of each method were analyzed and discussed. Experimental results show that the sequential addition of sodium hydroxide (NaOH), sodium silicate (Na2SiO3) and plasticizer (PL), followed by adding water (WA), considerably increases the compressive strength of the geopolymer-based mortar. These results clearly demonstrate the significant influence of the sequential addition of synthesis parameters on the compressive properties of geopolymer materials, and also provide a new mixing method for the preparation of geopolymer paste, mortar and concrete.
Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter
2016-12-26
A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. It is further applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is performed within DACEDA for the five-dimensional case, such that further simulations are not required.
Parke, Tom; Marchenko, Olga; Anisimov, Vladimir; Ivanova, Anastasia; Jennison, Christopher; Perevozskaya, Inna; Song, Guochen
2017-01-01
Designing an oncology clinical program is more challenging than designing a single study. The standard approaches have proven not very successful during the last decade; the failure rate of Phase 2 and Phase 3 trials in oncology remains high. Improving a development strategy by applying innovative statistical methods is one of the major objectives of the drug development process. The oncology sub-team on Adaptive Program under the Drug Information Association Adaptive Design Scientific Working Group (DIA ADSWG) evaluated hypothetical oncology programs with two competing treatments and published the work in the Therapeutic Innovation and Regulatory Science journal in January 2014. Five oncology development programs based on different Phase 2 designs, including adaptive designs and a standard two-parallel-arm Phase 3 design, were simulated and compared in terms of the probability of clinical program success and expected net present value (eNPV). In this article, we consider eight Phase 2/Phase 3 development programs based on selected combinations of five Phase 2 study designs and three Phase 3 study designs. We again used the probability of program success and eNPV to compare the simulated programs. Across the development strategies considered, eNPV showed robust improvement with each successive strategy, the highest being for a three-arm response-adaptive randomization design in Phase 2 and a group sequential design with 5 analyses in Phase 3.
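As a toy illustration of how an eNPV-type metric ranks programs (all probabilities, costs, and revenues below are invented, not the article's figures):

```python
def expected_npv(p_phase2_go, p_phase3_success, npv_if_approved,
                 cost_phase2, cost_phase3):
    """Toy expected net present value of a Phase 2 -> Phase 3 program:
    Phase 2 is paid up front, Phase 3 is paid only on a 'go' decision,
    and the approval NPV is earned only if Phase 3 also succeeds."""
    return (-cost_phase2
            + p_phase2_go * (-cost_phase3
                             + p_phase3_success * npv_if_approved))

# Invented numbers, in $M:
print(expected_npv(p_phase2_go=0.6, p_phase3_success=0.45,
                   npv_if_approved=900.0, cost_phase2=30.0,
                   cost_phase3=120.0))
```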
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in randomized trials with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then placebo non-responders are re-randomized in stage 2. The efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters, namely the allocation proportion to placebo in stage 1 of SPCD and the weight of stage 1 data in the overall efficacy test statistic, at an interim analysis.
Doubly negative isotropic elastic metamaterial for sub-wavelength focusing: Design and realization
NASA Astrophysics Data System (ADS)
Oh, Joo Hwan; Seung, Hong Min; Kim, Yoon Young
2017-12-01
In spite of much progress in elastic metamaterials, tuning the effective density and stiffness to desired values ranging from negative to large positive remains difficult. In particular, the simultaneous realization of double negativity and isotropy, critical in sub-wavelength focusing, is very challenging, since anisotropy is usually unavoidable in resonance-based metamaterials. The main difficulty is that there is no established systematic design method for simultaneously achieving double negativity and isotropy. Thus, we propose a unique elastic metamaterial unit cell with which simultaneous realization can be achieved by an explicit step-by-step approach. The unit cell of the proposed metamaterial can be accurately modeled as an equivalent mass-spring system, so that the effective properties can be easily controlled with the design parameters. The actual realization was carried out by acquiring the desired properties in sequential steps, which are described in detail. The specific application considered is sub-wavelength focusing, demonstrated by focusing waves from a single point source onto a region smaller than half the wavelength. Experiments were performed on an aluminum plate in which the designed metamaterial flat lens was embedded. The results acquired through simulations and experiments suggest potential applications of the proposed metamaterial and the systematic design approach in advanced acoustic surgery or non-destructive testing.
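The equivalent mass-spring modeling mentioned above can be illustrated with the textbook mass-in-mass resonator, whose frequency-dependent effective mass turns negative just above the internal resonance; the formula below is the standard one for that toy model, and the parameter values are arbitrary rather than the paper's design:

```python
import numpy as np

def effective_mass(omega, m1=1.0, m2=0.5, k2=100.0):
    """Effective mass of a mass-in-mass resonator:
    m_eff = m1 + m2*w0^2 / (w0^2 - w^2), with w0 = sqrt(k2/m2).
    Near resonance the second term diverges and m_eff becomes
    negative just above w0."""
    w0 = np.sqrt(k2 / m2)
    return m1 + m2 * w0**2 / (w0**2 - omega**2)

w0 = np.sqrt(100.0 / 0.5)
# Positive below resonance, negative just above it:
print(effective_mass(np.array([0.8, 1.05]) * w0))
```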
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bielinski, Ashley R.; Boban, Mathew; He, Yang
2017-01-24
A method for tunable control of geometry in hyperbranched ZnO nanowire (NW) systems is reported, which enables the rational design and fabrication of superomniphobic surfaces. Branched NWs with tunable density and orientation were grown via a sequential hydrothermal process, in which atomic layer deposition (ALD) was used for NW seeding, disruption of epitaxy, and selective blocking of NW nucleation. This approach allows for the rational design and optimization of three-level hierarchical structures, in which the geometric parameters of each level of hierarchy can be individually controlled. We demonstrate the coupled relationships between geometry and contact angle for a variety of liquids, which is supported by mathematical models of structural superomniphobicity. The highest performing superomniphobic surface was designed with three levels of hierarchy and achieved the following advancing/receding contact angles, water: 172°/170°, hexadecane: 166°/156°, octane: 162°/145°, and heptane: 160°/130°. Low surface tension liquids were shown to bounce off the surface from a height of 7 cm without breaking through and wetting. This approach demonstrates the power of ALD as an enabling technique for hierarchical materials by design, spanning the macro, micro, and nano length scales.
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Stranges, P. Benjamin; Palla, Mirkó; Kalachikov, Sergey; Nivala, Jeff; Dorwart, Michael; Trans, Andrew; Kumar, Shiv; Porel, Mintu; Chien, Minchen; Tao, Chuanjuan; Morozova, Irina; Li, Zengmin; Shi, Shundi; Aberra, Aman; Arnold, Cleoma; Yang, Alexander; Aguirre, Anne; Harada, Eric T.; Korenblum, Daniel; Pollard, James; Bhat, Ashwini; Gremyachinskiy, Dmitriy; Bibillo, Arek; Chen, Roger; Davis, Randy; Russo, James J.; Fuller, Carl W.; Roever, Stefan; Ju, Jingyue; Church, George M.
2016-01-01
Scalable, high-throughput DNA sequencing is a prerequisite for precision medicine and biomedical research. Recently, we presented a nanopore-based sequencing-by-synthesis (Nanopore-SBS) approach, which used a set of nucleotides with polymer tags that allow discrimination of the nucleotides in a biological nanopore. Here, we designed and covalently coupled a DNA polymerase to an α-hemolysin (αHL) heptamer using the SpyCatcher/SpyTag conjugation approach. These porin–polymerase conjugates were inserted into lipid bilayers on a complementary metal oxide semiconductor (CMOS)-based electrode array for high-throughput electrical recording of DNA synthesis. The designed nanopore construct successfully detected the capture of tagged nucleotides complementary to a DNA base on a provided template. We measured over 200 tagged-nucleotide signals for each of the four bases and developed a classification method to uniquely distinguish them from each other and background signals. The probability of falsely identifying a background event as a true capture event was less than 1.2%. In the presence of all four tagged nucleotides, we observed sequential additions in real time during polymerase-catalyzed DNA synthesis. Single-polymerase coupling to a nanopore, in combination with the Nanopore-SBS approach, can provide the foundation for a low-cost, single-molecule, electronic DNA-sequencing platform. PMID:27729524
Development of high-accuracy convection schemes for sequential solvers
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1993-01-01
The applicability of high-resolution schemes such as TVD to resolving sharp flow gradients is explored using a sequential solution approach borrowed from pressure-based algorithms. It is shown that, in extending these high-resolution shock-capturing schemes to a sequential solver that treats the equations as a collection of scalar conservation equations, the speed of signal propagation in the solution has to be coordinated by assigning the local convection speed as the characteristic speed for the entire system. A higher amount of dissipation is therefore needed to eliminate oscillations near discontinuities.
NASA Astrophysics Data System (ADS)
Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu
2016-03-01
The determination of the in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization over six degrees of freedom (6DOF); 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and the simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach: less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initialization showed that the sequential approach improved robustness significantly for patella registration (74% success rate) compared to the individual bone approach (34% success rate); femur and tibia-fibula registration had a 100% success rate with each approach.
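A rough sketch of one CMA-ES-driven registration step, using the `cma` Python package and a much-simplified gradient-correlation metric; `render_drr` is a hypothetical function producing a digitally reconstructed radiograph for a 6DOF pose, and nothing here reproduces the paper's actual pipeline:

```python
import numpy as np
import cma  # pip install cma

def gradient_correlation(a, b):
    """Simplified gradient-correlation similarity between two images."""
    def ncc(u, v):
        u = u - u.mean()
        v = v - v.mean()
        return (u * v).sum() / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    gax, gay = np.gradient(a)
    gbx, gby = np.gradient(b)
    return 0.5 * (ncc(gax, gbx) + ncc(gay, gby))

def register_bone(render_drr, fluoro, pose0):
    """Optimize one bone's 6DOF pose (3 rotations + 3 translations) by
    maximizing gradient correlation with CMA-ES. In the sequential
    variant, previously registered bones would be composited into the
    DRR background inside render_drr."""
    cost = lambda pose: -gradient_correlation(render_drr(pose), fluoro)
    es = cma.CMAEvolutionStrategy(pose0, 0.5)  # initial pose, step size
    es.optimize(cost)
    return es.result.xbest
```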
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in the d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
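A brute-force simulation of the Euclidean process is easy to write, which is what makes the absence of a solvable model striking; this sketch estimates the accepted fraction for equal disks in the unit square (the radius and attempt count are arbitrary choices, not from the paper):

```python
import numpy as np

def rsa_accepted_fraction(n_attempts=20_000, r=0.02, seed=1):
    """Random sequential adsorption of equal disks in the unit square:
    each arrival is uniform, and is accepted only if it overlaps no
    previously accepted disk. Returns the fraction of attempts accepted."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(n_attempts):
        c = rng.uniform(r, 1 - r, size=2)
        if all(np.hypot(*(c - prev)) >= 2 * r for prev in centers):
            centers.append(c)
    return len(centers) / n_attempts

print(rsa_accepted_fraction())
```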
Lenas, Petros; Moos, Malcolm; Luyten, Frank P
2009-12-01
Recent advances in developmental biology, systems biology, and network science are converging to poise the heretofore largely empirical field of tissue engineering on the brink of a metamorphosis into a rigorous discipline based on universally accepted engineering principles of quality by design. Failure of more simplistic approaches to the manufacture of cell-based therapies has led to increasing appreciation of the need to imitate, at least to some degree, natural mechanisms that control cell fate and differentiation. The identification of many of these mechanisms, which in general are based on cell signaling pathways, is an important step in this direction. Some well-accepted empirical concepts of developmental biology, such as path-dependence, robustness, modularity, and semiautonomy of intermediate tissue forms, that appear sequentially during tissue development are starting to be incorporated in process design.
Dmitriy Volinskiy; John C Bergstrom; Christopher M Cornwell; Thomas P Holmes
2010-01-01
The assumption of independence of irrelevant alternatives in a sequential contingent valuation format should be questioned. Statistically, most valuation studies treat nonindependence as a consequence of unobserved individual effects. Another approach is to consider an inferential process in which any particular choice is part of a general choosing strategy of a survey...
Sun, Bin; Lynn, David M
2010-11-20
We report an approach to the design of multilayered polyelectrolyte thin films (or 'polyelectrolyte multilayers', PEMs) that can be used to provide tunable control over the release of plasmid DNA (or multiple different DNA constructs) from film-coated surfaces. Our approach is based upon methods for the layer-by-layer assembly of DNA-containing thin films, and exploits the properties of a new class of cationic 'charge-shifting' polymers (amine functionalized polymers that undergo gradual changes in net charge upon side chain ester hydrolysis) to provide control over the rates at which these films erode and release DNA. We synthesized two 'charge-shifting' polymers (polymers 1 and 2) containing different side chain structures by ring-opening reactions of poly(2-alkenyl azlactone)s with two different tertiary amine functionalized alcohols (3-dimethylamino-1-propanol and 2-dimethylaminoethanol, respectively). Subsequent characterization revealed large changes in the rates of side chain ester hydrolysis for these two polymers; whereas the half-life for the hydrolysis of the esters in polymer 1 was ~200 days, the half-life for polymer 2 was ~6 days. We demonstrate that these large differences in side chain hydrolysis make possible the design of PEMs that erode and promote the surface-mediated release of DNA either rapidly (e.g., over ~3 days for films fabricated using polymer 2) or slowly (e.g., over ~1 month for films fabricated using polymer 1). We demonstrate further that it is possible to design films with release profiles that are intermediate to these two extremes by fabricating films using solutions containing different mixtures of these two polymers. This approach can thus expand the usefulness of these two polymers and achieve a broader range of DNA release profiles without the need to synthesize polymers with new structures or properties. Finally, we demonstrate that polymers 1 and 2 can be used to fabricate multilayered films with hierarchical structures that promote the sequential release of two different DNA constructs with separate and distinct release profiles (e.g., the release of a first construct over a period of ~3 days, followed by the sustained release of a second for a period of ~70 days). With further development, this approach could contribute to the design of functional thin films and surface coatings that provide sophisticated control over the timing and the order of the release of two or more DNA constructs (or other agents) of interest in a range of biomedical contexts. Copyright © 2010 Elsevier B.V. All rights reserved.
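To see why the two half-lives translate into such different erosion windows, one can assume simple first-order hydrolysis kinetics (an assumption made here for illustration; the abstract reports half-lives, not a rate law):

```python
def fraction_intact(t_days, half_life_days):
    """Fraction of side-chain esters still intact after t days,
    assuming first-order hydrolysis."""
    return 0.5 ** (t_days / half_life_days)

# Polymer 1 (~200-day half-life) vs. polymer 2 (~6-day half-life):
for t in (3, 30):
    print(t, fraction_intact(t, 200), fraction_intact(t, 6))
```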
Level-Set Topology Optimization with Aeroelastic Constraints
NASA Technical Reports Server (NTRS)
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2015-01-01
Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
Peterson, Janey C; Czajkowski, Susan; Charlson, Mary E; Link, Alissa R; Wells, Martin T; Isen, Alice M; Mancuso, Carol A; Allegrante, John P; Boutin-Foster, Carla; Ogedegbe, Gbenga; Jobe, Jared B
2013-04-01
To describe a mixed-methods approach to develop and test a basic behavioral science-informed intervention to motivate behavior change in 3 high-risk clinical populations. Our theoretically derived intervention comprised a combination of positive affect and self-affirmation (PA/SA), which we applied to 3 clinical chronic disease populations. We employed a sequential mixed methods model (EVOLVE) to design and test the PA/SA intervention in order to increase physical activity in people with coronary artery disease (post-percutaneous coronary intervention [PCI]) or asthma (ASM) and to improve medication adherence in African Americans with hypertension (HTN). In an initial qualitative phase, we explored participant values and beliefs. We next pilot tested and refined the intervention and then conducted 3 randomized controlled trials with parallel study design. Participants were randomized to combined PA/SA versus an informational control and were followed bimonthly for 12 months, assessing for health behaviors and interval medical events. Over 4.5 years, we enrolled 1,056 participants. Changes were sequentially made to the intervention during the qualitative and pilot phases. The 3 randomized controlled trials enrolled 242 participants who had undergone PCI, 258 with ASM, and 256 with HTN (n = 756). Overall, 45.1% of PA/SA participants versus 33.6% of informational control participants achieved successful behavior change (p = .001). In multivariate analysis, PA/SA intervention remained a significant predictor of achieving behavior change (p < .002, odds ratio = 1.66), 95% CI [1.22, 2.27], controlling for baseline negative affect, comorbidity, gender, race/ethnicity, medical events, smoking, and age. The EVOLVE method is a means by which basic behavioral science research can be translated into efficacious interventions for chronic disease populations.
Inverse problems in 1D hemodynamics on systemic networks: a sequential approach.
Lombardi, D
2014-02-01
In this work, a sequential approach based on the unscented Kalman filter is applied to solve inverse problems in 1D hemodynamics on a systemic network. For instance, the arterial stiffness is estimated by exploiting cross-sectional area and mean speed observations in several locations of the arteries. The results are compared with those obtained by estimating the pulse wave velocity and applying the Moens-Korteweg formula. In the last section, a perspective concerning the identification of the terminal model parameters and peripheral circulation (modeled by a Windkessel circuit) is presented. Copyright © 2013 John Wiley & Sons, Ltd.
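For reference, the Moens-Korteweg relation ties pulse wave velocity to arterial stiffness; a direct computation with illustrative values (not the paper's data):

```python
import numpy as np

def moens_korteweg_pwv(E, h, rho, R):
    """Moens-Korteweg pulse wave velocity: c = sqrt(E*h / (2*rho*R)),
    with E the wall Young's modulus (Pa), h the wall thickness (m),
    rho the blood density (kg/m^3), and R the vessel radius (m)."""
    return np.sqrt(E * h / (2 * rho * R))

# Illustrative aortic values:
print(moens_korteweg_pwv(E=4e5, h=1.5e-3, rho=1050.0, R=1.2e-2))  # ~4.9 m/s
```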
Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.
Florin, Charles; Paragios, Nikos; Williams, Jim
2005-01-01
In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
Murray, Thomas A; Yuan, Ying; Thall, Peter F; Elizondo, Joan H; Hofstetter, Wayne L
2018-01-22
A design is proposed for randomized comparative trials with ordinal outcomes and prognostic subgroups. The design accounts for patient heterogeneity by allowing possibly different comparative conclusions within subgroups. The comparative testing criterion is based on utilities for the levels of the ordinal outcome and a Bayesian probability model. Designs based on two alternative models that include treatment-subgroup interactions are considered, the proportional odds model and a non-proportional odds model with a hierarchical prior that shrinks toward the proportional odds model. A third design that assumes homogeneity and ignores possible treatment-subgroup interactions also is considered. The three approaches are applied to construct group sequential designs for a trial of nutritional prehabilitation versus standard of care for esophageal cancer patients undergoing chemoradiation and surgery, including both untreated patients and salvage patients whose disease has recurred following previous therapy. A simulation study is presented that compares the three designs, including evaluation of within-subgroup type I and II error probabilities under a variety of scenarios including different combinations of treatment-subgroup interactions. © 2018, The International Biometric Society.
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
NASA Astrophysics Data System (ADS)
A'yun, Kurroti; Suyono, Poedjiastoeti, Sri; Bin-Tahir, Saidna Zulfiqar
2017-08-01
The most crucial issue in education is misconception, in particular the misconceptions held by students themselves. This study therefore offers a solution for improving the quality of chemistry teaching in schools through the remediation of misconceptions among chemistry teacher candidates. The study employed a mixed-method approach using a concurrent embedded design that leaned toward qualitative research but still relied on quantitative research to assess the learning impact. The results show that students with higher levels of cognitive conflict still carry heavy loads of misconceptions (MC), possibly because their learning style is the balanced sequential-global type. To accommodate the cognitive-conflict character and the balanced sequential-global learning style, the researchers created a worksheet integrating conceptual change with peer learning (WCCPL). The peer learning undertaken in the last stage of conceptual change in the WCCPL can significantly increase the durability of students' concepts in the category of knowing the concept, but this should be examined in an in-depth study of long-term memory.
Reversible logic gates on Physarum Polycephalum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Andrew
2015-03-10
In this paper, we consider possibilities for implementing asynchronous sequential logic gates and quantum-style reversible logic gates with Physarum polycephalum motions. We show that in asynchronous sequential logic gates we can erase information because of uncertainty in the direction of plasmodium propagation. Therefore, quantum-style reversible logic gates are preferable for designing logic circuits on Physarum polycephalum.
ERIC Educational Resources Information Center
Ramaswamy, Ravishankar; Dix, Edward F.; Drew, Janet E.; Diamond, James J.; Inouye, Sharon K.; Roehl, Barbara J. O.
2011-01-01
Purpose of the Study: Delirium is a widespread concern for hospitalized seniors, yet is often unrecognized. A comprehensive and sequential intervention (CSI) aiming to effect change in clinician behavior by improving knowledge about delirium was tested. Design and Methods: A 2-day CSI program that consisted of a progressive 4-part didactic series,…
Novel Designs of Quantum Reversible Counters
NASA Astrophysics Data System (ADS)
Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang
2016-11-01
Reversible logic, an interesting and important topic, has been widely used in designing combinational and sequential circuits for low-power, high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of the sequential circuit but also an essential part of the quantum circuit system. In this paper, we design two kinds of novel reversible counters. In order to construct the counters, an innovative reversible T Flip-flop Gate (TFG), a T flip-flop block (T_FF) and a JK flip-flop block (JK_FF) are proposed. Based on these blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled up/down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), these counters have been modeled and verified. According to the simulation results, our circuits' logic structures are validated. Compared with existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), our designs perform better. They can therefore serve as important storage components in future low-power computing systems.
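The reversibility requirement can be illustrated with the toggle update at the heart of a T flip-flop: mapping (T, Q) to (T, Q XOR T) is the Feynman (CNOT) gate, a bijection that is its own inverse, so no information is erased. A minimal sketch of that underlying logic (the paper's TFG is a specific gate design; this shows only the principle):

```python
def feynman_gate(t, q):
    """Feynman (CNOT) gate: (T, Q) -> (T, Q XOR T). Bijective, hence
    reversible; with T as the toggle input it realizes the T flip-flop
    next-state function Q' = Q XOR T."""
    return t, q ^ t

state = feynman_gate(1, 0)             # toggle: (1, 0) -> (1, 1)
assert feynman_gate(*state) == (1, 0)  # self-inverse: applying twice undoes it
print(state)
```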
A three-dimensional quality-guided phase unwrapping method for MR elastography
NASA Astrophysics Data System (ADS)
Wang, Huifang; Weaver, John B.; Perreard, Irina I.; Doyley, Marvin M.; Paulsen, Keith D.
2011-07-01
Magnetic resonance elastography (MRE) uses accumulated phases that are acquired at multiple, uniformly spaced relative phase offsets, to estimate harmonic motion information. Heavily wrapped phase occurs when the motion is large and unwrapping procedures are necessary to estimate the displacements required by MRE. Two unwrapping methods were developed and compared in this paper. The first method is a sequentially applied approach. The three-dimensional MRE phase image block for each slice was processed by two-dimensional unwrapping followed by a one-dimensional phase unwrapping approach along the phase-offset direction. This unwrapping approach generally works well for low noise data. However, there are still cases where the two-dimensional unwrapping method fails when noise is high. In this case, the baseline of the corrupted regions within an unwrapped image will not be consistent. Instead of separating the two-dimensional and one-dimensional unwrapping in a sequential approach, an interleaved three-dimensional quality-guided unwrapping method was developed to combine both the two-dimensional phase image continuity and one-dimensional harmonic motion information. The quality of one-dimensional harmonic motion unwrapping was used to guide the three-dimensional unwrapping procedures and it resulted in stronger guidance than in the sequential method. In this work, in vivo results generated by the two methods were compared.
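To make the one-dimensional step concrete, here is a short numpy sketch of standard 1-D phase unwrapping of the kind applied along the phase-offset direction (functionally equivalent to numpy.unwrap; a simplified illustration, not the authors' quality-guided algorithm):

```python
import numpy as np

def unwrap_1d(phase):
    # Add/subtract multiples of 2*pi whenever the jump between consecutive
    # samples exceeds pi, restoring a continuous phase profile.
    out = phase.astype(float).copy()
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]
        out[i] -= 2 * np.pi * np.round(d / (2 * np.pi))
    return out

true_phase = np.linspace(0, 6 * np.pi, 50)          # large harmonic-motion phase
wrapped = np.angle(np.exp(1j * true_phase))         # wrapped to (-pi, pi]
print(np.allclose(unwrap_1d(wrapped), true_phase))  # True
```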
Design of telehealth trials--introducing adaptive approaches.
Law, Lisa M; Wason, James M S
2014-12-01
The field of telehealth and telemedicine is expanding as the need to improve efficiency of health care becomes more pressing. The decision to implement a telehealth system is generally an expensive undertaking that impacts a large number of patients and other stakeholders. It is therefore extremely important that the decision is fully supported by accurate evaluation of telehealth interventions. Numerous reviews of telehealth have described the evidence base as inconsistent. In response, they call for larger, more rigorously controlled trials, and for trials that go beyond evaluation of clinical effectiveness alone. The aim of this paper is to discuss various ways in which evaluation of telehealth could be improved by the use of adaptive trial designs. We discuss various adaptive design options, such as sample size reviews and changing the study hypothesis to address uncertain parameters, group sequential trials and multi-arm multi-stage trials to improve efficiency, and enrichment designs to maximise the chances of obtaining clear evidence about the telehealth intervention. There is potential to address the flaws discussed in the telehealth literature through the adoption of adaptive approaches to trial design. Such designs could lead to improvements in efficiency, allow the evaluation of multiple telehealth interventions in a cost-effective way, or accurately assess a range of endpoints that are important in the overall success of a telehealth programme. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Davies, Jeff K; Hassan, Sandra; Sarker, Shah-Jalal; Besley, Caroline; Oakervee, Heather; Smith, Matthew; Taussig, David; Gribben, John G; Cavenagh, Jamie D
2018-02-01
Allogeneic haematopoietic stem-cell transplantation remains the only curative treatment for relapsed/refractory acute myeloid leukaemia (AML) and high-risk myelodysplasia but has previously been limited to patients who achieve remission before transplant. New sequential approaches employing T-cell depleted transplantation directly after chemotherapy show promise but are burdened by viral infection and require donor lymphocyte infusions (DLI) to augment donor chimerism and graft-versus-leukaemia effects. T-replete transplantation in sequential approaches could reduce both viral infection and DLI usage. We therefore performed a single-arm prospective Phase II clinical trial of sequential chemotherapy and T-replete transplantation using reduced-intensity conditioning without planned DLI. The primary endpoint was overall survival. Forty-seven patients with relapsed/refractory AML or high-risk myelodysplasia were enrolled; 43 proceeded to transplantation. High levels of donor chimerism were achieved spontaneously with no DLI. Overall survival of transplanted patients was 45% and 33% at 1 and 3 years. Only one patient developed cytomegalovirus disease. Cumulative incidences of treatment-related mortality and relapse were 35% and 20% at 1 year. Patients with relapsed AML and myelodysplasia had the most favourable outcomes. Late-onset graft-versus-host disease protected against relapse. In conclusion, a T-replete sequential transplantation using reduced-intensity conditioning is feasible for relapsed/refractory AML and myelodysplasia and can deliver graft-versus-leukaemia effects without DLI. © 2017 John Wiley & Sons Ltd.
Concurrent Learning of Control in Multi-agent Sequential Decision Tasks
2018-04-17
Concurrent Learning of Control in Multi-agent Sequential Decision Tasks. The overall objective of this project was to develop multi-agent reinforcement learning (MARL) approaches for intelligent agents to autonomously learn distributed control policies in decentralized, partially observable…
Crawford, Megan R.; Turner, Arlener D.; Wyatt, James K.; Fogg, Louis F.; Ong, Jason C.
2016-01-01
Chronic insomnia disorder is a prevalent condition and a significant proportion of these individuals also have obstructive sleep apnea (OSA). These two sleep disorders have distinct pathophysiology and are managed with different treatment approaches. High comorbidity rates have been a catalyst for emerging studies examining multidisciplinary treatment for OSA comorbid with insomnia disorder. In this article, we describe a randomized clinical trial of Cognitive Behavioral Treatment for insomnia (CBT-I) and Positive Airway Pressure (PAP) for OSA. Participants are randomized to receive one of three treatment combinations. Individuals randomized to treatment Arm A receive sequential treatment beginning with CBT-I followed by PAP; in treatment Arm B, CBT-I and PAP are administered concurrently. These treatment arms are compared to a control condition, treatment Arm C, where individuals receive PAP alone. Adopting an incomplete factorial study design will allow us to evaluate the efficacy of multidisciplinary treatment (Arms A & B) versus standard treatment alone (Arm C). In addition, the random allocation of individuals to the two different combined treatment sequences (Arm A and Arm B) will allow us to understand the benefits of the sequential administration of CBT-I and PAP relative to concurrent treatment of PAP and CBT-I. These findings will provide evidence of the clinical benefits of treating insomnia disorder in the context of OSA. PMID:26733360
Soft robot design methodology for `push-button' manufacturing
NASA Astrophysics Data System (ADS)
Paik, Jamie
2018-06-01
`Push-button' or fully automated manufacturing would enable the production of robots with zero intervention from human hands. Realizing this utopia requires a fundamental shift from a sequential (design-materials-manufacturing) to a concurrent design methodology.
Near real-time vaccine safety surveillance with partially accrued data.
Greene, Sharon K; Kulldorff, Martin; Yin, Ruihua; Yih, W Katherine; Lieu, Tracy A; Weintraub, Eric S; Lee, Grace M
2011-06-01
The Vaccine Safety Datalink (VSD) Project conducts near real-time vaccine safety surveillance using sequential analytic methods. Timely surveillance is critical in identifying potential safety problems and preventing additional exposure before most vaccines are administered. For vaccines that are administered during a short period, such as influenza vaccines, timeliness can be improved by undertaking analyses while risk windows following vaccination are ongoing and by accommodating predictable and unpredictable data accrual delays. We describe practical solutions to these challenges, which were adopted by the VSD Project during pandemic and seasonal influenza vaccine safety surveillance in 2009/2010. Adjustments were made to two sequential analytic approaches. The Poisson-based approach compared the number of pre-defined adverse events observed following vaccination with the number expected using historical data. The expected number was adjusted for the proportion of the risk window elapsed and the proportion of inpatient data estimated to have accrued. The binomial-based approach used a self-controlled design, comparing the observed numbers of events in risk versus comparison windows. Events were included in analysis only if they occurred during a week that had already passed for both windows. Analyzing data before risk windows fully elapsed improved the timeliness of safety surveillance. Adjustments for data accrual lags were tailored to each data source and avoided biasing analyses away from detecting a potential safety problem, particularly early during surveillance. The timeliness of vaccine and drug safety surveillance can be improved by properly accounting for partially elapsed windows and data accrual delays. Copyright © 2011 John Wiley & Sons, Ltd.
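As a rough illustration of the expected-count adjustment described above, the sketch below scales a historical expectation by the fraction of the risk window elapsed and the fraction of data estimated to have accrued before computing a Poisson tail probability. All function and parameter names are hypothetical, and a real implementation (such as the VSD's sequential tests) would also control for the multiplicity of repeated weekly looks:

```python
from scipy.stats import poisson

def adjusted_signal(observed, expected_full, frac_window_elapsed,
                    frac_data_accrued, alpha=0.01):
    """Compare observed adverse events to a historical expectation scaled
    down for partially elapsed risk windows and data accrual lags.
    Names and values are illustrative, not the VSD implementation."""
    expected = expected_full * frac_window_elapsed * frac_data_accrued
    p_value = poisson.sf(observed - 1, expected)  # P(X >= observed | expected)
    return expected, p_value, p_value < alpha

# Example: 9 events seen where only 3.2 would be expected after adjustment.
exp_adj, p, signal = adjusted_signal(observed=9, expected_full=8.0,
                                     frac_window_elapsed=0.5,
                                     frac_data_accrued=0.8)
print(f"adjusted expectation={exp_adj:.2f}, p={p:.4f}, signal={signal}")
```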
Lucyshyn, Joseph M; Fossett, Brenda; Bakeman, Roger; Cheremshynski, Christy; Miller, Lynn; Lohrmann, Sharon; Binnendyk, Lauren; Khan, Sophia; Chinn, Stephen; Kwon, Samantha; Irvin, Larry K
2015-12-01
The efficacy and consequential validity of an ecological approach to behavioral intervention with families of children with developmental disabilities was examined. The approach aimed to transform coercive into constructive parent-child interaction in family routines. Ten families participated, including 10 mothers and fathers and 10 children 3-8 years old with developmental disabilities. Thirty-six family routines were selected (2 to 4 per family). Dependent measures included child problem behavior, routine steps completed, and coercive and constructive parent-child interaction. For each family, a single-case, multiple-baseline design was employed with three phases: baseline, intervention, and follow-up. Visual analysis evaluated the functional relation between intervention and improvements in child behavior and routine participation. Nonparametric tests across families evaluated the statistical significance of these improvements. Sequential analyses within families and univariate analyses across families examined changes from baseline to intervention in the percentage and odds ratio of coercive and constructive parent-child interaction. Multiple-baseline results documented functional or basic effects for 8 of 10 families. Nonparametric tests showed these changes to be significant. Follow-up showed durability at 11 to 24 months postintervention. Sequential analyses documented the transformation of coercive into constructive processes for 9 of 10 families. Univariate analyses across families showed significant improvements in 2- and 4-step coercive and constructive processes but not in odds ratios. Results offer evidence of the efficacy of the approach and of the consequential validity of the ecological unit of analysis, parent-child interaction in family routines. Future studies should improve efficiency and outcomes for families experiencing family systems challenges.
Using timed event sequential data in nursing research.
Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony
2015-01-01
Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
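A minimal sketch of what timed event sequential data might look like in code, assuming a simple onset/offset record per behavior (the field names and the example stream are illustrative, not the software described in the article). Frequency, duration, and sequence all fall out of the same record:

```python
from dataclasses import dataclass

@dataclass
class TimedEvent:
    behavior: str
    onset: float    # seconds from start of observation
    offset: float

# Hypothetical observation stream, not data from the study.
events = [TimedEvent("walk", 0.0, 12.5), TimedEvent("sit", 12.5, 40.0),
          TimedEvent("walk", 40.0, 55.0), TimedEvent("talk", 55.0, 70.0)]

def summarize(events, behavior):
    spells = [e for e in events if e.behavior == behavior]
    frequency = len(spells)
    duration = sum(e.offset - e.onset for e in spells)
    return frequency, duration

print(summarize(events, "walk"))         # (2, 27.5)
sequence = [e.behavior for e in events]  # order of behaviors is preserved
print(sequence)
```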
McCarthy, Kathleen M; Mahon, Merle; Rosen, Stuart; Evans, Bronwen G
2014-01-01
The majority of bilingual speech research has focused on simultaneous bilinguals. Yet, in immigrant communities, children are often initially exposed to their family language (L1), before becoming gradually immersed in the host country's language (L2). This is typically referred to as sequential bilingualism. Using a longitudinal design, this study explored the perception and production of the English voicing contrast in 55 children (40 Sylheti-English sequential bilinguals and 15 English monolinguals). Children were tested twice: when they were in nursery (52-month-olds) and 1 year later. Sequential bilinguals' perception and production of English plosives were initially driven by their experience with their L1, but after starting school, changed to match that of their monolingual peers. PMID:25123987
NASA Astrophysics Data System (ADS)
Gallagher, C. B.; Ferraro, A.
2018-05-01
A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.
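Of the two figures of merit mentioned, the gate infidelity is simple to compute for purely unitary errors. The sketch below applies the standard average-gate-fidelity formula for a pair of d-dimensional unitaries; it is a generic illustration, not the specific noise models analyzed in the paper (the diamond distance requires semidefinite programming and is omitted):

```python
import numpy as np

def average_gate_fidelity(U, V):
    """Average gate fidelity between an ideal unitary U and a noisy (here
    also unitary) implementation V: F = (|Tr(U^dag V)|^2 + d) / (d^2 + d).
    Infidelity is 1 - F."""
    d = U.shape[0]
    overlap = abs(np.trace(U.conj().T @ V)) ** 2
    return (overlap + d) / (d * d + d)

# Ideal Hadamard vs. an over-rotated version (small coherent error, assumed).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
theta = 0.05
R = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * np.array([[0, 1], [1, 0]])
print(1 - average_gate_fidelity(H, R @ H))  # gate infidelity
```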
Claycamp, H Gregg; Kona, Ravikanth; Fahmy, Raafat; Hoag, Stephen W
2016-04-01
Qualitative risk assessment methods are often used as the first step to determining design space boundaries; however, quantitative assessments of risk with respect to the design space, i.e., calculating the probability of failure for a given severity, are needed to fully characterize design space boundaries. Quantitative risk assessment methods in design and operational spaces are a significant aid to evaluating proposed design space boundaries. The goal of this paper is to demonstrate a relatively simple strategy for design space definition using a simplified Bayesian Monte Carlo simulation. This paper builds on a previous paper that used failure mode and effects analysis (FMEA) qualitative risk assessment and Plackett-Burman design of experiments to identify the critical quality attributes. The results show that the sequential use of qualitative and quantitative risk assessments can focus the design of experiments on a reduced set of critical material and process parameters that determine a robust design space under conditions of limited laboratory experimentation. This approach provides a strategy by which the degree of risk associated with each known parameter can be calculated and allocates resources in a manner that manages risk to an acceptable level.
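A minimal sketch of the quantitative step, probability of failure estimated by Monte Carlo at a candidate operating point: the process model, parameter distributions, and specification limit below are all hypothetical stand-ins for a real mechanistic or DOE-derived model:

```python
import numpy as np

rng = np.random.default_rng(0)

def dissolution(temp, speed):
    # Hypothetical process model linking two material/process parameters
    # to a quality attribute; not a real pharmaceutical model.
    return 40 + 0.5 * temp + 0.08 * speed + rng.normal(0, 2, size=np.shape(temp))

def failure_probability(temp_set, speed_set, spec=75.0, n=100_000):
    # Propagate assumed operating variability through the model and
    # estimate P(quality attribute < spec) at a candidate point.
    temp = rng.normal(temp_set, 1.0, n)
    speed = rng.normal(speed_set, 5.0, n)
    return np.mean(dissolution(temp, speed) < spec)

# Probe candidate design-space points for acceptable risk (e.g., < 1%).
for t, s in [(55, 120), (60, 150), (65, 180)]:
    print(t, s, failure_probability(t, s))
```

Points whose estimated failure probability exceeds the acceptable risk level fall outside the proposed design space boundary.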
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Yuanshun; Baek, Seung H.; Garcia-Diza, Alberto
2012-01-01
This paper designs a comprehensive approach based on the engineering machine/system concept, to model, analyze, and assess the level of CO2 exchange between the atmosphere and terrestrial ecosystems, which is an important factor in understanding changes in global climate. The focus of this article is on spatial patterns and on the correlation between levels of CO2 fluxes and a variety of influencing factors in eco-environments. The engineering/machine concept used is a system protocol that includes the sequential activities of design, test, observe, and model. This concept is applied to explicitly include various influencing factors and interactions associated with CO2 fluxes. To formulate effective models of a large and complex climate system, this article introduces a modeling technique that will be referred to as Stochastic Filtering Analysis of Variance (SF-ANOVA). The CO2 flux data observed from some sites of AmeriFlux are used to illustrate and validate the analysis, prediction and globalization capabilities of the proposed engineering approach and the SF-ANOVA technology. The SF-ANOVA modeling approach was compared to stepwise regression, ridge regression, and neural networks. The comparison indicated that the proposed approach is a valid and effective tool with similar accuracy and less complexity than the other procedures.
Meta-analyses and adaptive group sequential designs in the clinical development process.
Jennison, Christopher; Turnbull, Bruce W
2005-01-01
The clinical development process can be viewed as a succession of trials, possibly overlapping in calendar time. The design of each trial may be influenced by results from previous studies and other currently proceeding trials, as well as by external information. Results from all of these trials must be considered together in order to assess the efficacy and safety of the proposed new treatment. Meta-analysis techniques provide a formal way of combining the information. We examine how such methods can be used in combining results from: (1) a collection of separate studies, (2) a sequence of studies in an organized development program, and (3) stages within a single study using a (possibly adaptive) group sequential design. We present two examples. The first example concerns the combining of results from a Phase IIb trial using several dose levels or treatment arms with those of the Phase III trial comparing the treatment selected in Phase IIb against a control. This enables a "seamless transition" from Phase IIb to Phase III. The second example examines the use of combination tests to analyze data from an adaptive group sequential trial.
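A generic two-stage combination test of the inverse-normal type can be sketched in a few lines; the weights and p-values below are illustrative, and this is not the specific seamless Phase IIb/III procedure of the first example:

```python
from math import sqrt
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5):
    """Combine stage-wise one-sided p-values with pre-specified weights
    (w1 + w2 = 1). Fixing the weights before stage 2 is what keeps the
    Type I error rate controlled even after adaptive design changes."""
    w2 = 1.0 - w1
    z = sqrt(w1) * norm.isf(p1) + sqrt(w2) * norm.isf(p2)
    return norm.sf(z)  # combined one-sided p-value

print(inverse_normal_combination(0.04, 0.03))  # ~0.005, stronger than either stage
```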
Bedell, T Aaron; Hone, Graham A B; Valette, Damien; Yu, Jin-Quan; Davies, Huw M L; Sorensen, Erik J
2016-07-11
Methods for functionalizing carbon-hydrogen bonds are featured in a new synthesis of the tricyclic core architecture that characterizes the indoxamycin family of secondary metabolites. A unique collaboration between three laboratories has engendered a design for synthesis featuring two sequential C-H functionalization reactions, namely a diastereoselective dirhodium carbene insertion followed by an ester-directed oxidative Heck cyclization, to rapidly assemble the congested tricyclic core of the indoxamycins. This project exemplifies how multi-laboratory collaborations can foster conceptually novel approaches to challenging problems in chemical synthesis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modulated Acquisition of Spatial Distortion Maps
Volkov, Alexey; Gros, Jerneja Žganec; Žganec, Mario; Javornik, Tomaž; Švigelj, Aleš
2013-01-01
This work discusses a novel approach to image acquisition which improves the robustness of captured data required for 3D range measurements. By applying a pseudo-random code modulation to sequential acquisition of projected patterns the impact of environmental factors such as ambient light and mutual interference is significantly reduced. The proposed concept has been proven with an experimental range sensor based on the laser triangulation principle. The proposed design can potentially enhance the use of this principle to a variety of outdoor applications, such as autonomous vehicles, pedestrians' safety, collision avoidance, and many other tasks, where robust real-time distance detection in real world environment is crucial. PMID:23966196
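The noise-rejection idea behind pseudo-random code modulation can be illustrated independently of the sensor hardware. In this hypothetical sketch (all amplitudes and noise levels assumed), correlating the measured intensity with the known ±1 code averages away ambient light and uncorrelated interference, recovering the projected-pattern amplitude:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 256
code = rng.choice([-1.0, 1.0], size=n)        # pseudo-random modulation code
signal = 0.3 * code                           # modulated projected pattern
ambient = 2.0 + 0.5 * np.sin(np.arange(n))    # slowly varying ambient light
noise = rng.normal(0, 0.2, n)                 # sensor noise / interference
measured = signal + ambient + noise

# Correlation with the known code rejects everything uncorrelated with it;
# the estimate converges to the true modulation amplitude as n grows.
estimate = np.dot(measured, code) / n
print(f"recovered amplitude: {estimate:.3f} (true 0.3)")
```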
Rehabilitation strategies for partially edentulous-prosthodontic principles and current trends.
D'Souza, Dsj; Dua, Parag
2011-07-01
The prosthetic considerations for treatment of partially edentulous patients involve evaluation of important aspects such as presence of certain functional or skeletal deficits, orientation of the occlusal plane, free-way space, size and location of edentulous areas, number, strategic location and quality of the likely abutment teeth, vertical dimension, and the type of occlusion. A comprehensive evaluation, multidisciplinary approach and a sequential treatment plan, worked out in harmony with the patient's perceptions are important factors to ensure a successful outcome. This article discusses the principles, current trends and importance of clinical decisions in designing a treatment strategy when confronted with complex situations of partial edentulism.
NASA Astrophysics Data System (ADS)
Mayer, J. M.; Stead, D.
2017-04-01
With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
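For readers unfamiliar with the method, the sketch below implements a deliberately simplified one-dimensional sequential Gaussian simulation on the standard-normal scale (simple kriging against all previously simulated nodes, exponential covariance). Production implementations add normal-score transforms, search neighborhoods, and 3-D grids; every parameter here is illustrative rather than taken from the Ok Tedi dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

def cov(h, sill=1.0, range_a=10.0):
    # Covariance implied by an exponential variogram model.
    return sill * np.exp(-3.0 * np.abs(h) / range_a)

def sgs_1d(x, seed_idx, seed_val, sill=1.0, range_a=10.0):
    """Sequential Gaussian simulation on a 1-D grid: visit nodes in random
    order; at each node, simple-krige from previously simulated nodes,
    then draw from the resulting conditional normal distribution."""
    known_x = [x[i] for i in seed_idx]
    known_v = list(seed_val)
    order = [i for i in rng.permutation(len(x)) if i not in seed_idx]
    values = dict(zip(seed_idx, seed_val))
    for i in order:
        xs = np.array(known_x)
        C = cov(xs[:, None] - xs[None, :], sill, range_a)
        c0 = cov(x[i] - xs, sill, range_a)
        w = np.linalg.solve(C + 1e-9 * np.eye(len(xs)), c0)
        mean = w @ np.array(known_v)          # simple kriging, zero mean
        var = max(sill - w @ c0, 1e-12)       # kriging variance
        v = rng.normal(mean, np.sqrt(var))
        values[i] = v
        known_x.append(x[i]); known_v.append(v)
    return np.array([values[i] for i in range(len(x))])

x = np.arange(50, dtype=float)
field = sgs_1d(x, seed_idx=[0, 49], seed_val=[0.5, -1.0])  # conditioning data
print(field[:5])
```

Repeating the draw yields equally probable realizations whose spatial continuity honors the variogram, which is exactly what spatially constant random variables cannot do.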
Chuprom, Julalak; Bovornreungroj, Preeyanuch; Ahmad, Mehraj; Kantachote, Duangporn; Dueramae, Sawitree
2016-06-01
A new potent halophilic protease producer, Halobacterium sp. strain LBU50301, was isolated from salt-fermented fish samples (budu) and identified by phenotypic analysis and 16S rDNA gene sequencing. Thereafter, a sequential statistical strategy was used to optimize halophilic protease production from Halobacterium sp. strain LBU50301 by shake-flask fermentation. The classical one-factor-at-a-time (OFAT) approach determined that gelatin was the best nitrogen source. Based on a Plackett-Burman (PB) experimental design, gelatin, MgSO4·7H2O, NaCl and pH significantly influenced halophilic protease production. A central composite design (CCD) determined the optimum level of the medium components. Subsequently, an 8.78-fold increase in halophilic protease yield (156.22 U/mL) was obtained, compared with that produced in the original medium (17.80 U/mL). Validation experiments proved the adequacy and accuracy of the model, with the predicted values agreeing well with the experimental values. An overall 13-fold increase in halophilic protease yield (231.33 U/mL) was achieved using a 3 L laboratory fermenter and the optimized medium.
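The design-generation step of such a sequential strategy is mechanical. As a hedged illustration (coded units only, not the paper's actual factor levels), a rotatable central composite design for the significant factors retained from a Plackett-Burman screen can be built as follows:

```python
import itertools
import numpy as np

def central_composite(k, n_center=4):
    """Coded design matrix for a rotatable central composite design:
    2^k factorial corner points, 2k axial (star) points at +/- alpha,
    and replicated center points, with alpha = (2^k)**0.25."""
    alpha = (2 ** k) ** 0.25
    corners = np.array(list(itertools.product([-1, 1], repeat=k)), float)
    axial = np.zeros((2 * k, k))
    for j in range(k):
        axial[2 * j, j] = -alpha
        axial[2 * j + 1, j] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

# e.g., 4 factors retained from a screening design (illustrative count)
design = central_composite(k=4)
print(design.shape)   # (28, 4): 16 corner + 8 axial + 4 center runs
```

The coded levels are then mapped onto the physical ranges of each medium component before running the fermentations and fitting the response surface.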
Toombs, Elaine; Unruh, Anita; McGrath, Patrick
2018-01-01
This study aimed to assess the Parent-Adolescent Communication Toolkit (PACT), an online intervention designed to help improve parent communication with their adolescents. Participant preferences for two module delivery systems (sequential and unrestricted module access) were identified. Usability assessment of the PACT intervention was completed using pretest and posttest comparisons. Usability data, including participant completion and satisfaction ratings, were examined. Parents (N = 18) of adolescents were randomized to a sequential or unrestricted chapter access group. Parent participants completed pretest measures, the PACT intervention and posttest measures. Participants provided feedback on the intervention to improve the modules and provided usability ratings. Adolescent pretest and posttest ratings were evaluated. Usability ratings were high and parent feedback was positive. The sequential module access group rated the intervention content higher and completed more content than the unrestricted chapter access group, indicating support for the sequential access design. Parents' mean posttest communication scores were significantly higher (p < .05) than their pretest scores. No significant differences were detected for adolescent participants. Findings suggest that the Parent-Adolescent Communication Toolkit has potential to improve parent-adolescent communication, but further effectiveness assessment is required.
Aerostructural Shape and Topology Optimization of Aircraft Wings
NASA Astrophysics Data System (ADS)
James, Kai
A series of novel algorithms for performing aerostructural shape and topology optimization are introduced and applied to the design of aircraft wings. An isoparametric level set method is developed for performing topology optimization of wings and other non-rectangular structures that must be modeled using a non-uniform, body-fitted mesh. The shape sensitivities are mapped to computational space using the transformation defined by the Jacobian of the isoparametric finite elements. The mapped sensitivities are then passed to the Hamilton-Jacobi equation, which is solved on a uniform Cartesian grid. The method is derived for several objective functions including mass, compliance, and global von Mises stress. The results are compared with SIMP results for several two-dimensional benchmark problems. The method is also demonstrated on a three-dimensional wingbox structure subject to fixed loading. It is shown that the isoparametric level set method is competitive with the SIMP method in terms of the final objective value as well as computation time. In a separate problem, the SIMP formulation is used to optimize the structural topology of a wingbox as part of a larger MDO framework. Here, topology optimization is combined with aerodynamic shape optimization, using a monolithic MDO architecture that includes aerostructural coupling. The aerodynamic loads are modeled using a three-dimensional panel method, and the structural analysis makes use of linear, isoparametric, hexahedral elements. The aerodynamic shape is parameterized via a set of twist variables representing the jig twist angle at equally spaced locations along the span of the wing. The sensitivities are determined analytically using a coupled adjoint method. The wing is optimized for minimum drag subject to a compliance constraint taken from a 2 g maneuver condition. The results from the MDO algorithm are compared with those of a sequential optimization procedure in order to quantify the benefits of the MDO approach. While the sequentially optimized wing exhibits a nearly-elliptical lift distribution, the MDO design seeks to push a greater portion of the load toward the root, thus reducing the structural deflection, and allowing for a lighter structure. By exploiting this trade-off, the MDO design achieves a 42% lower drag than the sequential result.
Gao, Xinxin; Yo, Peggy; Keith, Andrew; Ragan, Timothy J.; Harris, Thomas K.
2003-01-01
A novel thermodynamically-balanced inside-out (TBIO) method of primer design was developed and compared with a thermodynamically-balanced conventional (TBC) method of primer design for PCR-based gene synthesis of codon-optimized gene sequences for the human protein kinase B-2 (PKB2; 1494 bp), p70 ribosomal S6 subunit protein kinase-1 (S6K1; 1622 bp) and phosphoinositide-dependent protein kinase-1 (PDK1; 1712 bp). Each of the 60mer TBIO primers coded for identical nucleotide regions that the 60mer TBC primers covered, except that half of the TBIO primers were reverse complement sequences. In addition, the TBIO and TBC primers contained identical regions of temperature-optimized primer overlaps. The TBC method was optimized to generate sequential overlapping fragments (∼0.4–0.5 kb) for each of the gene sequences, and simultaneous and sequential combinations of overlapping fragments were tested for their ability to be assembled under an array of PCR conditions. However, no fully synthesized gene sequences could be obtained by this approach. In contrast, the TBIO method generated an initial central fragment (∼0.4–0.5 kb), which could be gel purified and used for further inside-out bidirectional elongation by additional increments of 0.4–0.5 kb. By using the newly developed TBIO method of PCR-based gene synthesis, error-free synthetic genes for the human protein kinases PKB2, S6K1 and PDK1 were obtained with little or no corrective mutagenesis. PMID:14602936
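The two ingredients that distinguish TBIO primers, reverse-complementing the primers covering the 3' half of the gene and balancing overlap melting temperatures, can be sketched simply. The sequence and the Wallace-rule Tm below are illustrative placeholders; real designs use nearest-neighbor thermodynamics:

```python
def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def wallace_tm(seq):
    # Crude Wallace-rule melting-temperature estimate (2AT + 4GC), used
    # here only to illustrate balancing overlap Tm values across primers.
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

# Hypothetical 60-mer region: in a TBIO set, primers on the 3' half of the
# gene are reverse complements, so assembly elongates bidirectionally
# outward from a gel-purified central fragment.
sense_60mer = "ATGGCTAGCGGTACCGAGCTCGAATTCACTGGCCGTCGTTTTACAACGTCGTGACTGGGA"
tbio_primer = reverse_complement(sense_60mer)
overlap = sense_60mer[-20:]
print(len(tbio_primer), wallace_tm(overlap))
```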
Guetterman, Timothy C; Fetters, Michael D; Creswell, John W
2015-11-01
Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. © 2015 Annals of Family Medicine, Inc.
Ellis, Charles; Peach, Richard K
2017-04-01
To examine aphasia outcomes and to determine whether the observed language profiles vary by race-ethnicity. Retrospective cross-sectional study using a convenience sample of persons with aphasia (PWA) obtained from AphasiaBank, a database designed for the study of aphasia outcomes. Aphasia research laboratories. PWA (N=381; 339 white and 42 black individuals). Not applicable. Western Aphasia Battery-Revised (WAB-R) total scale score (Aphasia Quotient) and subtest scores were analyzed for racial-ethnic differences. The WAB-R is a comprehensive assessment of communication function designed to evaluate PWA in the areas of spontaneous speech, auditory comprehension, repetition, and naming in addition to reading, writing, apraxia, and constructional, visuospatial, and calculation skills. In univariate comparisons, black PWA exhibited lower word fluency (5.7 vs 7.6; P=.004), auditory word comprehension (49.0 vs 53.0; P=.021), and comprehension of sequential commands (44.2 vs 52.2; P=.012) when compared with white PWA. In multivariate comparisons, adjusted for age and years of education, black PWA exhibited lower word fluency (5.5 vs 7.6; P=.015), auditory word recognition (49.3 vs 53.3; P=.02), and comprehension of sequential commands (43.7 vs 53.2; P=.017) when compared with white PWA. This study identified racial-ethnic differences in word fluency and auditory comprehension ability among PWA. Both skills are critical to effective communication, and racial-ethnic differences in outcomes must be considered in treatment approaches designed to improve overall communication ability. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Robust Sensing of Approaching Vehicles Relying on Acoustic Cues
Mizumachi, Mitsunori; Kaminuma, Atsunobu; Ono, Nobutaka; Ando, Shigeru
2014-01-01
The latest developments in automobile design have allowed them to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be simultaneously used for active safety systems in order to overcome blind spots of individual sensors. This paper proposes a novel sensing technique for detecting and tracking an approaching vehicle relying on acoustic cues. First, it is necessary to extract a robust spatial feature from noisy acoustical observations. In this paper, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered through sequential state estimation. A particle filter is employed to cope with a highly non-linear problem. Feasibility of the proposed method has been confirmed with real acoustical observations, which are obtained by microphones outside a cruising vehicle. PMID:24887038
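Sequential state estimation with a particle filter reduces to a few steps: propagate particles through a motion model, reweight by the observation likelihood, and resample when the effective sample size collapses. The sketch below tracks a 1-D approaching vehicle from a noisy position cue; the paper's spatio-temporal gradient feature extraction is not modeled, and all noise levels are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)

n_particles, steps = 500, 20
true_pos, speed = -50.0, 2.5                   # vehicle approaching from -50 m

particles = rng.uniform(-80, 0, n_particles)   # prior over position
weights = np.ones(n_particles) / n_particles

for t in range(steps):
    true_pos += speed
    obs = true_pos + rng.normal(0, 3.0)        # noisy acoustic position cue

    particles += speed + rng.normal(0, 1.0, n_particles)      # motion model
    weights *= np.exp(-0.5 * ((obs - particles) / 3.0) ** 2)  # likelihood
    weights /= weights.sum()

    # Resample (multinomial) when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.ones(n_particles) / n_particles

print(f"estimate {np.sum(weights * particles):.1f} m, truth {true_pos:.1f} m")
```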
Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1997-01-01
A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each step in the sequence, a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
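The sequential design idea, each new network trained to minimize the error left by the previous one, resembles stagewise residual fitting. A schematic sketch follows (not the authors' algorithm; the target function, network size, and training settings are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_small_net(x, y, n_hidden=8, iters=2000, lr=0.05):
    # One-hidden-layer tanh network trained by plain gradient descent.
    w1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
    w2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(iters):
        h = np.tanh(x @ w1 + b1)
        pred = h @ w2 + b2
        err = pred - y
        gw2 = h.T @ err / len(x); gb2 = err.mean(0)
        gh = err @ w2.T * (1 - h ** 2)
        gw1 = x.T @ gh / len(x); gb1 = gh.mean(0)
        w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2
    return lambda q: np.tanh(q @ w1 + b1) @ w2 + b2

x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(4 * x) + 0.3 * x          # stand-in design-to-performance map
residual, ensemble = y.copy(), []
for stage in range(3):               # each net fits the previous nets' error
    net = fit_small_net(x, residual)
    ensemble.append(net)
    residual = residual - net(x)
    print(f"stage {stage}: RMS error {np.sqrt((residual ** 2).mean()):.4f}")
```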
NASA Astrophysics Data System (ADS)
Selvaraj, A.; Nambi, I. M.
2014-12-01
In this study, an innovative ZVI-mediated technique coupling Fenton-like oxidation of phenol with Cr(VI) reduction was attempted. The hypothesis is that the Fe3+ generated by the Cr(VI) reduction process acts as an electron acceptor and catalyst for the Fenton phenol-oxidation process, while the Fe2+ formed in the Fenton reactions can be reused for Cr(VI) reduction. Iron thus cycles between the two reactions, changing back and forth between its Fe2+ and Fe3+ forms, which makes the treatment sustainable (Fig 1). This approach advances the current Fenton-like oxidation process by (i) removing heavy metal and organic matter in a single system, (ii) recycling iron species, so that no additional iron is required, (iii) achieving a higher contaminant-removal-to-ZVI ratio, and (iv) eliminating sludge-related issues. Preliminary batch studies were conducted in two modes: (i) concurrent removal and (ii) sequential removal. Sequential removal was found to be better suited for in-situ PRB applications. The PRB was designed based on the kinetic rate slope and half-life time obtained from a primary column study. This PRB has two segments: (i) a ZVI segment for Cr(VI) and (ii) an iron-species segment for phenol. This makes the treatment sustainable by (i) leaving no iron ions in the outlet stream and (ii) satisfying the hypothesis, which extends the life span of the PRB. Sequential removal of the contaminants was tested in a pilot-scale PRB (Fig 2), and its life span was calculated based on the exhaustion of the filling material. Aqueous, sand and iron aliquots were collected at various segments of the PRB and analyzed thoroughly for precipitation and chemical speciation (UV spectrometry, XRD, FTIR, electron microscopy). The chemical speciation profile eliminates the uncertainties over the long-term performance of an in-situ PRB. Based on the pilot-scale PRB study, a field-level PRB wall construction was suggested to remove heavy metal and organic compounds from the Pallikaranai marshland (Fig 3), which is contaminated with leachate from the nearby Perungudi dumpsite. This research provides (i) deeper insight into an environmentally friendly, accelerated, sustainable technique for the combined removal of organic matter and heavy metal, (ii) an evaluation of the novel technique in a PRB, resulting in the PRB's increased life span, and (iii) a PRB design to remediate the marshland and its ecosystem, thus saving the associated habitats.
Decision-theoretic approach to data acquisition for transit operations planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, S.G.
The most costly element of transportation planning and modeling activities in the past has usually been that of data acquisition. This is even truer today when the unit costs of data collection are increasing rapidly and at the same time budgets are severely limited by continuing policies of fiscal austerity in the public sector. The overall objectives of this research were to improve the decisions and decision-making capabilities of transit operators or planners in short-range transit planning, and to improve the quality and cost-effectiveness of associated route or corridor-level data collection and service monitoring activities. A new approach was presented for sequentially updating the parameters of both simple and multiple linear regression models with stochastic regressors, and for determining the expected value of sample information and expected net gain of sampling for associated sample designs. A new approach was also presented for estimating and updating (both spatially and temporally) the parameters of multinomial logit discrete choice models, and for determining associated optimal sample designs for attribute-based and choice-based sampling methods. The approach provides an effective framework for addressing the issue of optimal sampling method and sample size, which to date have been largely unresolved. The application of these methodologies and the feasibility of the decision-theoretic approach was illustrated with a hypothetical case study example.
ERIC Educational Resources Information Center
Grilo, Carlos M.; Masheb, Robin M.; Wilson, G. Terence; Gueorguieva, Ralitza; White, Marney A.
2011-01-01
Objective: Cognitive-behavioral therapy (CBT) is the best established treatment for binge-eating disorder (BED) but does not produce weight loss. The efficacy of behavioral weight loss (BWL) in obese patients with BED is uncertain. This study compared CBT, BWL, and a sequential approach in which CBT is delivered first, followed by BWL (CBT + BWL).…
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
The Complexities of Teachers' Commitment to Environmental Education: A Mixed Methods Approach
ERIC Educational Resources Information Center
Sosu, Edward M.; McWilliam, Angus; Gray, Donald S.
2008-01-01
This article argues that a mixed methods approach is useful in understanding the complexity that underlies teachers' commitment to environmental education. Using sequential and concurrent procedures, the authors demonstrate how different methodological approaches highlighted different aspects of teacher commitment. The quantitative survey examined…
Martens, Brian K; DiGennaro, Florence D; Reed, Derek D; Szczech, Frances M; Rosenthal, Blair D
2008-01-01
Descriptive assessment methods have been used in applied settings to identify consequences for problem behavior, thereby aiding in the design of effective treatment programs. Consensus has not been reached, however, regarding the types of data or analytic strategies that are most useful for describing behavior–consequence relations. One promising approach involves the analysis of conditional probabilities from sequential recordings of behavior and events that follow its occurrence. In this paper we review several strategies for identifying contingent relations from conditional probabilities, and propose an alternative strategy known as a contingency space analysis (CSA). Step-by-step procedures for conducting and interpreting a CSA using sample data are presented, followed by discussion of the potential use of a CSA for conducting descriptive assessments, informing intervention design, and evaluating changes in reinforcement contingencies following treatment. PMID:18468280
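The core computation behind such analyses, conditional probabilities of a consequence given the presence versus absence of the behavior, is straightforward to sketch. The coded stream and lag window below are hypothetical; a full CSA would plot the two probabilities against each other and inspect departures from the diagonal:

```python
def conditional_probabilities(record, behavior="B", consequence="C", window=1):
    """Estimate P(consequence | behavior) and P(consequence | no behavior)
    from a sequential record, using a fixed lag window. Points far off the
    diagonal of a contingency space plot indicate a positive or negative
    contingency between behavior and consequence."""
    b_total = b_followed = other_total = other_followed = 0
    for i, event in enumerate(record[:-window]):
        followed = consequence in record[i + 1:i + 1 + window]
        if event == behavior:
            b_total += 1; b_followed += followed
        else:
            other_total += 1; other_followed += followed
    return b_followed / b_total, other_followed / other_total

# Hypothetical coded stream: B = problem behavior, C = attention, O = other.
record = list("OBCOBCOOBOOCOBCO")
print(conditional_probabilities(record))  # (0.75, ~0.09): positive contingency
```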
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Zhang, Liangcai; Yuan, Ying
2016-01-01
Drug combination therapy has become the mainstream approach to cancer treatment. One fundamental feature that makes combination trials different from single-agent trials is the existence of the maximum tolerated dose (MTD) contour, i.e., multiple MTDs. As a result, unlike single-agent phase I trials, which aim to find a single MTD, it is often of interest to find the MTD contour for combination trials. We propose a new dose-finding design, the waterfall design, to find the MTD contour for drug combination trials. Adopting a divide-and-conquer strategy, the waterfall design divides the task of finding the MTD contour into a sequence of one-dimensional dose-finding processes, known as subtrials. The subtrials are conducted sequentially in a certain order, such that the results of each subtrial will be used to inform the design of subsequent subtrials. Such information borrowing allows the waterfall design to explore the two-dimensional dose space efficiently using a limited sample size, and decreases the chance of overdosing and underdosing patients. To accommodate the consideration that doses on the MTD contour may have very different efficacy or synergistic effects due to drug-drug interaction, we further extend our approach to a phase I/II design with the goal of finding the MTD with the highest efficacy. Simulation studies show that the waterfall design is safer and has a higher probability of identifying the true MTD contour than some existing designs. The R package "BOIN" to implement the waterfall design is freely available from CRAN. PMID:27580928
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date depend on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, along with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing that represents a radionuclide as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process repeats for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
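The decision logic of a sequential likelihood ratio test can be sketched compactly. The event model below is a toy stand-in: the patented system's monoenergetic-decomposition processing and Bayesian parameter updating are not reproduced here, only Wald's two-threshold stopping rule:

```python
import math

def wald_sprt(stream, pdf_target, pdf_null, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test over an event-mode sequence.
    Accumulates log-likelihood ratios per photon event until one of the two
    thresholds is crossed; a sketch of the decision logic only."""
    upper = math.log((1 - beta) / alpha)     # declare "target radionuclide"
    lower = math.log(beta / (1 - alpha))     # declare "not target"
    llr = 0.0
    for n, event in enumerate(stream, 1):
        llr += math.log(pdf_target(event) / pdf_null(event))
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "not target", n
    return "undecided", len(stream)

# Toy per-event statistic modeled as exponential under two assumed rates.
target = lambda e: 0.5 * math.exp(-0.5 * e)
null = lambda e: 1.5 * math.exp(-1.5 * e)
print(wald_sprt([2.1, 1.7, 3.0, 2.4, 0.9, 2.8], target, null))  # stops early
```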
The distribution of individual cabinet positions in coalition governments: A sequential approach
Meyer, Thomas M.; Müller, Wolfgang C.
2015-01-01
Multiparty government in parliamentary democracies entails bargaining over the payoffs of government participation, in particular the allocation of cabinet positions. While most of the literature deals with the numerical distribution of cabinet seats among government parties, this article explores the distribution of individual portfolios. It argues that coalition negotiations are sequential choice processes that begin with the allocation of those portfolios most important to the bargaining parties. This induces conditionality in the bargaining process as choices of individual cabinet positions are not independent of each other. Linking this sequential logic with party preferences for individual cabinet positions, the authors of the article study the allocation of individual portfolios for 146 coalition governments in Western and Central Eastern Europe. The results suggest that a sequential logic in the bargaining process results in better predictions than assuming mutual independence in the distribution of individual portfolios. PMID:27546952
Zhang, Heyi; Cheng, Biao; Lu, Zhan
2018-06-20
A newly designed thiazoline iminopyridine ligand for enantioselective cobalt-catalyzed sequential Nazarov cyclization/electrophilic fluorination was developed. Various chiral α-fluorocyclopentenones were prepared with good yields and diastereo- and enantioselectivities. Further derivatizations could be easily carried out to provide chiral cyclopentenols with three contiguous stereocenters. Furthermore, a direct deesterification of fluorinated products could afford chiral α-single fluorine-substituted cyclopentenones.
Manheimer, Eric D.; Peters, M. Robert; Wolff, Steven D.; Qureshi, Mehreen A.; Atluri, Prashanth; Pearson, Gregory D.N.; Einstein, Andrew J.
2011-01-01
Triple-rule-out computed tomography angiography (TRO CTA), performed to evaluate the coronary arteries, pulmonary arteries, and thoracic aorta, has been associated with high radiation exposure. Utilization of sequential scanning for coronary computed tomography angiography (CCTA) reduces radiation dose. The application of sequential scanning to TRO CTA is much less well defined. We analyzed radiation dose and image quality from TRO CTA performed in a single outpatient center, comparing scans from a period during which helical scanning with electrocardiographically controlled tube current modulation was used for all patients (n=35) and after adoption of a strategy incorporating sequential scanning whenever appropriate (n=35). Sequential scanning could be employed in 86% of cases. The sequential-if-appropriate strategy, compared to the helical-only strategy, was associated with a 61.6% dose decrease (mean dose-length product [DLP] of 439 mGy×cm vs 1144 mGy×cm and mean effective dose of 7.5 mSv vs 19.4 mSv, respectively, p<0.0001). Similarly, there was a 71.5% dose reduction among 30 patients scanned with the sequential protocol compared to 40 patients scanned with the helical protocol under either strategy (326 mGy×cm vs 1141 mGy×cm and 5.5 mSv vs 19.4 mSv, respectively, p<0.0001). Image quality did not differ significantly between strategies, with a non-significant trend towards better quality in the sequential protocol compared to the helical protocol. In conclusion, approaching TRO CTA with a diagnostic strategy of sequential scanning as appropriate offers a marked reduction in radiation dose while maintaining image quality. PMID:21306693
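The effective doses quoted are consistent with the standard conversion E = k × DLP; a quick check, assuming the common chest conversion coefficient k = 0.017 mSv·mGy⁻¹·cm⁻¹ (the abstract does not state the coefficient used):

```python
# Effective dose from dose-length product, E = k * DLP,
# assuming the common chest coefficient k = 0.017 mSv/(mGy*cm).
k = 0.017
for dlp in (439, 1144, 326):
    print(f"DLP {dlp} mGy*cm -> E = {k * dlp:.1f} mSv")
# 439 -> 7.5 mSv, 1144 -> 19.4 mSv, 326 -> 5.5 mSv, matching the abstract
```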
Negotiation of territorial boundaries in a songbird
Ellis, Jesse M.; Cropp, Brett F.; Koltz, John M.
2014-01-01
How do territorial neighbors resolve the location of their boundaries? We addressed this question by testing the predictions of 2 nonexclusive game theoretical models for competitive signaling: the sequential assessment game and the territorial bargaining game. Our study species, the banded wren, is a neotropical nonmigratory songbird living in densely packed territorial neighborhoods. The males possess repertoires of approximately 25 song types that are largely shared between neighbors and sequentially delivered with variable switching rates. Over 3 days, boundary disputes among pairs of neighboring males were synchronously recorded, their perch positions were marked, and their behavioral interactions were noted. For each countersinging interaction between 2 focal males, we quantified approach and retreat order, a variety of song and call patterns, closest approach distance, distance from the territorial center, and female presence. Aggressors produced more rattle-buzz songs during the approaching phase of interactions, whereas defenders overlapped their opponent’s songs. During the close phase of the interaction, both males matched frequently, but the key determinant of which one retreated first was song-type diversity—first retreaters sang with a higher diversity. Retreaters also produced more unshared song types during the interaction, and in the retreating phase of the interaction, they overlapped more. A negative correlation between song-type diversity asymmetry and contest duration suggested sequential assessment of motivational asymmetry. The use of this graded signal, which varied with distance from the center and indicated a male’s motivation to defend a particular position, supported the bargaining model. The bargaining game could be viewed as a series of sequential assessment contests. PMID:25419086
Madec, Morgan; Pecheux, François; Gendrault, Yves; Rosati, Elise; Lallement, Christophe; Haiech, Jacques
2016-10-01
The topic of this article is the development of an open-source automated design framework for synthetic biology, specifically for the design of artificial gene regulatory networks based on a digital approach. In contrast to other tools, GeNeDA is an open-source online software based on existing tools used in microelectronics that have proven their efficiency over the last 30 years. The complete framework is composed of a computation core directly adapted from an Electronic Design Automation tool, input and output interfaces, a library of elementary parts that can be achieved with gene regulatory networks, and an interface with an electrical circuit simulator. Each of these modules is an extension of microelectronics tools and concepts: ODIN II, ABC, the Verilog language, the SPICE simulator, and SystemC-AMS. GeNeDA is first validated on a benchmark of several combinatorial circuits. The results highlight the importance of the part library. Then, this framework is used for the design of a sequential circuit including a biological state machine.
Braathen, Sverre; Sendstad, Ole Jakob
2004-08-01
Possible techniques for representing automatic decision-making behavior approximating human experts in complex simulation model experiments are of interest. Here, fuzzy logic (FL) and constraint satisfaction problem (CSP) methods are applied in a hybrid design of automatic decision making in simulation game models. The decision processes of a military headquarters are used as a model for the FL/CSP decision agents' choice of variables and rulebases. The hybrid decision agent design is applied in two different types of simulation games to test the general applicability of the design. The first application is a two-sided zero-sum sequential resource allocation game with imperfect information interpreted as an air campaign game. The second example is a network flow stochastic board game designed to capture important aspects of land manoeuvre operations. The proposed design is shown to perform well in this complex game with a very large (billion-size) action set. Training of the automatic FL/CSP decision agents against selected performance measures is also shown and results are presented together with directions for future research.
Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection
Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole
2016-01-01
Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input, in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048
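A minimal sketch of the fusion idea, not the authors' exact model: per-target probabilities from the two channels are combined multiplicatively, assuming gaze and BCI evidence are conditionally independent given the intended target.

```python
import numpy as np

def fuse_intent(p_gaze, p_bci):
    """Combine per-target probabilities from the gaze and BCI channels,
    assuming the two evidence sources are conditionally independent
    given the user's intended target; renormalize over targets."""
    joint = np.asarray(p_gaze) * np.asarray(p_bci)
    return joint / joint.sum()

# Gaze cannot separate targets 0 and 1; weak BCI evidence breaks the tie.
print(fuse_intent([0.45, 0.45, 0.10], [0.30, 0.50, 0.20]))
# -> approximately [0.355, 0.592, 0.053]
```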
Barone, Vincenzo; Bellina, Fabio; Biczysko, Malgorzata; Bloino, Julien; Fornaro, Teresa; Latouche, Camille; Lessi, Marco; Marianetti, Giulia; Minei, Pierpaolo; Panattoni, Alessandro; Pucci, Andrea
2015-10-28
The possibilities offered by organic fluorophores in the preparation of advanced plastic materials have been increased by designing novel alkynylimidazole dyes, featuring different push and pull groups. This new family of fluorescent dyes was synthesized by means of a one-pot sequential bromination-alkynylation of the heteroaromatic core, and their optical properties were investigated in tetrahydrofuran and in poly(methyl methacrylate). An efficient in silico pre-screening scheme was devised, consisting of a step-by-step procedure that employs computational methodologies to simulate electronic spectra, from simple vertical-energy to more sophisticated vibronic approaches. Such an approach was also extended to efficiently simulate one-photon absorption and emission spectra of the dyes in the polymer environment for their potential application in luminescent solar concentrators. Besides the specific applications of this novel material, the integration of computational and experimental techniques reported here provides an efficient protocol that can be applied to make a selection among similar dye candidates, which constitute the essential responsive part of those fluorescent plastic materials.
Vapor Grown Perovskite Solar Cells
NASA Astrophysics Data System (ADS)
Abdussamad Abbas, Hisham
Perovskite solar cells have been the fastest-growing solar cell technology to date, with verified efficiencies of over 22%. Most groups worldwide focus their research on solution-based devices, which retain residual solvent in the material bulk. This work focuses extensively on the fabrication and properties of vapor-based perovskite devices that are devoid of solvents. The initial part of the work details the fabrication of consistent, high-efficiency sequential-vapor NIP devices made using P3HT as a p-type Type II heterojunction. The sequential-vapor devices experience anomalies such as voltage evolution and IV hysteresis owing to charge trapping in TiO2. Hence, sequential PIN devices were fabricated using doped Type II heterojunctions and showed no device anomalies. The sequential PIN devices carry a processing restriction, as organic Type II heterojunction materials cannot withstand high processing temperatures, which limits device efficiency. This motivates co-evaporation for fabricating consistent, high-efficiency PIN devices; the approach places no restriction on substrates and offers stoichiometric control. A comprehensive description of the fabrication and of the co-evaporator setup, including how to build it, is given. The results for co-evaporated devices clearly show that grain size, stoichiometry, and doped transport layers are all critical for eliminating device anomalies and fabricating high-efficiency devices. Finally, formamidinium-based perovskites were fabricated using the sequential approach. A thermal degradation study comparing methylammonium- and formamidinium-based perovskite films found the formamidinium-based perovskites to be more stable. Lastly, inorganic films such as CdS and nickel oxide were developed in this work.
Wurth, Sophie M; Hargrove, Levi J
2014-05-30
Pattern recognition (PR) based strategies for the control of myoelectric upper limb prostheses are generally evaluated through offline classification accuracy, which is an admittedly useful metric, but insufficient to discuss functional performance in real time. Existing functional tests are laborious to set up and most fail to provide a challenging, objective framework to assess the strategy performance in real time. Nine able-bodied and two amputee subjects gave informed consent and participated in the local Institutional Review Board approved study. We designed a two-dimensional target acquisition task, based on the principles of Fitts' law for human motor control. Subjects were prompted to steer a cursor from the screen center into a series of subsequently appearing targets of different difficulties. Three cursor control systems were tested, corresponding to three electromyography-based prosthetic control strategies: 1) amplitude-based direct control (the clinical standard of care), 2) sequential PR control, and 3) simultaneous PR control, allowing for a concurrent activation of two degrees of freedom (DOF). We computed throughput (bits/second), path efficiency (%), reaction time (seconds), and overshoot (%), and used general linear models to assess significant differences between the strategies for each metric. We validated the proposed methodology by achieving very high coefficients of determination for Fitts' law. Both PR strategies significantly outperformed direct control in two-DOF targets and were more intuitive to operate. In one-DOF targets, the simultaneous approach was the least precise. The direct control was efficient in one-DOF targets but cumbersome to operate in two-DOF targets through a switch-dependent sequential cursor control. We designed a test capable of comprehensively describing prosthetic control strategies in real time. When implemented on control subjects, the test was able to capture statistically significant differences (p < 0.05) in control strategies when considering throughputs, path efficiencies and reaction times. Of particular note, we found statistically significant (p < 0.01) improvements in throughputs and path efficiencies with simultaneous PR when compared to direct control or sequential PR. Amputees could readily achieve the task; however a limited number of subjects was tested and a statistical analysis was not performed with that population.
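For reference, the throughput metric follows from Fitts' law; a minimal sketch assuming the Shannon formulation of the index of difficulty (the abstract does not specify which form was used):

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Fitts' law throughput: index of difficulty (Shannon form)
    ID = log2(D/W + 1) in bits, divided by movement time in seconds,
    giving bits per second."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time

# A target 200 px away and 40 px wide, acquired in 1.2 s:
print(fitts_throughput(200, 40, 1.2))  # about 2.15 bits/s
```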
Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test
NASA Technical Reports Server (NTRS)
Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara
2016-01-01
The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. By using a modified version of Wald's sequential probability ratio test we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
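A minimal sketch of the safeguarding loop; the binary sensor model, error rates, and speed law are invented for illustration and are not the paper's:

```python
import math

def sprt_safeguard(readings, p_hit_obstacle=0.8, p_hit_clear=0.2,
                   alpha=0.01, beta=0.01, v_max=1.0):
    """Wald's sequential probability ratio test over binary range-sensor
    readings, with speed reduced while the test is undecided."""
    upper = math.log((1 - beta) / alpha)   # accept H1: obstacle present
    lower = math.log(beta / (1 - alpha))   # accept H0: path clear
    llr, speed = 0.0, v_max
    for hit in readings:
        p1 = p_hit_obstacle if hit else 1 - p_hit_obstacle
        p0 = p_hit_clear if hit else 1 - p_hit_clear
        llr += math.log(p1 / p0)
        if llr >= upper:
            return "obstacle: stop", 0.0
        if llr <= lower:
            return "clear: proceed", v_max
        speed = v_max / (1.0 + abs(llr))   # creep while gathering evidence
    return "undecided", speed
```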
Dynamics of Sequential Decision Making
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail I.; Huerta, Ramón; Afraimovich, Valentin
2006-11-01
We suggest a new paradigm for intelligent decision-making suitable for dynamical sequential activity of animals or artificial autonomous devices that depends on the characteristics of the internal and external world. To do so, we introduce a new class of dynamical models that are described by ordinary differential equations with a finite number of possibilities at the decision points, and also include rules solving this uncertainty. Our approach is based on the competition between possible cognitive states using their stable transient dynamics. The model controls the order of choosing successive steps of a sequential activity according to the environment and decision-making criteria. Two strategies (high-risk and risk-aversion conditions) that move the system out of an erratic environment are analyzed.
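The class of models described is closely related to generalized Lotka–Volterra competition among cognitive states, whose stable transients produce sequential switching; the toy sketch below uses invented parameters and illustrates the model family rather than the paper's exact equations.

```python
import numpy as np

def glv_step(x, sigma, rho, dt=0.01):
    """One Euler step of generalized Lotka-Volterra competition,
    dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j). With asymmetric
    inhibition rho, the state visits a sequence of metastable states
    (winnerless competition); each dominant state plays the role of
    one step in a sequential decision."""
    return x + dt * x * (sigma - rho @ x)

rng = np.random.default_rng(1)
n = 5                                    # number of cognitive states
sigma = np.ones(n)                       # growth rates
rho = np.eye(n) + rng.uniform(0.5, 2.0, (n, n)) * (1 - np.eye(n))
x = rng.uniform(0.01, 0.1, n)
winners = []
for _ in range(50000):
    x = glv_step(x, sigma, rho)
    winners.append(int(np.argmax(x)))    # currently dominant state
```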
Sequential Classifier Training for Rice Mapping with Multitemporal Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Guo, Y.; Jia, X.; Paull, D.
2017-10-01
Most traditional methods for rice mapping with remote sensing data are effective when they are applied to the initial growing stage of rice, as the practice of flooding during this period makes the spectral characteristics of rice fields more distinguishable. In this study, we propose a sequential classifier training approach for rice mapping that can be used over the whole growing period of rice for monitoring various growth stages. Rice fields are firstly identified during the initial flooding period. The identified rice fields are used as training data to train a classifier that separates rice and non-rice pixels. The classifier is then used as a priori knowledge to assist the training of classifiers for later rice growing stages. This approach can be applied progressively to sequential image data, with only a small amount of training samples being required from each image. In order to demonstrate the effectiveness of the proposed approach, experiments were conducted at one of the major rice-growing areas in Australia. The proposed approach was applied to a set of multitemporal remote sensing images acquired by the Sentinel-2A satellite. Experimental results show that, compared with traditional spectral-index-based algorithms, the proposed method is able to achieve more stable and consistent rice mapping accuracies, remaining above 80% throughout the whole rice-growing period.
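A hedged sketch of the cascade: rice pixels identified during flooding seed a classifier whose confident predictions on each later image become that image's training labels. The classifier choice and confidence thresholds are invented for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sequential_training(images, flood_mask):
    """`images` is a list of (n_pixels, n_bands) arrays for successive
    acquisition dates; `flood_mask` marks rice identified during the
    initial flooding period. Assumes both classes remain present."""
    labels = flood_mask.astype(int)          # 1 = rice during flooding
    maps = []
    for X in images:
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        proba = clf.predict_proba(X)[:, 1]
        maps.append(proba > 0.5)
        # carry forward only confident pixels as the next stage's labels
        confident = (proba > 0.9) | (proba < 0.1)
        labels = np.where(confident, proba > 0.5, labels).astype(int)
    return maps
```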
Code of Federal Regulations, 2010 CFR
2010-07-01
... substantial deviations from the design specifications of the sampler specified for reference methods in... general requirements as an ISO 9001-registered facility for the design and manufacture of designated... capable of automatically collecting a series of sequential samples. NO means nitrogen oxide. NO 2 means...
Application and Design Characteristics of Generalized Training Devices.
ERIC Educational Resources Information Center
Parker, Edward L.
This program identified applications and developed design characteristics for generalized training devices. The first of three sequential phases reviewed in detail new developments in Naval equipment technology that influence the design of maintenance training devices: solid-state circuitry, modularization, digital technology, standardization,…
Round-the-table teaching: a novel approach to resuscitation education
McGarvey, Kathryn; Scott, Karen; O'Leary, Fenton
2014-01-01
Background Effective cardiopulmonary resuscitation saves lives. Health professionals who care for acutely unwell children need to be prepared to care for a child in arrest. Hospitals must ensure that their staff have the knowledge, confidence and ability to respond to a child in cardiac arrest. RESUS4KIDS is a programme designed to teach paediatric resuscitation to health care professionals who care for acutely unwell children. The programme is delivered in two components: an e–learning component for pre-learning, followed by a short, practical, face-to-face course that is taught using the round-the-table teaching approach. Context Round-the-table teaching is a novel, evidence-based small group teaching approach designed to teach paediatric resuscitation skills and knowledge. Round-the-table teaching uses a structured approach to managing a collapsed child, and ensures that each participant has the opportunity to practise the essential resuscitation skills of airway manoeuvres, bag mask ventilation and cardiac compressions. Innovation Round-the-table teaching is an engaging, non-threatening approach to delivering interdisciplinary paediatric resuscitation education. The methodology ensures that all participants have the opportunity to practise each of the different essential skills associated with the Danger, Response, Send for help, Airway, Breathing, Circulation, Defibrillation or rhythm recognition (DRSABCD) approach to the collapsed child. Implications Round-the-table teaching is based on evidence-based small group teaching methods. The methodology of round-the-table teaching can be applied to any topic where participants must demonstrate an understanding of a sequential approach to a clinical skill. PMID:25212931
Gulmans, J; Vollenbroek-Hutten, M M R; Van Gemert-Pijnen, J E W C; Van Harten, W H
2007-10-01
Owing to the involvement of multiple professionals from various institutions, integrated care settings are prone to suboptimal patient care communication. To assure continuity, communication gaps should be identified for targeted improvement initiatives. However, available assessment methods are often one-sided evaluations that are not appropriate for integrated care settings. We developed an evaluation approach that takes into account the multiple communication links and evaluation perspectives inherent to these settings. In this study, we describe this approach, using the integrated care setting of Cerebral Palsy as illustration. The approach follows a three-step mixed design in which the results of each step are used to mark out the subsequent step's focus. The first step, a patient questionnaire, aims to identify quality gaps experienced by patients, comparing their expectations and experiences with respect to patient-professional and inter-professional communication. The resulting gaps form the input of in-depth interviews with a subset of patients to evaluate underlying factors of ineffective communication. The resulting factors form the input of the final step's focus group meetings with professionals to corroborate and complete the findings. By combining methods, the presented approach aims to minimize the limitations inherent to the application of single methods. The comprehensiveness of the approach enables its applicability in various integrated care settings. Its sequential design allows for in-depth evaluation of relevant quality gaps. Further research is needed to evaluate the approach's feasibility in practice. In our subsequent study, we present the results of the approach in the integrated care setting of children with Cerebral Palsy in three Dutch care regions.
Werk, Tobias; Mahler, Hanns-Christian; Ludwig, Imke Sonja; Luemkemann, Joerg; Huwyler, Joerg; Hafner, Mathias
Dual-chamber syringes were originally designed to separate a solid substance and its diluent. However, they can also be used to separate liquid formulations of two individual drug products, which cannot be co-formulated due to technical or regulatory issues. A liquid/liquid dual-chamber syringe can be designed to achieve homogenization and mixing of both solutions prior to administration, or it can be used to sequentially inject both solutions. While sequential injection can be easily achieved by a dual-chamber syringe with a bypass located at the needle end of the syringe barrel, mixing of the two fluids may provide more challenges. Within this study, the mixing behavior of surrogate solutions in different dual-chamber syringes is assessed. Furthermore, the influence of parameters such as injection angle, injection speed, agitation, and sample viscosity were studied. It was noted that mixing was poor for the commercial dual-chamber syringes (with a bypass designed as a longitudinal ridge) when the two liquids significantly differ in their physical properties (viscosity, density). However, an optimized dual-chamber syringe design with multiple bypass channels resulted in improved mixing of liquids. © PDA, Inc. 2017.
Automated detection of changes in sequential color ocular fundus images
NASA Astrophysics Data System (ADS)
Sakuma, Satoshi; Nakanishi, Tadashi; Takahashi, Yasuko; Fujino, Yuichi; Tsubouchi, Tetsuro; Nakanishi, Norimasa
1998-06-01
A recent trend is the automatic screening of color ocular fundus images. The examination of such images is used in the early detection of several adult diseases such as hypertension and diabetes. Since this type of examination is easier than CT, costs less, and has no harmful side effects, it will become a routine medical examination. Normal ocular fundus images are found in more than 90% of all people. To deal with the increasing number of such images, this paper proposes a new approach to process them automatically and accurately. Our approach, based on individual comparison, identifies changes in sequential images: a previously diagnosed normal reference image is compared to a non-diagnosed image.
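The comparison step can be sketched simply; this sketch assumes the image pair is already registered and intensity-normalized, which is the hard part in practice and is not shown.

```python
import numpy as np

def detect_changes(reference, follow_up, threshold=30):
    """Comparison-based screening sketch: subtract a previously
    diagnosed normal reference fundus image from the new image
    (both assumed registered and normalized, same shape, 8-bit) and
    flag pixels whose absolute difference exceeds a threshold."""
    diff = np.abs(follow_up.astype(int) - reference.astype(int))
    change_mask = diff > threshold
    return change_mask, change_mask.mean()   # mask and changed fraction
```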
Overcoming catastrophic forgetting in neural networks
Kirkpatrick, James; Pascanu, Razvan; Rabinowitz, Neil; Veness, Joel; Desjardins, Guillaume; Rusu, Andrei A.; Milan, Kieran; Quan, John; Ramalho, Tiago; Grabska-Barwinska, Agnieszka; Hassabis, Demis; Clopath, Claudia; Kumaran, Dharshan; Hadsell, Raia
2017-01-01
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially. PMID:28292907
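The mechanism described (elastic weight consolidation) penalizes movement of weights in proportion to their importance for earlier tasks. A minimal numpy sketch of that objective, with an importance array standing in for the diagonal Fisher information; the scale `lam` is an invented illustrative value:

```python
import numpy as np

def ewc_objective(new_task_loss, params, old_params, fisher, lam=1000.0):
    """Elastic weight consolidation: add a quadratic penalty that slows
    learning on weights with large (Fisher-information) importance for
    the previously learned task. `lam` trades old-task retention
    against new-task plasticity."""
    penalty = 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)
    return new_task_loss + penalty
```

The gradient of the penalty pulls each important weight back toward its old value, while weights with near-zero importance remain free to learn the new task.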
ERIC Educational Resources Information Center
Ronnlund, Michael; Nilsson, Lars-Goran
2008-01-01
To estimate Flynn effects (FEs) on forms of declarative memory (episodic, semantic) and visuospatial ability (Block Design), time-sequential analyses of data for Swedish adult samples (35-80 years) assessed on one of four occasions (1989, 1994, 1999, 2004; n = 2995) were conducted. The results demonstrated cognitive gains across occasions,…
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
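The regulator that results from any given weighting pair (Q, R) can be computed directly; the sketch below uses SciPy's discrete Riccati solver and is standard textbook material, not the paper's recursive root-locus procedure.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Discrete linear quadratic regulator: solve the discrete-time
    algebraic Riccati equation and return the state-feedback gain K
    such that u[k] = -K x[k] minimizes sum(x'Qx + u'Ru)."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P
```

Each pass of a sequential procedure effectively revises Q and re-solves this problem; the closed-loop eigenvalues of A − BK are what the root-locus step shapes.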
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
Sequential parallel comparison design with binary and time-to-event outcomes.
Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason
2018-04-30
Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo effect. Sequential parallel comparison design is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypothesis, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
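Because the stage-1 and stage-2 statistics are asymptotically standard normal and uncorrelated under the null, a pooled test reduces to a weighted combination. A hedged sketch for the binary case; the weight w and the pooled-variance z-statistic are illustrative design choices, not necessarily the authors' exact procedure:

```python
import math

def spcd_combined_z(z1, z2, w=0.6):
    """Combine stage-1 and stage-2 SPCD test statistics. Since Z1 and Z2
    are asymptotically independent N(0,1) under the null, the weighted
    sum w*Z1 + sqrt(1-w^2)*Z2 is again standard normal."""
    return w * z1 + math.sqrt(1 - w ** 2) * z2

def two_prop_z(x_trt, n_trt, x_pbo, n_pbo):
    """Standard pooled two-sample proportion z-statistic for one stage."""
    p_t, p_c = x_trt / n_trt, x_pbo / n_pbo
    p = (x_trt + x_pbo) / (n_trt + n_pbo)
    se = math.sqrt(p * (1 - p) * (1 / n_trt + 1 / n_pbo))
    return (p_t - p_c) / se

z = spcd_combined_z(two_prop_z(30, 100, 20, 100),   # stage 1
                    two_prop_z(18, 60, 10, 60))     # stage 2 (placebo nonresponders)
```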
Hee, Siew Wan; Parsons, Nicholas; Stallard, Nigel
2018-03-01
The motivation for the work in this article is the setting in which a number of treatments are available for evaluation in phase II clinical trials and where it may be infeasible to try them concurrently because the intended population is small. This paper introduces an extension of previous work on decision-theoretic designs for a series of phase II trials. The program encompasses a series of sequential phase II trials with interim decision making and a single two-arm phase III trial. The design is based on a hybrid approach where the final analysis of the phase III data is based on a classical frequentist hypothesis test, whereas the trials are designed using a Bayesian decision-theoretic approach in which the unknown treatment effect is assumed to follow a known prior distribution. In addition, as treatments are intended for the same population it is not unrealistic to consider treatment effects to be correlated. Thus, the prior distribution will reflect this. Data from a randomized trial of severe arthritis of the hip are used to test the application of the design. We show that the design on average requires fewer patients in phase II than when the correlation is ignored. Correspondingly, the time required to recommend an efficacious treatment for phase III is quicker. © 2017 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Granade, Christopher; Wiebe, Nathan
2017-08-01
A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
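The core move, clustering the particle cloud before resampling so that equally probable explanations are preserved, can be sketched as follows; k-means with a fixed cluster count stands in for the paper's automatically learned clustering, and the weights are assumed normalized.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_resample(particles, weights, n_clusters=2, rng=None):
    """Cluster-before-resample step for sequential Monte Carlo:
    partition the particles, then resample within each cluster in
    proportion to its total weight, so a multimodal posterior is not
    collapsed onto a single mode. `particles` is (N, d)."""
    rng = rng if rng is not None else np.random.default_rng()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(particles)
    resampled = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        w = weights[idx] / weights[idx].sum()
        # preserve each cluster's share of the total mass (rounded)
        n_c = int(round(weights[idx].sum() * len(particles)))
        draw = rng.choice(idx, size=n_c, p=w)
        resampled.append(particles[draw])
    return np.vstack(resampled)
```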
NASA Astrophysics Data System (ADS)
Rousis, Damon A.
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near the Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to the preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts. A set of algebraic examples and a truss design problem are presented as canonical demonstrations of the proposed approach. The methodology is applied to the design of ultra-high-bypass-ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable-geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives to traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
Radiotherapy and hyperthermia in the treatment of fibrosarcomas in the dog.
Brewer, W G; Turrel, J M
1982-07-15
Ten dogs with oral or external nasal fibrosarcoma were treated sequentially with orthovoltage radiation and radiofrequency (RF)-induced hyperthermia. Total radiation doses ranged from 3,200 to 4,800 rad given in 8 to 12 fractions of 400 rad. Immediately after 2 to 4 radiation treatments, hyperthermia was given. Six oral fibrosarcomas were heated to 50 C for 30 sec, using a hand-held RF generator. Four nasomaxillary fibrosarcomas were heated to 43 C for 30 minutes, using a 500-kHz RF generator. Hyperthermia of 50 C resulted in tumor necrosis and infection in 3 dogs and fatal septicemia in 1 dog. Nine of 10 tumors responded to therapy. One year after therapy, 5 dogs were free of disease. Tumor regrowth occurred in 5 dogs. Mean time to tumor regrowth and mean survival time of all dogs were 343 and 398 days, respectively. The results suggested that sequential radiation-hyperthermia is an effective therapeutic regimen for canine fibrosarcoma. It was concluded that this modality not only may be beneficial in the treatment of canine tumors but may be useful for designing new therapeutic approaches to similar tumors in man.
Exploiting Complexity Information for Brain Activation Detection
Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui
2016-01-01
We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of voxel complexity. Under this hypothesis the voxel complexity could be modulated in pertinent cognitive tasks, and it changes across experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical strategy, the Wilcoxon signed rank test, to evaluate the difference in complexity between them. The results are compared with the well-known general linear model based Statistical Parametric Mapping package (SPM12), and a clear difference was observed. This is because the SampEn method detects brain complexity changes between the two experimental conditions, and as a data-driven method it evaluates just the complexity of the specific sequential fMRI data. Also, larger and smaller SampEn values carry different meanings, and the neutral-blank design produces higher predictability than threat-neutral. Complexity information can be considered a complementary method to existing fMRI analysis strategies, and it may help improve the understanding of human brain functions from a different perspective. PMID:27045838
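Sample entropy itself is standard; below is a compact sketch of the usual Richman-Moorman estimator that could be applied per voxel time series. The parameters m = 2 and r = 0.2 are conventional defaults, not necessarily the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series: -log(A/B), where B counts pairs of
    length-m templates within tolerance r*std (Chebyshev distance,
    self-matches excluded) and A counts the pairs still matching at
    length m+1."""
    x = np.asarray(x, dtype=float)
    N, tol = len(x), r * x.std()

    def matches(mm):
        templates = np.array([x[i:i + mm] for i in range(N - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)   # undefined if A or B is zero (short/regular series)
```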
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of the simulations at high resolution is often expensive and becomes impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
Jimenez, Julie; Gonidec, Estelle; Cacho Rivero, Jesús Andrés; Latrille, Eric; Vedrenne, Fabien; Steyer, Jean-Philippe
2014-03-01
Advanced dynamic anaerobic digestion models, such as ADM1, require both detailed organic matter characterisation and intimate knowledge of the involved metabolic pathways. In the current study, a methodology for municipal sludge characterization is investigated to describe two key parameters: biodegradability and bioaccessibility of organic matter. The methodology is based on coupling sequential chemical extractions with 3D fluorescence spectroscopy. The use of increasingly strong solvents reveals different levels of organic matter accessibility, and the spectroscopy measurement leads to a detailed characterisation of the organic matter. The results obtained from testing 52 municipal sludge samples (primary, secondary, digested and thermally treated) showed a successful correlation with sludge biodegradability and bioaccessibility. The two parameters, traditionally obtained through biochemical methane potential (BMP) lab tests, are now obtained in only 5 days, compared to the 30-60 days usually required. Experimental data, obtained from two different laboratory scale reactors, were used to validate the ADM1 model. The proposed approach showed a strong application potential for reactor design and advanced control of anaerobic digestion processes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Children and young people's preference of thematic design and colour for their hospital environment.
Coad, Jane; Coad, Nigel
2008-03-01
In this innovative project, the views of children and young people were explored regarding their preference of thematic design and colour for their hospital environment in a new children's unit. The novelty of the approach was that it was driven by the preferred choices of children and young people through the use of 'child-friendly' interviews and questionnaires. Informing the study was the development of a group of children and young people who underwent research training, and with support, developed all data collection tools and helped to verify data analysis. A two-phased sequential study was undertaken. During phase 1, 40 interviews were performed with children and young people, including 10 with additional learning needs and physical disabilities while 140 questionnaires were analysed for phase 2 of the study. Notable issues emerged about preferred thematic designs of walls, doors and floors, while new findings were revealed regarding colour preferences for wards, entrances and outpatient areas.
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
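Of the two update techniques mentioned, the ensemble Kalman filter analysis step is straightforward to sketch; the stochastic (perturbed-observation) variant shown here is one common choice, not necessarily the presentation's.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, R, rng=None):
    """Stochastic ensemble Kalman filter analysis step: shift each
    ensemble member toward a perturbed observation using the
    sample-covariance Kalman gain. `ensemble` is (n_members, n_state),
    H is the (n_obs, n_state) observation operator, R the (n_obs, n_obs)
    observation-error covariance."""
    rng = rng if rng is not None else np.random.default_rng()
    X = ensemble
    anomalies = X - X.mean(axis=0)
    P = anomalies.T @ anomalies / (len(X) - 1)        # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    Y = y_obs + rng.multivariate_normal(np.zeros(len(R)), R, size=len(X))
    return X + (Y - X @ H.T) @ K.T                    # analysis ensemble
```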
Tunable photonic multilayer sensors from photo-crosslinkable polymers
NASA Astrophysics Data System (ADS)
Chiappelli, Maria; Hayward, Ryan
2014-03-01
The fabrication of tunable photonic multilayer sensors from stimuli-responsive, photo-crosslinkable polymers will be described. Benzophenone is covalently incorporated as a pendent photo-crosslinker, allowing for facile preparation of multilayer films by sequential spin-coating and crosslinking processes. Copolymer chemistries and layer thicknesses are selected to provide robust multilayer sensors which can show color changes across nearly the full visible spectrum due to the specific stimulus-responsive nature of the hydrated film stack. We will describe how this approach is extended to alternative sensor designs by tailoring the thickness and chemistry of each layer independently, allowing for the preparation of sensors which depend not only on the shift in wavelength of a reflectance peak, but also on the transition between Bragg mirrors and filters. Device design is optimized by photo-patterning sensor arrays on a single substrate, providing more efficient fabrication time as well as multi-functional sensors. Finally, radiation-sensitive multilayers, designed by choosing polymers which will preferentially degrade or crosslink under ionizing radiation, will also be described.
Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves
2012-01-01
To perform an integrative review of studies on liquid sequential swallowing, characterizing the methodology of the studies and the most important findings in young and elderly adults. The literature written in English and Portuguese over the past twenty years was reviewed in the PubMed, LILACS, SciELO and MEDLINE databases, restricted to fully available articles, using the following terms in various combinations: sequential swallowing, swallowing, dysphagia, cup, straw. Included were research articles with a methodological approach to the characterization of liquid sequential swallowing by young and/or elderly adults, regardless of health condition, excluding studies involving only the esophageal phase. The following research indicators were applied: objectives; number and gender of participants; age group; amount of liquid offered; intake instruction; utensil used; methods; and main findings. Eighteen studies met the established criteria. The articles were categorized according to sample characterization and methodology regarding volume intake, utensil used and types of examinations. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to how to drink the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing were varied and were described in accordance with the objectives of each study. The review found great variability in the methodology employed to characterize sequential swallowing. Some findings are not comparable, and sequential swallowing is not examined in most swallowing protocols, with no consensus on the influence of the utensil.
Dynamic resource allocation in conservation planning
Golovin, D.; Krause, A.; Gardner, B.; Converse, S.J.; Morey, S.
2011-01-01
Consider the problem of protecting endangered species by selecting patches of land to be used for conservation purposes. Typically, the availability of patches changes over time, and recommendations must be made dynamically. This is a challenging prototypical example of a sequential optimization problem under uncertainty in computational sustainability. Existing techniques do not scale to problems of realistic size. In this paper, we develop an efficient algorithm for adaptively making recommendations for dynamic conservation planning, and prove that it obtains near-optimal performance. We further evaluate our approach on a detailed reserve design case study of conservation planning for three rare species in the Pacific Northwest of the United States. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
Implementing a Flipped Classroom Approach in a University Numerical Methods Mathematics Course
ERIC Educational Resources Information Center
Johnston, Barbara M.
2017-01-01
This paper describes and analyses the implementation of a "flipped classroom" approach, in an undergraduate mathematics course on numerical methods. The approach replaced all the lecture contents by instructor-made videos and was implemented in the consecutive years 2014 and 2015. The sequential case study presented here begins with an…
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh
2015-01-01
Recently, a novel and computationally efficient method based on a vector covering approach to design optimal control places, together with an iterative approach that computes the reachability graph to obtain a maximally permissive liveness-enforcing supervisor for FMS (flexible manufacturing systems), has been reported. However, the relationship between the structure of the net and the minimal number of monitors required remains unclear. This paper develops a theory showing that the minimal number of monitors required cannot be less than the number of basic siphons in α-S3PR (systems of simple sequential processes with resources). This confirms that two of the three systems controlled by Chen et al. have a minimal monitor configuration, since they belong to α-S3PR and the number of monitors in each example equals that of basic siphons.
Kostyuchenko, Anastasia S; L Yurpalov, Vyacheslav; Kurowska, Aleksandra; Domagala, Wojciech; Pron, Adam; Fisyuk, Alexander S
2014-01-01
A new synthetic approach towards the preparation of functionalised, soluble, donor-acceptor (DA) alkylbithiophene derivatives of oxadiazole, thiadiazole and triazole is reported. Taking advantage of the Fiesselmann reaction, reactive bithiophene synthons having alkyl or alkoxy substituents at designated positions are prepared. Following a synthetic strategy, featuring the bottom-up approach, sequential structural elements are built, starting from a simple thiophene compound, until the target molecule is obtained, all in good yield. Supplementing the well established methods of oxadiazole and thiadiazole synthesis, efficient ring closure reaction affording a 4H-1,2,4-triazole unit is presented. All target ambipolar compounds display strong photoluminescence with measured quantum yields up to 0.59. Modification of the demonstrated synthetic routes may be exploited for the preparation of longer, specifically functionalised oligothiophenes, coupled to other heteroaromatic cores.
Cloning strategy for producing brush-forming protein-based polymers.
Henderson, Douglas B; Davis, Richey M; Ducker, William A; Van Cott, Kevin E
2005-01-01
Brush-forming polymers are being used in a variety of applications, and by using recombinant DNA technology, there exists the potential to produce protein-based polymers that incorporate unique structures and functions in these brush layers. Despite this potential, production of protein-based brush-forming polymers is not routinely performed. For the design and production of new protein-based polymers with optimal brush-forming properties, it would be desirable to have a cloning strategy that allows an iterative approach wherein the protein-based polymer product can be produced and evaluated, and then, if necessary, sequentially modified in a controlled manner to obtain optimal surface density and brush extension. In this work, we report the development of a cloning strategy intended for the production of protein-based brush-forming polymers. This strategy is based on the assembly of modules of DNA that encode blocks of protein-based polymers into a commercially available expression vector; there is no need for custom-modified vectors or intermediate cloning vectors. Additionally, because the design of new protein-based biopolymers can be an iterative process, our method enables sequential modification of a protein-based polymer product. With at least 21 bacterial expression vectors and 11 yeast expression vectors compatible with this strategy, there are a number of options available for the production of protein-based polymers. It is our intent that this strategy will aid in advancing the production of protein-based brush-forming polymers.
Race and Older Mothers’ Differentiation: A Sequential Quantitative and Qualitative Analysis
Sechrist, Jori; Suitor, J. Jill; Riffin, Catherine; Taylor-Watson, Kadari; Pillemer, Karl
2011-01-01
The goal of this paper is to demonstrate a process by which qualitative and quantitative approaches are combined to reveal patterns in the data that are unlikely to be detected and confirmed by either method alone. Specifically, we take a sequential approach to combining qualitative and quantitative data to explore race differences in how mothers differentiate among their adult children. We began with a standard multivariate analysis examining race differences in mothers’ differentiation among their adult children regarding emotional closeness and confiding. Finding no race differences in this analysis, we conducted an in-depth comparison of the Black and White mothers’ narratives to determine whether there were underlying patterns that we had been unable to detect in our first analysis. Using this method, we found that Black mothers were substantially more likely than White mothers to emphasize interpersonal relationships within the family when describing differences among their children. In our final step, we developed a measure of familism based on the qualitative data and conducted a multivariate analysis to confirm the patterns revealed by the in-depth comparison of the mother’s narratives. We conclude that using such a sequential mixed methods approach to data analysis has the potential to shed new light on complex family relations. PMID:21967639
2013-05-01
and diazepam with and without pretreatment with pyridostigmine bromide. The 24 hr median lethal dose (MLD) of VM was determined using a sequential stage approach. The efficacy of medical...with and without pyridostigmine bromide (PB) pretreatment against lethal intoxication with VM, VR or VX. Methods Animals: Adult male Hartley
Östlund, Ann-Sofi; Wadensten, Barbro; Häggström, Elisabeth; Lindqvist, Helena; Kristofferzon, Marja-Leena
2016-11-01
The aim of this study was to describe what verbal behaviours/kinds of talk occur during recorded motivational interviewing sessions between nurses in primary care and their patients. The aim was also to examine what kinds of nurse talk predict patient change talk, neutral talk and/or sustain talk. Motivational interviewing is a collaborative conversational style. It has been shown to be effective in addressing health behaviours such as diet, exercise, weight loss and chronic disease management. In Sweden, it is one of the approaches to disease prevention conversations with patients recommended in the National Guidelines for Disease Prevention. Research on the mechanisms underlying motivational interviewing is growing, but research on motivational interviewing and disease prevention has also been called for. A descriptive and predictive design was used. Data were collected during 2011-2014. Fifty audio-recorded motivational interviewing sessions between 23 primary care nurses and 50 patients were analysed using the Motivational Interviewing Sequential Code for Observing Process Exchanges. The frequency of specific kinds of talk and sequential analysis (to predict patient talk from nurse talk) were computed using the software Generalized Sequential Querier 5. The primary care nurses and patients used neutral talk most frequently. Open and negative questions and complex and positive reflections were significantly more likely to be followed by change talk, and motivational interviewing-inconsistent talk, positive questions and negative reflections by sustain talk. To increase patients' change talk, primary care nurses need to use more open questions, complex reflections, and questions and reflections directed towards change. © 2016 John Wiley & Sons Ltd.
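The sequential part of such an analysis reduces to tabulating lag-1 transitions from nurse codes to the immediately following patient codes. The simplified sketch below uses placeholder category names and does not reproduce the software's adjusted-residual statistics.

```python
from collections import Counter

def lag1_transition_probs(codes):
    """Lag-1 sequential analysis sketch: for each nurse code, the
    conditional probability that the immediately following utterance
    is patient change talk, sustain talk, or neutral talk.
    `codes` is the ordered list of (speaker, code) pairs for a session."""
    pair_counts, given_counts = Counter(), Counter()
    for (s1, c1), (s2, c2) in zip(codes, codes[1:]):
        if s1 == "nurse" and s2 == "patient":
            pair_counts[(c1, c2)] += 1
            given_counts[c1] += 1
    return {pair: n / given_counts[pair[0]] for pair, n in pair_counts.items()}

session = [("nurse", "open question"), ("patient", "change talk"),
           ("nurse", "negative reflection"), ("patient", "sustain talk")]
print(lag1_transition_probs(session))
```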
Classification and assessment tools for structural motif discovery algorithms.
Badr, Ghada; Al-Turaiki, Isra; Mathkour, Hassan
2013-01-01
Motif discovery is the problem of finding recurring patterns in biological data. Patterns can be sequential, mainly when discovered in DNA sequences. They can also be structural (e.g. when discovering RNA motifs). Finding common structural patterns helps to gain a better understanding of the mechanism of action (e.g. post-transcriptional regulation). Unlike DNA motifs, which are sequentially conserved, RNA motifs exhibit conservation in structure, which may be common even if the sequences are different. Over the past few years, hundreds of algorithms have been developed to solve the sequential motif discovery problem, while less work has been done for the structural case. In this paper, we survey, classify, and compare different algorithms that solve the structural motif discovery problem, where the underlying sequences may be different. We highlight their strengths and weaknesses. We start by proposing a benchmark dataset and a measurement tool that can be used to evaluate different motif discovery approaches. Then, we proceed by proposing our experimental setup. Finally, results are obtained using the proposed benchmark to compare available tools. To the best of our knowledge, this is the first attempt to compare tools solely designed for structural motif discovery. Results show that the accuracy of discovered motifs is relatively low. The results also suggest a complementary behavior among tools where some tools perform well on simple structures, while other tools are better for complex structures. We have classified and evaluated the performance of available structural motif discovery tools. In addition, we have proposed a benchmark dataset with tools that can be used to evaluate newly developed tools.
Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks
Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi
2017-01-01
In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods. PMID:28146106
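A minimal sketch of a CBLSTM-style network is shown below, assuming PyTorch; the channel counts, kernel width and seven-channel input are illustrative placeholders rather than the architecture reported in the paper.

```python
# Minimal CBLSTM-style sketch (illustrative sizes, not the paper's exact
# architecture): a 1-D CNN extracts local features from raw sensory
# sequences, a bi-directional LSTM encodes temporal context, and stacked
# fully-connected layers regress the target value (e.g. tool wear).
import torch
import torch.nn as nn

class CBLSTM(nn.Module):
    def __init__(self, in_channels=7, conv_channels=32, lstm_hidden=64):
        super().__init__()
        # Local feature extraction over the time axis
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bi-directional LSTM captures past and future context
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True,
                            bidirectional=True)
        # Stacked fully-connected layers plus linear regression output
        self.head = nn.Sequential(
            nn.Linear(2 * lstm_hidden, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):                      # x: (batch, time, channels)
        z = self.cnn(x.transpose(1, 2))        # -> (batch, conv_channels, time/2)
        out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, time/2, 2*hidden)
        return self.head(out[:, -1])           # regress from the last time step

model = CBLSTM()
wear = model(torch.randn(8, 200, 7))  # 8 sequences, 200 steps, 7 sensor channels
print(wear.shape)                     # torch.Size([8, 1])
```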
BEopt - Building Energy Optimization. NREL - National Renewable Energy Laboratory. The BEopt (Building Energy Optimization) software provides capabilities to evaluate residential building designs and identify optimal designs; the sequential search optimization technique used by BEopt finds minimum-cost building designs at different...
When to Use What Research Design
ERIC Educational Resources Information Center
Vogt, W. Paul; Gardner, Dianne C.; Haeffele, Lynne M.
2012-01-01
Systematic, practical, and accessible, this is the first book to focus on finding the most defensible design for a particular research question. Thoughtful guidelines are provided for weighing the advantages and disadvantages of various methods, including qualitative, quantitative, and mixed methods designs. The book can be read sequentially or…
Designing User-Computer Dialogues: Basic Principles and Guidelines.
ERIC Educational Resources Information Center
Harrell, Thomas H.
This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…
Photo-crosslinkable polymers for fabrication of photonic multilayer sensors
NASA Astrophysics Data System (ADS)
Chiappelli, Maria; Hayward, Ryan C.
2013-03-01
We have used photo-crosslinkable polymers to fabricate photonic multilayer sensors. Benzophenone is utilized as a covalently incorporated pendent photo-crosslinker, providing a convenient means of fabricating multilayer films by sequential spin-coating and crosslinking processes. Colorimetric temperature sensors were designed from thermally-responsive, low-refractive index poly(N-isopropylacrylamide) (PNIPAM) and high-refractive index poly(para-methyl styrene) (PpMS). Copolymer chemistries and layer thicknesses were selected to provide robust multilayer sensors which show color changes across nearly the full visible spectrum due to changes in temperature of the hydrated film stack. We have characterized the uniformity and interfacial broadening within the multilayers, the kinetics of swelling and de-swelling, and the reversibility over multiple hydration/dehydration cycles. We also describe how the approach can be extended to alternative sensor designs through the ability to tailor each layer independently, as well as to additional stimuli by selecting alternative copolymer chemistries.
Le Bras, Fabien; Molinier-Frenkel, Valerie; Guellich, Aziz; Dupuis, Jehan; Belhadj, Karim; Guendouz, Soulef; Ayad, Karima; Colombat, Magali; Benhaiem, Nicole; Tissot, Claire Marie; Hulin, Anne; Jaccard, Arnaud; Damy, Thibaud
2017-05-01
Chemotherapy combining cyclophosphamide, bortezomib and dexamethasone is widely used in light-chain amyloidosis. The benefit is limited in patients with cardiac amyloidosis, mainly because of adverse cardiac events. Retrospective analysis of our cohort showed that 39 patients died, 42% of them during the first month. A new escalation-sequential regimen was designed to improve these outcomes. Nine newly-diagnosed patients were prospectively treated with close monitoring of serum N-terminal pro-brain natriuretic peptide, troponin-T and free light chains. The results show that corticoids may destabilise the heart through fluid retention. Thus, a sequential protocol may be a promising approach to treat these patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wild, Aaron T; Gandhi, Nishant; Chettiar, Sivarajan T; Aziz, Khaled; Gajula, Rajendra P; Williams, Russell D; Kumar, Rachit; Taparra, Kekoa; Zeng, Jing; Cades, Jessica A; Velarde, Esteban; Menon, Siddharth; Geschwind, Jean F; Cosgrove, David; Pawlik, Timothy M; Maitra, Anirban; Wong, John; Hales, Russell K; Torbenson, Michael S; Herman, Joseph M; Tran, Phuoc T
2013-01-01
Sorafenib (SOR) is the only systemic agent known to improve survival for hepatocellular carcinoma (HCC). However, SOR prolongs survival by less than 3 months and does not alter symptomatic progression. To improve outcomes, several phase I-II trials are currently examining SOR with radiation (RT) for HCC utilizing heterogeneous concurrent and sequential treatment regimens. Our study provides preclinical data characterizing the effects of concurrent versus sequential RT-SOR on HCC cells both in vitro and in vivo. Concurrent and sequential RT-SOR regimens were tested for efficacy among 4 HCC cell lines in vitro by assessment of clonogenic survival, apoptosis, cell cycle distribution, and γ-H2AX foci formation. Results were confirmed in vivo by evaluating tumor growth delay and performing immunofluorescence staining in a hind-flank xenograft model. In vitro, concurrent RT-SOR produced radioprotection in 3 of 4 cell lines, whereas sequential RT-SOR produced decreased colony formation among all 4. Sequential RT-SOR increased apoptosis compared to RT alone, while concurrent RT-SOR did not. Sorafenib induced reassortment into less radiosensitive phases of the cell cycle through G1-S delay and cell cycle slowing. More double-strand breaks (DSBs) persisted 24 h post-irradiation for RT alone versus concurrent RT-SOR. In vivo, sequential RT-SOR produced the greatest tumor growth delay, while concurrent RT-SOR was similar to RT alone. More persistent DSBs were observed in xenografts treated with sequential RT-SOR or RT alone versus concurrent RT-SOR. Sequential RT-SOR additionally produced a greater reduction in xenograft tumor vascularity and mitotic index than either concurrent RT-SOR or RT alone. In conclusion, sequential RT-SOR demonstrates greater efficacy against HCC than concurrent RT-SOR both in vitro and in vivo. These results may have implications for clinical decision-making and prospective trial design.
Maffei, D F; Sant'Ana, A S; Monteiro, G; Schaffner, D W; Franco, B D G M
2016-06-01
This study evaluated the impact of sodium dichloroisocyanurate (5, 10, 20, 30, 40, 50 and 250 mg l-1) in wash water on transfer of Salmonella Typhimurium from contaminated lettuce to wash water and then to other noncontaminated lettuces washed sequentially in the same water. Experiments were designed mimicking the conditions commonly seen in minimally processed vegetable (MPV) processing plants in Brazil. The scenarios were as follows: (1) washing one inoculated lettuce portion in nonchlorinated water, followed by washing 10 noninoculated portions sequentially; (2) washing one inoculated lettuce portion in chlorinated water, followed by washing five noninoculated portions sequentially; (3) washing five inoculated lettuce portions in chlorinated water sequentially, followed by washing five noninoculated portions sequentially; (4) washing five noninoculated lettuce portions in chlorinated water sequentially, followed by washing five inoculated portions sequentially and then washing five noninoculated portions sequentially in the same water. Salm. Typhimurium transfer from inoculated lettuce to wash water and further dissemination to noninoculated lettuces occurred when nonchlorinated water was used (scenario 1). When chlorinated water was used (scenarios 2, 3 and 4), no measurable Salm. Typhimurium transfer occurred if the sanitizer concentration was ≥10 mg l-1. Use of sanitizers at correct concentrations is important to minimize the risk of microbial transfer during MPV washing. In this study, the impact of sodium dichloroisocyanurate in the wash water on transfer of Salmonella Typhimurium from inoculated lettuce to wash water and then to other noninoculated lettuces washed sequentially in the same water was evaluated. The use of chlorinated water at concentrations above 10 mg l-1 effectively prevented Salm. Typhimurium transfer under several different washing scenarios. Conversely, when nonchlorinated water was used, Salm. Typhimurium transfer occurred in at least 10 noninoculated batches of lettuce washed sequentially in the same water. © 2016 The Society for Applied Microbiology.
Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras
2016-07-01
OBJECT Currently, bilateral procedures involve 2 sequential implants in each of the hemispheres. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional stereotactic procedures.
The PMHT: solutions for some of its problems
NASA Astrophysics Data System (ADS)
Wieneke, Monika; Koch, Wolfgang
2007-09-01
Tracking multiple targets in a cluttered environment is a challenging task. Probabilistic Multiple Hypothesis Tracking (PMHT) is an efficient approach for dealing with it. Essentially, PMHT is based on the method of Expectation-Maximization for handling association conflicts. Linearity in the number of targets and measurements is the main motivation for a further development and extension of this methodology. Unfortunately, compared with the Probabilistic Data Association Filter (PDAF), PMHT has not yet shown its superiority in terms of track-lost statistics. Furthermore, the problem of track extraction and deletion is apparently not yet satisfactorily solved within this framework. Four properties of PMHT are responsible for its problems in track maintenance: Non-Adaptivity, Hospitality, Narcissism and Local Maxima.1,2 In this work we present a solution for each of them and derive an improved PMHT by integrating the solutions into the PMHT formalism. The new PMHT is evaluated by Monte-Carlo simulations. A sequential Likelihood-Ratio (LR) test for track extraction has been developed and already integrated into the framework of traditional Bayesian Multiple Hypothesis Tracking.3 As a multi-scan approach, the PMHT methodology also has the potential for track extraction. In this paper an analogous integration of a sequential LR test into the PMHT framework is proposed. We present an LR formula for track extraction and deletion using the PMHT update formulae. As PMHT provides all required ingredients for a sequential LR calculation, the LR is a by-product of the PMHT iteration process. Therefore the resulting update formula for the sequential LR test affords the development of Track-Before-Detect algorithms for PMHT. The approach is illustrated by a simple example.
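The flavour of such a sequential LR test can be conveyed with a generic Wald sequential probability ratio test; in the sketch below (Python), a simple Gaussian-mean hypothesis pair stands in for the PMHT-specific likelihood ratio, and the thresholds follow the usual Wald approximations.

```python
# Generic Wald sequential probability ratio test sketch for extract/delete
# style decisions (target present vs clutter only); the PMHT-specific LR
# update of the paper is replaced by a simple Gaussian-mean hypothesis test.
import numpy as np

rng = np.random.default_rng(7)

def sprt(obs, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    a = np.log(beta / (1 - alpha))      # lower (delete) threshold
    b = np.log((1 - beta) / alpha)      # upper (extract) threshold
    llr = 0.0
    for k, y in enumerate(obs, 1):
        # log-likelihood ratio of N(mu1, sigma) vs N(0, sigma)
        llr += (mu1 * y - mu1**2 / 2) / sigma**2
        if llr >= b:
            return "extract track", k
        if llr <= a:
            return "delete track", k
    return "undecided", len(obs)

print(sprt(rng.normal(1.0, 1.0, 100)))   # data from the "target" hypothesis
print(sprt(rng.normal(0.0, 1.0, 100)))   # clutter-only data
```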
NASA Astrophysics Data System (ADS)
Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria
2017-08-01
Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
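A bootstrap particle filter on a toy scalar state-space model illustrates the sequential Monte Carlo machinery referred to above; the linear-Gaussian model and noise levels below are invented stand-ins for the authors' coupled EEG-fNIRS forward models.

```python
# Generic bootstrap particle filter on a toy scalar state-space model;
# illustrates the sequential Monte Carlo (PF) machinery, not the authors'
# EEG/fNIRS neurovascular model.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=100, q=0.1, r=0.5):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.9 * x[t-1] + q * rng.standard_normal()
    y = x + r * rng.standard_normal(T)      # noisy measurements
    return x, y

def particle_filter(y, n=500, q=0.1, r=0.5):
    particles = rng.standard_normal(n)
    est = np.zeros(len(y))
    for t in range(len(y)):
        # Propagate particles through the state transition (prediction)
        particles = 0.9 * particles + q * rng.standard_normal(n)
        # Weight by the measurement likelihood (update)
        w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
        w /= w.sum()
        est[t] = np.sum(w * particles)
        # Multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, size=n, p=w)
    return est

x, y = simulate()
x_hat = particle_filter(y)
print("RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))
```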
Testing Quantum Models of Conjunction Fallacy on the World Wide Web
NASA Astrophysics Data System (ADS)
Aerts, Diederik; Arguëlles, Jonito Aerts; Beltran, Lester; Beltran, Lyneth; de Bianchi, Massimiliano Sassoli; Sozzo, Sandro; Veloz, Tomas
2017-12-01
The `conjunction fallacy' has been extensively debated by scholars in cognitive science and, in recent times, the discussion has been enriched by the proposal of modeling the fallacy using the quantum formalism. Two major quantum approaches have been put forward: the first assumes that respondents use a two-step sequential reasoning and that the fallacy results from the presence of `question order effects'; the second assumes that respondents evaluate the cognitive situation as a whole and that the fallacy results from the `emergence of new meanings', as an `effect of overextension' in the conceptual conjunction. Thus, the question arises as to whether, and to what extent, conjunction fallacies result from `order effects' or, instead, from `emergence effects'. To help clarify this situation, we propose to use the World Wide Web as an `information space' that can be interrogated both in a sequential and non-sequential way, to test these two quantum approaches. We find that `emergence effects', and not `order effects', should be considered the main cognitive mechanism producing the observed conjunction fallacies.
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve efficiency, such as in random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and a longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampled at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
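The following Python sketch illustrates the flavour of this construction: random sample instants are related to a uniform Nyquist grid through the Whittaker-Shannon interpolation formula, and a frequency-sparse signal is recovered with a small orthogonal matching pursuit loop. The sizes, rates and OMP solver are illustrative assumptions, not the paper's hardware setup or reconstruction algorithm.

```python
# Illustrative sketch of CS-based equivalent-time sampling: non-uniform
# sample instants are tied to a uniform Nyquist grid via Whittaker-Shannon
# interpolation, giving a measurement matrix; a frequency-sparse signal is
# then recovered with a small OMP solver. Toy sizes, not the paper's setup.
import numpy as np

rng = np.random.default_rng(1)
N, M, k = 256, 64, 3                         # grid size, measurements, sparsity

F = np.fft.fft(np.eye(N)) / np.sqrt(N)       # unitary DFT matrix
s = np.zeros(N, complex)                     # sparse spectrum
idx = rng.choice(N, k, replace=False)
s[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = F.conj().T @ s                           # time-domain signal on the grid

t = np.sort(rng.uniform(0, N, M))            # random sample instants
A = np.sinc(t[:, None] - np.arange(N)[None, :])   # Whittaker-Shannon rows
y = A @ x                                    # acquired samples

# Orthogonal matching pursuit on the combined matrix Phi = A F^H
Phi = A @ F.conj().T
resid, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.conj().T @ resid))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    resid = y - Phi[:, support] @ coef
s_hat = np.zeros(N, complex)
s_hat[support] = coef
print("true support:", sorted(idx), "recovered:", sorted(support))
print("relative error:", np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```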
In-situ sequential laser transfer and laser reduction of graphene oxide films
NASA Astrophysics Data System (ADS)
Papazoglou, S.; Petridis, C.; Kymakis, E.; Kennou, S.; Raptis, Y. S.; Chatzandroulis, S.; Zergioti, I.
2018-04-01
Achieving high quality transfer of graphene on selected substrates is a priority in device fabrication, especially where drop-on-demand applications are involved. In this work, we report an in-situ, fast, simple, one-step process that resulted in the reduction, transfer, and fabrication of reduced graphene oxide-based humidity sensors, using picosecond laser pulses. By tuning the laser illumination parameters, we managed to implement the sequential printing and reduction of graphene oxide flakes. The overall process lasted only a few seconds, compared with the few hours required by the approach our group has previously published. DC current measurements, X-Ray Photoelectron Spectroscopy, X-Ray Diffraction, and Raman Spectroscopy were employed in order to assess the efficiency of our approach. To demonstrate the applicability and the potential of the technique, laser printed reduced graphene oxide humidity sensors with a limit of detection of 1700 ppm are presented. The results demonstrated in this work provide a selective, rapid, and low-cost approach for sequential transfer and photochemical reduction of graphene oxide micro-patterns onto various substrates for flexible electronics and sensor applications.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
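The pre-sampling simulation idea can be sketched briefly: draw per-tree counts from a negative binomial with an assumed clumping parameter, then check how quickly the mean of fixed-size random samples converges on the true density. The parameter values below are hypothetical, not the P. tumifex estimates.

```python
# Pre-sampling simulation sketch: draw per-tree counts from a negative
# binomial (density and clumping parameter k are hypothetical), then check
# how fast fixed-size random samples converge on the true mean density --
# the kind of plan assessment the authors run before field sampling.
import numpy as np

rng = np.random.default_rng(2)
true_mean, k = 4.0, 0.8                          # hypothetical density and clumping
p = k / (k + true_mean)                          # NB parameterisation: mean = k(1-p)/p
stand = rng.negative_binomial(k, p, size=2000)   # simulated "pre-sample" stand

for n in (10, 25, 40, 100):
    means = [rng.choice(stand, n, replace=False).mean() for _ in range(1000)]
    print(f"n={n:4d}  mean={np.mean(means):.2f}  sd={np.std(means):.2f}")
```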
Fu, H; Zhang, J-J; Xu, Y; Chao, H-J; Zhou, N-Y
2017-03-01
The ortho-nitrophenol (ONP)-utilizing Alcaligenes sp. strain NyZ215, meta-nitrophenol (MNP)-utilizing Cupriavidus necator JMP134 and para-nitrophenol (PNP)-utilizing Pseudomonas sp. strain WBC-3 were assembled as a consortium to degrade three nitrophenol isomers in sequential batch reactors. Pilot test was conducted in flasks to demonstrate that a mixture of three mononitrophenols at 0·5 mol l -1 each could be mineralized by this microbial consortium within 84 h. Interestingly, neither ONP nor MNP was degraded until PNP was almost consumed by strain WBC-3. By immobilizing this consortium into polyurethane cubes, all three mononitrophenols were continuously degraded in lab-scale sequential reactors for six batch cycles over 18 days. Total concentrations of ONP, MMP and PNP that were degraded were 2·8, 1·5 and 2·3 mol l -1 during this time course respectively. Quantitative real-time PCR analysis showed that each member in the microbial consortium was relatively stable during the entire degradation process. This study provides a novel approach to treat polluted water, particularly with a mixture of co-existing isomers. Nitroaromatic compounds are readily spread in the environment and pose great potential toxicity concerns. Here, we report the simultaneous degradation of three isomers of mononitrophenol in a single system by employing a consortium of three bacteria, both in flasks and lab-scale sequential batch reactors. The results demonstrate that simultaneous biodegradation of three mononitrophenol isomers can be achieved by a tailor-made microbial consortium immobilized in sequential batch reactors, providing a pilot study for a novel approach for the bioremediation of mixed pollutants, especially isomers present in wastewater. © 2016 The Society for Applied Microbiology.
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where the weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies were performed to evaluate the proposed approach. The study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
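A one-dimensional toy version conveys the reweighting idea: each outer pass recomputes the weights from the current gradients, so the weighted TV increasingly favours sparse gradients. The smoothed absolute value and plain gradient descent below stand in for the authors' POCS/steepest-descent solver, and all parameters are illustrative.

```python
# 1-D reweighted-TV denoising sketch: each outer pass recomputes weights
# w = 1/(|Du| + eps) from the current solution, sharpening gradient sparsity.
# Smoothed |.| plus plain gradient descent stand in for the paper's
# POCS/steepest-descent solver; this is an illustration, not their algorithm.
import numpy as np

rng = np.random.default_rng(3)
truth = np.repeat([0.0, 1.0, 0.3, 0.8], 50)        # piecewise-constant signal
y = truth + 0.1 * rng.standard_normal(truth.size)  # noisy measurements

u, lam, eps = y.copy(), 0.5, 0.1
for outer in range(5):                       # sequential reweighting passes
    w = 1.0 / (np.abs(np.diff(u)) + eps)     # weights from current solution
    for _ in range(800):                     # inner smoothed-TV descent
        d = np.diff(u)
        s = w * d / np.sqrt(d * d + eps**2)  # derivative of smoothed weighted TV
        g_tv = np.zeros_like(u)
        g_tv[:-1] -= s
        g_tv[1:] += s
        u -= 0.005 * ((u - y) + lam * g_tv)  # data fidelity + weighted TV step
print("RMSE:", np.sqrt(np.mean((u - truth) ** 2)))
```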
THRESHOLD LOGIC SYNTHESIS OF SEQUENTIAL MACHINES.
The application of threshold logic to the design of sequential machines is the subject of this research. A single layer of threshold logic units in... advantages of fewer components because of the use of threshold logic, along with very high-speed operation resulting from the use of only a single layer of... logic. In some instances, namely for asynchronous machines, the only delay need be the natural delay of the single layer of threshold elements. It is...
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
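A much-simplified heuristic in the spirit of this parallel design idea can be sketched as follows: sample the bounded parameter space without any initial point estimate, propagate a toy model, and select the measurement times where the ensemble of predicted responses is most uncertain. The exponential-decay model and plain Monte Carlo sampling below replace the paper's sparse-grid and scenario-tree machinery.

```python
# Much-simplified heuristic in the spirit of parallel, model-based DOE:
# sample the bounded parameter space (no initial point estimate needed),
# propagate a toy model, and pick the measurement times where the ensemble
# of predicted responses spreads the most. Plain Monte Carlo replaces the
# paper's sparse-grid/scenario-tree machinery.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 101)
theta = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(500, 2))  # bounded 2-D space

# Toy model: y(t) = a * exp(-b t) for parameters (a, b)
Y = theta[:, :1] * np.exp(-theta[:, 1:] * t[None, :])

spread = Y.std(axis=0)                  # predictive uncertainty at each time
design = np.sort(t[np.argsort(spread)[-3:]])   # 3 most informative times
print("suggested design points:", design)
```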
Adaptive designs in clinical trials.
Bowalekar, Suresh
2011-01-01
In addition to the expensive and lengthy process of developing a new medicine, the attrition rate in clinical research was on the rise, resulting in stagnation in the development of new compounds. As a consequence, the US Food and Drug Administration released a critical path initiative document in 2004, highlighting the need for developing innovative trial designs. One of the suggested innovations was the use of adaptive designs for clinical trials. Thus, post critical path initiative, there is a growing interest in using adaptive designs for the development of pharmaceutical products. Adaptive designs are expected to have great potential to reduce the number of patients and the duration of a trial, and to limit exposure to the new drug. Adaptive designs are not new in the sense that the task of interim analysis (IA)/review of the accumulated data used in adaptive designs existed in the past too. However, such reviews/analyses of accumulated data were not necessarily planned at the stage of planning the clinical trial, and the methods used were not necessarily compliant with the clinical trial process. The Bayesian approach commonly used in adaptive designs was developed by Thomas Bayes in the 18th century, about a hundred years prior to the development of modern statistical methods by the father of modern statistics, Sir Ronald A. Fisher, but the complexity involved in the Bayesian approach prevented its use in real-life practice. The advances in the field of computer and information technology over the last three to four decades have changed the scenario, and Bayesian techniques are now being used in adaptive designs in addition to other sequential methods used in IA. This paper attempts to describe the various adaptive designs in clinical trials and the views of stakeholders about the feasibility of using them, without going into mathematical complexities.
Economic Factors in Tunnel Construction
DOT National Transportation Integrated Search
1979-02-01
This report describes a new cost estimating system for tunneling. The system is designed so that it may be used to aid planners, engineers, and designers in evaluating the cost impact of decisions they may make during the sequential stages of planning...
Optical and structural properties of cobalt-permalloy slanted columnar heterostructure thin films
NASA Astrophysics Data System (ADS)
Sekora, Derek; Briley, Chad; Schubert, Mathias; Schubert, Eva
2017-11-01
Optical and structural properties of sequential Co-column-NiFe-column slanted columnar heterostructure thin films with an Al2O3 passivation coating are reported. Electron-beam evaporated glancing angle deposition is utilized to deposit the sequential multiple-material slanted columnar heterostructure thin films. Mueller matrix generalized spectroscopic ellipsometry data is analyzed with a best-match model approach employing the anisotropic Bruggeman effective medium approximation formalism to determine bulk-like and anisotropic optical and structural properties of the individual Co and NiFe slanted columnar material sub-layers. Scanning electron microscopy is applied to image the Co-NiFe sequential growth properties and to verify the results of the ellipsometric analysis. Comparisons to single-material slanted columnar thin films and optically bulk solid thin films are presented and discussed. We find that the optical and structural properties of each material sub-layer of the sequential slanted columnar heterostructure film are distinct from each other and resemble those of their respective single-material counterparts.
Mote, Kaustubh R; Gopinath, T; Traaseth, Nathaniel J; Kitchen, Jason; Gor'kov, Peter L; Brey, William W; Veglia, Gianluigi
2011-11-01
Oriented solid-state NMR is the most direct methodology to obtain the orientation of membrane proteins with respect to the lipid bilayer. The method consists of measuring (1)H-(15)N dipolar couplings (DC) and (15)N anisotropic chemical shifts (CSA) for membrane proteins that are uniformly aligned with respect to the membrane bilayer. A significant advantage of this approach is that tilt and azimuthal (rotational) angles of the protein domains can be directly derived from analytical expression of DC and CSA values, or, alternatively, obtained by refining protein structures using these values as harmonic restraints in simulated annealing calculations. The Achilles' heel of this approach is the lack of suitable experiments for sequential assignment of the amide resonances. In this Article, we present a new pulse sequence that integrates proton driven spin diffusion (PDSD) with sensitivity-enhanced PISEMA in a 3D experiment ([(1)H,(15)N]-SE-PISEMA-PDSD). The incorporation of 2D (15)N/(15)N spin diffusion experiments into this new 3D experiment leads to the complete and unambiguous assignment of the (15)N resonances. The feasibility of this approach is demonstrated for the membrane protein sarcolipin reconstituted in magnetically aligned lipid bicelles. Taken with low electric field probe technology, this approach will propel the determination of sequential assignment as well as structure and topology of larger integral membrane proteins in aligned lipid bilayers. © Springer Science+Business Media B.V. 2011
Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh
2009-01-01
This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The NDVI variography results demonstrate that the spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variogram analysis, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
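As an illustration of the interpolation step, the sketch below performs ordinary kriging with an exponential covariance model at a single prediction location; the variogram parameters and sample values are hypothetical, not fitted to the SPOT NDVI images, and sequential Gaussian simulation would add conditional random draws on top of this.

```python
# Ordinary kriging sketch with an exponential covariance model, standing in
# for the geostatistical interpolation step; variogram parameters and sample
# values are hypothetical, not fitted to the SPOT NDVI images.
import numpy as np

rng = np.random.default_rng(5)

def exp_cov(h, sill=1.0, rng_par=20.0):
    return sill * np.exp(-h / rng_par)     # covariance form of the variogram

xy = rng.uniform(0, 100, (50, 2))          # sampled locations (e.g. cLHS picks)
z = np.sin(xy[:, 0] / 15) + 0.1 * rng.standard_normal(50)

def krige(x0):
    d = np.linalg.norm(xy - x0, axis=1)                  # point-to-sample distances
    D = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)  # sample-to-sample
    n = len(z)
    A = np.ones((n + 1, n + 1))            # ones row/column = unbiasedness constraint
    A[:n, :n] = exp_cov(D)
    A[-1, -1] = 0.0
    b = np.append(exp_cov(d), 1.0)
    w = np.linalg.solve(A, b)              # kriging weights + Lagrange multiplier
    return w[:n] @ z

print("kriged value at (50, 50):", krige(np.array([50.0, 50.0])))
```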
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized using multivariate mathematical tools. Pareto charts generated from a 2^3 full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%) and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps showed consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To guard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notorious acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
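The 2^3 factorial analysis step can be illustrated compactly: build the coded design matrix and estimate each main effect as the difference between the mean responses at the high and low levels. The factor names follow the abstract, but the response values below are hypothetical.

```python
# Sketch of a 2^3 full factorial analysis: build the coded design matrix and
# estimate main effects from responses. Factor names follow the abstract
# (temperature, coal amount, time); the yield values are hypothetical, chosen
# so that "time" has a small effect, consistent with the abstract.
import itertools
import numpy as np

factors = ["temperature", "coal_amount", "time"]
X = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 coded runs

# Hypothetical extraction yields (%), one per run, ordered as in X
y = np.array([52.8, 53.2, 56.8, 57.2, 62.8, 63.2, 66.8, 67.2])

for name, col in zip(factors, X.T):
    effect = y[col == 1].mean() - y[col == -1].mean()   # high minus low mean
    print(f"{name:12s} effect = {effect:+.2f}")
```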
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. Holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
A New Control Paradigm for Stochastic Differential Equations
NASA Astrophysics Data System (ADS)
Schmid, Matthias J. A.
This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. 
This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Veawab, A.
2013-03-01
This study proposes a sequential factorial analysis (SFA) approach for supporting regional air quality management under uncertainty. SFA is capable not only of examining the interactive effects of input parameters, but also of analyzing the effects of constraints. When there are too many factors involved in practical applications, SFA has the advantage of conducting a sequence of factorial analyses for characterizing the effects of factors in a systematic manner. The factor-screening strategy employed in SFA is effective in greatly reducing the computational effort. The proposed SFA approach is applied to a regional air quality management problem for demonstrating its applicability. The results indicate that the effects of factors are evaluated quantitatively, which can help decision makers identify the key factors that have significant influence on system performance and explore the valuable information that may be veiled beneath their interrelationships.
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm builds on the approach proposed in [1], with the objective of eliminating the need for a large number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using many intervals. The use of fewer time intervals reduces the computation time considerably. The algorithm is then used in this thesis to solve a trajectory planning problem for a higher-elevation Mars landing.
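A toy transcription shows the general shape of such a method: discretize the controls on a time grid, enforce the terminal condition as an equality constraint, and hand the resulting nonlinear program to an SQP solver. Below, scipy's SLSQP plays that role on a double-integrator problem with a fixed (non-adaptive) grid; the dynamics, bounds and grid are invented for illustration.

```python
# Toy direct transcription of an optimal control problem solved by SQP:
# a double integrator must reach the origin with minimum control effort,
# with the control bounded on a fixed time grid. scipy's SLSQP stands in
# for the thesis's SQP method; the adaptive discretization is not reproduced.
import numpy as np
from scipy.optimize import minimize

N, dt = 40, 0.1

def rollout(u):
    # Integrate (position, velocity) dynamics from x0 = (1, 0) with explicit
    # Euler; the returned final state must be driven to the origin.
    x = np.array([1.0, 0.0])
    for uk in u:
        x = x + dt * np.array([x[1], uk])
    return x

res = minimize(lambda u: dt * np.sum(u ** 2),      # control-effort objective
               np.zeros(N), method="SLSQP",
               bounds=[(-2.0, 2.0)] * N,           # path constraint on the control
               constraints={"type": "eq", "fun": rollout})
print(res.success, np.round(rollout(res.x), 4))
```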
ERIC Educational Resources Information Center
Gibson, Michael R.
2016-01-01
"Designing backwards" is presented here as a means to utilize human-centered processes in diverse educational settings to help teachers and students learn to formulate and operate design processes to achieve three sequential and interrelated goals. The first entails teaching them to effectively and empathetically identify, frame and…
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
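For intuition, the Monte Carlo sketch below computes a p-value under the stage-at-stopping (stagewise) ordering for a hypothetical two-stage design: outcomes that stop earlier with a larger statistic are counted as more extreme. The thresholds and observed result are invented; the likelihood ratio and sample mean orderings discussed above would rank the outcome space differently.

```python
# Monte Carlo sketch of a stagewise-ordering p-value after a two-stage group
# sequential test (stop at stage 1 if Z1 >= 2.5). Thresholds and the observed
# result are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(6)
B, c1 = 200_000, 2.5
z1 = rng.standard_normal(B)                       # stage-1 Z under the null
z2 = (z1 + rng.standard_normal(B)) / np.sqrt(2)   # pooled stage-2 Z (equal increments)

stage_obs, z_obs = 2, 2.1                         # hypothetical observed result
if stage_obs == 1:
    p = np.mean(z1 >= z_obs)                      # stopped early: only stage-1 tail
else:
    # Stage-1 rejections plus continuations with a stage-2 Z at least as large
    p = np.mean(z1 >= c1) + np.mean((z1 < c1) & (z2 >= z_obs))
print(f"stagewise-ordering p-value ~ {p:.4f}")
```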
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
Round-the-table teaching: a novel approach to resuscitation education.
McGarvey, Kathryn; Scott, Karen; O'Leary, Fenton
2014-10-01
Effective cardiopulmonary resuscitation saves lives. Health professionals who care for acutely unwell children need to be prepared to care for a child in arrest. Hospitals must ensure that their staff have the knowledge, confidence and ability to respond to a child in cardiac arrest. RESUS4KIDS is a programme designed to teach paediatric resuscitation to health care professionals who care for acutely unwell children. The programme is delivered in two components: an e-learning component for pre-learning, followed by a short, practical, face-to-face course that is taught using the round-the-table teaching approach. Round-the-table teaching is a novel, evidence-based small group teaching approach designed to teach paediatric resuscitation skills and knowledge. Round-the-table teaching uses a structured approach to managing a collapsed child, and ensures that each participant has the opportunity to practise the essential resuscitation skills of airway manoeuvres, bag mask ventilation and cardiac compressions. Round-the-table teaching is an engaging, non-threatening approach to delivering interdisciplinary paediatric resuscitation education. The methodology ensures that all participants have the opportunity to practise each of the different essential skills associated with the Danger, Response, Send for help, Airway, Breathing, Circulation, Defibrillation or rhythm recognition (DRSABCD) approach to the collapsed child, and it can be applied to any topic where participants must demonstrate an understanding of a sequential approach to a clinical skill. © 2014 The Authors. The Clinical Teacher published by Association for the Study of Medical Education and John Wiley & Sons Ltd.
Dissipative rendering and neural network control system design
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.
1995-01-01
Model-based control system designs are limited by the accuracy of the models of the plant, plant uncertainty, and exogenous signals. Although better models can be obtained with system identification, the models and control designs still have limitations. One approach to reduce the dependency on particular models is to design a set of compensators that will guarantee robust stability to a set of plants. Optimization over the compensator parameters can then be used to get the desired performance. Conservativeness of this approach can be reduced by integrating fundamental properties of the plant models. This is the approach of dissipative control design. Dissipative control designs are based on several variations of the Passivity Theorem, which have been proven for nonlinear/linear and continuous-time/discrete-time systems. These theorems depend not on a specific model of a plant, but on its general dissipative properties. Dissipative control design has found wide applicability in flexible space structures and robotic systems that can be configured to be dissipative. Currently, there is ongoing research to improve the performance of dissipative control designs. For aircraft systems that are not dissipative, active control may be used to make them dissipative, after which a dissipative control design technique can be applied. It is also possible that rendering a system dissipative and dissipative control design may be combined into one step. Furthermore, the transformation of a non-dissipative system to a dissipative one can be done robustly. One sequential design procedure for finite dimensional linear time-invariant systems has been developed. For nonlinear plants that cannot be controlled adequately with a single linear controller, model-based techniques have additional problems. Nonlinear system identification is still a research topic. In the absence of analytical models for model-based design, artificial neural network algorithms have recently received considerable attention. Using their universal approximation property, neural networks have been introduced into nonlinear control designs in several ways. Unfortunately, little work has appeared that analyzes neural network control systems and establishes margins for stability and performance. One approach for this analysis is to set up neural network control systems in the framework presented above. For example, one neural network could be used to render a system dissipative, and a second, strictly dissipative neural network controller could be used to guarantee robust stability.
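As a small illustration of the dissipativity idea underlying this abstract, the sketch below checks a positive-real (passivity) condition for a SISO LTI plant on a frequency grid; the transfer function is invented, and a full dissipativity analysis would also need stability and multivariable conditions.

```python
import numpy as np
from scipy import signal

# Positive-real (passivity) check for a SISO LTI plant on a frequency grid:
# a stable system is passive if Re[G(jw)] >= 0 for all w.
G = signal.TransferFunction([1, 2], [1, 3, 2])   # hypothetical plant (s+2)/((s+1)(s+2))
w = np.logspace(-2, 3, 2000)
_, resp = signal.freqresp(G, w)
margin = resp.real.min()
print("min Re[G(jw)] on grid:", margin)
print("passive on grid" if margin >= 0 else "not passive -> render dissipative first")
```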
Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang
2016-01-15
The analytical methodologies for evaluating the multi-component systems of traditional Chinese medicines (TCMs) have been inadequate or unacceptable. As a result, the poor characterization of these multi-component systems hinders interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy for the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits and mass spectral trees similarity filter (MTSF), and isomerism discrimination. It was successfully applied to the HRMS data acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive revelation of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design could improve the efficiency of triggering MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large information data. Besides, the introduction of diagnostic product ions (DPIs) discrimination, ClogP analysis, and molecular simulation raised the efficiency and accuracy of characterizing sequential constituents, especially positional and geometric isomers. In conclusion, the results expanded our understanding of CGAs in FLJ, and the strategy could be exemplary for future research on the comprehensive identification of sequential constituents in TCMs. Meanwhile, it proposes a novel idea for analyzing sequential constituents and is promising for the quality control and evaluation of TCMs. Copyright © 2015 Elsevier B.V. All rights reserved.
Precise algorithm to generate random sequential adsorption of hard polygons at saturation
NASA Astrophysics Data System (ADS)
Zhang, G.
2018-04-01
Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
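The core RSA process described here is easy to sketch. The following toy version uses hard disks rather than polygons and plain rejection sampling, so unlike the paper's algorithm it only approximates the infinite-time saturation limit; the radius and the failure cap are arbitrary choices.

```python
import numpy as np

# Minimal random sequential adsorption of hard disks in the unit square.
rng = np.random.default_rng(1)
r = 0.02
pts = np.empty((0, 2))
failures = 0
while failures < 20000:                    # crude stand-in for the t -> inf limit
    p = rng.uniform(r, 1 - r, size=2)      # keep disks fully inside the box
    if pts.shape[0] == 0 or np.min(np.sum((pts - p) ** 2, axis=1)) >= (2 * r) ** 2:
        pts = np.vstack([pts, p])          # accepted: no overlap with earlier disks
        failures = 0
    else:
        failures += 1
print(f"placed {pts.shape[0]} disks, covered fraction ~ {pts.shape[0] * np.pi * r**2:.3f}")
```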
Optical design of system for a lightship
NASA Astrophysics Data System (ADS)
Chirkov, M. A.; Tsyganok, E. A.
2017-06-01
This article presents the results of the optical design of an illuminating optical system for a lightship using a freeform surface. It describes an algorithm for the optical design of a side-emitting lens for a point source using the Freeform Z function in Zemax non-sequential mode, the optimization of the calculated results, and the testing of the optical system with a real diode.
Space Station Human Factors: Designing a Human-Robot Interface
NASA Technical Reports Server (NTRS)
Rochlis, Jennifer L.; Clarke, John Paul; Goza, S. Michael
2001-01-01
The experiments described in this paper are part of a larger joint MIT/NASA research effort and focus on the development of a methodology for designing and evaluating integrated interfaces for a highly dexterous and multifunctional telerobot. Specifically, a telerobotic workstation is being designed for an Extravehicular Activity (EVA) anthropomorphic space station telerobot called Robonaut. Previous researchers have designed telerobotic workstations based upon performance of discrete subsets of tasks (for example, peg-in-hole, tracking, etc.) without regard for transitions that operators go through between tasks performed sequentially in the context of larger integrated tasks. The experiments presented here took an integrated approach to describing teleoperator performance and assessed how subjects operating a full-immersion telerobot perform during fine position and gross position tasks. In addition, a Robonaut simulation was also developed as part of this research effort and experimentally tested against Robonaut itself to determine its utility. Results show that subject performance of teleoperated tasks using both Robonaut and the simulation is virtually identical, with no significant difference between the two. These results indicate that the simulation can be utilized as both a Robonaut training tool and a powerful design platform for telepresence displays and aids.
Sequential two-column electro-Fenton-photolytic reactor for the treatment of winery wastewater.
Díez, A M; Sanromán, M A; Pazos, M
2017-01-01
The high amount of winery wastewaters produced each year makes their treatment a priority issue due to their problematic characteristics, such as acid pH and high concentrations of organic load and colourful compounds. Furthermore, some of these effluents can contain dissolved pesticides, due to previous grape treatments, which are recalcitrant to conventional treatments. Recently, the photo-electro-Fenton process has been reported as an effective procedure to mineralize different organic contaminants and a promising technology for the treatment of these complex matrices. However, the reactors available for applying this process are scarce and show several limitations. In this study, a sequential two-column reactor for the photo-electro-Fenton treatment was designed and evaluated for the treatment of different pesticides, pirimicarb and pyrimethanil, used in wine production. Both studied pesticides were efficiently removed, and the transformation products were determined. Finally, the treatment of a complex aqueous matrix composed of winery wastewater and the previously studied pesticides was carried out in the designed sequential reactor. The high TOC and COD removals achieved and the low energy consumption demonstrated the efficiency of this new configuration.
NASA DOE POD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate inspection system, personnel, and protocol demonstrating 0.90 POD with 95% confidence at critical flaw sizes, a90/95. The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
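Wald's sequential probability ratio test, which underlies DOEPOD's methodology, is compact enough to sketch. The version below tests a binomial probability of detection with illustrative hypotheses and error rates; it is not DOEPOD's actual decision procedure.

```python
import math

# Wald SPRT for a binomial POD: H0: p = p0 (e.g., POD 0.90) vs H1: p = p1,
# with nominal error rates alpha and beta (all values illustrative).
p0, p1, alpha, beta = 0.90, 0.60, 0.05, 0.05
A, B = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
llr_hit  = math.log(p1 / p0)              # log-likelihood ratio: flaw detected
llr_miss = math.log((1 - p1) / (1 - p0))  # log-likelihood ratio: flaw missed

def sprt(outcomes):
    """outcomes: iterable of 1 (detected) / 0 (missed); returns decision, n used."""
    s = 0.0
    for n, y in enumerate(outcomes, 1):
        s += llr_hit if y else llr_miss
        if s >= A: return "reject H0 (POD below 0.90)", n
        if s <= B: return "accept H0 (POD 0.90 demonstrated)", n
    return "continue testing", len(outcomes)

print(sprt([1, 1, 0, 1, 0, 0, 1, 0, 0, 0]))
```

Note how the number of observations is not fixed in advance: the decision to stop depends at each stage on the accumulated log-likelihood ratio, which is the feature of sequential analysis the abstract highlights.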
Multi-arm group sequential designs with a simultaneous stopping rule.
Urach, S; Posch, M
2016-12-30
Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow treatment effects to be tested at interim analyses and can have a lower average sample number than fixed-sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as the null hypothesis of no treatment effect can be rejected for any of the arms, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, but the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries that maximize the power or minimize the average sample number, and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
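A simulation sketch can show why simultaneous stopping lowers the average sample number (ASN) in this setting. The two-stage, two-arms-plus-control design below uses an illustrative per-look critical value, draws the arm statistics independently (ignoring their correlation through the shared control), and omits the closed testing machinery; it is a caricature of the designs studied, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 64, 2.24                  # patients per arm per stage; illustrative bound
deltas = (0.5, 0.0)              # assumed effects: arm 0 works, arm 1 does not

def one_trial(rule):
    # Stage-1 one-sided Z statistics per arm (independence is a simplification).
    z = {a: rng.normal(deltas[a] * np.sqrt(n / 2), 1.0) for a in (0, 1)}
    rejected = {a for a in (0, 1) if z[a] >= c}
    cost = 3 * n                                   # two arms + control, stage 1
    if rule == "simultaneous" and rejected:
        return cost                                # any rejection stops everything
    ongoing = [a for a in (0, 1) if a not in rejected]
    if not ongoing:
        return cost
    return cost + (len(ongoing) + 1) * n           # continue remaining arms + control

for rule in ("simultaneous", "separate"):
    asn = np.mean([one_trial(rule) for _ in range(20000)])
    print(rule, "ASN ~", round(float(asn)))
```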
Qi, Wenqiang; Chen, Taojing; Wang, Liang; Wu, Minghong; Zhao, Quanyu; Wei, Wei
2017-03-01
In this study, the sequential process of anaerobic fermentation followed by microalgae cultivation was evaluated from both nutrient and energy recovery standpoints. The effects of fermentation type on biogas generation, broth metabolite composition, algal growth and nutrient utilization, and the energy conversion efficiency of the whole process are discussed. When the fermentation was designed to produce hydrogen-dominated biogas, the total energy conversion efficiency (TECE) of the sequential process was higher than that of the methane fermentation one. With the production of hydrogen in anaerobic fermentation, more organic carbon metabolites were left in the broth to support better algal growth with more efficient incorporation of ammonia nitrogen. By applying the sequential process, the heat value conversion efficiency (HVCE) for the wastewater could reach 41.2% if methane was avoided in the fermentation biogas. The removal efficiencies of organic metabolites and NH4(+)-N in the better case were 100% and 98.3%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.
Yuan, Haidong
2016-10-14
Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and to design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimation on two-dimensional systems, and an O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5
Code of Federal Regulations, 2010 CFR
2010-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5
Code of Federal Regulations, 2011 CFR
2011-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
The Science ELF: Assessing the Enquiry Levels Framework as a Heuristic for Professional Development
ERIC Educational Resources Information Center
Wheeler, Lindsay B.; Bell, Randy L.; Whitworth, Brooke A.; Maeng, Jennifer L.
2015-01-01
This study utilized an explanatory sequential mixed methods approach to explore randomly assigned treatment and control participants' frequency of inquiry instruction in secondary science classrooms. Eleven treatment participants received professional development (PD) that emphasized a structured approach to inquiry instruction, while 10 control…
Systematic Approach to Food Safety Education on the Farm
ERIC Educational Resources Information Center
Shaw, Angela; Strohbehn, Catherine; Naeve, Linda; Domoto, Paul; Wilson, Lester
2015-01-01
Food safety education from farm to end user is essential in the mitigation of food safety concerns associated with fresh produce. Iowa State University developed a multi-disciplinary three-level sequential program ("Know," "Show," "Go") to provide a holistic approach to food safety education. This program provides…
The Rhetorical Cycle: Reading, Thinking, Speaking, Listening, Discussing, Writing.
ERIC Educational Resources Information Center
Keller, Rodney D.
The rhetorical cycle is a step-by-step approach that provides classroom experience before students actually write, thereby making the writing process less frustrating for them. This approach consists of six sequential steps: reading, thinking, speaking, listening, discussing, and finally writing. Readings serve not only as models of rhetorical…
Mutual Information Item Selection in Adaptive Classification Testing
ERIC Educational Resources Information Center
Weissman, Alexander
2007-01-01
A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered with using other local-…
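The MI criterion itself is straightforward to sketch for a two-category classification test. In the toy version below, each item is summarized by per-category correct-response probabilities (invented numbers; a real adaptive test would use an IRT model), and the item maximizing the mutual information between the response and the classification is selected.

```python
import numpy as np

def mutual_information(prior, p_correct):
    """MI between a binary item response X and the category C.
    prior: P(C=c); p_correct: P(X=1 | C=c) for one item."""
    mi = 0.0
    for x in (0, 1):
        px_given_c = np.where(x == 1, p_correct, 1 - p_correct)
        px = np.dot(prior, px_given_c)                 # marginal P(X=x)
        mi += np.sum(prior * px_given_c * np.log(px_given_c / px))
    return mi

prior = np.array([0.5, 0.5])          # current posterior over master/non-master
items = np.array([[0.9, 0.3],         # discriminating item
                  [0.6, 0.5],         # weakly discriminating item
                  [0.8, 0.8]])        # uninformative item (MI = 0)
scores = [mutual_information(prior, it) for it in items]
print("MI per item:", np.round(scores, 4), "-> select item", int(np.argmax(scores)))
```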
Exact Tests for the Rasch Model via Sequential Importance Sampling
ERIC Educational Resources Information Center
Chen, Yuguo; Small, Dylan
2005-01-01
Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…
Lee, Ju Hun; Domaille, Dylan W; Noh, Hyunwoo; Oh, Taeseok; Choi, Chulmin; Jin, Sungho; Cha, Jennifer N
2014-07-22
The development of strategies to couple biomolecules covalently to surfaces is necessary for constructing sensing arrays for biological and biomedical applications. One attractive conjugation reaction is hydrazone formation--the reaction of a hydrazine with an aldehyde or ketone--as both hydrazines and aldehydes/ketones are largely bioorthogonal, which makes this particular reaction suitable for conjugating biomolecules to a variety of substrates. We show that the mild reaction conditions afforded by hydrazone conjugation enable the conjugation of DNA and proteins to the substrate surface in significantly higher yields than can be achieved with traditional bioconjugation techniques, such as maleimide chemistry. Next, we designed and synthesized a photocaged aryl ketone that can be conjugated to a surface and photochemically activated to provide a suitable partner for subsequent hydrazone formation between the surface-anchored ketone and DNA- or protein-hydrazines. Finally, we exploit the latent functionality of the photocaged ketone and pattern multiple biomolecules on the same substrate, effectively demonstrating a strategy for designing substrates with well-defined domains of different biomolecules. We expect that this approach can be extended to the production of multiplexed assays by using an appropriate mask with sequential photoexposure and biomolecule conjugation steps.
Statistical Engineering in Air Traffic Management Research
NASA Technical Reports Server (NTRS)
Wilson, Sara R.
2015-01-01
NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.
The use of clinical trials in comparative effectiveness research on mental health
Blanco, Carlos; Rafful, Claudia; Olfson, Mark
2013-01-01
Objectives A large body of research on comparative effectiveness research (CER) focuses on the use of observational and quasi-experimental approaches. We sought to examine the use of clinical trials as a tool for CER, particularly in mental health. Study Design and Setting Examination of three ongoing randomized clinical trials in psychiatry that address issues that would pose difficulties for non-experimental CER methods. Results Existing statistical approaches to non-experimental data appear insufficient to compensate for biases that may arise when the pattern of missing data cannot be properly modeled, such as when there are no standards for treatment, when affected populations have limited access to treatment, or when there are high rates of treatment dropout. Conclusions Clinical trials should retain an important role in CER, particularly in cases of high disorder prevalence, large expected effect sizes, or difficult-to-reach populations, or when examining sequential treatments or stepped-care algorithms. Progress in CER in mental health will require careful consideration of the appropriate selection between clinical trials and non-experimental designs and of the allocation of research resources to optimally inform key treatment decisions for each individual patient. PMID:23849150
From SOPs to Reports to Evaluations: Learning and Memory ...
In an era of global trade and regulatory cooperation, consistent and scientifically based interpretation of developmental neurotoxicity (DNT) studies is essential. Because there is flexibility in the selection of test method(s), consistency can be especially challenging for learning and memory tests required by EPA and OECD DNT guidelines (chemicals and pesticides) and recommended for ICH prenatal/postnatal guidelines (pharmaceuticals). A well-reasoned, uniform approach is particularly important for variable endpoints and if non-standard tests are used. An understanding of the purpose behind the tests and expected outcomes is critical, and attention to elements of experimental design, conduct, and reporting can improve study design by the investigator as well as accuracy and consistency of interpretation by evaluators. This understanding also directs which information must be clearly described in study reports. While missing information may be available in standard operating procedures (SOPs), if it is not clearly reflected in report submissions there may be questions and misunderstandings by evaluators which could impact risk assessments. A practical example will be presented to provide insights into important variables and reporting approaches. Cognitive functions most often tested in guideline studies include associative, positional, sequential, and spatial learning and memory in weanling and adult animals. These complex behaviors tap different brain...
Secret, Mary; Abell, Melissa L; Berlin, Trey
2011-01-01
The authors present a set of guiding principles and strategies to facilitate the collaborative efforts of social work researchers and practitioners as they initiate, design, and implement outcome evaluations of human service interventions and programs. Beginning with an exploration of the interpersonal barriers to practice-research collaborations, and building on their experiences in successfully completing a community-based research evaluation, the authors identify specific relationship-focused principles and strategies and illustrate how these approaches can guide practice-research teams through the various sequential activities of the evaluation research process. In particular, it is suggested that practice-research collaborations can be formed, strengthened, and sustained by emphasis on a spirit of discovery and shared leadership at the start of the relationship, use of a comprehensive evaluation model to clarify and frame the evaluation and program goals, beginning where the client is when selecting research methodology and measurement tools, commitment to keeping the program first and recording everything during the implementation and data-collection stages, discussion of emerging findings and presentation of findings in graphic format at the data-analysis stage, and a total team approach at the dissemination stage.
Using theatre to address mental illness stigma: a knowledge translation study in bipolar disorder.
Michalak, Erin E; Livingston, James D; Maxwell, Victoria; Hole, Rachelle; Hawke, Lisa D; Parikh, Sagar V
2014-01-01
Reduction of the stigma of mental illness is an international priority; arts- and contact-based approaches represent a promising mode of intervention. This project was designed to explore the impact of a one-woman theatrical performance on attitudes towards bipolar disorder (BD) on people with BD and healthcare providers. A playwright and actress who lives with BD developed a stage performance - 'That's Just Crazy Talk' - targeting stigmatizing attitudes towards BD. Prospective, longitudinal and sequential mixed methods were used to assess the impact of the performance on people with BD (n = 80) and healthcare providers (n = 84). Qualitative interviews were conducted with 33 participants (14 people with BD and 19 healthcare providers). Quantitatively, healthcare providers showed significantly improved attitudes immediately post-performance, but this change was not maintained over time; people with BD showed little quantitative change. Qualitatively, both people with BD and BD healthcare providers showed enduring and broadly positive changes. A theatrical presentation designed to reduce stigma produced immediate impact on healthcare providers quantitatively and significant qualitative impact on people with BD and healthcare providers. Additionally, the utility of using mixed-method approaches in mental health research was demonstrated.
Tragic choices and moral compromise: the ethics of allocating kidneys for transplantation.
Hoffmaster, Barry; Hooker, Cliff
2013-09-01
For almost a decade, the Kidney Transplantation Committee of the United Network for Organ Sharing has been striving to revise its approach to allocating kidneys from deceased donors for transplantation. Two fundamental values, equality and efficiency, are central to distributing this scarce resource. The prevailing approach gives primacy to equality in the temporal form of first-come, first-served, whereas the motivation for a new approach is to redeem efficiency by increasing the length of survival of transplanted kidneys and their recipients. But decision making about a better way of allocating kidneys flounders because it is constrained by the amorphous notion of "balancing" values. This article develops a more fitting, productive approach to resolving the conflict between equality and efficiency by embedding the notion of compromise in the analysis of a tragic choice provided by Guido Calabresi and Philip Bobbitt. For Calabresi and Bobbitt, the goals of public policy with respect to tragic choices are to limit tragedy and to deal with the irreducible minimum of tragedy in the least offensive way. Satisfying the value of efficiency limits tragedy, and satisfying the value of equality deals with the irreducible minimum of tragedy in the least offensive way. But both values cannot be completely satisfied simultaneously. Compromise is occasioned when not all the several obligations that exist in a situation can be met and when neglecting some obligations entirely in order to fulfill others entirely is improper. Compromise is amalgamated with the notion of a tragic choice and then used to assess proposals for revising the allocation of kidneys considered by the Kidney Transplantation Committee. Compromise takes two forms in allocating kidneys: it occurs within particular approaches to allocating kidneys because neither equality nor efficiency can be fully satisfied, and it occurs over the course of sequential approaches to allocating kidneys that cycle between preferring equality and efficiency. Ross and colleagues' Equal Opportunity Supplemented by Fair Innings proposal for allocating kidneys best exemplifies the rationality of compromise as a way of achieving the goals of making a tragic choice. The attempt to design a policy for allocating kidneys from deceased donors for transplantation by balancing the values of equality and efficiency is misguided and unhelpful. Instead policymaking should both incorporate compromise into discrete approaches to allocating kidneys and extend compromise over sequential approaches to allocating kidneys. © 2013 Milbank Memorial Fund.
Hall, David B; Meier, Ulrich; Diener, Hans-Cristoph
2005-06-01
The trial objective was to test whether a new mechanism of action would effectively treat migraine headaches and to select a dose range for further investigation. The motivation for a group sequential, adaptive, placebo-controlled trial design was (1) limited information about where across the range of seven doses to focus attention, (2) a need to limit sample size for a complicated inpatient treatment, and (3) a desire to reduce exposure of patients to ineffective treatment. A design based on group sequential and up-and-down designs was developed, and its operational characteristics were explored by trial simulation. The primary outcome was headache response at 2 h after treatment. Groups of four treated and two placebo patients were assigned to one dose. Adaptive dose selection was based on response rates of 60% seen with other migraine treatments. If more than 60% of treated patients responded, then the next dose was the next lower dose; otherwise, the dose was increased. A stopping rule of at least five groups at the target dose and at least four groups at that dose with more than 60% response was developed to ensure that a selected dose would be statistically significantly (p=0.05) superior to placebo. Simulations indicated good characteristics in terms of control of Type I error, sufficient power, modest expected sample size and modest bias in estimation. The trial design is attractive for phase 2 clinical trials when the response is acute and simple (ideally binary), a placebo comparator is required, and patient accrual is relatively slow, allowing results to be collected and processed as a basis for the adaptive assignment of patients to dose groups. The acute migraine trial based on this design was successful in both proof of concept and dose range selection.
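The adaptive dose walk described here is simple enough to simulate directly. The sketch below follows the stated rule: groups of four treated patients (plus two placebo patients per group, counted in the design but not simulated), a step down after a >60% response (at least 3 of 4 responders), otherwise a step up, stopping once some dose has at least five groups with at least four of them above 60% response. The true dose-response probabilities and the starting dose are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
p_resp = [0.15, 0.25, 0.35, 0.50, 0.62, 0.70, 0.78]   # hypothetical 7-dose curve
groups = {d: [] for d in range(7)}                     # responder counts per dose
d = 3                                                  # hypothetical starting dose

for _ in range(60):                                    # cap on number of groups
    responders = rng.binomial(4, p_resp[d])            # 4 treated patients
    groups[d].append(responders)
    hits = sum(g >= 3 for g in groups[d])              # >60% of 4 means >= 3
    if len(groups[d]) >= 5 and hits >= 4:
        print(f"stop: dose {d} selected after {sum(map(len, groups.values()))} groups")
        break
    d = max(d - 1, 0) if responders >= 3 else min(d + 1, 6)
else:
    print("no dose met the stopping rule within the group cap")
```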
NASA Astrophysics Data System (ADS)
Liu, GaiYun; Chao, Daniel Yuh
2015-08-01
To date, research on supervisor design for flexible manufacturing systems has focused on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computational burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs, so that the integer linear programming problems can be solved efficiently while maintaining maximal permissiveness, using a vector-covering approach. This paper improves on previous work and achieves the simplest structure with the minimal number of monitors.
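The basic vector-covering step behind such reductions is easy to state in code: an FBM that componentwise dominates another FBM is already forbidden by the smaller marking's constraint, so only the minimal vectors need to enter the integer linear program. A minimal sketch follows (the paper's further reduction beyond this step is not shown):

```python
# Vector-covering reduction of first-met bad markings (FBMs): keep only the
# vectors that are minimal under the componentwise partial order.
def minimal_fbms(fbms):
    def covers(u, v):                       # u <= v componentwise, u != v
        return u != v and all(a <= b for a, b in zip(u, v))
    return [v for v in fbms if not any(covers(u, v) for u in fbms)]

fbms = [(2, 0, 1), (2, 1, 1), (0, 3, 0), (1, 3, 0)]   # toy markings
print(minimal_fbms(fbms))                              # -> [(2, 0, 1), (0, 3, 0)]
```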
DeMAID: A Design Manager's Aide for Intelligent Decomposition user's guide
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
A design problem is viewed as a complex system divisible into modules. Before the design of a complex system can begin, the couplings among modules and the presence of iterative loops must be determined. This is important because the design manager must know how to group the modules into subsystems and how to assign subsystems to design teams so that changes in one subsystem will have predictable effects on other subsystems. Determining these subsystems is not an easy, straightforward process, and important couplings are often overlooked. Moreover, the planning task must be repeated as new information becomes available or as the design specifications change. The purpose of this research is to develop a knowledge-based tool called the Design Manager's Aide for Intelligent Decomposition (DeMAID) to act as an intelligent advisor for the design manager. DeMAID identifies the subsystems of a complex design problem, orders them into a well-structured format, and marks the couplings among the subsystems to facilitate the use of multilevel tools. DeMAID also provides the design manager with the capability of examining the trade-offs between sequential and parallel processing. This type of approach could lead to substantial savings by organizing and displaying a complex problem as a sequence of subsystems easily divisible among design teams. This report serves as a User's Guide for the program.
Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A
2011-07-01
There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. Copyright © 2011 Elsevier Inc. All rights reserved.
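For readers who want to reproduce the flavor of such comparisons, the sketch below simulates the stopping-time distribution of a five-look design with O'Brien-Fleming-type boundaries, calibrating the boundary constant by simulation rather than by the Lan-DeMets algorithm; the error rate, drift, and number of looks are illustrative, and no Bayesian arm is included.

```python
import numpy as np

rng = np.random.default_rng(4)
K, reps, alpha = 5, 100_000, 0.025
t = np.arange(1, K + 1) / K
B = np.cumsum(rng.normal(size=(reps, K)) * np.sqrt(1.0 / K), axis=1)  # Brownian paths
Z = B / np.sqrt(t)                               # standardized statistic at each look

def bounds(c):                                   # O'Brien-Fleming-type shape
    return c * np.sqrt(K / np.arange(1, K + 1))

def overall_alpha(c):
    return np.mean((Z >= bounds(c)).any(axis=1))

lo, hi = 1.5, 3.5                                # calibrate c by bisection
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if overall_alpha(mid) > alpha else (lo, mid)
c = 0.5 * (lo + hi)
print("calibrated boundary constant c ~", round(c, 3))   # ~2.04 for K=5

Za = Z + 2.8 * np.sqrt(t)                        # add drift: an alternative effect
hit = Za >= bounds(c)
stop = np.where(hit.any(axis=1), hit.argmax(axis=1) + 1, K)
print("P(stop at look k):", np.round(np.bincount(stop, minlength=K + 1)[1:] / reps, 3))
```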
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci, and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
Balasubramanian, M; Spencer, A J; Short, S D; Watkins, K; Chrisopoulos, S; Brennan, D S
2016-09-01
The integration of qualitative and quantitative approaches introduces new avenues to bridge the strengths, and address the weaknesses, of both methods. The aim was to develop measure(s) for migrant dentist experiences in Australia through a mixed methods approach. The sequential qualitative-quantitative design involved first the harvesting of data items from a qualitative study, followed by a national survey of migrant dentists in Australia. Statements representing unique experiences in migrant dentists' life stories were deployed in the survey questionnaire, using a five-point Likert scale. Factor analysis was used to examine component factors. Eighty-two statements from 51 participants were harvested from the qualitative analysis. A total of 1,022 of 1,977 migrant dentists (response rate 54.5%) returned completed questionnaires. Factor analysis supported an initial eight-factor solution; further scale development and reliability analysis led to five scales with a final list of 38 life story experience (LSE) items. Three scales were based on home country events: health system and general lifestyle concerns (LSE1; 10 items), society and culture (LSE4; 4 items) and career development (LSE5; 4 items). Two scales included migrant experiences in Australia: appreciation towards Australian way of life (LSE2; 13 items) and settlement concerns (LSE3; 7 items). The five life story experience scales provided necessary conceptual clarity and empirical grounding to explore migrant dentist experiences in Australia. Being based on original migrant dentist narrations, these scales have the potential to offer in-depth insights for policy makers and support future research on dentist migration. Copyright © 2016 Dennis Barber Ltd.
ERIC Educational Resources Information Center
Cunningham, Jennifer L.
2013-01-01
The purpose of this sequential, explanatory mixed methods research study was to understand what factors influenced African American maternal intentions to get their daughters aged 9 years to 12 years vaccinated in Alabama. In the first, quantitative phase of the study, the research questions focused on identifying the predictive power of eleven…
Domain-general neural correlates of dependency formation: Using complex tones to simulate language.
Brilmayer, Ingmar; Sassenhagen, Jona; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
2017-08-01
There is an ongoing debate whether the P600 event-related potential component following syntactic anomalies reflects syntactic processes per se, or if it is an instance of the P300, a domain-general ERP component associated with attention and cognitive reorientation. A direct comparison of both components is challenging because of the huge discrepancy in experimental designs and stimulus choice between language and 'classic' P300 experiments. In the present study, we develop a new approach to mimic the interplay of sequential position as well as categorical and relational information in natural language syntax (word category and agreement) in a non-linguistic target detection paradigm using musical instruments. Participants were instructed to (covertly) detect target tones which were defined by instrument change and pitch rise between subsequent tones at the last two positions of four-tone sequences. We analysed the EEG using event-related averaging and time-frequency decomposition. Our results show striking similarities to results obtained from linguistic experiments. We found a P300 that showed sensitivity to sequential position and a late positivity sensitive to stimulus type and position. A time-frequency decomposition revealed significant effects of sequential position on the theta band and a significant influence of stimulus type on the delta band. Our results suggest that the detection of non-linguistic targets defined via complex feature conjunctions in the present study and the detection of syntactic anomalies share the same underlying processes: attentional shift and memory based matching processes that act upon multi-feature conjunctions. We discuss the results as supporting domain-general accounts of the P600 during natural language comprehension. Copyright © 2017 Elsevier Ltd. All rights reserved.
Néri-Quiroz, José; Canto, Fabrice; Guillerme, Laurent; Couston, Laurent; Magnaldo, Alastair; Dugas, Vincent
2016-10-01
A miniaturized and automated approach for the determination of free acidity in solutions containing uranium (VI) is presented. The measurement technique is based on the concept of sequential injection analysis with on-line spectroscopic detection. The proposed methodology relies on the complexation and alkalimetric titration of nitric acid using a pH 5.6 sodium oxalate solution. The titration process is followed by UV/VIS detection at 650 nm thanks to the addition of Congo red as a universal pH indicator. The mixing sequence and the validity of the method were investigated by numerical simulation. This new analytical design allows fast (2.3 min), reliable and accurate free acidity determination of low-volume samples (10 µL) containing a uranium/[H(+)] mole ratio of 1:3, with a relative standard deviation of <7.0% (n=11). The linearity range of the free nitric acid measurement is excellent up to 2.77 mol L(-1), with a correlation coefficient (R(2)) of 0.995. The method is specific; the presence of actinide ions up to 0.54 mol L(-1) does not interfere with the determination of free nitric acid. In addition to automation, the developed sequential injection analysis method greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis fortyfold, and the analysis time eightfold. These analytical parameters are important, especially in nuclear-related applications, to improve laboratory safety, reduce personnel exposure to radioactive samples, and drastically reduce environmental impacts and analytical radioactive waste. Copyright © 2016 Elsevier B.V. All rights reserved.
Preparing the Teacher of Tomorrow
ERIC Educational Resources Information Center
Hemp, Paul E.
1976-01-01
Suggested ways of planning and conducting high quality teacher preparation programs are discussed under major headings of student selection, sequential courses and experiences, and program design. (HD)
Synthesizing genetic sequential logic circuit with clock pulse generator.
Chuang, Chia-Hua; Lin, Chun-Liang
2014-05-28
Rhythmic clocks occur widely in biological systems, controlling several aspects of cell physiology, and different cell types employ various rhythmic frequencies. How to synthesize a specific clock signal is a preliminary but necessary step toward the further development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an integer fraction of that of the genetic oscillator. An analogous electronic waveform-shaping circuit is constructed from a series of genetic buffers that shape the logic high/low levels of an oscillation input within a basic sinusoidal cycle and generate a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffer, a genetic clock pulse signal with a frequency consistent with that of the genetic oscillator is synthesized. A synchronous genetic counter circuit based on the topology of digital sequential logic circuits is triggered by the clock pulse to synthesize a clock signal whose frequency is an integer fraction of the genetic oscillator's. The function acts like a frequency divider in electronic circuits, which plays a key role in sequential logic circuits with specific operational frequencies. A cascaded genetic logic circuit generating clock pulse signals is proposed. Based on the analogous implementation of digital sequential logic circuits, genetic sequential logic circuits can be constructed by the proposed approach to generate various clock signals from an oscillation signal.
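The waveform-shaping and frequency-division logic has a direct electronic analogue that can be sketched numerically: a threshold buffer turns a normalized sinusoidal "oscillator" into a clock pulse train, and a toggle flip-flop halves its frequency. The threshold and signal shapes below are arbitrary choices, not the paper's synthesized genetic parts.

```python
import numpy as np

t = np.linspace(0, 8, 8000)                    # 8 oscillation periods
osc = 0.5 * (1 + np.sin(2 * np.pi * t))        # normalized "oscillator" signal
clock = (osc > 0.7).astype(int)                # buffer threshold sets the duty cycle

state, prev, divided = 0, 0, []
for c in clock:                                # toggle flip-flop: flip on rising edge
    if c == 1 and prev == 0:
        state ^= 1
    divided.append(state)
    prev = c

print("clock cycles:  ", int(np.sum(np.diff(clock) == 1)))    # 8
print("divided cycles:", int(np.sum(np.diff(divided) == 1)))  # 4 (frequency halved)
```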
Crawford, Sara; Boulet, Sheree L; Mneimneh, Allison S; Perkins, Kiran M; Jamieson, Denise J; Zhang, Yujia; Kissin, Dmitry M
2016-02-01
To assess treatment and pregnancy/infant-associated medical costs and birth outcomes for assisted reproductive technology (ART) cycles in a subset of patients using elective double embryo transfer (ET), and to project the difference in costs and outcomes had the cycles instead been sequential single ETs (fresh followed by frozen if the fresh ET did not result in live birth). Retrospective cohort study using 2012 and 2013 data from the National ART Surveillance System. Infertility treatment centers. Fresh, autologous double ETs performed in 2012 among ART patients younger than 35 years of age with no prior ART use who cryopreserved at least one embryo. Sequential single and double ETs. Actual live birth rates and estimated ART treatment and pregnancy/infant-associated medical costs for double ET cycles started in 2012, and projected ART treatment and pregnancy/infant-associated medical costs if the double ET cycles had been performed as sequential single ETs. The estimated total ART treatment and pregnancy/infant-associated medical costs were $580.9 million for 10,001 double ETs started in 2012. If performed as sequential single ETs, estimated costs would have decreased by $195.0 million, to $386.0 million, and live birth rates would have increased from 57.7% to 68.0%. Sequential single ETs, when clinically appropriate, can reduce total ART treatment and pregnancy/infant-associated medical costs by reducing multiple births without lowering live birth rates. Published by Elsevier Inc.
Cost Optimal Design of a Power Inductor by Sequential Gradient Search
NASA Astrophysics Data System (ADS)
Basak, Raju; Das, Arabinda; Sanyal, Amarnath
2018-05-01
Power inductors are used for compensating the VAR generated by long EHV transmission lines and in electronic circuits. For EHV lines, the rating of the inductor is decided by techno-economic considerations on the basis of the line susceptance. It is a high-voltage, high-current device, absorbing little active power and large reactive power. The cost is quite high; hence the design should be made cost-optimally. The 3-phase power inductor is similar in construction to a 3-phase core-type transformer, with the exception that it has only one winding per phase and each limb is provided with an air gap, the length of which is decided by the required inductance. In this paper, a design methodology based on the sequential gradient search technique, and the corresponding algorithm leading to the cost-optimal design of a 3-phase EHV power inductor, are presented. A case study has been made on a 220 kV long line of NHPC running from Chukha HPS to Birpara in Coochbihar.
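The sequential gradient search idea (adjusting one design variable at a time down its cost gradient) can be sketched on an invented two-variable cost model. The real design problem has many more constrained variables; the cost function below is purely illustrative.

```python
import numpy as np

# Toy cost model in x = (flux density proxy, current density proxy):
# a material/loss trade-off with a unique interior minimum.
def cost(x):
    b, j = x
    return 3.0 / (b * j) + 2.0 * b**2 + 1.5 * j**2

def num_grad(f, x, i, h=1e-6):                  # central-difference partial derivative
    e = np.zeros_like(x); e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

x, step = np.array([0.5, 0.5]), 0.05
for sweep in range(200):
    for i in range(len(x)):                     # sequential: one variable per move
        x[i] -= step * num_grad(cost, x, i)
        x[i] = max(x[i], 1e-3)                  # keep variables feasible
print("optimum ~", np.round(x, 3), "cost ~", round(cost(x), 4))
```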
Some controversial multiple testing problems in regulatory applications.
Hung, H M James; Wang, Sue-Jane
2009-01-01
Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary endpoint and secondary endpoint needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gate keeping strategies often impose too many logical restrictions to make sense, particularly to deal with the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinical relevant orderings or sets can be a viable alternative to resolving the illogical problems requiring more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism as a result of this requirement is a group-sequential design strategy that starts with a conservative sample size planning and then utilizes an alpha spending function to possibly reach the conclusion early.
Replay of Episodic Memories in the Rat.
Panoz-Brown, Danielle; Iyer, Vishakh; Carey, Lawrence M; Sluka, Christina M; Rajic, Gabriela; Kestenman, Jesse; Gentry, Meredith; Brotheridge, Sydney; Somekh, Isaac; Corbin, Hannah E; Tucker, Kjersten G; Almeida, Bianca; Hex, Severine B; Garcia, Krysten D; Hohmann, Andrea G; Crystal, Jonathon D
2018-05-21
Vivid episodic memories in people have been characterized as the replay of multiple unique events in sequential order [1-3]. The hippocampus plays a critical role in episodic memories in both people and rodents [2, 4-6]. Although rats remember multiple unique episodes [7, 8], it is currently unknown if animals "replay" episodic memories. Therefore, we developed an animal model of episodic memory replay. Here, we show that rats can remember a trial-unique stream of multiple episodes and the order in which these events occurred by engaging hippocampal-dependent episodic memory replay. We document that rats rely on episodic memory replay to remember the order of events rather than relying on non-episodic memories. Replay of episodic memories survives a long retention-interval challenge and interference from the memory of other events, which documents that replay is part of long-term episodic memory. The chemogenetic activating drug clozapine N-oxide (CNO), but not vehicle, reversibly impairs episodic memory replay in rats previously injected bilaterally in the hippocampus with a recombinant viral vector containing an inhibitory designer receptor exclusively activated by a designer drug (DREADD; AAV8-hSyn-hM4Di-mCherry). By contrast, two non-episodic memory assessments are unaffected by CNO, showing selectivity of this hippocampal-dependent impairment. Our approach provides an animal model of episodic memory replay, a process by which the rat searches its representations in episodic memory in sequential order to find information. Our findings using rats suggest that the ability to replay a stream of episodic memories is quite old in the evolutionary timescale. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.
2012-12-01
Accurate reservoir inflow forecasts are instrumental for maximizing the value of water resources and significantly influence the operation of hydropower reservoirs. We consider improving hourly reservoir inflow forecasts over a 24-hour lead-time, with the day-ahead (Elspot) market of the Nordic power exchange in perspective. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, plus a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and kept to minimal complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual-model outputs is extracted and assimilated into the forecasting system recursively using a sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach carry information suitable for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the procedure for improving the accuracy of inflow forecasts at lead-times up to 24 hours, and its reliability in different seasons of the year, will be illustrated and discussed thoroughly.
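To make the recursive assimilation idea concrete, here is a minimal sketch under stated assumptions: an AR(1) error model and a bootstrap particle filter stand in for the authors' error model and sequential Monte Carlo routine, and all parameter values are invented.

```python
# Sketch: a frozen conceptual model's forecast is corrected by an AR(1)
# error model whose states are tracked with a bootstrap particle filter.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                       # number of particles
phi, sigma_w = 0.8, 0.5        # assumed AR(1) error-model parameters
sigma_obs = 1.0                # assumed observation noise std (m^3/s)

def assimilate(particles, q_obs, q_model):
    """One filter step: propagate the error states, weight them by the
    likelihood of the observed discrepancy, then resample."""
    particles = phi * particles + rng.normal(0.0, sigma_w, N)  # propagate
    innov = q_obs - (q_model + particles)                      # discrepancies
    w = np.exp(-0.5 * (innov / sigma_obs) ** 2)
    w /= w.sum()
    return particles[rng.choice(N, N, p=w)]                    # resample

# Usage: correct the conceptual model's forecast with the error ensemble.
particles = rng.normal(0.0, 1.0, N)            # initial error-model states
q_model, q_obs = 120.0, 132.0                  # invented forecast/observation
particles = assimilate(particles, q_obs, q_model)
probabilistic_forecast = q_model + particles   # corrected inflow ensemble
print(probabilistic_forecast.mean(), probabilistic_forecast.std())
```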
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS can generate test data sets of up to 5 gigabytes and output serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 and 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial for testing LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
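For illustration only, the toy generator below produces the kind of sequential test stream with injected gaps and bit errors that the abstract describes; the 16-byte framing is invented for brevity and is not the SDS's actual Nascom/CCSDS formatting.

```python
# Toy test-stream generator: packets carry a sequence counter, and gaps
# (dropped packets) and single-bit errors are injected at given rates.
import random

random.seed(42)

def make_stream(n_packets, drop_rate=0.02, error_rate=0.01):
    stream = []
    for seq in range(n_packets):
        if random.random() < drop_rate:      # simulate a downlink gap
            continue
        payload = bytearray(seq.to_bytes(4, "big") + b"\x00" * 12)
        if random.random() < error_rate:     # flip one bit in the payload
            i = random.randrange(len(payload))
            payload[i] ^= 1 << random.randrange(8)
        stream.append(bytes(payload))
    return stream

packets = make_stream(10_000)
print(f"{len(packets)} packets generated ({10_000 - len(packets)} gaps)")
```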
Collaborative, Sequential and Isolated Decisions in Design
NASA Technical Reports Server (NTRS)
Lewis, Kemper; Mistree, Farrokh
1997-01-01
The Massachusetts Institute of Technology (MIT) Commission on Industrial Productivity, in its report Made in America, found that six recurring weaknesses were hampering American manufacturing industries. The two weaknesses most relevant to product development were 1) technological weakness in development and production, and 2) failures in cooperation. The remedies to these weaknesses are considered the essential twin pillars of concurrent engineering (CE): 1) an improved development process, and 2) closer cooperation. The MIT report recognizes that total cooperation among teams in a CE environment is rare in American industry, while the majority of the design research mathematically modeling CE has assumed total cooperation. In this paper, we present mathematical constructs, based on game-theoretic principles, to model degrees of collaboration characterized by approximate cooperation, sequential decision making, and isolation. The designs of a pressure vessel and a passenger aircraft are included as illustrative examples.
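A toy sketch of the contrast under discussion (not the paper's constructs; the objectives and their coupling are invented): two designers each control one variable, sequential decision making iterates best responses with the other variable frozen, and full cooperation minimizes the summed objective jointly.

```python
# Sequential vs. cooperative design decisions on a coupled quadratic toy.
from scipy.optimize import minimize

f1 = lambda x, y: (x - 2) ** 2 + 0.5 * x * y   # designer 1's objective
f2 = lambda x, y: (y - 3) ** 2 + 0.5 * x * y   # designer 2's objective

# Sequential (iterated best response): each optimizes alone, in turn.
x, y = 0.0, 0.0
for _ in range(50):
    x = minimize(lambda v: f1(v[0], y), [x]).x[0]
    y = minimize(lambda v: f2(x, v[0]), [y]).x[0]
print(f"sequential:  x={x:.3f}, y={y:.3f}, total={f1(x, y) + f2(x, y):.3f}")

# Cooperative: jointly minimize the sum of both objectives; the total
# cost is lower than the sequential equilibrium's.
res = minimize(lambda v: f1(v[0], v[1]) + f2(v[0], v[1]), [0.0, 0.0])
print(f"cooperative: x={res.x[0]:.3f}, y={res.x[1]:.3f}, total={res.fun:.3f}")
```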
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
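A minimal simulation sketch of the kind of classification-error calculation described, using the 67x3 design from the abstract; the beta-binomial correlation model, decision rule d, and prevalence thresholds below are assumptions, not the study's values.

```python
# Estimate LQAS acceptance probabilities for a 67x3 cluster design by
# Monte Carlo, with intracluster correlation induced via a beta-binomial.
import numpy as np

rng = np.random.default_rng(1)

def prob_accept(p_true, d, n_clusters=67, m=3, icc=0.1, reps=10_000):
    """P(classify prevalence as acceptable | true prevalence p_true)."""
    # Beta-binomial clusters: beta(a, b) chosen so the ICC equals `icc`.
    a = p_true * (1.0 - icc) / icc
    b = (1.0 - p_true) * (1.0 - icc) / icc
    p_clusters = rng.beta(a, b, (reps, n_clusters))
    cases = rng.binomial(m, p_clusters).sum(axis=1)
    return np.mean(cases <= d)   # accept when cases do not exceed rule d

# Decision rule d = 24 (assumed) on n = 201 children: the two
# misclassification rates are 1 - P(accept | low) and P(accept | high).
print("P(accept | GAM = 10%) =", prob_accept(0.10, d=24))  # should be high
print("P(accept | GAM = 15%) =", prob_accept(0.15, d=24))  # should be low
```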
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for the solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available, so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions, and one-dimensional search options include polynomial interpolation and the Golden Section method. Emphasis is placed on ease of use. All information is transferred via a single parameter list. Default values are provided for all internal program parameters, such as convergence criteria, and the user is given a simple means to override these, if desired.
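ADS itself is FORTRAN; purely to illustrate one of the strategies named above, here is a Python sketch of sequential unconstrained minimization, in which a constrained problem is solved as a series of unconstrained subproblems with a growing exterior penalty (the problem and penalty schedule are invented).

```python
# Sequential unconstrained minimization via an exterior quadratic penalty.
import numpy as np
from scipy.optimize import minimize

def objective(x):          # minimize f(x) = x0^2 + x1^2 ...
    return x[0] ** 2 + x[1] ** 2

def constraint(x):         # ... subject to g(x) = x0 + x1 - 1 >= 0
    return x[0] + x[1] - 1.0

x = np.array([0.0, 0.0])
r = 1.0                    # exterior penalty parameter
for _ in range(6):
    penalized = lambda v: objective(v) + r * min(0.0, constraint(v)) ** 2
    x = minimize(penalized, x).x   # unconstrained subproblem
    r *= 10.0                      # tighten the penalty and resolve
print(x)                   # approaches the constrained optimum (0.5, 0.5)
```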
Constraint Optimization Problem For The Cutting Of A Cobalt Chrome Refractory Material
NASA Astrophysics Data System (ADS)
Lebaal, Nadhir; Schlegel, Daniel; Folea, Milena
2011-05-01
This paper shows a complete approach to solving a given problem, from experimentation to the optimization of different cutting parameters. In response to an industrial problem of slotting FSX 414, a cobalt-based refractory material, we implemented a design of experiments to determine the parameters most influencing tool life, surface roughness, and cutting forces. After these trials, an optimization approach was implemented to find the lowest manufacturing cost while respecting roughness and cutting-force limitation constraints. The optimization approach is based on the Response Surface Method (RSM), using the Sequential Quadratic Programming (SQP) algorithm for the constrained problem. To avoid a local optimum and to obtain an accurate solution at low cost, an efficient strategy that improves the RSM accuracy in the vicinity of the global optimum is presented. With these models and trials, we could apply and compare our optimization methods to obtain the lowest cost for the best quality, i.e., a satisfactory surface roughness and limited cutting forces.
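A schematic sketch of the RSM-plus-SQP workflow (all trial data, variable ranges, and constraint limits below are invented): quadratic surrogates are fit to designed-experiment results, then cost is minimized under a roughness constraint with an SQP solver.

```python
# Response Surface Method + SQP on invented slotting-trial data.
import numpy as np
from scipy.optimize import minimize

# Invented designed-experiment results: (cutting speed, feed) -> responses.
X = np.array([[30, 0.05], [30, 0.15], [60, 0.05], [60, 0.15],
              [45, 0.05], [45, 0.10], [45, 0.15]])
cost = np.array([8.0, 5.5, 6.0, 4.2, 6.8, 5.0, 4.6])       # cost per slot
roughness = np.array([1.1, 2.6, 1.4, 3.1, 1.2, 1.9, 2.8])  # Ra (micrometers)

def quad(x):
    """Full quadratic basis of the response surface."""
    s, f = x
    return np.array([1.0, s, f, s * f, s ** 2, f ** 2])

A = np.array([quad(x) for x in X])
c_cost = np.linalg.lstsq(A, cost, rcond=None)[0]        # cost surrogate
c_rough = np.linalg.lstsq(A, roughness, rcond=None)[0]  # roughness surrogate

res = minimize(
    lambda x: quad(x) @ c_cost,
    x0=[45.0, 0.10],
    method="SLSQP",                   # scipy's sequential quadratic programming
    bounds=[(30, 60), (0.05, 0.15)],
    constraints=[{"type": "ineq",     # keep predicted Ra below 2.5 micrometers
                  "fun": lambda x: 2.5 - quad(x) @ c_rough}],
)
print(res.x, res.fun)
```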
Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Xiao-Chuan; Keyes, David; Yang, Chao
2014-09-29
The focus of the project is the development and customization of highly scalable domain decomposition based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equation. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have advantages, such as ease of implementation, since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome these disadvantages, fully coupled approaches have been investigated in order to obtain full-physics simulations.
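To make the contrast concrete, the toy example below (not the project's solver; the two "fields" are reduced to scalars and the equations are invented) compares field-by-field iteration with a fully coupled solve of the same nonlinear system:

```python
# Field-by-field splitting vs. fully coupled solve of a 2-field toy system.
from scipy.optimize import fsolve

F = lambda u, v: u ** 3 + v - 2.0        # residual of "field" 1
G = lambda u, v: v ** 3 + 0.5 * u - 1.0  # residual of "field" 2

# Field-by-field splitting: solve each single-field equation in turn
# with the other field frozen, and iterate to convergence.
u, v = 0.0, 0.0
for _ in range(30):
    u = fsolve(lambda w: [F(w[0], v)], [u])[0]   # field-1 solve, v frozen
    v = fsolve(lambda w: [G(u, w[0])], [v])[0]   # field-2 solve, u frozen
print("field-by-field:", u, v)

# Fully coupled: one Newton-type solve of the whole residual captures the
# field interactions within each iteration (the target of the project's
# scalable preconditioners, here reduced to a 2-by-2 toy system).
sol = fsolve(lambda w: [F(w[0], w[1]), G(w[0], w[1])], [0.0, 0.0])
print("fully coupled: ", sol)
```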
Cultural adaptation of a supportive care needs measure for Hispanic men cancer survivors.
Martinez Tyson, Dinorah; Medina-Ramirez, Patricia; Vázquez-Otero, Coralia; Gwede, Clement K; Bobonis, Margarita; McMillan, Susan C
2018-01-01
Research with ethnic minority populations requires instrumentation that is culturally and linguistically relevant. The aim of this study was to translate and culturally adapt the Cancer Survivor Unmet Needs measure into Spanish. We describe the iterative, community-engaged consensus-building approaches used to adapt the instrument for Hispanic male cancer survivors. We used an exploratory sequential mixed-methods study design. Methods included translation and back-translation, focus groups with cancer survivors (n = 18) and providers (n = 5), cognitive interview techniques to evaluate the comprehension and acceptability of the adapted instrument with survivors (n = 12), ongoing input from the project's community advisory board, and preliminary psychometric analysis (n = 84). The process emphasized conceptual, content, semantic, and technical equivalence. Combining qualitative and quantitative methods offered a rigorous, systematic, and contextual improvement over translation alone and supported the cultural adaptation of this measure in a purposeful and relevant manner. Our findings highlight the importance of going beyond translation when adapting measures for cross-cultural populations and illustrate the importance of taking culture, literacy, and language into consideration.
Chadha, Neil K; Papsin, Blake C; Jiwani, Salima; Gordon, Karen A
2011-09-01
To measure speech detection in noise performance in children with bilateral cochlear implants (BiCI), to compare performance in children implanted simultaneously versus sequentially, and to compare performance to that of normal-hearing children. Prospective cohort study at a tertiary academic pediatric center. Participants were children with early-onset bilateral deafness and 2 years of BiCI experience, comprising a "sequential" group (>2-year interimplantation delay, n = 12) and a "simultaneous" group (no interimplantation delay, n = 10), plus normal-hearing controls (n = 8). Thresholds for speech detection (at 0-degree azimuth) were measured with noise at 0-degree azimuth or ±90-degree azimuth. Outcome measures were spatial unmasking (SU), the improvement as the noise condition changed from 0-degree azimuth to ±90-degree azimuth, and the binaural summation advantage (BSA) of two CIs over one. Speech detection in noise was significantly poorer than in controls for both BiCI groups (p < 0.0001). However, SU in the simultaneous group approached levels found in normal controls (7.2 ± 0.6 versus 8.6 ± 0.6 dB, p > 0.05) and was significantly better than in the sequential group (3.9 ± 0.4 dB, p < 0.05). Spatial unmasking was unaffected by the side of noise presentation in the simultaneous group but, in the sequential group, was significantly better when noise was moved to the side of the second rather than the first implanted ear (4.8 ± 0.5 versus 3.0 ± 0.4 dB, p < 0.05). This was consistent with a larger BSA from the sequential group's second rather than first CI. Children with simultaneous BiCI demonstrated an advantage over children with sequential BiCI in using spatial cues to improve speech detection in noise.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
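As a generic illustration of the test's mechanics (the paper's specialized likelihood ratio for conjunction data is not reproduced here; the Gaussian hypotheses and error rates below are assumptions), a Wald SPRT accumulates a log likelihood ratio observation by observation and stops when it crosses a Wald threshold:

```python
# Generic Wald sequential probability ratio test for Gaussian data.
import numpy as np

alpha, beta = 0.01, 0.01            # desired error probabilities (assumed)
log_A = np.log((1 - beta) / alpha)  # accept-H1 threshold
log_B = np.log(beta / (1 - alpha))  # accept-H0 threshold
mu0, mu1, sigma = 0.0, 1.0, 1.0     # hypothesized means, known sigma (assumed)

def sprt(observations):
    llr = 0.0
    for k, x in enumerate(observations, 1):
        # log f1(x)/f0(x) for Gaussian densities with common sigma
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= log_A:
            return "accept H1", k   # stop early in favor of H1
        if llr <= log_B:
            return "accept H0", k   # stop early in favor of H0
    return "continue sampling", len(observations)

rng = np.random.default_rng(7)
print(sprt(rng.normal(mu1, sigma, 100)))   # data from H1: expect "accept H1"
```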