ERIC Educational Resources Information Center
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
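The design-then-assimilate loop at the heart of such a method is compact enough to sketch. Below is a minimal, self-contained illustration in Python, not the authors' SEOD code: a toy two-parameter forward model, a stochastic EnKF analysis step, and a greedy rule that ranks candidate measurement locations by a Shannon-entropy-difference proxy (predicted data variance relative to noise, the Gaussian-approximation analogue of SD). The forward model, noise level, and all settings are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: a scalar observation at location x from two parameters.
# The model form and all settings are illustrative assumptions.
def forward(theta, x):
    return theta[0] * np.exp(-x) + theta[1] * x

true_theta = np.array([1.5, 0.8])
candidates = np.linspace(0.1, 2.0, 20)        # candidate measurement locations
obs_std = 0.05
n_ens = 200
ens = rng.normal([1.0, 1.0], 0.5, size=(n_ens, 2))   # prior parameter ensemble

def enkf_update(ens, x, y_obs):
    """Stochastic EnKF analysis step for one scalar observation."""
    hx = np.array([forward(th, x) for th in ens])
    c_th_h = np.cov(ens.T, hx)[:2, 2]         # parameter-data cross-covariance
    c_hh = hx.var(ddof=1) + obs_std**2        # innovation variance
    gain = c_th_h / c_hh                      # Kalman gain
    perturbed = y_obs + rng.normal(0, obs_std, len(ens))
    return ens + np.outer(perturbed - hx, gain)

for step in range(4):
    # Entropy-difference proxy: under a Gaussian approximation, the expected
    # entropy drop grows with predicted data variance relative to noise.
    pred_var = np.array([np.var([forward(th, x) for th in ens])
                         for x in candidates])
    x_best = candidates[np.argmax(np.log1p(pred_var / obs_std**2))]
    y_obs = forward(true_theta, x_best) + rng.normal(0, obs_std)
    ens = enkf_update(ens, x_best, y_obs)
    print(f"step {step}: x={x_best:.2f}, mean={ens.mean(0).round(3)}, "
          f"std={ens.std(0).round(3)}")
```

Each iteration measures where the ensemble predicts the most reducible uncertainty, which is the essence of coupling sequential optimal design to the filter.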
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
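To make the OED/PE machinery concrete, here is a small sketch that scores candidate experiments for the CTMI by D-optimality, i.e., the determinant of the Fisher information matrix built from finite-difference parameter sensitivities. It simplifies the dynamic design problem to choosing informative measurement temperatures; the nominal parameter values, the candidate grid, the greedy selection (which permits replicated temperatures), and the unit error variance are all assumptions rather than the paper's setup.

```python
import numpy as np

# CTMI growth-rate model (Rosso et al., 1993) with its four parameters.
def ctmi(T, Tmin, Topt, Tmax, mu_opt):
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2 * T))
    return np.where((T > Tmin) & (T < Tmax), mu_opt * num / den, 0.0)

theta0 = np.array([5.0, 37.0, 45.0, 2.0])   # nominal [Tmin, Topt, Tmax, mu_opt]
candidates = np.linspace(8.0, 44.0, 37)     # candidate experiment temperatures

def sensitivities(T, theta, h=1e-5):
    """Finite-difference sensitivities d(mu)/d(theta_j) at each temperature."""
    cols = []
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        cols.append((ctmi(T, *tp) - ctmi(T, *tm)) / (2 * h))
    return np.column_stack(cols)

S = sensitivities(candidates, theta0)

# Greedy D-optimal selection: repeatedly add the temperature that most
# increases log det of the Fisher information matrix (unit error variance).
chosen = []
fim = 1e-9 * np.eye(4)                      # regularizer for the empty design
for _ in range(6):
    gains = [np.linalg.slogdet(fim + np.outer(S[i], S[i]))[1]
             for i in range(len(candidates))]
    best = int(np.argmax(gains))
    chosen.append(candidates[best])
    fim = fim + np.outer(S[best], S[best])
print("greedy D-optimal temperatures (degC):", np.round(sorted(chosen), 1))
```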
Optimal sequential measurements for bipartite state discrimination
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.; Weir, Graeme
2017-05-01
State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
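The unrestricted benchmark against which sequential strategies are compared is the Helstrom bound. A short numeric check, with two illustrative non-orthogonal two-qubit pure states chosen arbitrarily, computes the bound both from the trace norm and from the pure-state closed form:

```python
import numpy as np

# Helstrom minimum-error bound for discriminating rho1, rho2 with priors
# p1, p2:  P_corr = 1/2 * (1 + || p1*rho1 - p2*rho2 ||_1).
def dm(psi):
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def helstrom(p1, rho1, p2, rho2):
    gamma = p1 * rho1 - p2 * rho2
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 + trace_norm)

# |00> and the non-orthogonal state cos(t)|00> + sin(t)|11> (arbitrary choice)
t = np.pi / 8
psi1 = np.array([1, 0, 0, 0], dtype=complex)
psi2 = np.array([np.cos(t), 0, 0, np.sin(t)], dtype=complex)
p_opt = helstrom(0.5, dm(psi1), 0.5, dm(psi2))
print(f"optimal global (joint) success probability: {p_opt:.4f}")

# For two pure states with equal priors the bound has a closed form:
overlap = abs(np.vdot(psi1, psi2))
print(f"closed form: {0.5 * (1 + np.sqrt(1 - overlap**2)):.4f}")
```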
Differential-Game Examination of Optimal Time-Sequential Fire-Support Strategies
1976-09-01
Technical Report NPS-55Tw76091, Naval Postgraduate School, Monterey, California. Key words: differential games, Lanchester theory of combat, military tactics. (Only report-documentation-page fragments were recovered; no abstract is available.)
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. Both strategies produced very similar solutions; however, the deterministic model suffered from lack of convergence and high computational time, making the optimization strategy based on the statistical model, which proved robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
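The second strategy, an algebraic response-surface model optimized by sequential quadratic programming, is easy to sketch with SciPy's SLSQP solver. The quadratic productivity and conversion models below are hypothetical stand-ins with invented coefficients, not the fitted models from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical quadratic response-surface (statistical) models; the two
# decision variables are a generic dilution rate d and recycle ratio r.
def productivity(x):
    d, r = x
    return 10.0 + 8.0 * d + 3.0 * r - 6.0 * d**2 - 1.5 * r**2 - 1.0 * d * r

def conversion(x):
    d, r = x
    return 0.99 - 0.30 * d**2 + 0.05 * r

target_conversion = 0.95

res = minimize(
    lambda x: -productivity(x),                  # maximize productivity
    x0=np.array([0.3, 0.5]),
    method="SLSQP",                              # sequential quadratic programming
    bounds=[(0.05, 1.0), (0.0, 1.0)],
    constraints=[{"type": "ineq",                # conversion(x) >= target
                  "fun": lambda x: conversion(x) - target_conversion}],
)
print("optimum:", res.x.round(3))
print("productivity:", round(-res.fun, 3),
      "conversion:", round(conversion(res.x), 4))
```

With these coefficients the conversion constraint is active at the optimum, which is exactly the situation where an SQP solver earns its keep.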
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
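A stripped-down version of the metamodel-plus-sequential-improvement loop can be written with an off-the-shelf Gaussian process: fit a metamodel over the design variable and a noise variable, evaluate a robustness measure (here, mean plus two standard deviations of the predicted response under the noise distribution), and spend new simulations near the current robust optimum. The simulator, kernel, and infill rule below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Stand-in "expensive FE simulation": response depends on a design variable x
# and a noise variable z (e.g., material scatter). Purely illustrative.
def simulate(x, z):
    return np.sin(3 * x) + 0.6 * (x - 0.5) ** 2 + 0.3 * z * np.cos(5 * x)

def robust_objective(gp, x, n_mc=400):
    """Mean + 2*sigma of the metamodel response under noise z ~ N(0, 1)."""
    z = rng.normal(size=n_mc)
    y = gp.predict(np.column_stack([np.full(n_mc, x), z]))
    return y.mean() + 2.0 * y.std()

X = rng.uniform([0.0, -2.0], [1.0, 2.0], size=(12, 2))  # initial DOE over (x, z)
y = simulate(X[:, 0], X[:, 1])

grid = np.linspace(0.0, 1.0, 101)
for it in range(8):
    gp = GaussianProcessRegressor(kernel=RBF([0.2, 1.0]), alpha=1e-8).fit(X, y)
    f = np.array([robust_objective(gp, x) for x in grid])
    x_best = grid[np.argmin(f)]
    # Sequential improvement: spend the next simulation near the current
    # robust optimum, zooming the metamodel into the region of interest.
    x_new = float(np.clip(x_best + rng.normal(0.0, 0.05), 0.0, 1.0))
    z_new = rng.normal()
    X = np.vstack([X, [x_new, z_new]])
    y = np.append(y, simulate(x_new, z_new))

print(f"robust design after sequential infill: x* = {x_best:.3f}")
```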
Optimality of affine control system of several species in competition on a sequential batch reactor
NASA Astrophysics Data System (ADS)
Rodríguez, J. C.; Ramírez, H.; Gajardo, P.; Rapaport, A.
2014-09-01
In this paper, we analyse the optimality of an affine control system of several species in competition for a single substrate on a sequential batch reactor, with the objective being to reach a given (low) level of the substrate. We allow controls to be bounded measurable functions of time plus possible impulses. A suitable modification of the dynamics leads to a slightly different optimal control problem, without impulsive controls, for which we apply different optimality conditions derived from the Pontryagin principle and the Hamilton-Jacobi-Bellman equation. We thus characterise the singular trajectories of our problem as the extremal trajectories keeping the substrate at a constant level. We also establish conditions under which an immediate one impulse (IOI) strategy is optimal. Some numerical experiments are then included to illustrate our study and to show that those conditions are also necessary to ensure the optimality of the IOI strategy.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
Sewsynker-Sukai, Yeshona; Gueguim Kana, E B
2017-11-01
This study presents a sequential sodium phosphate dodecahydrate (Na3PO4·12H2O) and zinc chloride (ZnCl2) pretreatment to enhance delignification and enzymatic saccharification of corn cobs. The effects of process parameters of Na3PO4·12H2O concentration (5-15%), ZnCl2 concentration (1-5%) and solid to liquid ratio (5-15%) on reducing sugar yield from corn cobs were investigated. The sequential pretreatment model was developed and optimized with a high coefficient of determination value (0.94). A maximum reducing sugar yield of 1.10±0.01 g/g was obtained with 14.02% Na3PO4·12H2O, 3.65% ZnCl2 and 5% solid to liquid ratio. Scanning electron microscopy (SEM) and Fourier transform infrared (FTIR) analysis showed major lignocellulosic structural changes after the optimized sequential pretreatment, with 63.61% delignification. In addition, a 10-fold increase in the sugar yield was observed compared to previous reports on the same substrate. This sequential pretreatment strategy was efficient for enhancing enzymatic saccharification of corn cobs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Analyzing multicomponent receptive fields from neural responses to natural stimuli
Rowekamp, Ryan; Sharpee, Tatyana O
2011-01-01
The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916
Friston, Karl J.; Dolan, Raymond J.
2017-01-01
Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task we show that subjects’ choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning and correlated with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates. PMID:28486504
NASA Astrophysics Data System (ADS)
Palanikumar, L.; Jeena, M. T.; Kim, Kibeom; Yong Oh, Jun; Kim, Chaekyu; Park, Myoung-Hwan; Ryu, Ja-Hyoung
2017-04-01
Combination chemotherapy has become the primary strategy against cancer multidrug resistance; however, accomplishing optimal pharmacokinetic delivery of multiple drugs is still challenging. Herein, we report a sequential combination drug delivery strategy exploiting a pH-triggerable and redox switch to release cargos from hollow silica nanoparticles in a spatiotemporal manner. This versatile system further enables a large loading efficiency for both hydrophobic and hydrophilic drugs inside the nanoparticles, followed by self-crosslinking with disulfide and diisopropylamine-functionalized polymers. In acidic tumour environments, the positive charge generated by the protonation of the diisopropylamine moiety facilitated the cellular uptake of the particles. Upon internalization, the acidic endosomal pH condition and intracellular glutathione regulated the sequential release of the drugs in a time-dependent manner, providing a promising therapeutic approach to overcoming drug resistance during cancer treatment.
Sequential and parallel image restoration: neural network implementations.
Figueiredo, M T; Leitao, J N
1994-01-01
Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
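The convex objective and the two updating schedules are easy to demonstrate in one dimension. The sketch below is a 1-D analogue (the paper treats images and maps the iterations onto graded Hopfield networks): it minimizes the MAP/regularization objective by a sequential, one-unit-at-a-time sweep and by a parallel, all-units-at-once iteration, and checks both against the direct solution.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D analogue of MAP restoration for linear blur + white Gaussian noise:
# minimize J(x) = 0.5*||y - H x||^2 + 0.5*lam*||D x||^2,
# with H a convolution (blur) matrix and D a first-difference operator.
n = 128
truth = np.zeros(n)
truth[40:80] = 1.0                                   # piecewise-constant signal
kernel = np.array([0.25, 0.5, 0.25])
H = sum(np.eye(n, k=k) * w for k, w in zip((-1, 0, 1), kernel))
D = np.eye(n) - np.eye(n, k=1)
y = H @ truth + rng.normal(0, 0.02, n)

lam = 0.5
A = H.T @ H + lam * D.T @ D                          # J convex: A is SPD
b = H.T @ y

# Sequential (Gauss-Seidel-like, one unit at a time) schedule.
x_seq = np.zeros(n)
for sweep in range(200):
    for i in range(n):
        x_seq[i] += (b[i] - A[i] @ x_seq) / A[i, i]

# Parallel (all units at once, gradient-descent/Jacobi-like) schedule.
x_par = np.zeros(n)
step = 1.0 / np.linalg.eigvalsh(A).max()
for it in range(2000):
    x_par = x_par + step * (b - A @ x_par)

x_direct = np.linalg.solve(A, b)
print("sequential error:", round(np.linalg.norm(x_seq - x_direct), 8))
print("parallel error:  ", round(np.linalg.norm(x_par - x_direct), 8))
```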
Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir
Montaner, Luis J.
2017-01-01
Abstract Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. PMID:28520969
A behavioural and neural evaluation of prospective decision-making under risk
Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.
2010-01-01
Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single choice contexts there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal pre-determined strategy, irrespective of the particular order in which options are presented. An alternative model involves continuously re-evaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and use these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of re-evaluating decision utilities, where available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously-acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes. PMID:20980595
Analysis of Optimal Sequential State Discrimination for Linearly Independent Pure Quantum States.
Namkung, Min; Kwon, Younghun
2018-04-25
Recently, J. A. Bergou et al. proposed sequential state discrimination as a new quantum state discrimination scheme. In the scheme, by the successful sequential discrimination of a qubit state, receivers Bob and Charlie can share the information of the qubit prepared by a sender Alice. A merit of the scheme is that a quantum channel is established between Bob and Charlie, but a classical communication is not allowed. In this report, we present a method for extending the original sequential state discrimination of two qubit states to a scheme of N linearly independent pure quantum states. Specifically, we obtain the conditions for the sequential state discrimination of N = 3 pure quantum states. We can analytically provide conditions when there is a special symmetry among N = 3 linearly independent pure quantum states. Additionally, we show that the scenario proposed in this study can be applied to quantum key distribution. Furthermore, we show that the sequential state discrimination of three qutrit states performs better than the strategy of probabilistic quantum cloning.
Auctions with Dynamic Populations: Efficiency and Revenue Maximization
NASA Astrophysics Data System (ADS)
Said, Maher
We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.
Fitzpatrick, Tiffany; Bauch, Chris T
2011-09-28
The potential benefits of coordinating infectious disease eradication programs that use campaigns such as supplementary immunization activities (SIAs) should not be over-looked. One example of a coordinated approach is an adaptive "sequential strategy": first, all annual SIA budget is dedicated to the eradication of a single infectious disease; once that disease is eradicated, the annual SIA budget is re-focussed on eradicating a second disease, etc. Herd immunity suggests that a sequential strategy may eradicate several infectious diseases faster than a non-adaptive "simultaneous strategy" of dividing annual budget equally among eradication programs for those diseases. However, mathematical modeling is required to understand the potential extent of this effect. Our objective was to illustrate how budget allocation strategies can interact with the nonlinear nature of disease transmission to determine time to eradication of several infectious diseases under different budget allocation strategies. Using a mathematical transmission model, we analyzed three hypothetical vaccine-preventable infectious diseases in three different countries. A central decision-maker can distribute funding among SIA programs for these three diseases according to either a sequential strategy or a simultaneous strategy. We explored the time to eradication under these two strategies under a range of scenarios. For a certain range of annual budgets, all three diseases can be eradicated relatively quickly under the sequential strategy, whereas eradication never occurs under the simultaneous strategy. However, moderate changes to total SIA budget, SIA frequency, order of eradication, or funding disruptions can create disproportionately large differences in the time and budget required for eradication under the sequential strategy. We find that the predicted time to eradication can be very sensitive to small differences in the rate of case importation between the countries. We also find that the time to eradication of all three diseases is not necessarily lowest when the least transmissible disease is targeted first. Relatively modest differences in budget allocation strategies in the near-term can result in surprisingly large long-term differences in time required to eradicate, as a result of the amplifying effects of herd immunity and the nonlinearities of disease transmission. More sophisticated versions of such models may be useful to large international donors or other organizations as a planning or portfolio optimization tool, where choices must be made regarding how much funding to dedicate to different infectious disease eradication efforts.
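A toy model is enough to reproduce the qualitative effect. In the sketch below, the dynamics, budget, coverage cost, and eradication threshold are all invented for illustration (the paper uses a full transmission model): three diseases with different basic reproduction numbers receive SIA coverage under either an adaptive sequential allocation or a non-adaptive equal split, and eradication is declared once prevalence falls below a threshold while the effective reproduction number is under one.

```python
import numpy as np

R0 = np.array([4.0, 6.0, 12.0])           # basic reproduction numbers (invented)
coverage_per_budget = 0.008               # SIA coverage bought per budget unit
threshold = 1e-4                          # prevalence counted as eradicated

def run(strategy, budget=60.0, years=80):
    s = np.full(3, 0.6)                   # susceptible fraction per disease
    i = np.full(3, 1e-2)                  # infectious prevalence per disease
    eradicated = np.zeros(3, dtype=bool)
    done_at = np.full(3, np.nan)
    for t in range(years):
        if eradicated.all():
            break
        if strategy == "sequential":      # all budget to first remaining disease
            focal = np.where(~eradicated)[0][0]
            shares = np.zeros(3)
            shares[focal] = budget
        else:                             # non-adaptive equal split
            shares = np.full(3, budget / 3.0)
        v = coverage_per_budget * shares
        s = s * (1 - v) + 0.02 * (1 - s)  # vaccination + susceptible recruitment
        r_eff = R0 * s
        i = np.clip(i * r_eff + 1e-6, 0.0, 1.0)   # growth + case importation
        newly = (~eradicated) & (i < threshold) & (r_eff < 1)
        done_at[newly] = t
        eradicated |= newly
        i[eradicated] = 0.0               # treat eradication as absorbing
    return done_at

for strat in ("sequential", "simultaneous"):
    print(f"{strat:>12}: eradication year per disease = {run(strat)}")
```

With these invented numbers the sequential strategy eradicates all three diseases while the equal split never eradicates the most transmissible one, echoing the herd-immunity amplification described above.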
Cloning strategy for producing brush-forming protein-based polymers.
Henderson, Douglas B; Davis, Richey M; Ducker, William A; Van Cott, Kevin E
2005-01-01
Brush-forming polymers are being used in a variety of applications, and by using recombinant DNA technology, there exists the potential to produce protein-based polymers that incorporate unique structures and functions in these brush layers. Despite this potential, production of protein-based brush-forming polymers is not routinely performed. For the design and production of new protein-based polymers with optimal brush-forming properties, it would be desirable to have a cloning strategy that allows an iterative approach wherein the protein-based polymer product can be produced and evaluated, and then if necessary, it can be sequentially modified in a controlled manner to obtain optimal surface density and brush extension. In this work, we report on the development of a cloning strategy intended for the production of protein-based brush-forming polymers. This strategy is based on the assembly of modules of DNA that encode for blocks of protein-based polymers into a commercially available expression vector; there is no need for custom-modified vectors and no need for intermediate cloning vectors. Additionally, because the design of new protein-based biopolymers can be an iterative process, our method enables sequential modification of a protein-based polymer product. With at least 21 bacterial expression vectors and 11 yeast expression vectors compatible with this strategy, there are a number of options available for production of protein-based polymers. It is our intent that this strategy will aid in advancing the production of protein-based brush-forming polymers.
Generalized bipartite quantum state discrimination problems with sequential measurements
NASA Astrophysics Data System (ADS)
Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki
2018-02-01
We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.
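For the unrestricted global-measurement problem that this work parallels, the primal optimization is a small semidefinite program whose optimality conditions can be verified numerically. Here is a sketch using cvxpy (assuming an installation with an SDP-capable solver; the states and priors are random illustrations):

```python
import numpy as np
import cvxpy as cp

# Primal SDP for minimum-error (Bayes) discrimination with a global
# measurement: maximize sum_i p_i Tr(M_i rho_i) over POVMs {M_i}.
def random_density(dim, rng):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(7)
dim, n_states = 4, 3                     # e.g., two qubits, three hypotheses
priors = np.array([0.5, 0.3, 0.2])
rhos = [random_density(dim, rng) for _ in range(n_states)]

M = [cp.Variable((dim, dim), hermitian=True) for _ in range(n_states)]
constraints = [m >> 0 for m in M]        # positive semidefinite POVM elements
constraints.append(sum(M) == np.eye(dim))
objective = cp.Maximize(cp.real(sum(p * cp.trace(m @ r)
                                    for p, m, r in zip(priors, M, rhos))))
prob = cp.Problem(objective, constraints)
prob.solve()
print(f"optimal global success probability: {prob.value:.4f}")

# Optimality (Holevo/Helstrom-type) check: with Y = sum_i p_i rho_i M_i,
# Y - p_i rho_i >= 0 must hold for all i at the optimum (up to tolerance).
Y = sum(p * r @ m.value for p, r, m in zip(priors, rhos, M))
Y = (Y + Y.conj().T) / 2
worst = min(np.linalg.eigvalsh(Y - p * r).min() for p, r in zip(priors, rhos))
print(f"min eigenvalue of Y - p_i rho_i over i: {worst:.2e}")
```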
Data analytics and optimization of an ice-based energy storage system for commercial buildings
Luo, Na; Hong, Tianzhen; Li, Hui; ...
2017-07-25
Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.
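The scheduling core of such a study can be illustrated as a small linear program: choose hourly chiller output so the storage charges off-peak and discharges on-peak. The load profile, time-of-use tariff, and equipment limits below are invented, and the paper's richer formulation (solved with sequential quadratic programming on predicted loads) is reduced to an LP for clarity:

```python
import numpy as np
from scipy.optimize import linprog

hours = np.arange(24)
load = 800 + 600 * np.exp(-((hours - 15) ** 2) / 18)        # cooling load, kW
price = np.where((hours >= 12) & (hours < 20), 0.30, 0.10)  # $/kWh, on-peak noon-8pm
cop, chiller_cap, tes_cap, tes_rate = 4.0, 1600.0, 6000.0, 1000.0

# One variable per hour: chiller thermal output x_t (kW). TES discharge is
# load_t - x_t (negative = charging). Storage level after hour k is
# s_k = sum_{t<=k} (x_t - load_t), starting empty; require 0 <= s_k <= tes_cap.
cum = np.tril(np.ones((24, 24)))
cum_load = cum @ load
A_ub = np.vstack([cum, -cum])
b_ub = np.concatenate([tes_cap + cum_load, -cum_load])
bounds = [(max(0.0, L - tes_rate), min(chiller_cap, L + tes_rate)) for L in load]

# Electricity cost per hour is price_t * x_t / COP, so the objective is linear.
res = linprog(price / cop, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x = res.x
print("daily electricity cost: $%.2f" % (price @ (x / cop)))
print("on-peak chiller output (kW):", x[12:20].round(0))
print("on-peak TES discharge (kW):", (load[12:20] - x[12:20]).round(0))
```

The optimizer pre-cools during cheap hours up to the storage and rate limits, the same charge/discharge logic the data-driven strategy schedules day by day.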
Sub-problem Optimization With Regression and Neural Network Approximators
NASA Technical Reports Server (NTRS)
Guptill, James D.; Hopkins, Dale A.; Patnaik, Surya N.
2003-01-01
Design optimization of large systems can be attempted through a sub-problem strategy. In this strategy, the original problem is divided into a number of smaller problems that are clustered together to obtain a sequence of sub-problems. Solution to the large problem is attempted iteratively through repeated solutions to the modest sub-problems. This strategy is applicable to structures and to multidisciplinary systems. For structures, clustering the substructures generates the sequence of sub-problems. For a multidisciplinary system, individual disciplines, accounting for coupling, can be considered as sub-problems. A sub-problem, if required, can be further broken down to accommodate sub-disciplines. The sub-problem strategy is being implemented into the NASA design optimization test bed, referred to as "CometBoards." Neural network and regression approximators are employed for reanalysis and sensitivity analysis calculations at the sub-problem level. The strategy has been implemented in sequential as well as parallel computational environments. This strategy, which attempts to alleviate algorithmic and reanalysis deficiencies, has the potential to become a powerful design tool. However, several issues have to be addressed before its full potential can be harnessed. This paper illustrates the strategy and addresses some issues.
Optimal Sequential Rules for Computer-Based Instruction.
ERIC Educational Resources Information Center
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
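The pre-sampling-and-simulation idea can be sketched directly: generate clumped counts with a spatial trend from pilot-like parameters, then compare candidate sampling plans by the spread of their sample means over many simulated draws. The grid size, negative binomial parameters, and trend below are invented stand-ins for real pre-sampling data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Clumped negative binomial counts with a coarse spatial trend standing in
# for within-site autocorrelation. All field parameters are invented.
nrow, ncol = 40, 60                      # grid of trees in a stand
k, mean_density = 0.8, 3.0               # clumping parameter and mean galls/tree
trend = 1.0 + 0.5 * np.sin(np.linspace(0, 3 * np.pi, ncol))[None, :]
mu = mean_density * trend
counts = rng.negative_binomial(k, k / (k + mu))  # NB with mean mu, clumping k
true_mean = counts.mean()

def random_plan(n):
    idx = rng.choice(nrow * ncol, size=n, replace=False)
    return counts.ravel()[idx].mean()

def transect_plan(n):
    row = rng.integers(nrow)             # belt transect along the long axis
    start = rng.integers(ncol - n)
    return counts[row, start:start + n].mean()

reps = 2000
for n in (10, 25, 40):
    err_r = np.std([random_plan(n) - true_mean for _ in range(reps)])
    err_t = np.std([transect_plan(n) - true_mean for _ in range(reps)])
    print(f"n={n:>2}: SD of sampling error  random={err_r:.3f}  "
          f"transect={err_t:.3f}")
```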
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. Holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M
2017-10-24
To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim area was calculated independently. Outcome parameters included BMO-globally optimized minimum rim area (BMO-gMRA) and sector-wise optimized BMO-minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm², mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); mean difference was 0.04 mm² (P < 0.001). In all sectors, parameters differed by 3.0-4.2%. In receiver operating characteristics, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. Optimization strategies to calculate BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
Games With Estimation of Non-Damage Objectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
1998-09-14
Games against nature illustrate the role of non-damage objectives in producing conflict with uncertain rewards, and the role of probing and estimation in reducing that uncertainty and restoring optimal strategies. Such models can also illustrate how these processes generate and deepen crises, and the optimal strategies that might be used to end them. This note discusses two essential elements of the analysis of crisis stability that are omitted from current treatments based on first-strike stability: a non-damage objective that motivates conflicts sufficiently serious to lead to crises, and the process of sequential interactions that could cause those conflicts to deepen. The model used is a game against nature, simplified sufficiently to make the role of each of these elements obvious.
Hinault, Thomas; Lemaire, Patrick; Phillips, Natalie
2016-01-01
This study investigated age-related differences in electrophysiological signatures of sequential modulations of poorer strategy effects. Sequential modulations of poorer strategy effects refer to decreased poorer strategy effects (i.e., poorer performance when the cued strategy is not the best) on current problem following poorer strategy problems compared to after better strategy problems. Analyses on electrophysiological (EEG) data revealed important age-related changes in time, frequency, and coherence of brain activities underlying sequential modulations of poorer strategy effects. More specifically, sequential modulations of poorer strategy effects were associated with earlier and later time windows (i.e., between 200 and 550 ms and between 850 and 1250 ms). Event-related potentials (ERPs) also revealed an earlier onset in older adults, together with more anterior and less lateralized activations. Furthermore, sequential modulations of poorer strategy effects were associated with theta and alpha frequencies in young adults while these modulations were found in delta frequency and theta inter-hemispheric coherence in older adults, consistent with qualitatively distinct patterns of brain activity. These findings have important implications to further our understanding of age-related differences and similarities in sequential modulations of cognitive control processes during arithmetic strategy execution. Copyright © 2015 Elsevier B.V. All rights reserved.
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Korn, Christoph W; Bach, Dominik R
2018-01-23
Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
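The gap between a myopic heuristic and the normatively optimal policy is easy to reproduce with backward induction on a small survival task. The five-decision block below loosely mirrors the task structure described above, but the options, payoffs, and energy dynamics are invented for illustration:

```python
import numpy as np
from functools import lru_cache

# Five-decision "virtual foraging" block: the reward is the final energy
# level, forfeited entirely if energy ever hits zero (starvation).
T, E_MAX = 5, 12
options = {                      # option -> list of (probability, energy change)
    "safe":  [(1.0, +1)],
    "risky": [(0.5, +6), (0.5, -3)],   # higher expected gain, starvation risk
}

@lru_cache(maxsize=None)
def value(t, e):
    """Expected final reward acting optimally from trial t with energy e."""
    if e <= 0:
        return 0.0               # starved: block forfeited
    if t == T:
        return float(e)
    return max(sum(p * value(t + 1, min(e + de, E_MAX)) for p, de in outs)
               for outs in options.values())

def optimal_policy(t, e):
    return max(options, key=lambda o: sum(p * value(t + 1, min(e + de, E_MAX))
                                          for p, de in options[o]))

def heuristic_policy(t, e):
    # Easy-to-compute heuristic: maximize expected immediate energy gain.
    return max(options, key=lambda o: sum(p * de for p, de in options[o]))

def mean_reward(policy, e0=3, reps=20000, seed=11):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(reps):
        e = e0
        for t in range(T):
            outs = options[policy(t, e)]
            j = rng.choice(len(outs), p=[p for p, _ in outs])
            e = min(e + outs[j][1], E_MAX)
            if e <= 0:
                break
        total += max(e, 0)
    return total / reps

print("DP value from (t=0, e=3):    ", round(value(0, 3), 3))
print("optimal policy mean reward:  ", round(mean_reward(optimal_policy), 3))
print("heuristic policy mean reward:", round(mean_reward(heuristic_policy), 3))
```

The heuristic always picks the higher expected gain and starves often; the optimal policy plays safe when energy is low and takes risks only with a buffer, the kind of state-dependent switching the fMRI analysis probes.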
Constrained optimization of sequentially generated entangled multiqubit states
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique
2009-08-01
We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.
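The compression step that makes bounded-ancilla sequential generation possible is ordinary MPS truncation. The sketch below shows only that step (not the paper's constrained local optimization under restricted ancilla-qubit interactions): a random six-qubit state is swept into matrix-product form keeping at most D singular values per cut, where D plays the role of the number of ancilla levels:

```python
import numpy as np

rng = np.random.default_rng(4)

n, D = 6, 2                               # qubits and bond dimension (assumed)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

def mps_truncate(psi, n, D):
    """Left-to-right sweep keeping at most D singular values per cut."""
    tensors, mat = [], psi.reshape(1, -1)
    for site in range(n - 1):
        mat = mat.reshape(mat.shape[0] * 2, -1)
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        keep = min(D, len(s))
        tensors.append(u[:, :keep].reshape(-1, 2, keep))
        mat = s[:keep, None] * vh[:keep]
    tensors.append(mat.reshape(-1, 2, 1))
    return tensors

def mps_to_vector(tensors):
    vec = tensors[0]                      # shape (1, 2, D1)
    for t in tensors[1:]:
        vec = np.tensordot(vec, t, axes=([-1], [0]))
    return vec.reshape(-1)

approx = mps_to_vector(mps_truncate(psi, n, D))
approx /= np.linalg.norm(approx)
print(f"fidelity with bond dimension {D}: {abs(np.vdot(psi, approx))**2:.4f}")
```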
Irredundant Sequential Machines Via Optimal Logic Synthesis
1989-10-01
Srinivas Devadas, Hi-Keung Tony Ma, A. Richard Newton, and Alberto Sangiovanni-Vincentelli, Department of Electrical Engineering. Supported in part under contract N00014-87-K-0825 and by a grant from AT&T Bell Laboratories. (Only report-documentation-page fragments were recovered; no abstract is available.)
A Game-Theoretic Approach to Information-Flow Control via Protocol Composition
NASA Astrophysics Data System (ADS)
Alvim, Mário; Chatzikokolakis, Konstantinos; Kawamoto, Yusuke; Palamidessi, Catuscia
2018-05-01
In the inference attacks studied in Quantitative Information Flow (QIF), the attacker typically tries to interfere with the system in the attempt to increase its leakage of secret information. The defender, on the other hand, typically tries to decrease leakage by introducing some controlled noise. This noise introduction can be modeled as a type of protocol composition, i.e., a probabilistic choice among different protocols, and its effect on the amount of leakage depends heavily on whether or not this choice is visible to the attacker. In this work, we consider operators for modeling visible and hidden choice in protocol composition, and we study their algebraic properties. We then formalize the interplay between defender and attacker in a game-theoretic framework adapted to the specific issues of QIF, where the payoff is information leakage. We consider various kinds of leakage games, depending on whether players act simultaneously or sequentially, and on whether or not the choices of the defender are visible to the attacker. In the case of sequential games, the choice of the second player is generally a function of the choice of the first player, and his/her probabilistic choice can be either over the possible functions (mixed strategy) or it can be on the result of the function (behavioral strategy). We show that when the attacker moves first in a sequential game with a hidden choice, then behavioral strategies are more advantageous for the defender than mixed strategies. This contrasts with the standard game theory, where the two types of strategies are equivalent. Finally, we establish a hierarchy of these games in terms of their information leakage and provide methods for finding optimal strategies (at the points of equilibrium) for both attacker and defender in the various cases.
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on selecting a subset from a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e., transposable to most models with climatic data inputs) and can be combined with most “off-the-shelf” optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies. PMID:28542198
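A minimal sketch of the subset-selection step, assuming k-means on coarse summary features with one medoid kept per cluster; the synthetic series, features, and library calls are stand-ins for the paper's ad-hoc similarity function.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Toy stand-in for a large basis of annual climatic series
# (rows = years, columns = daily values).
series = rng.normal(size=(200, 365)).cumsum(axis=1)

# Summary features per series; the paper uses an ad-hoc similarity,
# here coarse statistics serve as an illustration.
feats = np.column_stack([series.mean(axis=1),
                         series.std(axis=1),
                         series[:, -1] - series[:, 0]])

k = 10  # size of the representative subset
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)

# Keep the actual series closest to each centroid (one medoid per
# cluster), plus cluster weights for reweighting model outputs.
subset, weights = [], []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
    subset.append(members[d.argmin()])
    weights.append(len(members) / len(series))
print(subset, weights)
```

Each optimization iteration then runs the simulator only on the selected series and reweights the outputs by cluster size, instead of averaging over the full basis.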
Einstein, Andrew J.; Wolff, Steven D.; Manheimer, Eric D.; Thompson, James; Terry, Sylvia; Uretsky, Seth; Pilip, Adalbert; Peters, M. Robert
2009-01-01
Radiation dose from coronary computed tomography angiography may be reduced using a sequential scanning protocol rather than a conventional helical scanning protocol. Here we compare radiation dose and image quality from coronary computed tomography angiography in a single center between an initial period during which helical scanning with electrocardiographically-controlled tube current modulation was used for all patients (n=138) and after adoption of a strategy incorporating sequential scanning whenever appropriate (n=261). Using the sequential-if-appropriate strategy, sequential scanning was employed in 86.2% of patients. Compared to the helical-only strategy, this strategy was associated with a 65.1% dose reduction (mean dose-length product of 305.2 vs. 875.1 mGy·cm and mean effective dose of 5.2 mSv vs. 14.9 mSv, respectively), with no significant change in overall image quality, step artifacts, motion artifacts, or perceived image noise. For the 225 patients undergoing sequential scanning, the dose-length product was 201.9 ± 90.0 mGy·cm, while for patients undergoing helical scanning under either strategy, the dose-length product was 890.9 ± 293.3 mGy·cm (p<0.0001), corresponding to mean effective doses of 3.4 mSv and 15.1 mSv, respectively, a 77.5% reduction. Image quality was significantly greater for the sequential studies, reflecting the poorer image quality in patients undergoing helical scanning in the sequential-if-appropriate strategy. In conclusion, a sequential-if-appropriate diagnostic strategy reduces dose markedly compared to a helical-only strategy, with no significant difference in image quality. PMID:19892048
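The effective doses quoted above are consistent with the standard chest conversion E ≈ k·DLP with k ≈ 0.017 mSv/(mGy·cm); a one-line sketch (the exact conversion factor used by the center is an assumption):

```python
def effective_dose_msv(dlp_mgy_cm, k=0.017):
    """Chest CT effective dose (mSv) from dose-length product,
    using the conventional conversion factor k in mSv/(mGy*cm)."""
    return k * dlp_mgy_cm

print(effective_dose_msv(875.1))  # ~14.9 mSv (helical-only mean)
print(effective_dose_msv(305.2))  # ~5.2 mSv (sequential-if-appropriate)
```

The same factor reproduces the protocol-level figures (201.9 mGy·cm gives about 3.4 mSv; 890.9 mGy·cm gives about 15.1 mSv).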
Melioration as rational choice: sequential decision making in uncertain environments.
Sims, Chris R; Neth, Hansjörg; Jacobs, Robert A; Gray, Wayne D
2013-01-01
Melioration, defined as choosing a lesser local gain over a greater longer-term gain, is a behavioral tendency that people and pigeons share. As such, the empirical occurrence of meliorating behavior has frequently been interpreted as evidence that the mechanisms of human choice violate the norms of economic rationality. In some environments, the relationship between actions and outcomes is known. In this case, the rationality of choice behavior can be evaluated in terms of how successfully it maximizes utility given knowledge of the environmental contingencies. In most complex environments, however, the relationship between actions and future outcomes is uncertain and must be learned from experience. When the difficulty of this learning challenge is taken into account, it is not evident that melioration represents suboptimal choice behavior. In the present article, we examine human performance in a sequential decision-making experiment that is known to induce meliorating behavior. In keeping with previous results using this paradigm, we find that the majority of participants in the experiment fail to adopt the optimal decision strategy and instead demonstrate a significant bias toward melioration. To explore the origins of this behavior, we develop a rational analysis (Anderson, 1990) of the learning problem facing individuals in uncertain decision environments. Our analysis demonstrates that an unbiased learner would adopt melioration as the optimal response strategy for maximizing long-term gain. We suggest that many documented cases of melioration can be reinterpreted not as irrational choice but rather as globally optimal choice under uncertainty.
de Oliveira, Fabio Santos; Korn, Mauro
2006-01-15
A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. The method was based on the reaction of sulphate with barium-dimethylsulphonazo(III), leading to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were burned beforehand to avoid matrix effects in the sulphate determination. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. The optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure permits the determination of up to 10.0 mg SO(4)(2-) l(-1) with R.S.D. <2.5% and a limit of detection of 0.27 mg l(-1). The method has been successfully applied to sulphate determination in automotive fuel alcohol, and the results agreed with the reference volumetric method. Under the optimized conditions the SIA system processed 27 samples per hour.
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment history to improve performance, hence enjoying advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
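A toy single-stage sketch of the classification perspective, assuming simulated data and generic scikit-learn estimators rather than the authors' implementation: a regression step estimates the treatment contrast, and the decision rule is then fit as a weighted classification problem whose weights are the magnitude of the contrast (the regret of treating that patient wrongly).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=(n, 2))                  # patient covariates
a = rng.integers(0, 2, size=n)               # randomized treatment
contrast = 2.0 * x[:, 0]                     # true E[Y|A=1] - E[Y|A=0]
y = x[:, 1] + a * contrast + rng.normal(size=n)

# Outcome-regression step: estimate the contrast function.
design = np.column_stack([x, a, a * x[:, 0], a * x[:, 1]])
q = LinearRegression().fit(design, y)

def q_pred(a_val):
    d = np.column_stack([x, np.full(n, a_val),
                         a_val * x[:, 0], a_val * x[:, 1]])
    return q.predict(d)

est_contrast = q_pred(1) - q_pred(0)

# Classification step: labels = sign of the contrast, weights = its
# magnitude, so the weighted misclassification error equals the regret.
labels = (est_contrast > 0).astype(int)
rule = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    x, labels, sample_weight=np.abs(est_contrast))
print("agreement with true rule:",
      (rule.predict(x) == (x[:, 0] > 0)).mean())
```

In the multi-stage setting the same step is applied backward from the last decision point, with the implied optimal outcome propagated to the preceding stage.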
Biggio, Joseph R; Morris, T Christopher; Owen, John; Stringer, Jeffery S A
2004-03-01
This study was undertaken to examine the cost-effectiveness and procedure-related losses associated with 5 prenatal screening strategies for fetal aneuploidy in women under 35 years old. Five prenatal screening strategies were compared in a decision analysis model: triple screen: maternal age and midtrimester serum alpha-fetoprotein, human chorionic gonadotropin (hCG), and unconjugated estriol; quad screen: triple screen plus serum dimeric inhibin A; first-trimester screen: maternal age, serum pregnancy-associated plasma protein A and free beta-hCG and fetal nuchal translucency at 10 to 14 weeks' gestation; integrated screen: first-trimester screen plus quad screen, but first-trimester results are withheld until the quad screen is completed, when a composite result is provided; sequential screen: first-trimester screen plus quad screen, but the first-trimester screen results are provided immediately and prenatal diagnosis offered if positive; later prenatal diagnosis is available if the quad screen is positive. Model estimates were literature-derived, and cost estimates also included local sources. The 5 strategies were compared for cost, the numbers of Down syndrome fetuses detected and live births averted, and the number of procedure-related euploid losses. Sensitivity analyses were performed for parameters with imprecise point estimates. In the baseline analysis, sequential screening was the least expensive strategy ($455 million). It detected the most Down syndrome fetuses (n=1213), averted the most Down syndrome live births (n=678), but led to the highest number of procedure-related euploid losses (n=859). The integrated screen had the fewest euploid losses (n=62) and averted the second most Down syndrome live births (n=520). If fewer than 70% of women diagnosed with fetal Down syndrome elect to abort, the quad screen became the least expensive strategy. Although sequential screening was the most cost-effective prenatal screening strategy for fetal trisomy 21, it had the highest procedure-related euploid loss rate. The patient's perspective on detection versus fetal safety may help define the optimal screening strategy.
Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations
NASA Astrophysics Data System (ADS)
Unekis, Michael J.; Rice, Betsy M.
1994-12-01
We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end workstation such as the IBM RS/6000 RISC series will outperform single-processor performance of the Cray X-MP/48 and perform competitively with single-processor performance of the Y-MP8/128 and C-90/16256.
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
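A minimal sketch of the information-theoretic core on a toy single-fault universe; the diagnosis matrix and greedy entropy criterion below are illustrative, while the paper's algorithms handle multiple faults via Lagrangian relaxation.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Toy diagnosis matrix: tests[t, f] = 1 if test t fails under fault f.
tests = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]])
prior = np.full(4, 0.25)       # posterior over single-fault hypotheses

posterior, remaining = prior.copy(), list(range(len(tests)))
true_fault = 2
while len(np.flatnonzero(posterior > 1e-12)) > 1 and remaining:
    def expected_H(t):
        """Expected posterior entropy after observing test t."""
        h = 0.0
        for outcome in (0, 1):
            mask = (tests[t] == outcome)
            p_out = posterior[mask].sum()
            if p_out > 0:
                post = np.where(mask, posterior, 0) / p_out
                h += p_out * entropy(post)
        return h
    t = min(remaining, key=expected_H)   # maximum information gain
    remaining.remove(t)
    outcome = tests[t, true_fault]       # run the chosen test
    mask = (tests[t] == outcome)
    posterior = np.where(mask, posterior, 0)
    posterior /= posterior.sum()
    print("ran test", t, "-> posterior", posterior.round(3))
```

Each iteration picks the test whose outcome is expected to shrink the hypothesis entropy the most, which is the greedy backbone that the paper's static and dynamic strategies refine.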
Sequential quantum cloning under real-life conditions
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Mardoukhi, Yousof
2012-05-01
We consider a sequential implementation of the optimal quantum cloning machine of Gisin and Massar and propose optimization protocols for experimental realization of such a quantum cloner subject to the real-life restrictions. We demonstrate how exploiting the matrix-product state (MPS) formalism and the ensuing variational optimization techniques reveals the intriguing algebraic structure of the Gisin-Massar output of the cloning procedure and brings about significant improvements to the optimality of the sequential cloning prescription of Delgado [Phys. Rev. Lett. 98, 150502 (2007)]. Our numerical results show that the orthodox paradigm of optimal quantum cloning can in practice be realized in a much more economical manner by utilizing a considerably lesser amount of informational and numerical resources than hitherto estimated. Instead of the previously predicted linear scaling of the required ancilla dimension D with the number of qubits n, our recipe allows a realization of such a sequential cloning setup with an experimentally manageable ancilla of dimension at most D=3 up to n=15 qubits. We also address satisfactorily the possibility of providing an optimal range of sequential ancilla-qubit interactions for optimal cloning of arbitrary states under realistic experimental circumstances when only a restricted class of such bipartite interactions can be engineered in practice.
Overview of technical trend of optical fiber/cable and research and development strategy of Samsung
NASA Astrophysics Data System (ADS)
Kim, Jin H.
2005-01-01
Fiber-to-the-Premise (FTTP), a keyword in the current fiber and cable industry, is driving research and development activities in a variety of directions. In fact, this industry momentum still appears weak, since the bandwidth demanded by the market remains unbalanced against installed capacity in several market segments. However, the recent gradual recovery in metro and access networks is a positive sign for FTTP deployment projects. It is therefore preferable to optimize R&D strategy to fit the current market trend of sequential investment.
Sequential lineups: shift in criterion or decision strategy?
Gronlund, Scott D
2004-04-01
R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.
Manheimer, Eric D.; Peters, M. Robert; Wolff, Steven D.; Qureshi, Mehreen A.; Atluri, Prashanth; Pearson, Gregory D.N.; Einstein, Andrew J.
2011-01-01
Triple-rule-out computed tomography angiography (TRO CTA), performed to evaluate the coronary arteries, pulmonary arteries, and thoracic aorta, has been associated with high radiation exposure. Utilization of sequential scanning for coronary computed tomography angiography (CCTA) reduces radiation dose. The application of sequential scanning to TRO CTA is much less well defined. We analyzed radiation dose and image quality from TRO CTA performed in a single outpatient center, comparing scans from a period during which helical scanning with electrocardiographically controlled tube current modulation was used for all patients (n=35) and after adoption of a strategy incorporating sequential scanning whenever appropriate (n=35). Sequential scanning was able to be employed in 86% of cases. The sequential-if-appropriate strategy, compared to the helical-only strategy, was associated with a 61.6% dose decrease (mean dose-length product [DLP] of 439 mGy×cm vs 1144 mGy×cm and mean effective dose of 7.5 mSv vs 19.4 mSv, respectively, p<0.0001). Similarly, there was a 71.5% dose reduction among 30 patients scanned with the sequential protocol compared to 40 patients scanned with the helical protocol under either strategy (326 mGy×cm vs 1141 mGy×cm and 5.5 mSv vs 19.4 mSv, respectively, p<0.0001). Although image quality did not differ between strategies, there was a non-statistically significant trend towards better quality in the sequential protocol compared to the helical protocol. In conclusion, approaching TRO CTA with a diagnostic strategy of sequential scanning as appropriate offers a marked reduction in radiation dose while maintaining image quality. PMID:21306693
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2003-01-01
This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
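A minimal sketch of fitting a single second-order RS model by least squares, with a toy closed-form function standing in for the expensive buckling analysis; the sample size and design space are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def buckling_load(x):
    """Toy stand-in for the expensive axial-buckling analysis."""
    return (5.0 + 2.0 * x[:, 0] - 1.5 * x[:, 1]
            + 0.8 * x[:, 0] * x[:, 1]
            - 0.6 * x[:, 0] ** 2 + 0.3 * x[:, 1] ** 2)

# Sample the design space and build the full quadratic basis
# 1, x1, x2, x1*x2, x1^2, x2^2 (a second-order RS model).
X = rng.uniform(-1, 1, size=(30, 2))
basis = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coeffs, *_ = np.linalg.lstsq(basis, buckling_load(X), rcond=None)
print(coeffs.round(3))  # recovers the underlying quadratic
```

The global strategy fits one such surrogate over the whole feasible design space, while the sequential local strategy refits first-order versions of it on small subregions as the optimizer moves.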
Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J
2015-01-27
Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take a sample from the bioreactor to be analyzed off-line at the laboratory. The disadvantage of this approach is the long time required to recover the information from the process, hindering the possibility to develop control systems. New strategies to monitor lipase/esterase activity are necessary. In this context, as a first approach, we proposed a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure the hydrolytic activity from a sample without dilution in a linear range from 0.05-1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, a kinetic reaction time of 5 min and a relative standard deviation of 8.75%. The results are promising for monitoring lipase/esterase activity in real time, from which optimization and control strategies can be designed.
A Sequential Perspective on Searching for Static Targets
2011-01-01
One possible measure of performance is the amount of slack between the error tolerance and the observed error rate; the proposed procedure keeps this rate below the tolerance and dominates the two alternate procedures considered.
A near-optimal guidance for cooperative docking maneuvers
NASA Astrophysics Data System (ADS)
Ciarcià, Marco; Grompone, Alessio; Romano, Marcello
2014-09-01
In this work we study the problem of minimum-energy docking maneuvers between two Floating Spacecraft Simulators. The maneuvers are planar and conducted autonomously in a cooperative mode. The proposed guidance strategy is based on the direct method known as Inverse Dynamics in the Virtual Domain and the nonlinear programming solver known as the Sequential Gradient-Restoration Algorithm. The combination of these methods allows for the quick prototyping of near-optimal trajectories and results in an implementable tool for real-time closed-loop maneuvering. The experimental results included in this paper were obtained by exploiting the recently upgraded Floating Spacecraft-Simulator Testbed of the Spacecraft Robotics Laboratory at the Naval Postgraduate School. A direct performance comparison, in terms of maneuver energy and propellant mass, between the proposed guidance strategy and an LQR controller demonstrates the effectiveness of the method.
Identifying protein complexes in PPI network using non-cooperative sequential game.
Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta
2017-08-21
Identifying protein complexes from protein-protein interaction (PPI) networks is an important and challenging task in computational biology, as it helps in better understanding cellular mechanisms in various organisms. In this paper we propose a non-cooperative sequential game-based model for protein complex detection from PPI networks. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed network property known as small world. The proposed multi-player game model translates the hypothesis into game strategies. The Nash equilibrium of the game corresponds to a network partition in which each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as biological significance of the network partitions.
Strategies to induce broadly protective antibody responses to viral glycoproteins.
Krammer, F
2017-05-01
Currently, several universal/broadly protective influenza virus vaccine candidates are under development. Many of these vaccines are based on strategies to induce protective antibody responses against the surface glycoproteins of antigenically and genetically diverse influenza viruses. These strategies might also be applicable to surface glycoproteins of a broad range of other important viral pathogens. Areas covered: Common strategies include sequential vaccination with divergent antigens, multivalent approaches, vaccination with glycan-modified antigens, vaccination with minimal antigens and vaccination with antigens that have centralized/optimized sequences. Here we review these strategies and the underlying concepts. Furthermore, challenges, feasibility and applicability to other viral pathogens are discussed. Expert commentary: Several broadly protective/universal influenza virus vaccine strategies will be tested in humans in the coming years. If successful in terms of safety and immunological readouts, they will move forward into efficacy trials. In the meantime, successful vaccine strategies might also be applied to other antigenically diverse viruses of concern.
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
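A minimal backward-induction sketch of the stochastic dynamic program, under assumed toy dynamics (uniform male quality, condition that degrades stochastically during search, and a multiplicative payoff); it recovers the quality-threshold structure of the optimal policy described above.

```python
import numpy as np

T, n_cond = 20, 5                  # search periods, condition levels
qualities = np.linspace(0, 1, 11)  # male quality grid (uniform encounters)
search_cost = 0.02                 # fitness cost per period of search

def offspring(cond, q):            # payoff if mating now (toy form)
    return (cond / (n_cond - 1)) * q

# V[t, c] = expected payoff with t periods left and condition c,
# evaluated before seeing the current male. Backward induction:
V = np.zeros((T + 1, n_cond))
threshold = np.zeros((T, n_cond))
for t in range(1, T + 1):
    for c in range(n_cond):
        # Condition drops one level with probability 0.5 if she searches.
        c_next = max(c - 1, 0)
        cont = 0.5 * V[t - 1, c] + 0.5 * V[t - 1, c_next] - search_cost
        accept = offspring(c, qualities)
        V[t, c] = np.mean(np.maximum(accept, cont))
        # Optimal rule: mate iff male quality exceeds this threshold.
        above = qualities[accept > cont]
        threshold[t - 1, c] = above.min() if len(above) else np.inf
print(threshold[-1])  # thresholds at the start of the season
```

The computed thresholds fall as the season progresses and as condition deteriorates, which is the quality-threshold policy the model predicts.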
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
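A toy sketch of the kind of analysis the framework supports, on a two-action tree with one uncertain probability; the payoffs and probability interval are illustrative assumptions, not the paper's examples.

```python
import numpy as np

# Tiny decision tree: one action, then a chance node; the true
# success probability p is only known to lie in an interval.
payoffs = {"risky": (100.0, -40.0),   # (success, failure)
           "safe":  (30.0, 30.0)}
p_nominal, p_low, p_high = 0.6, 0.45, 0.75

def ev(action, p):
    win, lose = payoffs[action]
    return p * win + (1 - p) * lose

best = max(payoffs, key=lambda a: ev(a, p_nominal))
print("EV-maximizing strategy at nominal p:", best)

# Stability check: does the choice survive every p in the interval?
for p in np.linspace(p_low, p_high, 7):
    rival = max(payoffs, key=lambda a: ev(a, p))
    print(f"p={p:.2f}: best={rival}  EV={ev(rival, p):+.1f}")
```

Here the expected-value-maximizing strategy flips from "risky" to "safe" within the interval, so it is not robust to pessimistic perturbations of p, exactly the kind of finding the proposed sensitivity analysis is meant to surface.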
Cache Locality Optimization for Recursive Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lifflander, Jonathan; Krishnamoorthy, Sriram
We present an approach to optimize the cache locality for recursive programs by dynamically splicing (recursively interleaving) the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.
Noninvasive, automatic optimization strategy in cardiac resynchronization therapy.
Reumann, Matthias; Osswald, Brigitte; Doessel, Olaf
2007-07-01
Optimization of cardiac resynchronization therapy (CRT) is still unsolved. It has been shown that optimal electrode position, atrioventricular (AV) and interventricular (VV) delays improve the success of CRT and reduce the number of non-responders. However, no automatic, noninvasive optimization strategy exists to date. Cardiac resynchronization therapy was simulated on the Visible Man and a patient data-set including fiber orientation and ventricular heterogeneity. A cellular automaton was used for fast computation of ventricular excitation. An AV block and a left bundle branch block were simulated with 100%, 80% and 60% interventricular conduction velocity. A right apical and 12 left ventricular lead positions were set. Sequential optimization and optimization with the downhill simplex algorithm (DSA) were carried out. The minimal error between isochrones of the physiologic excitation and the therapy was computed automatically and leads to an optimal lead position and timing. Up to 1512 simulations were carried out per pathology per patient. One simulation took 4 minutes on an Apple Macintosh 2 GHz PowerPC G5. For each electrode pair an optimal pacemaker delay was found. The DSA reduced the number of simulations by an order of magnitude, and the AV-delay and VV-delay were determined with a much higher resolution. The findings compare well with clinical studies. The presented computer model of CRT automatically evaluates an optimal lead position, AV-delay and VV-delay, which can be used to noninvasively plan an optimal therapy for an individual patient. The application of the DSA reduces the simulation time so that the strategy is suitable for pre-operative planning in clinical routine. Future work will focus on clinical evaluation of the computer models and integration of patient data for individualized therapy planning and optimization.
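A minimal sketch of the DSA step, assuming scipy's Nelder-Mead implementation and a toy quadratic surrogate in place of the cellular-automaton error evaluation; the assumed optimum delays are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def isochrone_error(delays):
    """Surrogate for the simulated mismatch between paced and
    physiologic activation isochrones (toy quadratic bowl with an
    assumed optimum at AV = 120 ms, VV = -20 ms)."""
    av, vv = delays
    return (av - 120.0) ** 2 / 400.0 + (vv + 20.0) ** 2 / 100.0

# The downhill simplex needs only function values, so each evaluation
# can be one forward run of the excitation model.
res = minimize(isochrone_error, x0=np.array([160.0, 0.0]),
               method="Nelder-Mead",
               options={"xatol": 1.0, "fatol": 1e-3})
print(res.x, res.nfev)  # optimal delays and number of simulations used
```

Because the simplex search adapts its step sizes, it reaches a given delay resolution with far fewer forward simulations than an exhaustive sequential sweep, which is the order-of-magnitude saving reported above.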
A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows
NASA Astrophysics Data System (ADS)
Meldi, M.; Poux, A.
2017-10-01
A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which can be quantified as a 10%-15% increase with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these data assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further, using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
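A scalar sketch of the sequential predict/update cycle and the role of the model covariance Q, under assumed linear toy dynamics; the CFD-specific reduction strategies are not represented here.

```python
import numpy as np

rng = np.random.default_rng(4)
a, q_true, r = 0.95, 0.05, 0.1        # dynamics, process & obs noise vars
x_true, x_est, p_est = 1.0, 0.0, 1.0  # truth, estimate, error covariance
Q = 0.05                              # model covariance (tuning parameter)

for k in range(50):
    # Truth evolves; an observation is available only every 5th step,
    # mimicking data limited to a region/time of the domain.
    x_true = a * x_true + rng.normal(0, np.sqrt(q_true))
    # Predict: propagate the estimate and inflate covariance by Q.
    x_est, p_est = a * x_est, a * a * p_est + Q
    if k % 5 == 0:
        y = x_true + rng.normal(0, np.sqrt(r))
        K = p_est / (p_est + r)       # Kalman gain
        x_est += K * (y - x_est)      # augment the state with the data
        p_est *= (1 - K)
print("final error:", abs(x_true - x_est))
```

The choice of Q controls how strongly the filter trusts the model between observations, which mirrors the paper's finding that optimized Q values track the discretization truncation error.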
Scanning laser ophthalmoscopy: optimized testing strategies for psychophysics
NASA Astrophysics Data System (ADS)
Van de Velde, Frans J.
1996-12-01
Retinal function can be evaluated with the scanning laser ophthalmoscope (SLO). The main advantage is a precise localization of the psychophysical stimulus on the retina. Four-alternative forced choice (4AFC) and parameter estimation by sequential testing (PEST) are classic adaptive algorithms that have been optimized for use with the SLO and combined with strategies to correct for small eye movements. Efficient calibration procedures are essential for quantitative microperimetry. These techniques precisely measure visual acuity and retinal sensitivity at distinct locations on the retina. A combined 632 nm and IR Maxwellian view illumination provides maximal transmittance through the ocular media and has minimal interference with xanthophyll or hemoglobin. Future modifications of the instrument include the possibility of binocular evaluation, Maxwellian view control, fundus tracking using normalized gray-scale correlation, and microphotocoagulation. The techniques are useful in low vision rehabilitation and the application of laser to the retina.
Hinault, Thomas; Lemaire, Patrick; Touron, Dayna
2017-02-01
In this study, we asked young adults and older adults to encode pairs of words. For each item, they were told which strategy to use, interactive imagery or rote repetition. Data revealed poorer-strategy effects in both young adults and older adults: Participants obtained better performance when executing better strategies (i.e., interactive-imagery strategy to encode pairs of concrete words; rote-repetition strategy on pairs of abstract words) than with poorer strategies (i.e., interactive-imagery strategy on pairs of abstract words; rote-repetition strategy on pairs of concrete words). Crucially, we showed that sequential modulations of poorer-strategy effects (i.e., poorer-strategy effects being larger when previous items were encoded with better relative to poorer strategies), previously demonstrated in arithmetic, generalise to memory strategies. We also found reduced sequential modulations of poorer-strategy effects in older adults relative to young adults. Finally, sequential modulations of poorer-strategy effects correlated with measures of cognitive control processes, suggesting that these processes underlie efficient trial-to-trial modulations during strategy execution. Differences in correlations with cognitive control processes were also found between older adults and young adults. These findings have important implications regarding mechanisms underlying memory strategy execution and age differences in memory performance.
Sequential Injection Analysis for Optimization of Molecular Biology Reactions
Allen, Peter B.; Ellington, Andrew D.
2011-01-01
In order to automate the optimization of complex biochemical and molecular biology reactions, we developed a Sequential Injection Analysis (SIA) device and combined this with a Design of Experiment (DOE) algorithm. This combination of hardware and software automatically explores the parameter space of the reaction and provides continuous feedback for optimizing reaction conditions. As an example, we optimized the endonuclease digest of a fluorogenic substrate, and showed that the optimized reaction conditions also applied to the digest of the substrate outside of the device, and to the digest of a plasmid. The sequential technique quickly arrived at optimized reaction conditions with less reagent use than a batch process (such as a fluid handling robot exploring multiple reaction conditions in parallel) would have. The device and method should now be amenable to much more complex molecular biology reactions whose variable spaces are correspondingly larger. PMID:21338059
Equilibria, prudent compromises, and the "waiting" game.
Sim, Kwang Mong
2005-08-01
While many e-negotiation agents are evaluated through empirical studies, this work supplements and complements the existing literature by analyzing the problem of designing market-driven agents (MDAs) in terms of equilibrium points and stable strategies. MDAs are negotiation agents designed to make prudent compromises taking into account factors such as time preference, outside option, and rivalry. This work shows that 1) in a given market situation, an MDA negotiates optimally because it makes minimally sufficient concessions, and 2) by modeling negotiation of MDAs as a game gamma of incomplete information, it is shown that the strategies adopted by MDAs are stable. In a bilateral negotiation, it is proven that the strategy pair of two MDAs forms a sequential equilibrium for gamma. In a multilateral negotiation, it is shown that the strategy profile of MDAs forms a market equilibrium for gamma.
Pre-configured polyhedron based protection against multi-link failures in optical mesh networks.
Huang, Shanguo; Guo, Bingli; Li, Xin; Zhang, Jie; Zhao, Yongli; Gu, Wanyi
2014-02-10
This paper focuses on protection against random multi-link failures in optical mesh networks, instead of the single, dual, or sequential failures of previous studies. Spare resource efficiency and failure robustness are major concerns in designing a link protection strategy, and a k-regular, k-edge-connected structure is proved to be one of the optimal solutions for a link protection network. Based on this, a novel pre-configured polyhedron based protection structure is proposed; it can provide protection for both simultaneous and sequential random link failures with improved spare resource efficiency. Its performance is evaluated in terms of spare resource consumption, recovery rate and average recovery path length, as well as compared with ring based and subgraph protection under probabilistic link failure scenarios. Results show the proposed link protection approach performs better than previous works.
Eisa, Mohamed; El-Refai, Heba; Amin, Magdy
2016-09-01
A new potent Pseudomonas aeruginosa isolate capable of biotransformation of corn oil phytosterol (PS) to 4-androstene-3,17-dione (AD), testosterone (T) and boldenone (BOL) was identified by phenotypic analysis and 16S rRNA gene sequencing. A sequential statistical strategy was used to optimize the biotransformation process, mainly concerning BOL, using factorial design and response surface methodology (RSM). The production of BOL in a single-step microbial biotransformation from corn oil phytosterols by P. aeruginosa has not been previously reported. Results showed that the pH of the medium, (NH4)2SO4 and KH2PO4 were the most significant factors affecting BOL production. By analyzing the statistical model of the three-dimensional surface plot, BOL production increased from 36.8% to 42.4% after the first step of optimization, and the overall biotransformation increased to 51.9%. After applying the second step of the sequential statistical strategy, BOL production increased to 53.6%, and the overall biotransformation increased to 91.9% using the following optimized medium composition (g/l distilled water): (NH4)2SO4, 2; KH2PO4, 4; Na2HPO4, 1; MgSO4·7H2O, 0.3; NaCl, 0.1; CaCl2·2H2O, 0.1; FeSO4·7H2O, 0.001; ammonium acetate, 0.001; Tween 80, 0.05%; corn oil, 0.5%; 8-hydroxyquinoline, 0.016; pH 8; 200 rpm agitation speed and incubation time 36 h at 30 °C. Validation experiments proved the adequacy and accuracy of the model, and the results showed the predicted values agreed well with the experimental values.
Optimal Therapy Scheduling Based on a Pair of Collaterally Sensitive Drugs.
Yoon, Nara; Vander Velde, Robert; Marusyk, Andriy; Scott, Jacob G
2018-05-07
Despite major strides in the treatment of cancer, the development of drug resistance remains a major hurdle. One strategy which has been proposed to address this is the sequential application of drug therapies where resistance to one drug induces sensitivity to another drug, a concept called collateral sensitivity. The optimal timing of drug switching in these situations, however, remains unknown. To study this, we developed a dynamical model of sequential therapy on heterogeneous tumors comprised of resistant and sensitive cells. A pair of drugs (DrugA, DrugB) are utilized and are periodically switched during therapy. Assuming resistant cells to one drug are collaterally sensitive to the opposing drug, we classified cancer cells into two groups, R_A and R_B, each of which is a subpopulation of cells resistant to the indicated drug and concurrently sensitive to the other, and we subsequently explored the resulting population dynamics. Specifically, based on a system of ordinary differential equations for R_A and R_B, we determined that the optimal treatment strategy consists of two stages: an initial stage in which a chosen effective drug is utilized until a specific time point, T, and a second stage in which drugs are switched repeatedly, each being used for a fixed relative duration. We prove that the optimal duration of the initial stage, in which the first drug is administered, T, is shorter than the period in which it remains effective in decreasing the total population, contrary to current clinical intuition. We further analyzed the relationship between the population makeup, quantified by the ratio R_A/R_B, and the effect of each drug. We determine a critical value of this ratio at which the two drugs are equally effective. As the first stage of the optimal strategy is applied, the ratio changes monotonically toward the critical value and then, during the second stage, remains there thereafter. Beyond our analytic results, we explored an individual-based stochastic model and presented the distribution of extinction times for the classes of solutions found. Taken together, our results suggest opportunities to improve therapy scheduling in clinical oncology.
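A rough Euler-integration sketch of the two-stage schedule on an assumed symmetric growth/kill model; all rates, durations, and initial populations are illustrative, not the paper's parameters.

```python
# Illustrative simulation of the two-population model: under each
# drug the resistant subpopulation grows and the collaterally
# sensitive one is killed. All numbers below are assumptions.
g, k, dt = 0.03, 0.06, 0.01            # growth, kill rate (1/h), step (h)
T, t_a, t_b, t_end = 50.0, 24.0, 24.0, 400.0

def step(Ra, Rb, drug):
    if drug == "A":                    # R_A resistant, R_B sensitive
        dRa, dRb = g * Ra, -k * Rb
    else:                              # under DrugB the roles swap
        dRa, dRb = -k * Ra, g * Rb
    return Ra + dt * dRa, Rb + dt * dRb

Ra, Rb, t = 1.0, 100.0, 0.0            # tumor initially sensitive to DrugA
while t < T:                           # stage 1: single effective drug
    Ra, Rb = step(Ra, Rb, "A")
    t += dt
while t < t_end:                       # stage 2: periodic switching
    drug = "A" if (t - T) % (t_a + t_b) < t_a else "B"
    Ra, Rb = step(Ra, Rb, drug)
    t += dt
print(f"total burden {Ra + Rb:.3g}, makeup R_A/R_B = {Ra / Rb:.2f}")
```

In this toy run the first stage drives the population ratio toward balance while DrugA is still shrinking the total burden, and the switching stage then keeps both subpopulations in net decline, mirroring the qualitative structure of the optimal schedule.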
Probability matching in risky choice: the interplay of feedback and strategy availability.
Newell, Ben R; Koehler, Derek J; James, Greta; Rakow, Tim; van Ravenzwaaij, Don
2013-04-01
Probability matching in sequential decision making is a striking violation of rational choice that has been observed in hundreds of experiments. Recent studies have demonstrated that matching persists even in described tasks in which all the information required for identifying a superior alternative strategy (maximizing) is present before the first choice is made. These studies have also indicated that maximizing increases when (1) the asymmetry in the availability of matching and maximizing strategies is reduced and (2) normatively irrelevant outcome feedback is provided. In the two experiments reported here, we examined the joint influences of these factors, revealing that strategy availability and outcome feedback operate on different time courses. Both behavioral and modeling results showed that while availability of the maximizing strategy increases the choice of maximizing early during the task, feedback appears to act more slowly to erode misconceptions about the task and to reinforce optimal responding. The results illuminate the interplay between "top-down" identification of choice strategies and "bottom-up" discovery of those strategies via feedback.
Optimization of the gypsum-based materials by the sequential simplex method
NASA Astrophysics Data System (ADS)
Doleželová, Magdalena; Vimmrová, Alena
2017-11-01
The application of the sequential simplex optimization method to the design of gypsum-based materials is described. The principles of the simplex method are explained, and several examples of its use for the optimization of lightweight gypsum and ternary gypsum-based materials are given. By this method, lightweight gypsum-based materials with the desired properties and a ternary gypsum-based material with higher strength (16 MPa) were successfully developed. The simplex method is a useful tool for optimizing gypsum-based materials, but the objective of the optimization has to be formulated appropriately.
GilPavas, Edison; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Ángel
2017-04-15
In this study, industrial textile wastewater was treated using a chemical-based technique (coagulation-flocculation, C-F) sequentially with an advanced oxidation process (AOP: Fenton or Photo-Fenton). During the C-F, Al2(SO4)3 was used as coagulant and its optimal dose was determined using the jar test. The following operational conditions of C-F, maximizing organic matter removal, were determined: 700 mg/L of Al2(SO4)3 at pH = 9.96. Thus, the C-F allowed the removal of 98% of turbidity and 48% of Chemical Oxygen Demand (COD), and increased the BOD5/COD ratio from 0.137 to 0.212. Subsequently, the C-F effluent was treated using each of the AOPs. Their performances were optimized by the Response Surface Methodology (RSM) coupled with a Box-Behnken experimental design (BBD). The following optimal conditions of both Fenton (Fe2+/H2O2) and Photo-Fenton (Fe2+/H2O2/UV) processes were found: Fe2+ concentration = 1 mM, H2O2 dose = 2 mL/L (19.6 mM), and pH = 3. The combination of C-F pre-treatment with the Fenton reagent, at optimized conditions, removed 74% of COD within 90 min of treatment. The C-F sequential with the Photo-Fenton process reached 87% of COD removal in the same time. Moreover, the BOD5/COD ratio increased from 0.212 to 0.68 and from 0.212 to 0.74 using the Fenton and Photo-Fenton processes, respectively. Thus, the enhancement of biodegradability with the physico-chemical treatment was proved. The depletion of H2O2 was monitored during the kinetic study. Strategies for improving the reaction efficiency, based on the H2O2 evolution, were also tested. Copyright © 2017 Elsevier Ltd. All rights reserved.
Auliac, J B; Chouaid, C; Greillier, L; Monnet, I; Le Caer, H; Falchero, L; Corre, R; Descourt, R; Bota, S; Berard, H; Schott, R; Bizieux, A; Fournel, P; Labrunie, A; Marin, B; Vergnenegre, A
2014-09-01
Concomitant administration of erlotinib with standard chemotherapy does not appear to improve survival among patients with non-small-cell lung cancer (NSCLC), but preliminary studies suggest that sequential administration might be effective. The aim was to assess the efficacy and tolerability of second-line sequential administration of erlotinib and docetaxel in advanced NSCLC. In an open-label phase II trial, patients with advanced NSCLC, EGFR wild-type or unknown, PS 0-2, in whom initial cisplatin-based chemotherapy had failed were randomized to sequential erlotinib 150 mg/d (days 2-16) plus docetaxel (75 mg/m(2), day 1) (arm ED) or docetaxel (75 mg/m(2), day 1) alone (arm D), in 21-day cycles. The primary endpoint was the progression-free survival rate at 15 weeks (PFS15). Secondary endpoints included PFS, overall survival (OS), the overall response rate (ORR) and tolerability. Based on a Simon optimal two-stage design, the ED strategy was to be rejected if fewer than 33 of 66 patients achieved the primary endpoint at the end of the two Simon stages. 147 patients were randomized (median age: 60±8 years; PS 0/1/2: 44/83/20 patients; males: 78%). The ED strategy was rejected, with only 18 of 73 patients achieving PFS15 in arm ED at the end of stage 2 and 17 of 74 patients in arm D. In arms ED and D, respectively, median PFS was 2.2 and 2.5 months and median OS was 6.5 and 8.3 months. Sequential erlotinib and docetaxel was not more effective than docetaxel alone as second-line treatment for advanced NSCLC with wild-type or unknown EGFR status. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Sequential Design Optimization with Concurrent Calibration-Based Model Validation
Drignei, Dorin; Mourelatos, Zissimos; Pandey, Vijitashwa
2013-08-01
Yoshiga, Yasuhiro; Shimizu, Akihiko; Ueyama, Takeshi; Ono, Makoto; Fukuda, Masakazu; Fumimoto, Tomoko; Ishiguchi, Hironori; Omuro, Takuya; Kobayashi, Shigeki; Yano, Masafumi
2018-08-01
An effective catheter ablation strategy beyond pulmonary vein isolation (PVI) is necessary for persistent atrial fibrillation (AF). Pulmonary vein (PV) reconduction also causes recurrent atrial tachyarrhythmias. The effect of PVI and the additional effect of superior vena cava (SVC) isolation (SVCI) were strictly evaluated. Seventy consecutive patients with persistent AF who underwent a strict sequential ablation strategy targeting the PVs and SVC were included in this study. The initial ablation strategy was a circumferential PVI. A segmental SVCI was only applied as a repeat procedure when patients demonstrated no PV reconduction. After the initial procedure, persistent AF was suppressed in 39 of 70 (55.7%) patients during a median follow-up of 32 months. After multiple procedures, persistent AF was suppressed in 46 (65.7%) and 52 (74.3%) patients after receiving the PVI-alone and PVI-plus-SVCI strategies, respectively. In 6 of 15 (40.0%) patients with persistent AF resistant to PVI, persistent AF was suppressed. The persistent AF duration independently predicted persistent AF recurrences after multiple PVI-alone procedures [HR: 1.012 (95% confidence interval: 1.006-1.018); p<0.001] and PVI-plus-SVCI strategies [HR: 1.018 (95% confidence interval: 1.011-1.025); p<0.001]. A receiver-operating-characteristic analysis for recurrent persistent AF indicated optimal cut-off values of 20 and 32 months for the persistent AF duration using the PVI-alone and PVI-plus-SVCI strategies, respectively. The outcomes of the PVI-plus-SVCI strategy were favorable for patients with shorter persistent AF durations. The initial SVCI had the additional effect of maintaining sinus rhythm in some patients with persistent AF resistant to PVI. Copyright © 2018 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
Saving lives: A meta-analysis of team training in healthcare.
Hughes, Ashley M; Gregory, Megan E; Joseph, Dana L; Sonesh, Shirley C; Marlow, Shannon L; Lacerenza, Christina N; Benishek, Lauren E; King, Heidi B; Salas, Eduardo
2016-09-01
As the nature of work becomes more complex, teams have become necessary to ensure effective functioning within organizations. The healthcare industry is no exception. As such, the prevalence of training interventions designed to optimize teamwork in this industry has increased substantially over the last 10 years (Weaver, Dy, & Rosen, 2014). Using Kirkpatrick's (1956, 1996) training evaluation framework, we conducted a meta-analytic examination of healthcare team training to quantify its effectiveness and understand the conditions under which it is most successful. Results demonstrate that healthcare team training improves each of Kirkpatrick's criteria (reactions, learning, transfer, results; d = .37 to .89). Second, findings indicate that healthcare team training is largely robust to trainee composition, training strategy, and characteristics of the work environment, with the only exception being the reduced effectiveness of team training programs that involve feedback. As a tertiary goal, we proposed and found empirical support for a sequential model of healthcare team training where team training affects results via learning, which leads to transfer, which increases results. We find support for this sequential model in the healthcare industry (i.e., the current meta-analysis) and in training across all industries (i.e., using meta-analytic estimates from Arthur, Bennett, Edens, & Bell, 2003), suggesting the sequential benefits of training are not unique to medical teams. Ultimately, this meta-analysis supports the expanded use of team training and points toward recommendations for optimizing its effectiveness within healthcare settings. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
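A minimal sketch of the nonintrusive PCE step, assuming a scalar standard-normal uncertainty and a toy model response: coefficients of probabilists' Hermite polynomials are fit by least-squares regression on model samples, and moments follow from orthogonality.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(5)

def model(xi):
    """Toy stand-in for an entry-trajectory quantity of interest
    driven by a standard-normal uncertain parameter xi."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Nonintrusive PCE: sample the input, evaluate the model, and fit the
# coefficients of probabilists' Hermite polynomials by regression.
order, n_samp = 6, 200
xi = rng.standard_normal(n_samp)
Phi = hermevander(xi, order)                 # basis matrix He_0..He_6
coeffs, *_ = np.linalg.lstsq(Phi, model(xi), rcond=None)

# Moments from orthogonality: E[He_n * He_m] = n! * delta_nm.
facts = np.array([factorial(n) for n in range(order + 1)])
mean = coeffs[0]
var = (coeffs[1:] ** 2 * facts[1:]).sum()
mc = model(rng.standard_normal(200000))      # Monte Carlo check
print(mean, mc.mean(), var, mc.var())
```

Because the surrogate is an inexpensive polynomial, the reliability assessment inside each RBSO cycle can reuse it instead of repeatedly integrating the entry dynamics.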
NASA Astrophysics Data System (ADS)
Wehner, William; Schuster, Eugenio; Poli, Francesca
2016-10-01
Initial progress towards the design of non-inductive current ramp-up scenarios in the National Spherical Torus Experiment Upgrade (NSTX-U) has been made through the use of TRANSP predictive simulations. The strategy involves, first, ramping the plasma current with high harmonic fast waves (HHFW) to about 400 kA, and then further ramping to 900 kA with neutral beam injection (NBI). However, the early ramping of neutral beams and application of HHFW leads to an undesirably peaked current profile making the plasma unstable to ballooning modes. We present an optimization-based control approach to improve on the non-inductive ramp-up strategy. We combine the TRANSP code with an optimization algorithm based on sequential quadratic programming to search for time evolutions of the NBI powers, the HHFW powers, and the line averaged density that define an open-loop actuator strategy that maximizes the non-inductive current while satisfying constraints associated with the current profile evolution for MHD stable plasmas. This technique has the potential of playing a critical role in achieving robustly stable non-inductive ramp-up, which will ultimately be necessary to demonstrate applicability of the spherical torus concept to larger devices without sufficient room for a central coil. Supported by the US DOE under the SCGSR Program.
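A minimal sketch of the SQP search, assuming scipy's SLSQP and toy surrogates in place of TRANSP: maximize the driven current over a few power-waveform knots subject to a peaking-factor (stability) constraint. All coefficients and limits are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Decision vector: NBI and HHFW power levels at 3 time knots (MW).
# Toy surrogates replace the TRANSP simulation: driven current rises
# with power, while the current-profile peaking grows with early NBI.
def noninductive_current(p):
    nbi, hhfw = p[:3], p[3:]
    return 0.4 * nbi.sum() + 0.15 * hhfw.sum()      # MA, toy model

def peaking(p):
    nbi = p[:3]
    return 1.0 + 0.3 * nbi[0] + 0.1 * nbi[1]        # early NBI peaks j(r)

bounds = [(0.0, 6.0)] * 3 + [(0.0, 4.0)] * 3
cons = [{"type": "ineq", "fun": lambda p: 2.5 - peaking(p)}]  # MHD limit
res = minimize(lambda p: -noninductive_current(p),
               x0=np.full(6, 1.0), method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x.round(2), -res.fun)  # optimal power-waveform knots, I_ni
```

In the actual workflow each objective and constraint evaluation would be a TRANSP predictive run, with the SQP iterations supplying the candidate open-loop actuator trajectories.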
Sequential causal inference: Application to randomized trials of adaptive treatment strategies
Dawson, Ree; Lavori, Philip W.
2009-01-01
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points among the adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714
Mate choice when males are in patches: optimal strategies and good rules of thumb.
Hutchinson, John M C; Halupka, Konrad
2004-11-07
In standard mate-choice models, females encounter males sequentially and decide whether to inspect the quality of another male or to accept a male already inspected. What changes when males are clumped in patches and there is a significant cost to travel between patches? We use stochastic dynamic programming to derive optimum strategies under various assumptions. With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.
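The backward-induction logic behind such threshold rules is easy to exhibit in a patch-free special case. The sketch below, a simplification of the models above rather than the paper's own patch model, assumes uniformly distributed qualities, a fixed inspection cost, and no returns, and computes the declining acceptance thresholds by dynamic programming.

```python
import numpy as np

c, N = 0.02, 10               # inspection cost and number of males (illustrative)
V = 0.0                       # V_0: value when no males remain to inspect
thresholds = []               # V_k = expected value with k males still uninspected
for k in range(1, N + 1):
    # For q ~ U(0,1): E[max(q, V)] = (1 + V^2)/2, so one more inspection is
    # worth (1 + V^2)/2 - c. Accept the current male iff his quality >= V_k.
    V = -c + 0.5 * (1.0 + V**2)
    thresholds.append(V)

# Thresholds as the search proceeds (10 males remaining down to 1):
print(np.round(thresholds[::-1], 3))
```

The threshold falls as options run out, the same qualitative shape as the paper's drop in acceptance threshold at the last male in a patch.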
NASA Technical Reports Server (NTRS)
Duong, T. A.
2004-01-01
In this paper, we present a new, simple, hardware-optimized sequential learning technique for adaptive Principal Component Analysis (PCA), which helps optimize the hardware implementation in VLSI and overcomes the difficulties of traditional gradient descent in learning convergence and hardware implementation.
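As a point of reference for what "sequential PCA learning" means in software, here is a sketch of Oja's rule, a classic one-sample-at-a-time PCA update whose built-in normalization sidesteps some gradient-descent convergence issues. This is the textbook rule, not Duong's specific hardware architecture; data and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D data with one dominant principal axis.
X = rng.standard_normal((5000, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])
X -= X.mean(axis=0)

w, lr = rng.standard_normal(2), 1e-3
for x in X:                       # strictly sequential: one sample per update
    y = w @ x                     # projection onto the current component estimate
    w += lr * y * (x - y * w)     # Oja's rule: Hebbian term plus decay keeps |w| ~ 1

true = np.linalg.eigh(np.cov(X.T))[1][:, -1]   # batch leading eigenvector
# The learned direction matches batch PCA up to sign.
print("learned:", np.round(w / np.linalg.norm(w), 3), " batch:", np.round(true, 3))
```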
Madani-Hosseini, Mahsa; Mulligan, Catherine N; Barrington, Suzelle
2016-06-01
In-Storage-Psychrophilic-Anaerobic-Digestion (ISPAD) is an ambient temperature treatment system for wastewaters stored for over 100 days under temperate climates, which produces a nitrogen-rich digestate susceptible to ammonia (NH3) volatilization. Present acidification techniques for reducing NH3 volatilization are not only expensive and burdened with secondary environmental effects, but also do not apply to ISPAD, which relies on batch-to-batch inoculation. The objectives of this study were to identify and validate sequential organic loading (OL) strategies producing imbalances in acidogen and methanogen growth, acidifying ISPAD content one week before emptying to a pH of 6, while also preserving the inoculation potential. This acidification process is challenging, as wastewaters often offer a high buffering capacity and ISPAD operational practices foster low microbial populations. A model simulating the ISPAD pH regime was used to optimize 3 different sequential OLs to decrease the ISPAD pH to 6.0. All 3 strategies were compared in terms of biogas production, volatile fatty acid (VFA) concentration, microbial activity, glucose consumption, and pH decrease. Laboratory validation of the model outputs confirmed that a sequential OL of 13 kg glucose/m³ of ISPAD content over 4 days could indeed reduce the pH to 6.0. Such OL competes feasibly with present acidification techniques. Nevertheless, more research is required to explain the 3-day lag between the model results and the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J
2012-03-08
Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Dutch Trial Register NTR2168.
Grodowska, Katarzyna; Parczewski, Andrzej
2013-01-01
The purpose of the present work was to find optimum conditions for headspace gas chromatography (HS-GC) determination of residual solvents which usually appear in pharmaceutical products. Two groups of solvents were taken into account in the present examination. Group I consisted of isopropanol, n-propanol, isobutanol, n-butanol and 1,4-dioxane, and group II included cyclohexane, n-hexane and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were considered in the optimization: sample volume, equilibration time, equilibrium temperature and NaCl concentration in a sample. The relative GC peak area served as the optimization criterion and was considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of optimization was traced and visualized in various ways simultaneously. The optimum HS conditions appeared different for the groups of solvents tested, which proves that the influence of experimental conditions (factors) depends on analyte properties. The optimization resulted in a significant signal increase (from seven to fifteen times).
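Variable-size simplex optimization is the strategy implemented by SciPy's Nelder-Mead method, which the sketch below uses on a toy stand-in for the peak-area response over the four HS factors. The response function and starting point are invented for illustration; a real application would evaluate the chromatograph instead.

```python
import numpy as np
from scipy.optimize import minimize

def neg_peak_area(factors):
    """Toy relative-peak-area response over the four HS factors
    (sample volume, equilibration time, temperature, NaCl conc.)."""
    vol, time, temp, nacl = factors
    area = (np.exp(-((vol - 2.0) ** 2)) * np.exp(-((time - 30.0) / 10.0) ** 2)
            * np.exp(-((temp - 80.0) / 15.0) ** 2) * np.exp(-((nacl - 1.0) ** 2)))
    return -area                         # maximize area = minimize its negative

res = minimize(neg_peak_area, x0=[1.0, 20.0, 70.0, 0.5],
               method="Nelder-Mead",     # variable-size simplex search
               options={"xatol": 1e-3, "fatol": 1e-6})
print("optimum factors:", np.round(res.x, 2), " peak area:", round(-res.fun, 4))
```

The simplex reflects, expands, and contracts through factor space using only response values, which is why the method suits experimental optimization where no gradients are available.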
Solving a four-destination traveling salesman problem using Escherichia coli cells as biocomputers.
Esau, Michael; Rozema, Mark; Zhang, Tuo Huang; Zeng, Dawson; Chiu, Stephanie; Kwan, Rachel; Moorhouse, Cadence; Murray, Cameron; Tseng, Nien-Tsu; Ridgway, Doug; Sauvageau, Dominic; Ellison, Michael
2014-12-19
The Traveling Salesman Problem involves finding the shortest possible route visiting all destinations on a map only once before returning to the point of origin. The present study demonstrates a strategy for solving Traveling Salesman Problems using modified E. coli cells as processors for massively parallel computing. Sequential, combinatorial DNA assembly was used to generate routes, in the form of plasmids made up of marker genes, each representing a path between destinations, and short connecting linkers, each representing a given destination. Upon growth of the population of modified E. coli, phenotypic selection was used to eliminate invalid routes, and statistical analysis was performed to successfully identify the optimal solution. The strategy was successfully employed to solve a four-destination test problem.
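For scale, a four-destination instance is small enough to verify the biological answer by exhaustive enumeration, which is what the statistical analysis is implicitly benchmarked against. Below is a minimal brute-force check with invented distances; with the origin fixed there are only 3! = 6 orderings (3 distinct up to direction).

```python
from itertools import permutations

# Symmetric distances between four destinations A..D (illustrative values).
D = {("A", "B"): 5, ("A", "C"): 9, ("A", "D"): 4,
     ("B", "C"): 2, ("B", "D"): 7, ("C", "D"): 3}
dist = lambda x, y: D.get((x, y)) or D[(y, x)]

def tour_length(order):
    route = ("A",) + order + ("A",)          # start and end at the origin
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

best = min(permutations(("B", "C", "D")), key=tour_length)
print("optimal route: A ->", " -> ".join(best), "-> A, length", tour_length(best))
```

The point of the bacterial approach is that the population explores all routes in parallel, whereas this enumeration is sequential and scales factorially.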
Rochau, Ursula; Kluibenschaedl, Martina; Stenehjem, David; Kuo, Kuan-Ling; Radich, Jerald; Oderda, Gary; Brixner, Diana; Siebert, Uwe
2015-01-01
Currently several tyrosine kinase inhibitors (TKIs) are approved for treatment of chronic myeloid leukemia (CML). Our goal was to identify the optimal sequential treatment strategy in terms of effectiveness and cost-effectiveness for CML patients within the US health care context. We evaluated 18 treatment strategies regarding survival, quality-adjusted survival, and costs. For model parameters, the literature data, expert surveys, registry data, and economic databases were used. Evaluated strategies included imatinib, dasatinib, nilotinib, bosutinib, ponatinib, stem-cell transplantation (SCT), and chemotherapy. We developed a Markov state-transition model, which was analyzed as a cohort simulation over a lifelong time horizon with a third-party payer perspective and discount rate of 3%. Remaining life expectancies ranged from 5.4 years (3.9 quality-adjusted life years (QALYs)) for chemotherapy treatment without TKI to 14.4 years (11.1 QALYs) for nilotinib→dasatinib→chemotherapy/SCT. In the economic evaluation, imatinib→chemotherapy/SCT resulted in an incremental cost-utility ratio (ICUR) of $171,700/QALY compared to chemotherapy without TKI. Imatinib→nilotinib→chemotherapy/SCT yielded an ICUR of $253,500/QALY compared to imatinib→chemotherapy/SCT. Nilotinib→dasatinib→chemotherapy/SCT yielded an ICUR of $445,100/QALY compared to imatinib→nilotinib→chemotherapy/SCT. All remaining strategies were excluded due to dominance of the clinically superior strategies. Based on our analysis and current treatment guidelines, imatinib→nilotinib→chemotherapy/SCT and nilotinib→dasatinib→chemotherapy/SCT can be considered cost-effective for patients with CML, depending on willingness-to-pay. PMID:26783469
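The Markov state-transition cohort model with discounting that underlies such analyses can be sketched compactly. The transition probabilities, utilities, and costs below are loud assumptions for illustration only, not the study's calibrated values; the structure (annual cycles, 3% discounting, QALY and cost accumulation) is the standard one.

```python
import numpy as np

# States: 0 = chronic phase on TKI, 1 = progression, 2 = death.
P = np.array([[0.92, 0.05, 0.03],       # illustrative annual transition
              [0.00, 0.75, 0.25],       # probabilities, NOT the study's values
              [0.00, 0.00, 1.00]])
utility = np.array([0.85, 0.50, 0.0])   # QALY weight per state-year (assumed)
cost = np.array([90_000, 120_000, 0])   # annual cost per state in USD (assumed)

state = np.array([1.0, 0.0, 0.0])       # whole cohort starts in chronic phase
qalys = costs = 0.0
for year in range(50):                  # lifelong horizon
    disc = 1.03 ** -year                # 3% annual discount rate
    qalys += disc * state @ utility
    costs += disc * state @ cost
    state = state @ P                   # advance the cohort one cycle
# An ICUR compares two strategies: (cost_A - cost_B) / (QALY_A - QALY_B).
print(f"discounted QALYs: {qalys:.2f}, discounted cost: ${costs:,.0f}")
```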
Feng, Xiuli; Zhang, Yan; Li, Tao; Li, Yu
2017-01-01
The combination of chemotherapy and epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) has been shown to be a potent treatment for tumors. However, survival time was not extended for patients with lung adenocarcinoma (AdC) compared with first-line chemotherapy. In the present study, we attempted to assess the optimal schedule of the combined administration of pemetrexed and icotinib/erlotinib in AdC cell lines. Human lung AdC cell lines with wild-type EGFR (A549), EGFR T790M (H1975) and an activating EGFR mutation (HCC827) were applied in vitro to assess the differential efficacy of various sequential regimens on cell viability, cell apoptosis and cell cycle distribution. The results suggested that the antiproliferative effect of the sequence of pemetrexed followed by icotinib/erlotinib was greater than that of icotinib/erlotinib followed by pemetrexed. Additionally, a reduction of the G1 phase and an increase in the S phase in the sequence of pemetrexed followed by icotinib/erlotinib were also observed, promoting cell apoptosis. Thus, the sequential administration of pemetrexed followed by icotinib/erlotinib exerted a synergistic effect on the HCC827 and H1975 cell lines compared with the reverse sequence. The sequential treatment of pemetrexed followed by icotinib/erlotinib has demonstrated promising results. This treatment strategy warrants further confirmation in patients with advanced lung AdC. PMID:29371987
Awad, Ghada E A; Amer, Hassan; El-Gammal, Eman W; Helmy, Wafaa A; Esawy, Mona A; Elnashar, Magdy M M
2013-04-02
A sequential optimization strategy, based on statistical experimental designs, was employed to enhance the production of invertase by Lactobacillus brevis Mm-6 isolated from breast milk. First, a 2-level Plackett-Burman design was applied to screen the bioprocess parameters that significantly influence invertase production. The second optimization step was performed using a fractional factorial design in order to optimize the levels of the variables with the highest positive significant effect on invertase production. A maximal enzyme activity of 1399 U/ml was more than fivefold the activity obtained using the basal medium. Invertase was immobilized onto grafted alginate beads to improve the enzyme's stability. The immobilization process increased the operational temperature from 30 to 60°C compared to the free enzyme. The reusability test proved the durability of the grafted alginate beads for 15 cycles with retention of 100% of the immobilized enzyme activity, making the preparation more convenient for industrial uses. Copyright © 2013 Elsevier Ltd. All rights reserved.
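Two-level screening designs of the Plackett-Burman type can be generated from Hadamard matrices. The sketch below uses the Sylvester construction to build an 8-run, 7-factor screening design and estimates main effects from stand-in responses; actual Plackett-Burman designs for run counts that are not powers of two (e.g., 12 runs) use cyclic generator rows instead. Responses here are random placeholders, not the fermentation data.

```python
import numpy as np

# Sylvester construction: H_{2n} = [[H, H], [H, -H]], yielding an 8-run,
# 7-factor two-level screening design (Plackett-Burman-type).
H = np.array([[1]])
while H.shape[0] < 8:
    H = np.block([[H, H], [H, -H]])

design = H[:, 1:]      # drop the all-ones column; rows = runs, cols = factors
print(design)

# Main effect of factor j = mean(response at +1) - mean(response at -1),
# which for a balanced design equals (column_j . y) / (n_runs / 2).
y = np.random.default_rng(2).normal(10, 1, size=8)   # stand-in responses
effects = design.T @ y / 4
print("estimated main effects:", np.round(effects, 2))
```

Factors whose effect estimates stand out from the noise are carried into the follow-up fractional factorial, mirroring the two-step strategy of the abstract.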
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
To develop a subject-specific classifier that recognizes mental states fast and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class, and we term it balanced threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% with an average decision time of 2.77 s, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves the classification accuracy and decision speed compared with other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
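For readers unfamiliar with the underlying test, here is a sketch of the classical two-threshold SPRT with Wald's thresholds on Gaussian class-conditional evidence. The BTSPRT contribution above is a different, balanced choice of the two thresholds; this sketch shows only the standard construction, with illustrative class means and error rates.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = beta = 0.05                     # target type-I / type-II error rates
A = np.log((1 - beta) / alpha)          # upper Wald threshold -> decide class 1
B = np.log(beta / (1 - alpha))          # lower Wald threshold -> decide class 0
mu0, mu1, sigma = -0.5, 0.5, 1.0        # assumed class-conditional evidence models

def sprt(true_mu, max_steps=200):
    llr = 0.0
    for t in range(1, max_steps + 1):
        x = rng.normal(true_mu, sigma)  # one new evidence sample per step
        # Gaussian log-likelihood-ratio increment for x under H1 vs. H0:
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= A:
            return "class 1", t
        if llr <= B:
            return "class 0", t
    return "undecided", max_steps

print(sprt(mu1), sprt(mu0))             # decision and stopping time for each class
```

Because Wald's thresholds fix the error rates but not the per-class stopping times, the two classes generally stop at different speeds; rebalancing that asymmetry is exactly what BTSPRT addresses.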
Verdes, Aida; Anand, Prachi; Gorson, Juliette; Jannetti, Stephen; Kelly, Patrick; Leffler, Abba; Simpson, Danny; Ramrattan, Girish; Holford, Mandë
2016-04-19
Animal venoms comprise a diversity of peptide toxins that manipulate molecular targets such as ion channels and receptors, making venom peptides attractive candidates for the development of therapeutics to benefit human health. However, identifying bioactive venom peptides remains a significant challenge. In this review we describe our particular venomics strategy for the discovery, characterization, and optimization of Terebridae venom peptides, teretoxins. Our strategy reflects the scientific path from mollusks to medicine in an integrative sequential approach with the following steps: (1) delimitation of venomous Terebridae lineages through taxonomic and phylogenetic analyses; (2) identification and classification of putative teretoxins through omics methodologies, including genomics, transcriptomics, and proteomics; (3) chemical and recombinant synthesis of promising peptide toxins; (4) structural characterization through experimental and computational methods; (5) determination of teretoxin bioactivity and molecular function through biological assays and computational modeling; (6) optimization of peptide toxin affinity and selectivity to molecular target; and (7) development of strategies for effective delivery of venom peptide therapeutics. While our research focuses on terebrids, the venomics approach outlined here can be applied to the discovery and characterization of peptide toxins from any venomous taxa.
Multiplexed Predictive Control of a Large Commercial Turbofan Engine
NASA Technical Reports Server (NTRS)
Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.
2008-01-01
Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
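The multiplexing idea, updating one actuator per control step instead of re-optimizing all of them, can be shown with a toy linear plant. In the sketch below the per-step problem is a scalar quadratic, so the "QP" has a closed-form clipped solution; the dynamics, weights, and limits are invented for illustration and are unrelated to the engine model in the paper.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])    # toy linear plant dynamics
B = np.array([[0.5, 0.0], [0.1, 0.4]])    # two actuators
u = np.zeros(2)
x = np.array([5.0, -3.0])                 # initial state deviation
r = 0.1                                   # control-effort weight

for step in range(20):
    i = step % 2                          # multiplexing: one actuator per step
    bi = B[:, i]
    # Minimize ||A x + B u||^2 + r u_i^2 over the single scalar u_i,
    # holding the other input fixed; then clip to actuator limits.
    resid = A @ x + B @ u - bi * u[i]     # predicted state without u_i's term
    u[i] = np.clip(-(bi @ resid) / (bi @ bi + r), -2.0, 2.0)
    x = A @ x + B @ u                     # apply the update and advance the plant

print("final state:", np.round(x, 4), " final inputs:", np.round(u, 3))
```

Each cycle touches a single decision variable, which is the source of the computational savings: the online optimization shrinks from a coupled QP over all actuators to a sequence of small ones.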
Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex Optimization
Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.
2008-01-01
The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and optimize new nanoparticles, experimental design was performed combining a Taguchi array and sequential simplex optimization. The combination of Taguchi array and sequential simplex optimization efficiently directed the design of paclitaxel nanoparticles. Two optimized paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78 and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes less than 200 nm and over 85% entrapment efficiency. These novel paclitaxel nanoparticles were stable at 4°C over three months and in PBS at 37°C over 102 hours as measured by physical stability. Release of paclitaxel was slow and sustained without initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have similar anticancer activities compared to Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder, comprising only PX BTM NPs in water, could be rapidly rehydrated with complete retention of the original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential simplex optimization has been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
Constraint Optimization Problem For The Cutting Of A Cobalt Chrome Refractory Material
NASA Astrophysics Data System (ADS)
Lebaal, Nadhir; Schlegel, Daniel; Folea, Milena
2011-05-01
This paper shows a complete approach to solving a given problem, from experimentation to the optimization of different cutting parameters. In response to an industrial problem of slotting FSX 414, a cobalt-based refractory material, we implemented a design of experiments to determine the most influential parameters on tool life, surface roughness and cutting forces. After these trials, an optimization approach was implemented to find the lowest manufacturing cost while respecting the roughness constraints and cutting force limitation constraints. The optimization approach is based on the Response Surface Method (RSM) using the Sequential Quadratic Programming (SQP) algorithm for a constrained problem. To avoid a local optimum and to obtain an accurate solution at low cost, an efficient strategy, which allows improving the RSM accuracy in the vicinity of the global optimum, is presented. With these models and these trials, we could apply and compare our optimization methods in order to get the lowest cost for the best quality, i.e., a satisfying surface roughness and limited cutting forces.
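The RSM-plus-SQP pipeline can be sketched end to end: fit quadratic response surfaces to trial data by least squares, then minimize the fitted cost surface under a fitted roughness constraint with SLSQP. Everything below, design points, response models, and the roughness cap of 1.9, is invented to illustrate the structure, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
# Illustrative coded trial settings in [-1, 1]: (cutting speed, feed).
X = rng.uniform(-1, 1, size=(20, 2))

def quad_features(x):
    s, f = np.atleast_2d(x).T
    return np.column_stack([np.ones_like(s), s, f, s * f, s**2, f**2])

# Stand-in measured responses; a real study would use the DOE results.
cost = 5 - 2 * X[:, 0] + 0.5 * X[:, 1] + 1.5 * X[:, 0]**2 + rng.normal(0, 0.05, 20)
rough = 2 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.05, 20)

beta_c, *_ = np.linalg.lstsq(quad_features(X), cost, rcond=None)   # cost surface
beta_r, *_ = np.linalg.lstsq(quad_features(X), rough, rcond=None)  # roughness surface

res = minimize(lambda x: float(quad_features(x) @ beta_c), x0=[0.0, 0.0],
               method="SLSQP", bounds=[(-1, 1)] * 2,
               constraints=[{"type": "ineq",   # keep predicted roughness <= 1.9
                             "fun": lambda x: float(1.9 - quad_features(x) @ beta_r)}])
print("optimal coded parameters:", np.round(res.x, 3),
      " predicted cost:", round(res.fun, 3))
```

The paper's refinement, re-sampling the surface near the incumbent optimum to sharpen the fit, would wrap this fit-then-optimize step in an outer loop.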
Adaptive time-sequential binary sensing for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Hu, Chenhui; Lu, Yue M.
2012-06-01
We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With an oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity) and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in terms of sensor dynamic ranges.
Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.
Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun
2018-06-01
Multi-spectral imaging (MSI) produces a sequence of spectral images that capture the inner structure of different species, and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by the inevitable saccades and the exposure time required to maintain a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We propose what is, to our knowledge, an early approach aimed specifically at deblurring sequential MSI images, distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, considering the different wavelengths used for capturing different images in an MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
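The mutual-information similarity term used to couple neighboring MSI frames is commonly computed from a joint intensity histogram. Here is a minimal sketch of that computation on synthetic images; bin count and image sizes are arbitrary choices, and the paper's full objective adds the sharpness and kernel-smoothness priors on top of this term.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI between two images, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                  # avoid log(0) on empty bins
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(5)
base = rng.random((64, 64))
neighbor = 0.8 * base + 0.2 * rng.random((64, 64))  # correlated "next wavelength"
print("MI(neighbors):  ", round(mutual_information(base, neighbor), 3))
print("MI(independent):", round(mutual_information(base, rng.random((64, 64))), 3))
```

MI is high for the correlated pair and near zero for independent images, which is why it works as a cross-wavelength similarity measure where direct intensity differences would not.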
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refining system, and then the Wiener-type model is obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective of pulp quality and SE consumption is proposed, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, they significantly reduce energy consumption.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
Auyeung, S Freda; Long, Qi; Royster, Erica Bruce; Murthy, Smitha; McNutt, Marcia D; Lawson, David; Miller, Andrew; Manatunga, Amita; Musselman, Dominique L
2009-10-01
Interferon-alpha therapy, which is used to treat metastatic malignant melanoma, can cause patients to develop two distinct neurobehavioral symptom complexes: a mood syndrome and a neurovegetative syndrome. Interferon-alpha effects on serotonin metabolism appear to contribute to the mood and anxiety syndrome, while the neurovegetative syndrome appears to be related to interferon-alpha effects on dopamine. Our goal is to propose a sequential, multiple assignment, randomized trial design for patients with malignant melanoma to test the relative efficacy of drugs that target serotonin versus dopamine metabolism during 4 weeks of intravenous, then 8 weeks of subcutaneous, interferon-alpha therapy. Patients will be offered participation in a double-blinded, randomized, controlled, 14-week trial involving two treatment phases. During the first month of intravenous interferon-alpha therapy, we will test the hypotheses that escitalopram will be more effective in reducing depressed mood, anxiety, and irritability, whereas methylphenidate will be more effective in diminishing interferon-alpha-induced neurovegetative symptoms, such as fatigue and psychomotor slowing. During the next 8 weeks of subcutaneous interferon therapy, participants whose symptoms do not improve significantly will be randomized to the alternate agent alone versus escitalopram and methylphenidate together. We present a prototype for a single-center, sequential, multiple assignment, randomized trial, which seeks to determine the efficacy of sequenced and targeted treatment for the two distinct symptom complexes suffered by patients treated with interferon-alpha. Because we cannot completely control for external factors, a relevant question is whether or not 'short-term' neuropsychiatric interventions can increase the number of interferon-alpha doses tolerated and improve long-term survival. This sequential, multiple assignment, randomized trial proposes a framework for developing optimal treatment strategies; however, additional studies are needed to determine the best strategy for treating or preventing neurobehavioral symptoms induced by the immunotherapy interferon-alpha.
Phased array ghost elimination.
Kellman, Peter; McVeigh, Elliot R
2006-05-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
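The constrained combining step, maximize SNR subject to a hard null at the ghost location, has a closed form analogous to SENSE unfolding over the signal and ghost "pixels". The sketch below demonstrates the linear algebra on synthetic coil sensitivities with white noise; it is an illustration of that formula, not the full PAGE reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)
n_coils = 4
s = rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils)  # sensitivities at pixel
g = rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils)  # sensitivities at ghost
R = np.eye(n_coils)                        # noise covariance (identity = white noise)

# SNR-optimal weights with unit gain on s and a hard null on g:
#   W = R^{-1} S (S^H R^{-1} S)^{-1},  S = [s g]; take the first column.
S = np.column_stack([s, g])
Ri = np.linalg.inv(R)
W = Ri @ S @ np.linalg.inv(S.conj().T @ Ri @ S)
w = W[:, 0]

print("gain on signal:", np.round(w.conj() @ s, 6))        # ~ 1 (unit gain)
print("gain on ghost :", np.round(abs(w.conj() @ g), 12))  # ~ 0 (ghost nulled)
```

With a measured (non-identity) noise covariance R, the same expression trades off noise amplification against the nulling constraint, which is the SNR-optimality claim in the abstract.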
Accumulation of evidence during sequential decision making: the importance of top-down factors.
de Lange, Floris P; Jensen, Ole; Dehaene, Stanislas
2010-01-13
In the last decade, great progress has been made in characterizing the accumulation of neural information during simple unitary perceptual decisions. However, much less is known about how sequentially presented evidence is integrated over time for successful decision making. The aim of this study was to investigate the mechanisms of sequential decision making in humans. In a magnetoencephalography (MEG) study, we presented healthy volunteers with sequences of centrally presented arrows. Sequence length varied between one and five arrows, and the accumulated directions of the arrows informed the subject about which hand to use for a button press at the end of the sequence (e.g., LRLRR should result in a right-hand press). Mathematical modeling suggested that nonlinear accumulation was the rational strategy for performing this task in the presence of no or little noise, whereas quasilinear accumulation was optimal in the presence of substantial noise. MEG recordings showed a correlate of evidence integration over parietal and central cortex that was inversely related to the amount of accumulated evidence (i.e., when more evidence was accumulated, neural activity for new stimuli was attenuated). This modulation of activity likely reflects a top-down influence on sensory processing, effectively constraining the influence of sensory information on the decision variable over time. The results indicate that, when making decisions on the basis of sequential information, the human nervous system integrates evidence in a nonlinear manner, using the amount of previously accumulated information to constrain the accumulation of additional evidence.
Dühring, Sybille; Ewald, Jan; Germerodt, Sebastian; Kaleta, Christoph; Dandekar, Thomas; Schuster, Stefan
2017-07-01
The release of fungal cells following macrophage phagocytosis, called non-lytic expulsion, is reported for several fungal pathogens. On one hand, non-lytic expulsion may benefit the fungus in escaping the microbicidal environment of the phagosome. On the other hand, the macrophage could profit in terms of avoiding its own lysis and being able to undergo proliferation. To analyse the causes of non-lytic expulsion and the relevance of macrophage proliferation in the macrophage-Candida albicans interaction, we employ Evolutionary Game Theory and dynamic optimization in a sequential manner. We establish a game-theoretical model describing the different strategies of the two players after phagocytosis. Depending on the parameter values, we find four different Nash equilibria and determine the influence of the system state of the host upon the game. As our Nash equilibria are a direct consequence of the model parameterization, we can depict several biological scenarios. A parameter region where the host response is robust against the fungal infection is determined. We further apply dynamic optimization to analyse whether macrophage mitosis is relevant in the host-pathogen interaction of macrophages and C. albicans. For this, we study the population dynamics of the macrophage-C. albicans interactions and the corresponding optimal controls for the macrophages, indicating the best macrophage strategy of switching from proliferation to attacking fungal cells. © 2017 The Author(s).
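The equilibrium analysis in such a model amounts to checking best responses in a bimatrix game. Below is a sketch with invented payoffs (loudly: these are NOT the paper's parameters) for a stripped-down version of the post-phagocytosis game, enumerating pure-strategy Nash equilibria by brute force.

```python
import itertools
import numpy as np

# Illustrative payoffs only. Rows = macrophage (0 = lyse, 1 = expel non-lytically),
# cols = C. albicans (0 = persist in phagosome, 1 = escape).
M = np.array([[1.0, -0.5],     # macrophage payoffs
              [2.0,  0.5]])    # expulsion avoids the macrophage's own lysis
F = np.array([[-1.0, 1.0],     # fungal payoffs
              [ 0.0, 2.0]])    # escaping the phagosome is best for the fungus

def pure_nash(M, F):
    eqs = []
    for i, j in itertools.product(range(2), range(2)):
        # (i, j) is an equilibrium if neither player gains by deviating alone.
        if M[i, j] >= M[1 - i, j] and F[i, j] >= F[i, 1 - j]:
            eqs.append((i, j))
    return eqs

print("pure-strategy Nash equilibria (row, col):", pure_nash(M, F))
```

With these toy numbers the unique equilibrium is (expel, escape); in the paper, varying the parameters moves the system between four such equilibria, each mapping to a biological scenario.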
Kania, Dramane; Sangaré, Lassana; Sakandé, Jean; Koanda, Abdoulaye; Nébié, Yacouba Kompingnin; Zerbo, Oumarou; Combasséré, Alain Wilfried; Guissou, Innocent Pierre; Rouet, François
2009-10-01
In Africa, where blood-borne agents are highly prevalent, cheaper and feasible alternative strategies for blood donation testing are specifically required. From May to August 2002, 500 blood donations from Burkina Faso were tested for hepatitis B surface antigen (HBsAg), human immunodeficiency virus (HIV), syphilis, and hepatitis C virus (HCV) according to two distinct strategies. The first strategy was a conventional simultaneous screening of these four blood-borne infectious agents on each blood donation by using single-marker assays. The second strategy was a sequential screening starting with HBsAg. HBsAg-nonreactive blood donations were then further tested for HIV. If nonreactive, they were further tested for syphilis. If nonreactive, they were finally assessed for HCV antibodies. The accuracy and cost-effectiveness of the two strategies were compared. Using the simultaneous strategy, the seroprevalences of HBsAg, HIV, syphilis, and HCV among blood donors in Ouagadougou were estimated to be 19.2, 9.8, 1.6, and 5.2%. No significant difference in HIV, syphilis, and HCV prevalence rates was observed using the sequential strategy (9.2, 1.9, and 4.7%, respectively). Whatever the strategy used, 157 blood donations (31.4%) were found to be reactive for at least one transfusion-transmissible agent and were thus discarded. The sequential strategy allowed a cost decrease of euro 908.6 compared to the simultaneous strategy. Given that there are approximately 50,000 blood donations annually in Burkina Faso, the potential savings reach euro 90,860. In resource-limited settings, the implementation of a sequential strategy appears as a pragmatic solution to promote safe blood supply and ensure sustainability of the system.
Aging and List-Wide Modulations of Strategy Execution: A Study in Arithmetic.
Hinault, Thomas; Lemaire, Patrick
2017-01-01
Background/Study Context: This study aimed to further our understanding of the cognitive processes involved during strategy execution, and how these processes change with age. More specifically, the main goal was to investigate whether poorer-strategy effects (i.e., poorer performance when a cued strategy is not the best) and sequential modulations of poorer-strategy effects (i.e., decreased poorer-strategy effects on current problems following poorer-strategy problems compared with after better-strategy problems) are influenced by proportions of poorer-strategy problems. We used a computational estimation task (i.e., providing approximate products to two-digit multiplication problems such as 38 × 74) with problem sets including 75%, 50%, or 25% of poorer-strategy problems (i.e., problems where participants have to estimate products with a strategy other than the better strategy). The remaining problems were cued with the better strategy. Age-related differences were also investigated. We found that proportions of poorer-strategy problems influenced sequential modulations of poorer-strategy effects. Indeed, sequential modulations of poorer-strategy effects were larger when proportions of poorer-strategy problems were equal than unequal. Moreover, proportion effects differed between young and older adults, as older adults benefited more from low proportions of poorer-strategy problems compared with young adults. These findings have important implications regarding cognitive control mechanisms underlying both list-wide and trial-to-trial modulations of strategy execution, and how these processes change during aging.
Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations
ERIC Educational Resources Information Center
Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad
2016-01-01
In a sequential OSCE, which has been suggested to reduce testing costs, candidates take a short screening test, and those who fail the test are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…
Multiuser signal detection using sequential decoding
NASA Astrophysics Data System (ADS)
Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.
1990-05-01
The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
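For readers unfamiliar with stack-algorithm sequential decoding, here is a toy single-user sketch: a priority queue of partial code-tree paths ranked by a correlation metric with a Fano-style per-symbol bias, extended best-first until a full-length path emerges. The rate-1/2 "code," noise level, and bias are all invented for illustration; the paper's multiuser detector uses a modified metric over K asynchronous users, which this sketch does not reproduce.

```python
import heapq
import numpy as np

rng = np.random.default_rng(7)

def encode(bits):
    """Toy rate-1/2 code: emit (b_t, b_t XOR b_{t-1}) as BPSK symbols (+1/-1)."""
    prev, out = 0, []
    for b in bits:
        out += [b, b ^ prev]
        prev = b
    return 1.0 - 2.0 * np.array(out)

true_bits = rng.integers(0, 2, 8).tolist()
r = encode(true_bits) + rng.normal(0, 0.7, 16)      # received over AWGN

BIAS = 0.3                                           # Fano-style per-symbol bias
def path_metric(path_bits):
    symbols = encode(path_bits)
    n = len(symbols)
    # Correlation with the received samples, minus a bias per symbol so that
    # longer paths win only when they explain the observations well.
    return float(symbols @ r[:n]) - BIAS * n

# Stack (max-priority queue via negated metrics) of partial paths.
stack = [(-path_metric([0]), [0]), (-path_metric([1]), [1])]
heapq.heapify(stack)
while True:
    neg_m, path = heapq.heappop(stack)               # always extend the best path
    if len(path) == len(true_bits):
        break
    for b in (0, 1):
        new = path + [b]
        heapq.heappush(stack, (-path_metric(new), new))

print("sent:   ", true_bits)
print("decoded:", path)
```

The appeal over Viterbi-style optimal decoding is that only promising paths are extended, so the average work per decoded bit stays small, at the cost of occasional backtracking under heavy noise.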
Relevant factors for the optimal duration of extended endocrine therapy in early breast cancer.
Blok, Erik J; Kroep, Judith R; Meershoek-Klein Kranenbarg, Elma; Duijm-de Carpentier, Marjolijn; Putter, Hein; Liefers, Gerrit-Jan; Nortier, Johan W R; Rutgers, Emiel J Th; Seynaeve, Caroline M; van de Velde, Cornelis J H
2018-04-01
For postmenopausal patients with hormone receptor-positive early breast cancer, the optimal subgroup and duration of extended endocrine therapy are not yet clear. The aim of this study using the IDEAL patient cohort was to identify a subgroup for which longer (5 years) extended therapy is beneficial over shorter (2.5 years) extended endocrine therapy. In the IDEAL trial, 1824 patients who completed 5 years of adjuvant endocrine therapy (either 5 years of tamoxifen (12%), 5 years of an AI (29%), or a sequential strategy of both (59%)) were randomized between either 2.5 or 5 years of extended letrozole. For each prior therapy subgroup, the value of longer therapy was assessed for both node-negative and node-positive patients using Kaplan-Meier and Cox regression survival analyses. In node-positive patients, there was a significant benefit of 5 years (over 2.5 years) of extended therapy (disease-free survival (DFS) HR 0.67, p = 0.03, 95% CI 0.47-0.96). This effect was only observed in patients who were treated initially with a sequential scheme (DFS HR 0.60, p = 0.03, 95% CI 0.38-0.95). In all other subgroups, there was no significant benefit of longer extended therapy. Similar results were found in patients who were randomized for their initial adjuvant therapy in the TEAM trial (DFS HR 0.37, p = 0.07, 95% CI 0.13-1.06), although this additional analysis was underpowered for definite conclusions. This study suggests that node-positive patients could benefit from longer extended endocrine therapy, although this effect appears isolated to patients treated with sequential endocrine therapy during the first 5 years and needs validation and long-term follow-up.
NASA Astrophysics Data System (ADS)
Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu
2016-03-01
The determination of the in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization of a six-degrees-of-freedom (6DOF) parameter; 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initialization showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual bone approach (34% success) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).
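A minimal skeleton of CMA-ES-based 2D-3D registration is sketched below, assuming the pycma package (`pip install cma`). To stay self-contained it registers a 6DOF pose of landmark points against a single simulated perspective projection, rather than the study's intensity-based gradient correlation on DRRs from bi-plane views; the geometry and parameters are invented.

```python
import numpy as np
import cma  # pycma package: pip install cma

rng = np.random.default_rng(8)
pts = rng.uniform(-50, 50, size=(30, 3))           # "bone" landmark points (mm)

def rot(rx, ry, rz):
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(pose):
    """Perspective projection of the posed points (assumed focal length 1000 mm)."""
    R, t = rot(*pose[:3]), pose[3:]
    p = pts @ R.T + t + np.array([0.0, 0.0, 400.0])  # place in front of detector
    return 1000.0 * p[:, :2] / p[:, 2:3]

true_pose = np.array([0.1, -0.05, 0.2, 5.0, -3.0, 10.0])   # 3 rotations, 3 translations
target = project(true_pose)                                 # simulated fluoroscopic view

cost = lambda pose: float(np.mean((project(pose) - target) ** 2))
es = cma.CMAEvolutionStrategy(np.zeros(6), 0.5, {"verb_disp": 0})
es.optimize(cost)                                           # derivative-free search
print("recovered pose:", np.round(es.result.xbest, 3))
print("true pose     :", true_pose)
```

Swapping the point-reprojection cost for a DRR-based gradient correlation metric, and summing it over two views, recovers the structure of the paper's individual-bone approach; the sequential and simultaneous variants change only what the cost function renders as background and how many pose parameters CMA-ES searches.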
Asao, Tetsuhiko; Fujiwara, Yutaka; Itahashi, Kota; Kitahara, Shinsuke; Goto, Yasushi; Horinouchi, Hidehito; Kanda, Shintaro; Nokihara, Hiroshi; Yamamoto, Noboru; Takahashi, Kazuhisa; Ohe, Yuichiro
2017-07-01
Second-generation anaplastic lymphoma kinase (ALK) inhibitors, such as alectinib and ceritinib, have recently been approved for treatment of ALK-rearranged non-small-cell lung cancer (NSCLC). An optimal strategy for using 2 or more ALK inhibitors has not been established. We sought to investigate the clinical impact of sequential use of ALK inhibitors on these tumors in clinical practice. Patients with ALK-rearranged NSCLC treated from May 2010 to January 2016 at the National Cancer Center Hospital were identified, and their outcomes were evaluated retrospectively. Fifty-nine patients with ALK-rearranged NSCLC had been treated and 37 cases were assessable. Twenty-six received crizotinib, 21 received alectinib, and 13 (35.1%) received crizotinib followed by alectinib. Response rates and median progression-free survival (PFS) on crizotinib and alectinib (after crizotinib failure) were 53.8% (95% confidence interval [CI], 26.7%-80.9%) and 38.4% (95% CI, 12.0%-64.9%), and 10.7 (95% CI, 5.3-14.7) months and 16.6 (95% CI, 2.9-not calculable), respectively. The median PFS of patients on sequential therapy was 35.2 months (95% CI, 12.7 months-not calculable). The 5-year survival rate of ALK-rearranged patients who received 2 sequential ALK inhibitors from diagnosis was 77.8% (95% CI, 36.5%-94.0%). The combined PFS and 5-year survival rates in patients who received sequential ALK inhibitors were encouraging. Making full use of multiple ALK inhibitors might be important to prolonging survival in patients with ALK-rearranged NSCLC. Copyright © 2016 Elsevier Inc. All rights reserved.
Parallel processing optimization strategy based on MapReduce model in cloud storage environment
NASA Astrophysics Data System (ADS)
Cui, Jianming; Liu, Jiayi; Li, Qiuyan
2017-05-01
Currently, many cloud storage systems package files only after all packets have been received. In this store-and-forward procedure from the local transmitter to the server, packing and unpacking consume considerable time, and transmission efficiency is low. A new parallel processing algorithm is proposed to optimize the transmission mode: following the MapReduce model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. Simulation experiments on a Hadoop cloud computing platform show that the algorithm not only accelerates the file transfer rate but also shortens the waiting time of the Reducer mechanism. It breaks through the traditional sequential transmission constraint and reduces storage coupling, thereby improving transmission efficiency.
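The core idea, Mapper and Reducer stages executed with MPI rather than Hadoop's staged pipeline, can be sketched with mpi4py (our choice of binding; the paper does not specify one). A minimal word-count example, run with, e.g., mpiexec -n 4 python mpi_mapreduce.py:

    # Minimal sketch of running Mapper and Reducer stages with MPI instead
    # of Hadoop's staged execution, using mpi4py. Not the paper's code.
    from collections import Counter
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        text = "the quick brown fox jumps over the lazy dog the fox".split()
        chunks = [text[i::size] for i in range(size)]   # split input into packets
    else:
        chunks = None

    chunk = comm.scatter(chunks, root=0)   # distribute packets to ranks

    local_counts = Counter(chunk)          # Mapper + local (combiner) Reducer

    # Global reduce: Counters support '+', so MPI.SUM merges them pairwise.
    total = comm.reduce(local_counts, op=MPI.SUM, root=0)
    if rank == 0:
        print(dict(total))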
Comparative Risk Analysis for Metropolitan Solid Waste Management Systems
NASA Astrophysics Data System (ADS)
Chang, Ni-Bin; Wang, S. F.
1996-01-01
Conventional solid waste management planning usually focuses on economic optimization, in which the related environmental impacts or risks are rarely considered. The purpose of this paper is to illustrate the methodology of how optimization concepts and techniques can be applied to structure and solve risk management problems such that the impacts of air pollution, leachate, traffic congestion, and noise increments can be regulated in the long-term planning of metropolitan solid waste management systems. Management alternatives are sequentially evaluated by adding several environmental risk control constraints stepwise in an attempt to improve the management strategies and reduce the risk impacts in the long run. Statistics associated with those risk control mechanisms are presented as well. Siting, routing, and financial decision making in such solid waste management systems can also be achieved with respect to various resource limitations and disposal requirements.
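The stepwise procedure can be sketched as a small linear program: start from a pure cost-minimizing allocation and add risk-control constraints one at a time, re-solving after each. All coefficients below are invented for illustration; only the sequential-constraint workflow mirrors the paper.

    # Toy stepwise evaluation: allocate 100 t/day of waste between an
    # incinerator (x0) and a landfill (x1), then add environmental risk
    # constraints one at a time and re-solve. Coefficients are invented.
    from scipy.optimize import linprog

    c = [60.0, 30.0]                           # $/tonne: incinerator, landfill
    A_eq, b_eq = [[1.0, 1.0]], [100.0]         # all waste must be handled

    risk_rows = [                              # (name, constraint row, cap)
        ("air pollution",      [0.8, 0.0], 40.0),  # kg pollutant/t, incinerator
        ("traffic congestion", [0.0, 0.5], 35.0),  # truck trips/t, landfill
    ]

    A_ub, b_ub = [], []
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    print(f"cost only: x={res.x}, cost={res.fun:.0f}")
    for name, row, cap in risk_rows:           # tighten the system stepwise
        A_ub.append(row); b_ub.append(cap)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None))
        print(f"+ {name}: x={res.x}, cost={res.fun:.0f}")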
Reading Remediation Based on Sequential and Simultaneous Processing.
ERIC Educational Resources Information Center
Gunnison, Judy; And Others
1982-01-01
The theory postulating a dichotomy between sequential and simultaneous processing is reviewed, along with its implications for remediating reading problems. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…
Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M
2017-01-01
Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.
NASA Astrophysics Data System (ADS)
Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli
2018-04-01
Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, differential evolution algorithm and Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model considering first parameter uncertainty only and then parameter and input uncertainty simultaneously, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models within the SMC framework in future studies.
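A generic tempered SMC sampler on a bimodal one-dimensional target illustrates the reweight/resample/move cycle that PEM-SMC builds on; the move step below is plain random-walk Metropolis, whereas the paper's contribution is to strengthen exactly that step with genetic-algorithm and differential-evolution moves.

    # Generic tempered SMC on a bimodal 1-D target: reweight, resample,
    # then a few Metropolis moves at each temperature. Illustration only.
    import numpy as np

    rng = np.random.default_rng(1)

    def log_target(x):                     # mixture of N(-3,1) and N(3,1)
        return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

    N = 2000
    x = rng.normal(0.0, 10.0, N)           # particles from a wide prior
    log_prior = lambda x: -0.5 * (x / 10.0) ** 2
    betas = np.linspace(0.0, 1.0, 21)      # tempering schedule

    for b_prev, b in zip(betas[:-1], betas[1:]):
        # 1) reweight: incremental importance weights for the new temperature
        logw = (b - b_prev) * (log_target(x) - log_prior(x))
        w = np.exp(logw - logw.max()); w /= w.sum()
        # 2) resample (multinomial) to equalize weights
        x = x[rng.choice(N, N, p=w)]
        # 3) move: Metropolis steps targeting the current tempered density
        lp = lambda x: b * log_target(x) + (1 - b) * log_prior(x)
        for _ in range(5):
            prop = x + rng.normal(0.0, 1.0, N)
            accept = np.log(rng.uniform(size=N)) < lp(prop) - lp(x)
            x = np.where(accept, prop, x)

    print("mode balance (should be ~0.5):", np.mean(x > 0))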
A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Lihoreau, Mathieu; Chittka, Lars; Raine, Nigel E
2010-12-01
Animals collecting resources that replenish over time often visit patches in predictable sequences called traplines. Despite the widespread nature of this strategy, we still know little about how spatial memory develops and guides individuals toward suitable routes. Here, we investigate whether flower visitation sequences by bumblebees Bombus terrestris simply reflect the order in which flowers were discovered or whether they result from more complex navigational strategies enabling bees to optimize their foraging routes. We analyzed bee flight movements in an array of four artificial flowers maximizing interfloral distances. Starting from a single patch, we sequentially added three new patches so that if bees visited them in the order in which they originally encountered flowers, they would follow a long (suboptimal) route. Bees' tendency to visit patches in their discovery order decreased with experience. Instead, they optimized their flight distances by rearranging flower visitation sequences. This resulted in the development of a primary route (trapline) and two or three less frequently used secondary routes. Bees consistently used these routes after overnight breaks while occasionally exploring novel possibilities. We discuss how maintaining some level of route flexibility could allow traplining animals to cope with dynamic routing problems, analogous to the well-known traveling salesman problem.
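The routing problem the bees face is a small travelling-salesman instance, which can be illustrated by comparing the route implied by the discovery order with the best of all permutations (exhaustive search is exact for four flowers); the coordinates below are invented.

    # The bees' routing task as a tiny TSP: discovery-order route vs. the
    # best permutation, starting and ending at the nest. Invented layout.
    from itertools import permutations
    import math

    nest = (0.0, 0.0)
    flowers = [(10, 0), (-10, 0), (10, 5), (-10, 5)]   # discovery order

    def route_len(order):
        stops = [nest] + [flowers[i] for i in order] + [nest]
        return sum(math.dist(a, b) for a, b in zip(stops, stops[1:]))

    discovery = tuple(range(4))
    best = min(permutations(range(4)), key=route_len)
    print(f"discovery order:  {route_len(discovery):.1f}")
    print(f"optimal trapline: {route_len(best):.1f} via {best}")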
ERIC Educational Resources Information Center
Bullens, Jessie; Igloi, Kinga; Berthoz, Alain; Postma, Albert; Rondi-Reig, Laure
2010-01-01
Navigation in a complex environment can rely on the use of different spatial strategies. We have focused on the employment of "allocentric" (i.e., encoding interrelationships among environmental cues, movements, and the location of the goal) and "sequential egocentric" (i.e., sequences of body turns associated with specific choice points)…
Prabhu, Ashish A; Jayadeep, A
2017-04-21
The current study focused on optimizing the parameters involved in enzymatic processing of red rice bran to maximize total polyphenol (TP) content and free radical scavenging activity (FRSA). Sequential optimization strategies using central composite design (CCD) and artificial neural network (ANN) modeling linked with a genetic algorithm (GA) were performed to study the effects of incubation time (60-90 min), xylanase concentration (5-10 mg/g) and cellulase concentration (5-10 mg/g) on the responses, i.e., TP and FRSA. The results showed that incubation time had a negative effect on the responses, while the square effects of xylanase and cellulase were positive. A maximum TP of 2,761 mg ferulic acid Eq/100 g bran and FRSA of 778.4 mg Catechin Eq/100 g bran was achieved with incubation time (min) = 60.491, xylanase (mg/g) = 5.4633 and cellulase (mg/g) = 11.5825. Furthermore, the ANN-GA-based optimization showed better predictive capability than CCD.
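The ANN-GA step can be sketched as follows: fit a small neural surrogate on design-of-experiments data and let a simple genetic algorithm search the factor space for the predicted optimum. The "experimental" response below is synthetic, so only the workflow mirrors the study.

    # Hedged sketch of an ANN-GA optimization: neural surrogate on DoE-style
    # data, then a bare-bones GA over (time, xylanase, cellulase). Synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    lo = np.array([60.0, 5.0, 5.0])        # time (min), xylanase, cellulase (mg/g)
    hi = np.array([90.0, 10.0, 10.0])

    X = rng.uniform(lo, hi, size=(40, 3))  # stand-in for the CCD runs
    y = (-5 * (X[:, 0] - 60) / 30                       # time hurts response
         + (X[:, 1] - 5) ** 2 + (X[:, 2] - 5) ** 2      # squared enzyme terms help
         + rng.normal(0, 0.5, 40))

    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X, y)

    pop = rng.uniform(lo, hi, size=(50, 3))             # GA population
    for _ in range(60):
        fit = ann.predict(pop)
        parents = pop[np.argsort(fit)[-25:]]            # selection: best half
        mates = parents[rng.permutation(25)]
        children = 0.5 * (parents + mates)              # arithmetic crossover
        children += rng.normal(0, 0.05, children.shape) * (hi - lo)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)

    best = pop[np.argmax(ann.predict(pop))]
    print("predicted optimum (time, xylanase, cellulase):", np.round(best, 2))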
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Pilon, Alan Cesar; Carnevale Neto, Fausto; Freire, Rafael Teixeira; Cardoso, Patrícia; Carneiro, Renato Lajarim; Da Silva Bolzani, Vanderlan; Castro-Gamboa, Ian
2016-03-01
A major challenge in metabolomic studies is how to extract and analyze an entire metabolome. So far, no single method has been able to complete this task in an efficient and reproducible way. In this work we propose a sequential strategy for the extraction and chromatographic separation of metabolites from leaves of Jatropha gossypifolia using a design of experiments and a partial least squares model. The effect of 14 different solvents on the extraction process was evaluated, and an optimized separation condition on liquid chromatography was estimated considering mobile phase composition and analysis time. The initial conditions of extraction using methanol and separation in 30 min between 5 and 100% water/methanol (1:1 v/v) with 0.1% of acetic acid, 20 μL sample volume, 3.0 mL min(-1) flow rate and 25°C column temperature led to 107 chromatographic peaks. After the optimization strategy using i-propanol/chloroform (1:1 v/v) for extraction, linear gradient elution of 60 min between 5 and 100% water/(acetonitrile/methanol 68:32 v/v with 0.1% of acetic acid), 30 μL sample volume, 2.0 mL min(-1) flow rate, and 30°C column temperature, we detected 140 chromatographic peaks, 30.84% more peaks compared to the initial method. This is a reliable strategy using a limited number of experiments for metabolomics protocols. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimizing Standard Sequential Extraction Protocol With Lake And Ocean Sediments
The environmental mobility/availability behavior of radionuclides in soils and sediments depends on their speciation. Experiments have been carried out to develop a simple but robust radionuclide sequential extraction method for identification of radionuclide partitioning in sed...
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
…ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the … using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem, the sequential probability ratio … the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability …
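The SPRT named in these fragments is a standard primitive; a textbook implementation for deciding between two Gaussian means (not the report's algorithm, which is unavailable here) looks like this:

    # Textbook sequential probability ratio test (SPRT) for H0: mean=0 vs
    # H1: mean=1 on unit-variance Gaussian data. Thresholds follow Wald's
    # approximations from the target error probabilities alpha and beta.
    import math
    import random

    def sprt(stream, mu0=0.0, mu1=1.0, alpha=0.01, beta=0.01):
        upper = math.log((1 - beta) / alpha)   # accept H1 above this
        lower = math.log(beta / (1 - alpha))   # accept H0 below this
        llr, n = 0.0, 0
        for x in stream:
            n += 1
            # log-likelihood ratio increment for unit-variance Gaussians
            llr += (mu1 - mu0) * x - (mu1 ** 2 - mu0 ** 2) / 2
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", n

    random.seed(3)
    data = (random.gauss(1.0, 1.0) for _ in range(10_000))  # truth: H1
    print(sprt(data))   # typically decides for H1 after a handful of samples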
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Wavelet-based energy features for glaucomatous image classification.
Dua, Sumeet; Acharya, U Rajendra; Chowriappa, Pradeep; Sree, S Vinitha
2012-01-01
Texture features within images are actively pursued for accurate and efficient glaucoma classification. Energy distribution over wavelet subbands is applied to find these important texture features. In this paper, we investigate the discriminatory potential of wavelet features obtained from the Daubechies (db3), Symlets (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. We propose a novel technique to extract energy signatures obtained using the 2-D discrete wavelet transform, and subject these signatures to different feature ranking and feature selection strategies. We have gauged the effectiveness of the resultant ranked and selected subsets of features using support vector machine, sequential minimal optimization, random forest, and naïve Bayes classification strategies. We observed an accuracy of around 93% using tenfold cross validation, demonstrating the effectiveness of these methods.
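The feature pipeline can be sketched with PyWavelets and scikit-learn: a 2-D discrete wavelet transform (db3), energy per subband as the feature vector, and an SVM classifier; the images below are synthetic textures rather than fundus images.

    # Sketch of the feature pipeline: 2-D DWT (db3), normalized energy per
    # subband as features, SVM classifier. Synthetic textures, not fundus images.
    import numpy as np
    import pywt
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    def energy_features(img, wavelet="db3", level=2):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        bands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
        e = np.array([np.sum(b ** 2) for b in bands])
        return e / e.sum()                 # normalized subband energies

    def texture(kind):                     # two synthetic texture classes
        img = rng.normal(size=(64, 64))
        if kind == 1:                      # add a horizontal stripe pattern
            img += np.sin(np.arange(64) / 2.0)[:, None]
        return img

    X = np.array([energy_features(texture(k)) for k in [0, 1] * 100])
    y = np.array([0, 1] * 100)
    print("10-fold CV accuracy:", cross_val_score(SVC(), X, y, cv=10).mean())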
Simultaneous co-fermentation of mixed sugars: a promising strategy for producing cellulosic ethanol.
Kim, Soo Rin; Ha, Suk-Jin; Wei, Na; Oh, Eun Joong; Jin, Yong-Su
2012-05-01
The lack of microbial strains capable of fermenting all sugars prevalent in plant cell wall hydrolyzates to ethanol is a major challenge. Although naturally existing or engineered microorganisms can ferment mixed sugars (glucose, xylose and galactose) in these hydrolyzates sequentially, the preferential utilization of glucose to non-glucose sugars often results in lower overall yield and productivity of ethanol. Therefore, numerous metabolic engineering approaches have been attempted to construct optimal microorganisms capable of co-fermenting mixed sugars simultaneously. Here, we present recent findings and breakthroughs in engineering yeast for improved ethanol production from mixed sugars. In particular, this review discusses new sugar transporters, various strategies for simultaneous co-fermentation of mixed sugars, and potential applications of co-fermentation for producing fuels and chemicals. Copyright © 2012 Elsevier Ltd. All rights reserved.
Combinatorial development of antibacterial Zr-Cu-Al-Ag thin film metallic glasses.
Liu, Yanhui; Padmanabhan, Jagannath; Cheung, Bettina; Liu, Jingbei; Chen, Zheng; Scanley, B Ellen; Wesolowski, Donna; Pressley, Mariyah; Broadbridge, Christine C; Altman, Sidney; Schwarz, Udo D; Kyriakides, Themis R; Schroers, Jan
2016-05-27
Metallic alloys are normally composed of multiple constituent elements in order to achieve integration of a plurality of properties required in technological applications. However, conventional alloy development paradigm, by sequential trial-and-error approach, requires completely unrelated strategies to optimize compositions out of a vast phase space, making alloy development time consuming and labor intensive. Here, we challenge the conventional paradigm by proposing a combinatorial strategy that enables parallel screening of a multitude of alloys. Utilizing a typical metallic glass forming alloy system Zr-Cu-Al-Ag as an example, we demonstrate how glass formation and antibacterial activity, two unrelated properties, can be simultaneously characterized and the optimal composition can be efficiently identified. We found that in the Zr-Cu-Al-Ag alloy system fully glassy phase can be obtained in a wide compositional range by co-sputtering, and antibacterial activity is strongly dependent on alloy compositions. Our results indicate that antibacterial activity is sensitive to Cu and Ag while essentially remains unchanged within a wide range of Zr and Al. The proposed strategy not only facilitates development of high-performing alloys, but also provides a tool to unveil the composition dependence of properties in a highly parallel fashion, which helps the development of new materials by design.
The timing of adoption of positron emission tomography: a real options approach.
Pertile, Paolo; Torri, Emanuele; Flor, Luciano; Tardivo, Stefano
2009-09-01
This paper presents the economic evaluation from a hospital's perspective of the investment in positron emission tomography, adopting a real options approach. The installation of this equipment requires a major capital outlay, while uncertainty on several key variables is substantial. The value of several timing strategies, including sequential investment, is determined taking into account that future decisions will be based on the information available at that time. The results show that adopting this approach may have an impact on the timing of investment, because postponing the investment may be optimal even when the Expected Net Present Value of the project is positive.
Hoomans, Ties; Severens, Johan L; Evers, Silvia M A A; Ament, Andre J H A
2009-01-01
Decisions about clinical practice change, that is, which guidelines to adopt and how to implement them, can be made sequentially or simultaneously. Decision makers adopting a sequential approach first compare the costs and effects of alternative guidelines to select the best set of guideline recommendations for patient management and subsequently examine the implementation costs and effects to choose the best strategy to implement the selected guideline. In an integral approach, decision makers simultaneously decide about the guideline and the implementation strategy on the basis of the overall value for money in changing clinical practice. This article demonstrates that the decision to use a sequential v. an integral approach affects the need for detailed information and the complexity of the decision analytic process. More importantly, it may lead to different choices of guidelines and implementation strategies for clinical practice change. The differences in decision making and decision analysis between the alternative approaches are comprehensively illustrated using 2 hypothetical examples. We argue that, in most cases, an integral approach to deciding about change in clinical practice is preferred, as this provides more efficient use of scarce health-care resources.
BEopt - Building Energy Optimization
NREL - National Renewable Energy Laboratory
The BEopt (Building Energy Optimization) software provides capabilities to evaluate residential building designs and identify optimal designs. The sequential search optimization technique used by BEopt finds minimum-cost building designs at different…
Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques
Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.
2011-01-01
The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695
Random Boolean networks for autoassociative memory: Optimization and sequential learning
NASA Astrophysics Data System (ADS)
Sherrington, D.; Wong, K. Y. M.
Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some aspects of the properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised; (i) optimization in the presence of noise and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.
Trutnevyte, Evelina; Stauffacher, Michael; Schlegel, Matthias; Scholz, Roland W
2012-09-04
Conventional energy strategy defines an energy system vision (the goal), energy scenarios with technical choices and an implementation mechanism (such as economic incentives). Due to the lead of a generic vision, when applied in a specific regional context, such a strategy can deviate from the optimal one with, for instance, the lowest environmental impacts. This paper proposes an approach for developing energy strategies by simultaneously, rather than sequentially, combining multiple energy system visions and technically feasible, cost-effective energy scenarios that meet environmental constraints at a given place. The approach is illustrated by developing a residential heat supply strategy for a Swiss region. In the analyzed case, urban municipalities should focus on reducing heat demand, and rural municipalities should focus on harvesting local energy sources, primarily wood. Solar thermal units are cost-competitive in all municipalities, and their deployment should be fostered by information campaigns. Heat pumps and building refurbishment are not competitive; thus, economic incentives are essential, especially for urban municipalities. In rural municipalities, wood is cost-competitive, and community-based initiatives are likely to be most successful. Thus, the paper shows that energy strategies should be spatially differentiated. The suggested approach can be transferred to other regions and spatial scales.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
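Once the probabilistic constraints have been approximated by ordinary ones, the remaining problem is a standard nonlinear program. A minimal SLSQP example in scipy, with an invented deterministic surrogate g(x) >= 0 standing in for the approximated reliability constraint:

    # Minimal SLSQP solve of a transformed design problem: minimize a
    # volume-like objective subject to a surrogate constraint. Invented.
    from scipy.optimize import minimize

    objective = lambda x: x[0] + x[1]             # e.g. material volume
    g = lambda x: x[0] ** 2 * x[1] / 20.0 - 1.0   # surrogate reliability constraint

    res = minimize(objective, x0=[3.0, 3.0], method="SLSQP",
                   bounds=[(0.1, 10.0)] * 2,
                   constraints=[{"type": "ineq", "fun": g}])
    print(res.x, res.fun)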
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodel and minimum points of a density function; repeating this procedure yields increasingly accurate metamodels. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
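The core loop can be sketched as: build a radial basis function metamodel, locate its minimum, evaluate the expensive model there, add the point, and rebuild. The sketch below keeps only the metamodel-minimum refinement step and omits the paper's density-function points.

    # Minimal sequential sampling loop with an RBF metamodel: fit, find
    # the surrogate minimum, sample the "expensive" model there, refit.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize_scalar

    expensive = lambda x: (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)   # true model

    X = np.linspace(0.0, 1.0, 5)           # initial space-filling samples
    y = expensive(X)

    for it in range(8):
        rbf = RBFInterpolator(X[:, None], y)
        surr = lambda x: rbf(np.array([[x]]))[0]
        xn = minimize_scalar(surr, bounds=(0.0, 1.0), method="bounded").x
        X, y = np.append(X, xn), np.append(y, expensive(xn))
        print(f"iter {it}: new sample {xn:.4f}, f={y[-1]:.5f}")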
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
Optimal trajectories of aircraft and spacecraft
NASA Technical Reports Server (NTRS)
Miele, A.
1990-01-01
Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.
NASA Astrophysics Data System (ADS)
Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.
2008-06-01
This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
Hybrid cardiac imaging with MR-CAT scan: a feasibility study.
Hillenbrand, C; Sandstede, J; Pabst, T; Hahn, D; Haase, A; Jakob, P M
2000-06-01
We demonstrate the feasibility of a new versatile hybrid imaging concept, the combined acquisition technique (CAT), for cardiac imaging. The cardiac CAT approach, which combines new methodology with existing technology, essentially integrates fast low-angle shot (FLASH) and echoplanar imaging (EPI) modules in a sequential fashion, whereby each acquisition module is employed with independently optimized imaging parameters. One important CAT sequence optimization feature is the ability to use different bandwidths for different acquisition modules. Twelve healthy subjects were imaged using three cardiac CAT acquisition strategies: a) CAT was used to reduce breath-hold duration times while maintaining constant spatial resolution; b) CAT was used to increase spatial resolution in a given breath-hold time; and c) single-heart beat CAT imaging was performed. The results obtained demonstrate the feasibility of cardiac imaging using the CAT approach and the potential of this technique to accelerate the imaging process with almost conserved image quality. Copyright 2000 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.
2016-04-01
Application of optimization techniques to the identification of inelastic material parameters has increased substantially in recent years. The complex stress-strain paths and high nonlinearity typical of this class of problems require the development of robust and efficient techniques for inverse problems able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, the Nelder-Mead downhill simplex algorithm, Particle Swarm Optimization (PSO), and a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique has been shown to be the best strategy, combining the good performance of PSO in approaching the global minimum basin of attraction with the efficiency demonstrated by the Nelder-Mead algorithm in obtaining the minimum itself.
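The global-local idea can be sketched as a bare-bones PSO pass followed by a Nelder-Mead polish of the best particle (via scipy); the Rastrigin objective below is a stand-in with many local minima, not the deep-drawing inverse problem.

    # Global-local hybrid sketch: simple PSO for exploration, then
    # Nelder-Mead refinement of the swarm's best position. Stand-in objective.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    f = lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)  # Rastrigin

    n, d = 30, 4
    pos = rng.uniform(-5.12, 5.12, (n, d))
    vel = np.zeros((n, d))
    pbest, pval = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[pval.argmin()].copy()

    for _ in range(200):                   # PSO: global exploration
        r1, r2 = rng.uniform(size=(2, n, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, -5.12, 5.12)
        val = np.array([f(p) for p in pos])
        better = val < pval
        pbest[better], pval[better] = pos[better], val[better]
        g = pbest[pval.argmin()].copy()

    res = minimize(f, g, method="Nelder-Mead")   # local polish
    print("PSO best:", f(g), "-> hybrid best:", res.fun)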
Two-step sequential pretreatment for the enhanced enzymatic hydrolysis of coffee spent waste.
Ravindran, Rajeev; Jaiswal, Swarna; Abu-Ghannam, Nissreen; Jaiswal, Amit K
2017-09-01
In the present study, eight different pretreatments of varying nature (physical, chemical and physico-chemical), followed by a sequential, combinatorial pretreatment strategy, were applied to spent coffee waste to attain maximum sugar yield. Pretreated samples were analysed for total reducing sugar, individual sugars and the generation of inhibitory compounds such as furfural and hydroxymethyl furfural (HMF), which can hinder microbial growth and enzyme activity. Native spent coffee waste was high in hemicellulose content. Galactose was found to be the predominant sugar in spent coffee waste. Results showed that sequential pretreatment yielded 350.12 mg of reducing sugar/g of substrate, which was 1.7-fold higher than native spent coffee waste (203.4 mg/g of substrate). Furthermore, extensive delignification was achieved using the sequential pretreatment strategy. XRD, FTIR, and DSC profiles of the pretreated substrates were studied to analyse the various changes incurred in sequentially pretreated spent coffee waste as opposed to native spent coffee waste. Copyright © 2017 Elsevier Ltd. All rights reserved.
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples from mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
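A toy 2D version of the superpositioning task shows the evolution-strategy core: a (mu, lambda)-ES searches for the rotation and translation minimizing summed squared distances between matched labeled points; the paper's GPU parallelism and CavBase data are out of scope here.

    # Toy (mu, lambda)-evolution strategy recovering the rigid transform
    # that superposes two matched 2-D point clouds. Illustration only.
    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.normal(size=(20, 2))                   # reference cloud
    theta, t = 0.7, np.array([2.0, -1.0])          # hidden true pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    B = A @ R.T + t                                # transformed copy

    def cost(p):                                   # p = (angle, tx, ty)
        c, s = np.cos(p[0]), np.sin(p[0])
        Rp = np.array([[c, -s], [s, c]])
        return np.sum((A @ Rp.T + p[1:] - B) ** 2)

    # (mu, lambda)-ES: each generation, offspring replace parents entirely.
    mu, lam, sigma = 10, 60, 0.5
    parents = rng.normal(0.0, 2.0, (mu, 3))
    for gen in range(100):
        off = parents[rng.integers(mu, size=lam)] + rng.normal(0, sigma, (lam, 3))
        off = off[np.argsort([cost(o) for o in off])[:mu]]
        parents, sigma = off, sigma * 0.97         # slow step-size decay
    print("recovered pose:", np.round(parents[0], 3), "cost:", cost(parents[0]))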
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
... Rule 11.9(b) are already written broadly enough to allow for both sequential or simultaneous routing of... or sequential routing as to these strategies. \\4\\ Regarding simultaneous routing, the Exchange may.... Simultaneous routing is an improvement on the current sequential manner in which orders are filled because it...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
... simultaneous or sequential routing as to these strategies. \\4\\ Regarding simultaneous routing, the Exchange may.... Simultaneous routing is an improvement on the current sequential manner in which orders are filled because it... simultaneous or sequential). \\5\\ 15 U.S.C. 78f(b)(5). B. Self-Regulatory Organization's Statement on Burden on...
Three parameters optimizing closed-loop control in sequential segmental neuromuscular stimulation.
Zonnevijlle, E D; Somia, N N; Perez Abadia, G; Stremel, R W; Maldonado, C J; Werker, P M; Kon, M; Barker, J H
1999-05-01
In conventional dynamic myoplasties, the force generation is poorly controlled. This causes unnecessary fatigue of the transposed/transplanted electrically stimulated muscles and causes damage to the involved tissues. We introduced sequential segmental neuromuscular stimulation (SSNS) to reduce muscle fatigue by allowing part of the muscle to rest periodically while the other parts work. Despite this improvement, we hypothesize that fatigue could be further reduced in some applications of dynamic myoplasty if the muscles were made to contract according to need. The first necessary step is to gain appropriate control over the contractile activity of the dynamic myoplasty. Therefore, closed-loop control was tested on a sequentially stimulated neosphincter to strive for the best possible control over the amount of generated pressure. A selection of parameters was validated for optimizing control. We concluded that the frequency of corrections, the threshold for corrections, and the transition time are meaningful parameters in the controlling algorithm of the closed-loop control in a sequentially stimulated myoplasty.
Liou, Jyh-Ming; Chen, Chieh-Chang; Fang, Yu-Jen; Chen, Po-Yueh; Chang, Chi-Yang; Chou, Chu-Kuang; Chen, Mei-Jyh; Tseng, Cheng-Hao; Lee, Ji-Yuh; Yang, Tsung-Hua; Chiu, Min-Chin; Yu, Jian-Jyun; Kuo, Chia-Chi; Luo, Jiing-Chyuan; Hsu, Wen-Feng; Hu, Wen-Hao; Tsai, Min-Horn; Lin, Jaw-Town; Shun, Chia-Tung; Twu, Gary; Lee, Yi-Chia; Bair, Ming-Jong; Wu, Ming-Shiang
2018-05-29
Whether extending the treatment length and the use of high-dose esomeprazole may optimize the efficacy of Helicobacter pylori eradication remains unknown. To compare the efficacy and tolerability of optimized 14 day sequential therapy and 10 day bismuth quadruple therapy containing high-dose esomeprazole in first-line therapy. We recruited 620 adult patients (≥20 years of age) with H. pylori infection naive to treatment in this multicentre, open-label, randomized trial. Patients were randomly assigned to receive 14 day sequential therapy or 10 day bismuth quadruple therapy, both containing esomeprazole 40 mg twice daily. Those who failed after 14 day sequential therapy received rescue therapy with 10 day bismuth quadruple therapy and vice versa. Our primary outcome was the eradication rate in the first-line therapy. Antibiotic susceptibility was determined. ClinicalTrials.gov: NCT03156855. The eradication rates of 14 day sequential therapy and 10 day bismuth quadruple therapy were 91.3% (283 of 310, 95% CI 87.4%-94.1%) and 91.6% (284 of 310, 95% CI 87.8%-94.3%) in the ITT analysis, respectively (difference -0.3%, 95% CI -4.7% to 4.4%, P = 0.886). However, the frequencies of adverse effects were significantly higher in patients treated with 10 day bismuth quadruple therapy than those treated with 14 day sequential therapy (74.4% versus 36.7% P < 0.0001). The eradication rate of 14 day sequential therapy in strains with and without 23S ribosomal RNA mutation was 80% (24 of 30) and 99% (193 of 195), respectively (P < 0.0001). Optimized 14 day sequential therapy was non-inferior to, but better tolerated than 10 day bismuth quadruple therapy and both may be used in first-line treatment in populations with low to intermediate clarithromycin resistance.
Gaythorpe, Katy; Adams, Ben
2016-05-21
Epidemics of water-borne infections often follow natural disasters and extreme weather events that disrupt water management processes. The impact of such epidemics may be reduced by deployment of transmission control facilities such as clinics or decontamination plants. Here we use a relatively simple mathematical model to examine how demographic and environmental heterogeneities, population behaviour, and behavioural change in response to the provision of facilities, combine to determine the optimal configurations of limited numbers of facilities to reduce epidemic size, and endemic prevalence. We show that, if the presence of control facilities does not affect behaviour, a good general rule for responsive deployment to minimise epidemic size is to place them in exactly the locations where they will directly benefit the most people. However, if infected people change their behaviour to seek out treatment then the deployment of facilities offering treatment can lead to complex effects that are difficult to foresee. So careful mathematical analysis is the only way to get a handle on the optimal deployment. Behavioural changes in response to control facilities can also lead to critical facility numbers at which there is a radical change in the optimal configuration. So sequential improvement of a control strategy by adding facilities to an existing optimal configuration does not always produce another optimal configuration. We also show that the pre-emptive deployment of control facilities has conflicting effects. The configurations that minimise endemic prevalence are very different to those that minimise epidemic size. So cost-benefit analysis of strategies to manage endemic prevalence must factor in the frequency of extreme weather events and natural disasters. Copyright © 2016 Elsevier Ltd. All rights reserved.
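The "place facilities where they directly benefit the most people" rule can be sketched as a greedy coverage heuristic; the locations, populations and coverage radius below are invented, and the behavioural feedback that the paper shows can break this rule is deliberately not modelled.

    # Greedy facility placement: repeatedly site a facility at the village
    # whose coverage radius reaches the most not-yet-covered people. Invented data.
    import numpy as np

    rng = np.random.default_rng(7)
    villages = rng.uniform(0, 100, (30, 2))        # village coordinates
    pop = rng.integers(100, 1000, 30)              # village populations
    radius, budget = 20.0, 3                       # facility reach and count

    dist = np.linalg.norm(villages[:, None] - villages[None, :], axis=2)
    covered = np.zeros(30, dtype=bool)
    sites = []
    for _ in range(budget):
        # benefit of siting at each village = newly covered population
        gain = [(pop * ~covered * (dist[i] <= radius)).sum() for i in range(30)]
        best = int(np.argmax(gain))
        sites.append(best)
        covered |= dist[best] <= radius
    print("chosen sites:", sites, "covered population:", int(pop[covered].sum()))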
Kwon, Man Jae; Boyanov, Maxim I; Yang, Jung-Seok; Lee, Seunghak; Hwang, Yun Ho; Lee, Ju Yeon; Mishra, Bhoopesh; Kemner, Kenneth M
2017-07-01
Zinc contamination in near- and sub-surface environments is a serious threat to many ecosystems and to public health. Sufficient understanding of Zn speciation and transport mechanisms is therefore critical to evaluating its risk to the environment and to developing remediation strategies. The geochemical and mineralogical characteristics of contaminated soils in the vicinity of a Zn ore transportation route were thoroughly investigated using a variety of analytical techniques (sequential extraction, XRF, XRD, SEM, and XAFS). Imported Zn-concentrate (ZnS) was deposited in a receiving facility and dispersed over time to the surrounding roadside areas and rice-paddy soils. Subsequent physical and chemical weathering resulted in dispersal into the subsurface. The species identified in the contaminated areas included Zn-sulfide, Zn-carbonate, other O-coordinated Zn-minerals, and Zn species bound to Fe/Mn oxides or clays, as confirmed by XAFS spectroscopy and sequential extraction. The observed transformation from S-coordinated Zn to O-coordinated Zn associated with minerals suggests that this contaminant can change into more soluble and labile forms as a result of weathering. For the purpose of developing a soil washing remediation process, the contaminated samples were extracted with dilute acids. The extraction efficiency increased with the increase of O-coordinated Zn relative to S-coordinated Zn in the sediment. This study demonstrates that improved understanding of Zn speciation in contaminated soils is essential for well-informed decision making regarding metal mobility and toxicity, as well as for choosing an appropriate remediation strategy using soil washing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bahnasy, Mahmoud F; Lucy, Charles A
2012-12-07
A sequential surfactant bilayer/diblock copolymer coating was previously developed for the separation of proteins. The coating is formed by flushing the capillary with the cationic surfactant dioctadecyldimethylammonium bromide (DODAB) followed by the neutral polymer poly-oxyethylene (POE) stearate. Herein we describe the method development and optimization for capillary isoelectric focusing (cIEF) separations based on the developed sequential coating. Electroosmotic flow can be tuned by varying the POE chain length, which allows optimization of resolution and analysis time. DODAB/POE 40 stearate can be used to perform single-step cIEF, while both DODAB/POE 40 and DODAB/POE 100 stearate allow two-step cIEF methodologies. A set of peptide markers is used to assess the coating performance. The sequential coating has been applied successfully to cIEF separations using different capillary lengths and inner diameters. A linear pH gradient is established only in the two-step cIEF methodology using pH 3-10, 2.5% (v/v) carrier ampholyte. Hemoglobin A(0) and S variants are successfully resolved on DODAB/POE 40 stearate sequentially coated capillaries. Copyright © 2012 Elsevier B.V. All rights reserved.
Subsonic Aircraft With Regression and Neural-Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2004-01-01
At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is, the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown, and the central processing unit (CPU) time to solution is given. The regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. The cascade strategy was required by FLOPS as well as by the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate, and was at about the same level for small, standard, and large models with redundancy ratios (the ratio of the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. On an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), regression training required a fraction of a CPU second, whereas neural network training took between 1 and 9 min. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min; it was reduced to 2 sec with the regression method and to 4 min by the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.
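The cascade idea above (one optimizer's solution seeding the next) can be sketched in a few lines. This is a minimal illustration, not the COMETBOARDS implementation: scipy offers no sequential-linear-programming or method-of-feasible-directions routines, so COBYLA and SLSQP stand in for the four-stage cascade, and the objective is a toy stand-in for aircraft weight.

```python
from scipy.optimize import minimize

def objective(x):                      # toy stand-in for aircraft weight
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

x = [0.0, 0.0]
for method in ["COBYLA", "SLSQP"]:     # cascade: each stage refines the last
    x = minimize(objective, x, method=method, constraints=constraints).x
print(x)                               # approximately (1.0, 2.5)
```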
Optimal decision-making in mammals: insights from a robot study of rodent texture discrimination
Lepora, Nathan F.; Fox, Charles W.; Evans, Mathew H.; Diamond, Mathew E.; Gurney, Kevin; Prescott, Tony J.
2012-01-01
Texture perception is studied here in a physical model of the rat whisker system consisting of a robot equipped with a biomimetic vibrissal sensor. Investigations of whisker motion in rodents have led to several explanations for texture discrimination, such as resonance or stick-slips. Meanwhile, electrophysiological studies of decision-making in monkeys have suggested a neural mechanism of evidence accumulation to threshold for competing percepts, described by a probabilistic model of Bayesian sequential analysis. For our robot whisker data, we find that variable reaction-time decision-making with sequential analysis performs better than the fixed response-time maximum-likelihood estimation. These probabilistic classifiers also use whatever available features of the whisker signals aid the discrimination, giving improved performance over a single-feature strategy, such as matching the peak power spectra of whisker vibrations. These results cast new light on how the various proposals for texture discrimination in rodents depend on the whisker contact mechanics and suggest the possibility of a common account of decision-making across mammalian species. PMID:22279155
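The contrast between the two decision rules can be made concrete with a small sketch. This is an illustrative model only, assuming Gaussian feature distributions with made-up means, not the robot's classifier: evidence accumulation to a threshold (variable reaction time) versus maximum likelihood at a fixed sample count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian feature models for two textures (illustrative values, not fitted
# to whisker data).
MU, SIGMA = {"rough": 1.0, "smooth": 0.0}, 1.0

def loglik(x, mu):
    return -0.5 * ((x - mu) / SIGMA) ** 2

def sequential_decision(samples, threshold=5.0):
    """Accumulate log-likelihood-ratio evidence until a threshold is crossed
    (variable reaction time, as in Bayesian sequential analysis)."""
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        llr += loglik(x, MU["rough"]) - loglik(x, MU["smooth"])
        if abs(llr) >= threshold:
            return ("rough" if llr > 0 else "smooth"), t
    return ("rough" if llr > 0 else "smooth"), len(samples)

def fixed_time_mle(samples, n=20):
    """Maximum-likelihood choice after a fixed number of samples."""
    chunk = np.asarray(samples[:n])
    scores = {name: loglik(chunk, mu).sum() for name, mu in MU.items()}
    return max(scores, key=scores.get), n

samples = rng.normal(MU["rough"], SIGMA, size=200)
print(sequential_decision(samples))   # typically decides well before 20 samples
print(fixed_time_mle(samples))
```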
Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.
Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin
2015-01-01
Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves speedups of up to 11.28× on a Tesla K20m GPU compared with the sequential MAFFT 7.015.
Different coding strategies for the perception of stable and changeable facial attributes.
Taubert, Jessica; Alais, David; Burr, David
2016-09-01
Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming or serial dependencies, leading to positive sequential effects; and adaptation or habituation, leading to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate; while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependency for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing the use of past information, either by integration or differentiation, depending on the permanence of that attribute.
Multiple health behaviours: overview and implications
Spring, Bonnie; Moller, Arlen C.; Coons, Michael J.
2012-01-01
Background More remains unknown than known about how to optimize multiple health behaviour change. Methods After reviewing the prevalence and comorbidities among major chronic disease risk behaviours for adults and youth, we consider the origins and applicability of high-risk and population strategies to foster multiple health behaviour change. Results Findings indicate that health risk behaviours are prevalent, increase with age and co-occur as risk behaviour clusters or bundles. Conclusions We conclude that both population and high-risk strategies for health behaviour intervention are warranted, potentially synergistic and need intervention design that accounts for substitute and complementary relationships among bundled health behaviours. To maximize positive public health impact, a pressing need exists for bodies of basic and translational science that explain health behaviour bundling. Also needed is applied science that elucidates the following: (1) the optimal number of behaviours to intervene upon; (2) how target behaviours are best selected (e.g. greatest health impact; patient preference or positive effect on bundled behaviours); (3) whether to increase healthy or decrease unhealthy behaviours; (4) whether to intervene on health behaviours simultaneously or sequentially and (5) how to achieve positive synergies across individual-, group- and population-level intervention approaches. PMID:22363028
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that have recently been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost-effectiveness of 19 stream gages, in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge, is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gaging route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget 10% less than the current one. Other strategies are also explored. (USGS)
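A brute-force stand-in for the report's sequential-search scheme is easy to sketch: enumerate per-route visit counts, score each feasible plan by its summed error variance, and keep the lowest-variance plan within budget. All numbers and the a/visits + b variance form below are hypothetical; the report derives the true visit-variance relation from optimal estimation theory.

```python
import itertools
import numpy as np

route_cost = np.array([4.0, 6.0, 3.0])   # hypothetical cost of one trip per route
a = np.array([10.0, 8.0, 12.0])          # hypothetical variance scale per route
b = np.array([0.5, 0.3, 0.8])            # hypothetical irreducible variance
budget = 120.0

def total_variance(visits):
    # Toy model: a gage's error variance falls off with visit frequency.
    return float(np.sum(a / visits + b))

best = None
for visits in itertools.product(range(6, 27), repeat=3):  # 6-26 visits per route
    v = np.array(visits, dtype=float)
    if np.dot(v, route_cost) <= budget:
        tv = total_variance(v)
        if best is None or tv < best[0]:
            best = (tv, visits)

print(best)   # (minimal summed error variance, visit counts per route)
```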
ERIC Educational Resources Information Center
Passig, David
2009-01-01
Children with mental retardation have pronounced difficulties in using cognitive strategies and comprehending abstract concepts--among them, the concept of sequential time (Van-Handel, Swaab, De-Vries, & Jongmans, 2007). The perception of sequential time is generally tested by using scenarios presenting a continuum of actions. The goal of this…
Optimal mode transformations for linear-optical cluster-state generation
Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...
2015-06-15
In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate with a success rate of 1/9 per operation. The question of the optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient and show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselected entangling of n photonic qubits or m Bell pairs into a cluster is (1/2)^(n-1) and (1/4)^(m-1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.
Optimizing resource allocation for breast cancer prevention and care among Hong Kong Chinese women.
Wong, Irene O L; Tsang, Janice W H; Cowling, Benjamin J; Leung, Gabriel M
2012-09-15
Recommendations about funding of interventions through the full spectrum of the disease often have been made in isolation. The authors of this report optimized budgetary allocations by comparing cost-effectiveness data for different preventive and management strategies throughout the disease course for breast cancer in Hong Kong (HK) Chinese women. Nesting a state-transition Markov model within a generalized cost-effectiveness analytic framework, costs and quality-adjusted life-years (QALYs) were compared to estimate average cost-effectiveness ratios for the following interventions at the population level: biennial mass mammography (ages 40-69 years or ages 40-79 years), reduced waiting time for postoperative radiotherapy (by 15% or by 25%), adjuvant endocrine therapy (either upfront aromatase inhibitor [AI] therapy or sequentially with tamoxifen followed by AI) in postmenopausal women with estrogen receptor-positive disease, targeted immunotherapy in those with tumors that overexpress human epidermal growth factor receptor 2, and enhanced palliative services (either at home or as an inpatient). Usual care for eligible patients in the public sector was the comparator. In descending order of cost-effectiveness, the optimal allocation of additional resources for breast cancer would be the following: a 25% reduction in waiting time for postoperative radiotherapy (in US dollars: $5000 per QALY); enhanced, home-based palliative care ($7105 per QALY); adjuvant, sequential endocrine therapy ($17,963 per QALY); targeted immunotherapy ($62,092 per QALY); and mass mammography screening of women ages 40 to 69 years ($72,576 per QALY). Given the lower disease risk and different age profiles of patients in HK Chinese, among other newly emergent and emerging economies with similar transitioning epidemiologic profiles, the current findings provided direct evidence to support policy decisions that may be dissimilar to current Western practice. Copyright © 2012 American Cancer Society.
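The ranking step of such an analysis reduces to computing average cost-effectiveness ratios (cost per QALY) and sorting. In the sketch below the dollar-per-QALY figures are the ones quoted above, but the QALY gains are invented and the costs back-computed from them, so only the ordering logic is meaningful.

```python
# Hypothetical (cost, QALY) pairs; costs are back-computed so the printed
# ratios match the abstract's quoted dollar-per-QALY figures.
interventions = {
    "radiotherapy wait -25%": (5000 * 1.0, 1.0),
    "home-based palliative care": (7105 * 0.8, 0.8),
    "sequential endocrine therapy": (17963 * 0.5, 0.5),
    "targeted immunotherapy": (62092 * 0.3, 0.3),
    "mammography ages 40-69": (72576 * 0.6, 0.6),
}

def acer(cost, qalys):
    """Average cost-effectiveness ratio: cost per QALY gained."""
    return cost / qalys

ranked = sorted(interventions.items(), key=lambda kv: acer(*kv[1]))
for name, (cost, q) in ranked:
    print(f"{name}: ${acer(cost, q):,.0f} per QALY")
```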
Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.
Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing
2016-06-01
The aim of this study is to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) using clinical parameters by machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for the model construction, three strategies were adopted: (1) clonal selection algorithm (CSA) based selection strategy; (2) sequential forward selection (SFS) method; and (3) statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of the system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values for ANN, LR and SVM were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
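A minimal version of the best-performing pipeline (an SVM scored by AUC under 5-fold cross-validation) can be sketched with scikit-learn. The data below are synthetic stand-ins for the 81-patient, 18-parameter cohort, and the clonal-selection feature-selection wrapper is omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the clinical dataset (81 patients, 18 parameters);
# labels mark early distant failure.
rng = np.random.default_rng(1)
X = rng.normal(size=(81, 18))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=81) > 0).astype(int)

# SVM scored by ROC AUC under 5-fold CV, mirroring the paper's validation
# scheme; a feature-selection wrapper (e.g. CSA) would sit around this step.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(aucs.mean())
```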
Shaffer, J Scott; Moore, Penny L; Kardar, Mehran; Chakraborty, Arup K
2016-10-24
Strategies to elicit Abs that can neutralize diverse strains of a highly mutable pathogen are likely to result in a potent vaccine. Broadly neutralizing Abs (bnAbs) against HIV have been isolated from patients, proving that the human immune system can evolve them. Using computer simulations and theory, we study immunization with diverse mixtures of variant antigens (Ags). Our results show that particular choices for the number of variant Ags and the mutational distances separating them maximize the probability of inducing bnAbs. The variant Ags represent potentially conflicting selection forces that can frustrate the Darwinian evolutionary process of affinity maturation. An intermediate level of frustration maximizes the chance of evolving bnAbs. A simple model makes vivid the origin of this principle of optimal frustration. Our results, combined with past studies, suggest that an appropriately chosen permutation of immunization with an optimally designed mixture (using the principles that we describe) and sequential immunization with variant Ags that are separated by relatively large mutational distances may best promote the evolution of bnAbs.
Richetti, Aline; Leite, Selma G F; Antunes, Octávio A C; de Souza, Andrea L F; Lerin, Lindomar A; Dallago, Rogério M; Paroul, Natalia; Di Luccio, Marco; Oliveira, J Vladimir; Treichel, Helen; de Oliveira, Débora
2010-04-01
This work reports the lipase-catalyzed production of 2-ethylhexyl palmitate by esterification in a solvent-free system with an immobilized lipase (Lipozyme RM IM). A sequential strategy applying two experimental designs was used to optimize 2-ethylhexyl palmitate production. An empirical model was then built to assess the effects of process variables on the reaction conversion. Afterwards, the operating conditions that optimized 2-ethylhexyl palmitate production were established as an acid/alcohol molar ratio of 1:3, a temperature of 70 degrees C, a stirring rate of 150 rpm, and 10 wt.% of enzyme, leading to a reaction conversion as high as 95%. From this point, a kinetic study was carried out evaluating the effect of the acid:alcohol molar ratio, the enzyme concentration, and the temperature on product conversion. The results obtained in this step verified that an excess of alcohol (acid to alcohol molar ratio of 1:6), a relatively low enzyme concentration (10 wt.%), and a temperature of 70 degrees C led to conversions close to 100%.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time handling rank ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included demonstrating the design tool's capabilities as they apply to this design problem.
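The sequential-unconstrained-minimization mode can be illustrated with a penalty method: the constraint is folded into the objective with a weight that grows each cycle, and each cycle is a single unconstrained solve. The functions below are toy stand-ins, not the cover-plate model, and an exterior penalty is used for simplicity where the report applies an extended interior penalty.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):            # objective (e.g. structural weight)
    return x[0] ** 2 + x[1] ** 2

def g(x):            # constraint g(x) <= 0 (e.g. a stress limit)
    return 1.0 - x[0] - x[1]

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:          # increasing penalty weights
    pen = lambda x, r=r: f(x) + r * max(0.0, g(x)) ** 2
    x = minimize(pen, x, method="BFGS").x     # one unconstrained solve per cycle
print(x)   # approaches the constrained optimum (0.5, 0.5)
```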
Zhang, Lihua; Chen, Xianzhong; Chen, Zhen; Wang, Zezheng; Jiang, Shan; Li, Li; Pötter, Markus; Shen, Wei; Fan, You
2016-11-01
The diploid yeast Candida tropicalis, which can utilize n-alkane as a carbon and energy source, is an attractive strain for both physiological studies and practical applications. However, it presents some characteristics, such as rare codon usage, difficulty in sequential gene disruption, and inefficiency in foreign gene expression, that hamper strain improvement through genetic engineering. In this work, we present a simple and effective method for sequential gene disruption in C. tropicalis based on the use of an auxotrophic mutant host defective in orotidine monophosphate decarboxylase (URA3). The disruption cassette, which consists of a functional yeast URA3 gene flanked by a 0.3 kb gene disruption auxiliary sequence (gda) direct repeat derived from downstream or upstream of the URA3 gene and of homologous arms of the target gene, was constructed and introduced into the yeast genome by integrative transformation. Stable integrants were isolated by selection for Ura+ and identified by PCR and sequencing. The important feature of this construct, which makes it very attractive, is that recombination between the flanking direct gda repeats occurs at a high frequency (10^-8) during mitosis. After excision of the URA3 marker, only one copy of the gda sequence remains at the recombinant locus. Thus, the resulting ura3 strain can be used again to disrupt a second allelic gene in a similar manner. In addition to this effective sequential gene disruption method, a codon-optimized green fluorescent protein-encoding gene (GFP) was functionally expressed in C. tropicalis. Thus, we propose a simple and reliable method to improve C. tropicalis by genetic manipulation.
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm builds on the approach proposed in [1] with the objective of eliminating the need for a large number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows formation of a desirable control profile without requiring many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher elevation Mars landing.
Shortreed, Susan M.; Moodie, Erica E. M.
2012-01-01
Treatment of schizophrenia is notoriously difficult and typically requires personalized adaptation of treatment due to lack of efficacy, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication, perphenazine, to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488
Buffer management for sequential decoding. [block erasure probability reduction
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of 0.01, use of this buffer-management strategy reduces the erasure rate to less than 0.0001.
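The overflow mechanism is easy to simulate. The sketch below is a toy queueing model, not the paper's analysis: per-block decoding work is drawn from a heavy-tailed distribution (a hallmark of sequential decoding), the decoder has a tenfold speed advantage, and a block is erased when the backlog exceeds a ten-block buffer.

```python
import random

random.seed(0)

def erasure_rate(speed=10.0, buffer_blocks=10, n_blocks=100_000):
    """Toy simulation of computational overflow: work per block is Pareto
    distributed, the decoder runs `speed` times faster than real time, and a
    block is erased when the decoding backlog exceeds the buffer. Numbers
    mirror the regime in the paper, not its exact queueing model."""
    backlog, erasures = 0.0, 0
    for _ in range(n_blocks):
        work = random.paretovariate(2.0)                   # work, in block-times
        backlog = max(0.0, backlog + work / speed - 1.0)   # one block-time passes
        if backlog > buffer_blocks:                        # overflow: erase a block
            erasures += 1
            backlog = buffer_blocks
    return erasures / n_blocks

print(erasure_rate())
```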
Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang
2016-01-15
Analytical methodologies for the evaluation of multi-component systems in traditional Chinese medicines (TCMs) have been inadequate. As a result, the poorly characterized multi-component composition hinders interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy focused on the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits, mass spectral trees similarity filter (MTSF), and isomer discrimination. It was successfully applied to the HRMS data acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive description of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design could improve the efficiency of triggering MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large data sets. Besides, the introduction of diagnostic product ion (DPI) discrimination, ClogP analysis, and molecular simulation raised the efficiency and accuracy of characterizing sequential constituents, especially positional and geometric isomers. In conclusion, the results expanded our understanding of CGAs in FLJ, and the strategy could serve as an example for future research on the comprehensive identification of sequential constituents in TCMs. Meanwhile, it may propose a novel approach for analyzing sequential constituents, and is promising for the quality control and evaluation of TCMs. Copyright © 2015 Elsevier B.V. All rights reserved.
Husson, Eric; Auxenfans, Thomas; Herbaut, Mickael; Baralle, Manon; Lambertyn, Virginie; Rakotoarivonina, Harivoni; Rémond, Caroline; Sarazin, Catherine
2018-03-01
Sequential and simultaneous strategies for fractionating wheat straw were developed, combining 1-ethyl-3-methylimidazolium acetate [C2mim][OAc], endo-xylanases from Thermobacillus xylanilyticus, and commercial cellulases. After [C2mim][OAc] pretreatment, hydrolysis of wheat straw catalyzed by endo-xylanases led to efficient xylose production with a very competitive yield (97.6 ± 1.3%). Subsequent enzymatic saccharification achieved total degradation of the cellulosic fraction (>99%). These high performances revealed an interesting complementarity of [C2mim][OAc] and xylanase pretreatments for increasing the enzymatic digestibility of the cellulosic fraction, in agreement with the structural and morphological changes of wheat straw induced by each of these pretreatment steps. In addition, a higher tolerance to [C2mim][OAc] (up to 30% v/v) was observed for the endo-xylanases from T. xylanilyticus than for the cellulases from T. reesei. Based on this property, a simultaneous strategy combining [C2mim][OAc] and endo-xylanases as a one-batch pretreatment produced xylose with a yield similar to that obtained with the sequential strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
2017-01-01
We develop a flexible, two-locus model for the spread of insecticide resistance applicable to mosquito species that transmit human diseases such as malaria. The model allows differential exposure of males and females, allows them to encounter high or low concentrations of insecticide, and allows selection pressures and dominance values to differ depending on the concentration of insecticide encountered. We demonstrate its application by investigating the relative merits of sequential use of insecticides versus their deployment as a mixture to minimise the spread of resistance. We recover previously published results as subsets of this model and conduct a sensitivity analysis over an extensive parameter space to identify what circumstances favour mixtures over sequences. Both strategies lasted more than 500 mosquito generations (or about 40 years) in 24% of runs, while in those runs where resistance had spread to high levels by 500 generations, 56% favoured sequential use and 44% favoured mixtures. Mixtures are favoured when insecticide effectiveness (their ability to kill homozygous susceptible mosquitoes) is high and exposure (the proportion of mosquitoes that encounter the insecticide) is low. If insecticides do not reliably kill homozygous sensitive genotypes, it is likely that sequential deployment will be a more robust strategy. Resistance to an insecticide always spreads slower if that insecticide is used in a mixture although this may be insufficient to outperform sequential use: for example, a mixture may last 5 years while the two insecticides deployed individually may last 3 and 4 years giving an overall ‘lifespan’ of 7 years for sequential use. We emphasise that this paper is primarily about designing and implementing a flexible modelling strategy to investigate the spread of insecticide resistance in vector populations and demonstrate how our model can identify vector control strategies most likely to minimise the spread of insecticide resistance. PMID:28095406
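The sequences-versus-mixture comparison can be caricatured with a one-locus haploid recursion per insecticide, far simpler than the paper's two-locus diploid model. The exposure and effectiveness values below are illustrative; the mixture's advantage comes from each resistance allele still facing the partner insecticide's kill probability.

```python
# Toy one-locus haploid model per insecticide; all parameter values are
# illustrative, not taken from the paper's sensitivity analysis.

def generations_to_resistance(selection, p0=1e-3, threshold=0.5):
    """Generations until the resistance allele exceeds `threshold`, given the
    per-generation survival disadvantage `selection` of susceptibles."""
    p, gens = p0, 0
    while p < threshold:
        w_r, w_s = 1.0, 1.0 - selection            # relative fitnesses
        p = p * w_r / (p * w_r + (1.0 - p) * w_s)
        gens += 1
    return gens

exposure, eff = 0.5, 0.9   # fraction exposed; kill probability for susceptibles

# Sequential deployment: insecticide A alone until resistance, then B alone.
sequential_lifespan = 2 * generations_to_resistance(exposure * eff)

# Mixture: a mosquito resistant to A is still killed by B with probability eff,
# so the selective advantage at each locus is diluted by a factor (1 - eff).
mixture_lifespan = generations_to_resistance(exposure * eff * (1.0 - eff))

print(sequential_lifespan, mixture_lifespan)
```

With high effectiveness and moderate exposure, as here, the mixture outlasts sequential use, consistent with the conditions the paper identifies as favouring mixtures.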
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem, for which a feasible optimal solution must be obtained in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem as a static optimization over the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations verify the effectiveness and rapidity of the proposed algorithm.
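The circular-zone condition is a standard segment-circle distance test and is indeed readily programmed; a sketch follows (polygonal zones would need an analogous segment-segment test). The function below is a generic geometric routine, not the article's exact formulation.

```python
import numpy as np

def segment_intersects_circle(p0, p1, center, radius):
    """Does the line segment p0-p1 pass through a circular no-fly zone?
    Clamp the projection of the center onto the segment, then compare the
    closest distance with the radius."""
    p0, p1, c = map(np.asarray, (p0, p1, center))
    d = p1 - p0
    t = np.clip(np.dot(c - p0, d) / np.dot(d, d), 0.0, 1.0)
    closest = p0 + t * d
    return np.linalg.norm(c - closest) < radius

print(segment_intersects_circle((0, 0), (10, 0), (5, 2), 3.0))   # True
print(segment_intersects_circle((0, 0), (10, 0), (5, 5), 3.0))   # False
```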
The cost and cost-effectiveness of rapid testing strategies for yaws diagnosis and surveillance.
Fitzpatrick, Christopher; Asiedu, Kingsley; Sands, Anita; Gonzalez Pena, Tita; Marks, Michael; Mitja, Oriol; Meheus, Filip; Van der Stuyft, Patrick
2017-10-01
Yaws is a non-venereal treponemal infection caused by Treponema pallidum subspecies pertenue. The disease is targeted by WHO for eradication by 2020. Rapid diagnostic tests (RDTs) are envisaged for confirmation of clinical cases during treatment campaigns and for certification of the interruption of transmission. Yaws testing requires both treponemal (trep) and non-treponemal (non-trep) assays for diagnosis of current infection. We evaluate a sequential testing strategy (using a treponemal RDT before a trep/non-trep RDT) in terms of cost and cost-effectiveness, relative to a single-assay combined testing strategy (using the trep/non-trep RDT alone), for two use cases: individual diagnosis and community surveillance. We use cohort decision analysis to examine the diagnostic and cost outcomes. We estimate the cost and cost-effectiveness of the alternative testing strategies at different levels of prevalence of past/current infection and current infection under each use case. We take the perspective of the global yaws eradication programme. We calculate the total number of correct diagnoses for each strategy over a range of plausible prevalences. We employ probabilistic sensitivity analysis (PSA) to account for uncertainty and report 95% intervals. At current prices of the treponemal and trep/non-trep RDTs, the sequential strategy is cost-saving for individual diagnosis at a prevalence of past/current infection below 85% (81-90); it is cost-saving for surveillance at less than 100%. The threshold price of the trep/non-trep RDT (below which the sequential strategy would no longer be cost-saving) is US$ 1.08 (1.02-1.14) for individual diagnosis at high prevalence of past/current infection (51%) and US$ 0.54 (0.52-0.56) for community surveillance at low prevalence (15%). We find that the sequential strategy is cost-saving for both diagnosis and surveillance in most relevant settings. In the absence of evidence assessing relative performance (sensitivity and specificity), cost-effectiveness is uncertain. However, the conditions under which the combined-test-only strategy might be more cost-effective than the sequential strategy are limited. A cheaper trep/non-trep RDT is needed, costing no more than US$ 0.50-1.00, depending on the use case. Our results will help enhance the cost-effectiveness of yaws programmes in the 13 countries known to be currently endemic. It will also inform efforts in the much larger group of 71 countries with a history of yaws, many of which will have to undertake surveillance to confirm the interruption of transmission.
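The core cost comparison can be sketched as a per-person expected cost under each strategy. Prices below are hypothetical and test accuracy is taken as perfect, so the sketch isolates only the cost logic: the sequential strategy saves money whenever the treponemal-positive fraction is small enough that few dual RDTs are consumed.

```python
# Per-person expected cost of the two testing strategies; prices hypothetical.
def strategy_costs(prev_past_or_current, price_trep=0.50, price_dual=2.00):
    combined = price_dual                                        # dual RDT for everyone
    sequential = price_trep + prev_past_or_current * price_dual  # dual only if trep+
    return combined, sequential

for prev in (0.15, 0.51, 0.85):
    combined, sequential = strategy_costs(prev)
    print(f"prevalence {prev:.0%}: combined ${combined:.2f}, "
          f"sequential ${sequential:.2f}")
# Sequential is cost-saving whenever prev < 1 - price_trep / price_dual (75% here).
```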
MR CAT scan: a modular approach for hybrid imaging.
Hillenbrand, C; Hahn, D; Haase, A; Jakob, P M
2000-07-01
In this study, a modular concept for NMR hybrid imaging is presented. This concept essentially integrates different imaging modules in a sequential fashion and is therefore called CAT (combined acquisition technique). CAT is not a single specific measurement sequence, but rather a sequence design concept whereby distinct acquisition techniques with varying imaging parameters are employed in rapid succession in order to cover k-space. The power of the CAT approach is that it provides high flexibility in optimizing the acquisition with respect to the available imaging time and the desired image quality. Important CAT sequence optimization steps include the appropriate choice of the k-space coverage ratio and the application of mixed-bandwidth technology. Details of both the CAT methodology and possible CAT acquisition strategies, such as FLASH/EPI-, RARE/EPI- and FLASH/BURST-CAT, are provided. Examples from imaging experiments in phantoms and healthy volunteers, including mixed-bandwidth acquisitions, are provided to demonstrate the feasibility of the proposed CAT concept.
Culture Moderates Biases in Search Decisions.
Pattaratanakun, Jake A; Mak, Vincent
2015-08-01
Prior studies suggest that people often search insufficiently in sequential-search tasks compared with the predictions of benchmark optimal strategies that maximize expected payoff. However, those studies were mostly conducted in individualist Western cultures; Easterners from collectivist cultures, with their higher susceptibility to escalation of commitment induced by sunk search costs, could exhibit a reversal of this undersearch bias by searching more than optimally, but only when search costs are high. We tested our theory in four experiments. In our pilot experiment, participants generally undersearched when search cost was low, but only Eastern participants oversearched when search cost was high. In Experiments 1 and 2, we obtained evidence for our hypothesized effects via a cultural-priming manipulation on bicultural participants in which we manipulated the language used in the program interface. We obtained further process evidence for our theory in Experiment 3, in which we made sunk costs nonsalient in the search task; as expected, cross-cultural effects were largely mitigated. © The Author(s) 2015.
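The benchmark optimal strategy referred to above is a reservation-value rule. A sketch under simplifying assumptions (offers drawn from a known Uniform(0, 1) distribution, constant cost c per draw) follows; undersearch then corresponds to stopping below r, oversearch to continuing beyond it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Optimal sequential search with per-draw cost c: accept the first offer above
# the reservation value r solving E[(X - r)+] = c. For X ~ Uniform(0, 1) this
# gives (1 - r)^2 / 2 = c, i.e. r = 1 - sqrt(2c). Illustrative setup only.
def reservation_value(c):
    return 1.0 - np.sqrt(2.0 * c)

def optimal_search(c, max_draws=10_000):
    r, spent = reservation_value(c), 0.0
    for _ in range(max_draws):
        spent += c
        x = rng.uniform()
        if x >= r:                  # stopping earlier than r = undersearch,
            return x - spent        # continuing past r = oversearch
    return -spent

print(reservation_value(0.01))   # ~0.86: cheap search, keep searching longer
print(reservation_value(0.10))   # ~0.55: costly search, stop sooner
print(optimal_search(0.01))      # net payoff of one optimal-search episode
```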
Economic and environmental costs of regulatory uncertainty for coal-fired power plants.
Patiño-Echeverri, Dalia; Fischbeck, Paul; Kriegler, Elmar
2009-02-01
Uncertainty about the extent and timing of CO2 emissions regulations for the electricity-generating sector exacerbates the difficulty of selecting investment strategies for retrofitting or alternatively replacing existing coal-fired power plants. This may result in inefficient investments that impose economic and environmental costs on society. In this paper, we construct a multiperiod decision model with an embedded multistage stochastic dynamic program minimizing the expected total costs of plant operation, installations, and pollution allowances. We use the model to forecast optimal sequential investment decisions of a power plant operator with and without uncertainty about future CO2 allowance prices. The comparison of the two cases demonstrates that uncertainty about future CO2 emissions regulations can cause significant economic costs and higher air emissions.
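The value of modeling price uncertainty explicitly shows up even in a two-period toy version of the retrofit decision; all numbers below are hypothetical. Waiting keeps the option to retrofit after the allowance price is revealed, which a deterministic plan ignores.

```python
# Two-period toy: retrofit now, or wait one period and decide after the CO2
# allowance price is revealed. All costs, emissions, and prices are made up.
scenarios = [(0.5, 10.0), (0.5, 60.0)]   # (probability, allowance price $/ton)
retrofit_cost = 400.0                    # capital cost of the retrofit
emissions, reduced = 10.0, 2.0           # tons emitted without / with retrofit

def expected_cost(retrofit_now):
    if retrofit_now:
        return retrofit_cost + sum(p * price * reduced for p, price in scenarios)
    # Waiting: once the price is known, retrofit only if it is worth it.
    return sum(p * min(price * emissions, retrofit_cost + price * reduced)
               for p, price in scenarios)

print(expected_cost(True), expected_cost(False))   # here, waiting is cheaper
```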
Improving Emergency Department flow through optimized bed utilization
Chartier, Lucas Brien; Simoes, Licinia; Kuipers, Meredith; McGovern, Barb
2016-01-01
Over the last decade, patient volumes in the emergency department (ED) have grown disproportionately compared to the increase in staffing and resources at the Toronto Western Hospital, an academic tertiary care centre in Toronto, Canada. The resultant congestion has spilled over to the ED waiting room, where medically undifferentiated and potentially unstable patients must wait until a bed becomes available. The aim of this quality improvement project was to decrease the 90th percentile of wait time between triage and bed assignment (time-to-bed) by half, from 120 to 60 minutes, for our highest acuity patients. We engaged key stakeholders to identify barriers and potential strategies to achieve optimal flow of patients into the ED. We first identified multiple flow-interrupting challenges, including operational bottlenecks and cultural issues. We then generated change ideas to address two main underlying causes of ED congestion: unnecessary patient utilization of ED beds and communication breakdown causing bed turnaround delays. We subsequently performed seven tests of change through sequential plan-do-study-act (PDSA) cycles. The most significant gains were made by improving communication strategies: small gains were achieved through the optimization of in-house digital information management systems, while significant improvements were achieved through the implementation of a low-tech direct contact mechanism (a two-way radio or walkie-talkie). In the post-intervention phase, time-to-bed for the 90th percentile of high-acuity patients decreased from 120 minutes to 66 minutes, with special cause variation showing a significant shift in the weekly measurements. PMID:27752312
Diniz, Juliana B; Costa, Daniel Lc; Cassab, Raony Cc; Pereira, Carlos Ab; Miguel, Euripedes C; Shavitt, Roseli G
2014-06-01
Our aim was to investigate the impact of comorbid body dysmorphic disorder (BDD) on the response to sequential pharmacological trials in adult obsessive-compulsive disorder (OCD) patients. The sequential trial initially involved fluoxetine monotherapy followed by one of three randomized, add-on strategies: placebo, clomipramine or quetiapine. We included 138 patients in the initial phase of fluoxetine, up to 80 mg or the maximum tolerated dosage, for 12 weeks. We invited 70 non-responders to participate in the add-on trial; as 54 accepted, we allocated 18 to each treatment group and followed them for an additional 12 weeks. To evaluate the combined effects of sex, age, age at onset, initial severity, type of augmentation and BDD on the response to sequential treatments, we constructed a model using generalized estimating equations (GEE). Of the 39 patients who completed the study (OCD-BDD, n = 13; OCD-non-BDD, n = 26), the OCD-BDD patients were less likely to be classified as responders than the OCD-non-BDD patients (Pearson Chi-Square = 4.4; p = 0.036). In the GEE model, BDD was not significantly associated with a worse response to sequential treatments (z-robust = 1.77; p = 0.07). The predictive potential of BDD regarding sequential treatment strategies for OCD did not survive when the analyses were controlled for other clinical characteristics. © The Author(s) 2013.
Optimization and Development of a Human Scent Collection Method
2007-06-04
Sequential Pointing in Children and Adults.
ERIC Educational Resources Information Center
Badan, Maryse; Hauert, Claude-Alain; Mounoud, Pierre
2000-01-01
Four experiments investigated the development of visuomotor control in sequential pointing in tasks varying in difficulty among 6- to 10-year-olds and adults. Comparisons across difficulty levels and ages suggest that motor development is not a uniform fine-tuning of stable strategies. Findings raise argument for stage characteristics of…
The Motivating Language of Principals: A Sequential Transformative Strategy
ERIC Educational Resources Information Center
Holmes, William Tobias
2012-01-01
This study implemented a Sequential Transformative Mixed Methods design with teachers (as recipients) and principals (to give voice) in the examination of principal talk in two different school accountability contexts (Continuously Improving and Continuously Zigzag) using the conceptual framework of Motivating Language Theory. In phase one,…
Accelerated search for materials with targeted properties by adaptive design
Xue, Dezhen; Balachandran, Prasanna V.; Hogden, John; Theiler, James; Xue, Deqing; Lookman, Turab
2016-01-01
Finding new materials with targeted properties has traditionally been guided by intuition, and trial and error. With increasing chemical complexity, the combinatorial possibilities are too large for an Edisonian approach to be practical. Here we show how an adaptive design strategy, tightly coupled with experiments, can accelerate the discovery process by sequentially identifying the next experiments or calculations, to effectively navigate the complex search space. Our strategy uses inference and global optimization to balance the trade-off between exploitation and exploration of the search space. We demonstrate this by finding very low thermal hysteresis (ΔT) NiTi-based shape memory alloys, with Ti50.0Ni46.7Cu0.8Fe2.3Pd0.2 possessing the smallest ΔT (1.84 K). We synthesize and characterize 36 predicted compositions (9 feedback loops) from a potential space of ∼800,000 compositions. Of these, 14 had smaller ΔT than any of the 22 in the original data set. PMID:27079901
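A minimal version of such an adaptive design loop, with a Gaussian-process surrogate and expected improvement standing in for the paper's inference-plus-global-optimization selector, is sketched below. The one-dimensional "composition" space and the hysteresis function are synthetic.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)

def hysteresis(x):
    """Hypothetical property to minimize (a stand-in for measured ΔT)."""
    return (x - 0.3) ** 2 + 0.01 * rng.normal()

candidates = np.linspace(0.0, 1.0, 200)[:, None]   # unexplored "compositions"
X = candidates[rng.choice(200, size=5, replace=False)]
y = np.array([hysteresis(x.item()) for x in X])

for _ in range(9):                                  # 9 feedback loops, as in the paper
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    z = (y.min() - mu) / np.maximum(sd, 1e-9)
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = candidates[np.argmax(ei)]              # next "experiment" to run
    X = np.vstack([X, x_next])
    y = np.append(y, hysteresis(x_next.item()))

print(X[np.argmin(y)].item(), y.min())              # best composition found
```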
Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida
2016-01-01
This study involves adaptation of a bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed a “flavonosome”. Three widely established and therapeutically valuable flavonoids, quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis – bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity against the human hepatoma cell line HepaRG. Furthermore, the entrapment and loading efficiency of the flavonoids in the optimal flavonosome were determined. Among the four synthesis methods, the sequential loading technique was identified as the best method for the synthesis of the QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of −39.07±3.55 mV; the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome showed no indication of toxicity toward the human hepatoma cell line in the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay, wherein even at the higher concentration of 200 µg/mL the flavonosomes maintained >85% cell viability. These results suggest that the sequential loading technique may be a promising nanodrug delivery system for loading multiple flavonoids in a single entity with sustained activity as an antioxidant, hepatoprotective, and hepatosupplement candidate. PMID:27555765
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
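For Poisson spike counts the optimal accumulation is an explicit log-likelihood-ratio recursion: each time bin with count k contributes k*log(r1/r0) - (r1 - r0)*dt. The sketch below uses illustrative rates and threshold, not parameters fitted to neural data.

```python
import numpy as np

rng = np.random.default_rng(4)

def poisson_sprt(counts, r1=20.0, r0=10.0, dt=0.01, theta=3.0):
    """Sequential probability ratio test on Poisson spike counts: under H1 the
    neuron fires at rate r1, under H0 at rate r0; evidence accumulates until
    it crosses +/- theta."""
    llr = 0.0
    for t, k in enumerate(counts, start=1):
        llr += k * np.log(r1 / r0) - (r1 - r0) * dt
        if llr >= theta:
            return "H1", t
        if llr <= -theta:
            return "H0", t
    return "undecided", len(counts)

counts = rng.poisson(20.0 * 0.01, size=1000)   # spikes generated at rate r1
print(poisson_sprt(counts))                    # typically ('H1', <a few hundred bins>)
```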
The impact of uncertainty on optimal emission policies
NASA Astrophysics Data System (ADS)
Botta, Nicola; Jansson, Patrik; Ionescu, Cezar
2018-05-01
We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainties on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of trespassing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies will become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.
On the effect of response transformations in sequential parameter optimization.
Wagner, Tobias; Wessing, Simon
2012-01-01
Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and the Box-Cox transformations are able to improve the properties of the resulting distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
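A minimal sketch of the rank-transformation idea, assuming a Gaussian-process surrogate as the SPO model (the actual SPO implementation and its aggregation steps differ); the data below are placeholders for tested parameter settings and their heavy-tailed raw responses.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.random((30, 2))            # tested algorithm-parameter settings
y = rng.lognormal(size=30)         # aggregated raw responses (heavy-tailed)

# Rank-transform the responses before fitting the surrogate, the step the
# abstract reports to improve symmetry/normality of the residuals.
y_rank = rankdata(y)
model = GaussianProcessRegressor().fit(X, y_rank)
# Candidate parameter settings are then compared on their predicted rank.
```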
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV-penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails first finding the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution also solves problem 2. Here, uniform interval constraints refer to all unknowns being constrained to the same interval. A typical application is image denoising in x-ray CT, where the image intensities are non-negative because they physically represent the linear attenuation coefficient in the patient body. Our results are simple yet appear to be previously unreported; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
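A minimal sketch of the two-step recipe, using an off-the-shelf (isotropic) TV solver as a stand-in for the paper's unconstrained anisotropic TV solve; the weight and the bounds are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # stand-in unconstrained TV solver

def constrained_tv_denoise(noisy, weight, lo, hi):
    """Sequential solution: (1) solve the unconstrained TV denoising
    problem, (2) threshold (clip) onto the uniform interval [lo, hi]."""
    u = denoise_tv_chambolle(noisy, weight=weight)  # step 1: unconstrained solve
    return np.clip(u, lo, hi)                       # step 2: thresholding

# e.g., non-negativity for CT attenuation images:
# denoised = constrained_tv_denoise(noisy_image, weight=0.1, lo=0.0, hi=np.inf)
```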
Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy
Zhang, Lina; Zhang, Chengjin; Gao, Rui; Yang, Runtao; Song, Qing
2016-01-01
Antioxidant proteins perform significant functions in maintaining the oxidation/antioxidation balance and have potential therapeutic applications in some diseases. Accurate identification of antioxidant proteins could contribute to revealing the physiological processes of oxidation/antioxidation balance and to developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction of the ensemble predictor is determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48, with an accuracy of 0.925. A Relief method combined with IFS (Incremental Feature Selection) is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthews Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc. PMID:27662651
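A sketch of the averaging-ensemble idea using scikit-learn analogues of the four Weka base classifiers named above (SMO corresponds to an SVM trained with sequential minimal optimization, J48 to C4.5); the feature extraction and the Relief/IFS selection steps are omitted.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# scikit-learn analogues of the four base classifiers named in the abstract
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier()),               # RF
        ("smo", SVC(probability=True)),                 # SMO ~ SVM via SMO solver
        ("nna", KNeighborsClassifier(n_neighbors=1)),   # NNA
        ("j48", DecisionTreeClassifier()),              # J48 ~ C4.5 analogue
    ],
    voting="soft",  # average the base classifiers' predicted probabilities
)
# usage: ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```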
Polymeric micelles for multi-drug delivery in cancer.
Cho, Hyunah; Lai, Tsz Chung; Tomoda, Keishiro; Kwon, Glen S
2015-02-01
Drug combinations are common in cancer treatment and are rapidly evolving, moving beyond chemotherapy combinations to combinations of signal transduction inhibitors. For the delivery of drug combinations, i.e., multi-drug delivery, the major considerations are synergy, dose regimen (concurrent versus sequential), pharmacokinetics, toxicity, and safety. In this contribution, we review recent research on polymeric micelles for multi-drug delivery in cancer. In concurrent drug delivery, polymeric micelles deliver multiple poorly water-soluble anticancer agents, satisfying strict requirements in solubility, stability, and safety. In sequential drug delivery, polymeric micelles participate in pretreatment strategies that "prime" solid tumors and enhance the penetration of a secondarily administered anticancer agent or nanocarrier. The improved delivery of multiple poorly water-soluble anticancer agents by polymeric micelles via concurrent or sequential regimens offers novel and interesting strategies for drug combinations in cancer treatment.
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not execute MRN optimally in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use an MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on an MNN. Secondly, by employing an MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, by comparing expected utilities between all possible policies for conducting MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful-end scenario, and avoid unnecessary losses in an unsuccessful-end scenario.
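The policy-comparison step can be pictured with a toy model: assume each related negotiation offers per-action success probabilities and utilities, define a policy's expected utility as joint success rate times joint utility, and enumerate policies. All numbers and the simple product/sum structure are hypothetical, not the paper's MNN/MNID computation.

```python
import numpy as np
from itertools import product

# Hypothetical toy setup: three related negotiations, two actions each.
success = [np.array([0.9, 0.6]), np.array([0.8, 0.5]), np.array([0.7, 0.4])]
utility = [np.array([1.0, 3.0]), np.array([1.5, 4.0]), np.array([2.0, 5.0])]

def expected_utility(policy):
    # joint success rate and joint utility over all related negotiations
    p_joint = np.prod([success[i][a] for i, a in enumerate(policy)])
    u_joint = np.sum([utility[i][a] for i, a in enumerate(policy)])
    return p_joint * u_joint

# enumerate every policy (one action per negotiation) and keep the best
best_policy = max(product(range(2), repeat=3), key=expected_utility)
```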
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null-space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thereby accomplished, resulting in a simple optimization problem subject only to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
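The null-space reduction can be illustrated on a linear toy problem: with equality constraints A x = b, optimizing over reduced coordinates y in x = x0 + Z y (Z a null-space basis of A) removes the constraints entirely. This shows only the core idea under that linear assumption; the paper builds its basis via a coordinate scheme on the condensed harmonic balance equations.

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

# Hypothetical objective with a linear(ized) equality constraint A x = b.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
f = lambda x: np.sum((x - np.array([0.5, -0.2, 0.3])) ** 2)

x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # a particular feasible point
Z = null_space(A)                          # basis of the constraint null space

# Optimize only over the reduced coordinates y; x = x0 + Z @ y stays feasible,
# so the equality constraint disappears from the optimization problem.
res = minimize(lambda y: f(x0 + Z @ y), np.zeros(Z.shape[1]))
x_opt = x0 + Z @ res.x
```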
Sequential Online Wellness Programming Is an Effective Strategy to Promote Behavior Change
ERIC Educational Resources Information Center
MacNab, Lindsay R.; Francis, Sarah L.
2015-01-01
The growing number of United States youth and adults categorized as overweight or obese illustrates a need for research-based family wellness interventions. Sequential, online, Extension-delivered family wellness interventions offer a time- and cost-effective approach for both participants and Extension educators. The 6-week, online Healthy…
Multi-Target Tracking via Mixed Integer Optimization
2016-05-13
…solving these two problems separately; however, few algorithms attempt to solve them simultaneously, and even fewer utilize optimization. In this paper we … introduce a new mixed integer optimization (MIO) model which solves the data association and trajectory estimation problems simultaneously by minimizing … Kalman filter [5], which updates the trajectory estimates before the algorithm progresses forward to the next scan. This process repeats sequentially…
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of model results against uncertainty in the model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
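The confidence idea can be sketched in miniature: sample the uncertain parameters, recompute the optimal action for each draw, and report the fraction of draws in which the base-case policy remains optimal. The one-period reward model and the Beta distribution below are hypothetical stand-ins for a full MDP and its joint parameter uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def optimal_action(p):
    """Hypothetical one-period decision: action 1 is optimal when its
    expected reward 0.7*p beats action 0's reward 0.5*(1 - p) + 0.1."""
    return int(0.7 * p > 0.5 * (1 - p) + 0.1)

base_case = optimal_action(0.5)                # policy at the base-case estimate
samples = rng.beta(2.0, 2.0, size=10_000)      # draws from parameter uncertainty
confidence = np.mean([optimal_action(p) == base_case
                      for p in samples])       # P(base-case policy stays optimal)
```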
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
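A minimal sketch of the dualized recursion on a toy finite MDP: the chance constraint is replaced by a Lagrangian penalty lam on the failure indicator, after which an ordinary DP backup applies. All transition probabilities, costs, and the multiplier value are illustrative, and the risk-allocation step that determines the multiplier is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
A, S, T = 2, 5, 10                           # actions, states, horizon (toy sizes)
P = rng.dirichlet(np.ones(S), size=(A, S))   # P[a, s, :]: transition probabilities
C = rng.random((A, S))                       # stage costs (illustrative values)
fail = np.array([0.0, 0.0, 0.0, 0.0, 1.0])   # indicator function of failure states
lam = 10.0                                   # Lagrange multiplier on the risk term

V = lam * fail                # dualized chance constraint enters as terminal penalty
for _ in range(T):
    Q = C + np.einsum("ast,t->as", P, V)     # expected cost-to-go for each (a, s)
    V = Q.min(axis=0)                        # standard unconstrained DP backup
```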
Interactions Between Genetics, Lifestyle, and Environmental Factors for Healthcare.
Lin, Yuxin; Chen, Jiajia; Shen, Bairong
2017-01-01
The occurrence and progression of diseases are strongly associated with a combination of genetic, lifestyle, and environmental factors. Understanding the interplay between genetic and nongenetic components provides deep insights into disease pathogenesis and promotes personalized strategies for healthcare. Recently, the paradigm of systems medicine, which integrates biomedical data and knowledge at multidimensional levels, has come to be considered an optimal way for disease management and clinical decision-making in the era of precision medicine. In this chapter, epigenetically mediated genetics-lifestyle-environment interactions within specific diseases and different ethnic groups are systematically discussed, and data sources, computational models, and translational platforms for systems medicine research are sequentially presented. Moreover, feasible suggestions on precision healthcare and healthy longevity are proposed based on a comprehensive review of current studies.
Brain Metastases in Oncogene-Addicted Non-Small Cell Lung Cancer Patients: Incidence and Treatment
Remon, J.; Besse, Benjamin
2018-01-01
Brain metastases (BM) are common in non-small cell lung cancer patients including in molecularly selected populations, such as EGFR-mutant and ALK-rearranged tumors. They are associated with a reduced quality of life, and are commonly the first site of progression for patients receiving tyrosine kinase inhibitors (TKIs). In this review, we summarize incidence of BM and intracranial efficacy with TKI agents according to oncogene driver mutations, focusing on important clinical issues, notably optimal first-line treatment in oncogene-addicted lung tumors with upfront BM (local therapies followed by TKI vs. TKI monotherapy). We also discuss the potential role of newly emerging late-generation TKIs as new standard treatment in oncogene-addicted lung cancer tumors compared with sequential strategies. PMID:29696132
Burstein, Harold J.; Prestrud, Ann Alexis; Seidenfeld, Jerome; Anderson, Holly; Buchholz, Thomas A.; Davidson, Nancy E.; Gelmon, Karen E.; Giordano, Sharon H.; Hudis, Clifford A.; Malin, Jennifer; Mamounas, Eleftherios P.; Rowden, Diana; Solky, Alexander J.; Sowers, MaryFran R.; Stearns, Vered; Winer, Eric P.; Somerfield, Mark R.; Griggs, Jennifer J.
2010-01-01
Purpose To develop evidence-based guidelines, based on a systematic review, for endocrine therapy for postmenopausal women with hormone receptor–positive breast cancer. Methods A literature search identified relevant randomized trials. Databases searched included MEDLINE, PREMEDLINE, the Cochrane Collaboration Library, and those for the Annual Meetings of the American Society of Clinical Oncology (ASCO) and the San Antonio Breast Cancer Symposium (SABCS). The primary outcomes of interest were disease-free survival, overall survival, and time to contralateral breast cancer. Secondary outcomes included adverse events and quality of life. An expert panel reviewed the literature, especially 12 major trials, and developed updated recommendations. Results An adjuvant treatment strategy incorporating an aromatase inhibitor (AI) as primary (initial endocrine therapy), sequential (using both tamoxifen and an AI in either order), or extended (AI after 5 years of tamoxifen) therapy reduces the risk of breast cancer recurrence compared with 5 years of tamoxifen alone. Data suggest that including an AI as primary monotherapy or as sequential treatment after 2 to 3 years of tamoxifen yields similar outcomes. Tamoxifen and AIs differ in their adverse effect profiles, and these differences may inform treatment preferences. Conclusion The Update Committee recommends that postmenopausal women with hormone receptor–positive breast cancer consider incorporating AI therapy at some point during adjuvant treatment, either as up-front therapy or as sequential treatment after tamoxifen. The optimal timing and duration of endocrine treatment remain unresolved. The Update Committee supports careful consideration of adverse effect profiles and patient preferences in deciding whether and when to incorporate AI therapy. PMID:20625130
Thatcher, W W; Santos, J E P; Silvestre, F T; Kim, I H; Staples, C R
2010-09-01
Increasing the reproductive performance of post-partum lactating dairy cows is a multi-factorial challenge involving the disciplines of production medicine, nutrition, physiology and herd management. Systems of programmed timed insemination have been fine-tuned to achieve pregnancy per artificial insemination (AI) approximating 45%. Systems have optimized follicle development, integrated follicle development with the timing of induced corpus luteum regression, and fine-tuned the sequential timing of induced ovulation and AI. Use of programmed insemination has identified anovulatory ovarian status, body condition, uterine health and seasonal summer stress as factors contributing to reduced herd fertility. Furthermore, programmes of timed insemination provide a platform to evaluate the efficacy of nutritional and herd health systems targeted to the transition and post-partum periods. The homeorhetic periparturient period, as cows deal with decreases in dry matter intake, results in a negative energy balance and is associated with a period of immunosuppression. Cows that transition well will cycle earlier and have a greater chance of becoming pregnant earlier post-partum. Both arms of the immune system (innate and adaptive) are suppressed during the periparturient period. Cows experiencing the sequential complex of disorders such as dystocia, puerperal metritis, metritis, endometritis and subclinical endometritis are subsequently less fertile. Targeted strategies of providing specific nutraceuticals with pro- and anti-inflammatory effects, such as polyunsaturated fatty acids (e.g., linoleic, eicosapentaenoic/docosahexaenoic, conjugated linoleic acid), sequential glycogenic and lipogenic enrichment of diets, and organic selenium appear to differentially regulate and improve the immune and reproductive systems, to the benefit of an earlier restoration of ovarian activity and increased fertility. © 2010 Blackwell Verlag GmbH.
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity-rank state-weighting matrix that contains some invariant eigenvectors of the open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
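For reference, the quantities such a design ultimately produces are those of the standard discrete LQR problem; the sketch below solves one fixed-weight instance (the system matrices and weights are illustrative, and the recursive unity-rank construction of the state-weighting matrix is not reproduced).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete LQR for x[k+1] = F x[k] + G u[k] with weights Q (state), R (input).
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
G = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])      # in the abstract's method, Q is built recursively
R = np.array([[1.0]])        # from unity-rank terms; here it is simply given

P = solve_discrete_are(F, G, Q, R)                 # discrete Riccati solution
K = np.linalg.solve(R + G.T @ P @ G, G.T @ P @ F)  # optimal feedback gain
closed_loop_eigs = np.linalg.eigvals(F - G @ K)    # the regulator's pole locations
```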
Technology: Digital Photography in an Inner-City Fifth Grade, Part 1
ERIC Educational Resources Information Center
Riner, Phil
2005-01-01
Research tells us we can learn complex tasks most easily if they are taught in "small sequential steps." This column is about the small sequential steps that unlocked the powers of digital photography, of portraiture, and of student creativity. The strategies and ideas described in this article came as a result of working with…
Developing L2 Pragmatic Competence in Mandarin Chinese: Sequential Realization of Requests
ERIC Educational Resources Information Center
Su, Yunwen; Ren, Wei
2017-01-01
The present study explored the development of second language (L2) Chinese learners' ability to negotiate requests in interactions. It investigated the effect of proficiency on learners' use of request strategies and internal modifications and on their sequential realization of requests in L2 Chinese. Twenty-four American English learners of L2…
ERIC Educational Resources Information Center
Matsumoto, Yumi
2011-01-01
This is a qualitative study of nonnative English speakers who speak English as a lingua franca (ELF) in their graduate student dormitory in the United States, a community of practice (Wegner, 2004) comprised almost entirely of second language users. Using a sequential analysis (Koshik, 2002; Markee, 2000; Sacks, Schegloff, & Jefferson, 1974;…
Sequential use of simulation and optimization in analysis and planning
Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones
2000-01-01
Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...
Robust parameter design for automatically controlled systems and nanostructure synthesis
NASA Astrophysics Data System (ADS)
Dasgupta, Tirthankar
2007-12-01
This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, in large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model-independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
Lee, Yueh-Lun; Chen, Chih-Wei; Liu, Fu-Hwa; Huang, Yu-Wen; Huang, Huei-Mei
2013-01-01
Expression of oncogenic Bcr-Abl inhibits cell differentiation of hematopoietic stem/progenitor cells in chronic myeloid leukemia (CML). Differentiation therapy is considered to be a new strategy for treating this type of leukemia. Aclacinomycin A (ACM) is an antitumor antibiotic. Previous studies have shown that ACM induces erythroid differentiation of CML cells. In this study, we investigate the effect of ACM on the sensitivity of the human CML cell line K562 to the Bcr-Abl-specific inhibitor imatinib (STI571, Gleevec). We first determined the optimal concentration of ACM for erythroid differentiation that does not cause growth inhibition or apoptosis in K562 cells. Pretreatment with this optimal concentration of ACM followed by a minimally toxic concentration of imatinib then strongly induced growth inhibition and apoptosis compared with simultaneous co-treatment, indicating that ACM-induced erythroid differentiation sensitizes K562 cells to imatinib. Sequential treatment with ACM and imatinib induced Bcr-Abl down-regulation, cytochrome c release into the cytosol, and caspase-3 activation, as well as decreased Mcl-1 and Bcl-xL expression, but did not affect Fas ligand/Fas death receptor or procaspase-8 expression. ACM/imatinib sequential treatment-induced apoptosis was suppressed by a caspase-9 inhibitor and a caspase-3 inhibitor, indicating that the caspase cascade is involved in this apoptosis. Furthermore, we demonstrated that ACM induces erythroid differentiation through the p38 mitogen-activated protein kinase (MAPK) pathway. Inhibition of erythroid differentiation by the p38MAPK inhibitor SB202190, a p38MAPK dominant-negative mutant, or p38MAPK shRNA knockdown reduced the growth inhibition and apoptosis mediated by the ACM/imatinib sequential treatment. These results suggest that K562 cells differentiated via the ACM-mediated p38MAPK pathway become more sensitive to imatinib, resulting in down-regulation of Bcr-Abl and anti-apoptotic proteins, growth inhibition and apoptosis. These findings suggest a potential strategy by which ACM could increase the sensitivity of CML cells to imatinib in differentiation-based therapeutic approaches.
Aerostructural Shape and Topology Optimization of Aircraft Wings
NASA Astrophysics Data System (ADS)
James, Kai
A series of novel algorithms for performing aerostructural shape and topology optimization are introduced and applied to the design of aircraft wings. An isoparametric level set method is developed for performing topology optimization of wings and other non-rectangular structures that must be modeled using a non-uniform, body-fitted mesh. The shape sensitivities are mapped to computational space using the transformation defined by the Jacobian of the isoparametric finite elements. The mapped sensitivities are then passed to the Hamilton-Jacobi equation, which is solved on a uniform Cartesian grid. The method is derived for several objective functions including mass, compliance, and global von Mises stress. The results are compared with SIMP results for several two-dimensional benchmark problems. The method is also demonstrated on a three-dimensional wingbox structure subject to fixed loading. It is shown that the isoparametric level set method is competitive with the SIMP method in terms of the final objective value as well as computation time. In a separate problem, the SIMP formulation is used to optimize the structural topology of a wingbox as part of a larger MDO framework. Here, topology optimization is combined with aerodynamic shape optimization, using a monolithic MDO architecture that includes aerostructural coupling. The aerodynamic loads are modeled using a three-dimensional panel method, and the structural analysis makes use of linear, isoparametric, hexahedral elements. The aerodynamic shape is parameterized via a set of twist variables representing the jig twist angle at equally spaced locations along the span of the wing. The sensitivities are determined analytically using a coupled adjoint method. The wing is optimized for minimum drag subject to a compliance constraint taken from a 2 g maneuver condition. The results from the MDO algorithm are compared with those of a sequential optimization procedure in order to quantify the benefits of the MDO approach. While the sequentially optimized wing exhibits a nearly-elliptical lift distribution, the MDO design seeks to push a greater portion of the load toward the root, thus reducing the structural deflection, and allowing for a lighter structure. By exploiting this trade-off, the MDO design achieves a 42% lower drag than the sequential result.
Bansal, A; Kapoor, R; Singh, S K; Kumar, N; Oinam, A S; Sharma, S C
2012-07-01
Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma prostate: standard three-dimensional conformal radiotherapy (3DCRT) followed by an intensity-modulated radiotherapy (IMRT) boost (sequential-IMRT) versus simultaneous integrated boost IMRT (SIB-IMRT). Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison, an SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction; the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose-volume histograms. Also, the Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, compared with sequential-IMRT. NTCP of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel was seen with the sequential-IMRT and SIB-IMRT plans, respectively. For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves lower NTCP than sequential-IMRT.
Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks
Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun
2018-01-01
Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding (TFBS) site classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method is finding a test sequence’s saliency map which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them. PMID:27896980
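The saliency-map idea can be sketched generically: treat the trained TFBS classifier as a black-box score over a one-hot sequence and estimate its first-order derivatives. The DeMo Dashboard computes exact gradients by backpropagation through the network, so the finite-difference version below is only a self-contained approximation, and score_fn is a hypothetical toy model.

```python
import numpy as np

def saliency_map(score_fn, x, eps=1e-4):
    """First-order saliency via finite differences: the sensitivity of the
    model's output score to each entry of the one-hot sequence x."""
    base = score_fn(x)
    grad = np.zeros_like(x, dtype=float)
    for idx in np.ndindex(x.shape):
        x_pert = x.astype(float).copy()
        x_pert[idx] += eps
        grad[idx] = (score_fn(x_pert) - base) / eps
    # per-position importance: gradient magnitude at the active nucleotide
    return np.abs(grad * x).sum(axis=1)

# toy stand-in for a trained TFBS classifier score (hypothetical)
motif = np.eye(4)[[0, 1, 0, 2]]                  # "ACAG" as one-hot rows
score_fn = lambda x: float((x * motif).sum())    # higher when the motif matches
x = np.eye(4)[[0, 1, 3, 2]]                      # sequence "ACTG", shape (4, 4)
print(saliency_map(score_fn, x))
```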
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years owing to the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, possibly leading to local maxima during the optimization. In whole-element optimization approaches, all elements are employed in the optimization, so the signal-to-noise ratio is improved. However, because more randomness is introduced into the process, these optimizations take more time to converge than the single-element-based approaches. Building on the advantages of both single-element-based and whole-element optimization approaches, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
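For intuition, the element-based baseline (the stepwise sequential algorithm) can be simulated in a few lines: model the medium as a random complex transmission vector and optimize each SLM element's phase in turn against the simulated focus intensity. The array sizes and number of test phases are arbitrary choices, and FEDA's four-element grouping itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                             # number of SLM elements
t = rng.normal(size=N) + 1j * rng.normal(size=N)   # random-medium transmission
phi = np.zeros(N)                                  # phase pattern on the SLM
test_phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def focus_intensity(phases):
    # intensity at the target focus for a given SLM phase pattern
    return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

for n in range(N):                                 # one element at a time
    best = max(test_phases, key=lambda p: focus_intensity(
        np.where(np.arange(N) == n, p, phi)))      # try phases on element n only
    phi[n] = best

print(focus_intensity(np.zeros(N)), "->", focus_intensity(phi))
```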
Performance evaluation of an asynchronous multisensor track fusion filter
NASA Astrophysics Data System (ADS)
Alouani, Ali T.; Gray, John E.; McCabe, D. H.
2003-08-01
Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communication delays between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.
New pharmacotherapy options for multiple myeloma.
Mina, Roberto; Cerrato, Chiara; Bernardini, Annalisa; Aghemo, Elena; Palumbo, Antonio
2016-01-01
Novel agents and the availability of autologous stem-cell transplantation have revolutionized the treatment of patients with multiple myeloma. First-generation novel agents, namely thalidomide, lenalidomide, and bortezomib, have significantly improved the response and survival of patients. Second-generation novel agents such as pomalidomide, carfilzomib, and monoclonal antibodies are being tested in both the newly diagnosed and relapse settings, and the results are promising. In this review article, the main results derived from Phase III trials with thalidomide, lenalidomide, and bortezomib for the treatment of myeloma patients, both at diagnosis and at relapse, are summarized. Data on second-generation novel agents such as pomalidomide and carfilzomib are also reported. Newer effective drugs currently under investigation and the promising results with monoclonal antibodies are described. The availability of new effective drugs has considerably increased the treatment options for myeloma patients. A sequential approach including induction, transplantation (when possible), consolidation, and maintenance is an optimal strategy to achieve disease control and prolong survival. Despite these improvements, the best combination, the optimal sequence, and the proper targets of newer drugs remain to be defined.
The optimality of hospital financing system: the role of physician-manager interactions.
Crainich, David; Leleu, Hervé; Mauleon, Ana
2008-12-01
The ability of a prospective payment system to ensure an optimal level of both quality and cost-reducing activities in the hospital industry has been stressed by Ma (J Econ Manage Strategy 8(2):93-112, 1994), whose analysis assumes that decisions about quality and costs are made by a single agent. This paper examines whether this result holds when the main decisions made within the hospital are shared between physicians (quality of treatment) and hospital managers (cost reduction). Ma's conclusions appear to be relevant in the US context (where hospital managers pay the whole cost of treatment). Nonetheless, when physicians partly reimburse hospitals for the treatment cost, as is the case in many European countries, we show that the ability of a prospective payment system to achieve both objectives is sensitive to the type of interaction (simultaneous, sequential or joint decision-making) between the agents. Our analysis suggests that regulation policies in the hospital sector should not be exclusively focused on the financing system but should also take the interaction between physicians and hospital managers into account.
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives providing a systematic foundation for advancements in structural design.
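The surrogate-fitting step at the heart of such a computational experiment design can be sketched as an ordinary least-squares fit of a full quadratic response-surface model in two design variables; the design points and responses below are placeholders for FEA outputs, and the sequential augmentation and multi-response desirability steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))     # coded settings of two design variables
y = rng.normal(size=15)                  # placeholder FEA response at each point

# Full quadratic response-surface model: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x1, x2):
    # surrogate used to search the design space for an optimum combination
    return coef @ np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
```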
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both the mean reward under the current estimate of the optimal TR and that under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing discussion of the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
Dmitriy Volinskiy; John C Bergstrom; Christopher M Cornwell; Thomas P Holmes
2010-01-01
The assumption of independence of irrelevant alternatives in a sequential contingent valuation format should be questioned. Statistically, most valuation studies treat nonindependence as a consequence of unobserved individual effects. Another approach is to consider an inferential process in which any particular choice is part of a general choosing strategy of a survey...
Jungreuthmayer, C; Jaasma, M J; Al-Munajjed, A A; Zanghellini, J; Kelly, D J; O'Brien, F J
2009-05-01
Tissue-engineered bone shows promise in meeting the huge demand for bone grafts caused by up to 4 million bone replacement procedures per year, worldwide. State-of-the-art bone tissue engineering strategies use flow perfusion bioreactors to apply biophysical stimuli to cells seeded on scaffolds and to grow tissue suitable for implantation into the patient's body. The aim of this study was to quantify the deformation of cells seeded on a collagen-GAG scaffold which was perfused by culture medium inside a flow perfusion bioreactor. Using a microCT scan of an unseeded collagen-GAG scaffold, a sequential 3D CFD-deformation model was developed. The wall shear stress and the hydrostatic wall pressure acting on the cells were computed through the use of a CFD simulation and fed into a linear elastostatics model in order to calculate the deformation of the cells. The model used numerically seeded cells of two common morphologies where cells are either attached flatly on the scaffold wall or bridging two struts of the scaffold. Our study showed that the displacement of the cells is primarily determined by the cell morphology. Although cells of both attachment profiles were subjected to the same mechanical load, cells bridging two struts experienced a deformation up to 500 times higher than cells only attached to one strut. As the scaffold's pore size determines both the mechanical load and the type of attachment, the design of an optimal scaffold must take into account the interplay of these two features and requires a design process that optimizes both parameters at the same time.
Metabolic Engineering for Substrate Co-utilization
NASA Astrophysics Data System (ADS)
Gawand, Pratish
Production of biofuels and bio-based chemicals is being increasingly pursued by chemical industry to reduce its dependence on petroleum. Lignocellulosic biomass (LCB) is an abundant source of sugars that can be used for producing biofuels and bio-based chemicals using fermentation. Hydrolysis of LCB results in a mixture of sugars mainly composed of glucose and xylose. Fermentation of such a sugar mixture presents multiple technical challenges at industrial scale. Most industrial microorganisms utilize sugars in a sequential manner due to the regulatory phenomenon of carbon catabolite repression (CCR). Due to sequential utilization of sugars, the LCB-based fermentation processes suffer low productivities and complicated operation. Performance of fermentation processes can be improved by metabolic engineering of microorganisms to obtain superior characteristics such as high product yield. With increased computational power and availability of complete genomes of microorganisms, use of model-based metabolic engineering is now a common practice. The problem of sequential sugar utilization, however, is a regulatory problem, and metabolic models have never been used to solve such regulatory problems. The focus of this thesis is to use model-guided metabolic engineering to construct industrial strains capable of co-utilizing sugars. First, we develop a novel bilevel optimization algorithm SimUp, that uses metabolic models to identify reaction deletion strategies to force co-utilization of two sugars. We then use SimUp to identify reaction deletion strategies to force glucose-xylose co-utilization in Escherichia coli. To validate SimUp predictions, we construct three mutants with multiple gene knockouts and test them for glucose-xylose utilization characteristics. Two mutants, designated as LMSE2 and LMSE5, are shown to co-utilize glucose and xylose in agreement with SimUp predictions. To understand the molecular mechanism involved in glucose-xylose co-utilization of the mutant LMSE2, the mutant is subjected to targeted and whole genome sequencing. Finally, we use the mutant LMSE2 to produce D-ribose from a mixture of glucose and xylose by overexpressing an endogenous phosphatase. The methods developed in this thesis are anticipated to provide a novel approach to solve sugar co-utilization problem in industrial microorganisms, and provide insights into microbial response to forced co-utilization of sugars.
1990-03-01
knowledge covering problems of this type is called calculus of variations or optimal control theory (Refs. 1-8). As stated before, applications occur … to the optimality conditions and the feasibility equations of Problem (GP), respectively. Clearly, after the transformation (26) is applied, the … trajectories, the primal sequential gradient-restoration algorithm (PSGRA) is applied to compute optimal trajectories for aeroassisted orbital transfer
Solving the infeasible trust-region problem using approximations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott
2004-07-01
The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. The experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region, however, may have no feasible solution in the presence of explicit constraints. In order to remedy this problem, the mathematical community has developed different versions of a composite-step approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the two-step normal-tangential algorithms. In this paper, a description of the similarities is presented and an expansion of the two-step algorithm is presented for the case of approximations.
Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H
2017-07-03
For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a registered-trademark culture medium and a base culture medium obtained from a screening analysis of different culture media used to grow the same strain according to the literature. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and found a minimum number of culture medium ingredients without limiting the process efficiency. The resulting culture medium showed an efficiency that compares favorably with the registered-trademark medium at a 95% lower cost, and it reduced the number of ingredients in the base culture medium by 60% without limiting the process efficiency. These results demonstrate that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Further study of the process variables for the optimized culture medium and scaling up production at the optimal values are desirable.
Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P
2017-01-01
Feature selection and prediction are the most important tasks in big data mining. The common strategies for feature selection in big data mining are L1, SCAD and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms other cutting-edge regularization methods, including SCAD and MC+, in simulations. When applied to the integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE, in MATLAB is available at https://github.com/liuzqx/L0adridge.
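One common way to realize an L0 approximation by sequential convex optimization is the adaptive ridge iteration: repeatedly solve a weighted ridge problem whose weights are updated to 1/(beta_j^2 + eps), so that each nonzero coefficient contributes roughly one unit of penalty. The linear-regression sketch below shows only this core idea under that assumption; the paper's L0ADRIDGE handles full GLMs at scale.

```python
import numpy as np

def l0_adaptive_ridge(X, y, lam=1.0, eps=1e-4, n_iter=50):
    """Sequential convex approximation of L0-penalized least squares:
    each pass solves a weighted ridge problem, then re-weights so the
    penalty approaches lam * (number of nonzero coefficients)."""
    n, p = X.shape
    w = np.ones(p)
    for _ in range(n_iter):
        beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        w = 1.0 / (beta ** 2 + eps)          # adaptive re-weighting step
    beta[np.abs(beta) < np.sqrt(eps)] = 0.0  # heuristic hard threshold
    return beta

# demo on synthetic data with three true nonzero coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=100)
print(l0_adaptive_ridge(X, y).round(2))
```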
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
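A minimal sketch of the single-particle idea for a CRP-based mixture over binary features, under illustrative smoothing assumptions (symmetric Beta(beta, beta) feature priors, CRP concentration alpha); Anderson's RMC and the paper's particle filters involve further details not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_particle_crp(stimuli, alpha=1.0, beta=1.0):
    """One-particle particle filter for a CRP mixture over binary features:
    each stimulus is assigned to a cluster sampled once from the
    prior-times-likelihood weights, then never revisited."""
    clusters = []                   # list of [feature_counts, size]
    labels = []
    for x in stimuli:               # stimuli arrive sequentially
        w = [size * np.prod(np.where(x == 1, counts + beta,
                                     size - counts + beta) / (size + 2 * beta))
             for counts, size in clusters]
        w.append(alpha * 0.5 ** len(x))          # weight for opening a new cluster
        w = np.array(w) / np.sum(w)
        k = rng.choice(len(w), p=w)              # sample one assignment (1 particle)
        if k == len(clusters):
            clusters.append([np.zeros(len(x)), 0])
        clusters[k][0] += x                      # update sufficient statistics
        clusters[k][1] += 1
        labels.append(k)
    return labels

# stimuli: array of shape (n_items, n_binary_features), presented sequentially
print(one_particle_crp(np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]])))
```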
Corre, Guillaume; Dessainte, Michel; Marteau, Jean-Brice; Dalle, Bruno; Fenard, David; Galy, Anne
2016-02-01
Nonreplicative recombinant HIV-1-derived lentiviral vectors (LV) are increasingly used in gene therapy of various genetic diseases, infectious diseases, and cancer. Before they are used in humans, preparations of LV must undergo extensive quality control testing. In particular, testing of LV must demonstrate the absence of replication-competent lentiviruses (RCL) with suitable methods, on representative fractions of vector batches. Current methods based on cell culture are challenging because high titers of vector batches translate into high volumes of cell culture to be tested in RCL assays. As vector batch sizes and titers are continuously increasing because of the improvement of production and purification methods, it became necessary for us to modify the current RCL assay based on the detection of p24 in cultures of indicator cells. Here, we propose a practical optimization of this method using a pairwise pooling strategy enabling easier testing of higher vector inoculum volumes. These modifications significantly decrease material handling and operator time, leading to a cost-effective method, while maintaining the optimal sensitivity of RCL testing. This optimized "RCL-pooling assay" improves the feasibility of the quality control of large-scale batches of clinical-grade LV while maintaining the same sensitivity.
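The combinatorics behind such a pooling can be sketched briefly. This is an illustrative layout, not necessarily the exact scheme used by the authors: each fraction is assigned to a unique pair of pools, so k pools cover k(k-1)/2 fractions, and a reactive fraction is pinpointed by the intersection of its two positive pools.

```python
import itertools, math

def pairwise_pools(n_fractions):
    """Assign each fraction to a unique pair of pools: the smallest k with
    C(k, 2) >= n_fractions suffices, and every fraction appears in
    exactly two pools."""
    k = 2
    while math.comb(k, 2) < n_fractions:
        k += 1
    pairs = list(itertools.combinations(range(k), 2))[:n_fractions]
    pools = {p: [] for p in range(k)}
    for fraction, (a, b) in enumerate(pairs):
        pools[a].append(fraction)
        pools[b].append(fraction)
    return pools

print(pairwise_pools(10))  # 10 fractions covered by only 5 pools
```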
Shin, Yong-Uk; Yoo, Ha-Young; Kim, Seonghun; Chung, Kyung-Mi; Park, Yong-Gyun; Hwang, Kwang-Hyun; Hong, Seok Won; Park, Hyunwoong; Cho, Kangwoo; Lee, Jaesang
2017-09-19
A two-stage sequential electro-Fenton (E-Fenton) oxidation followed by electrochemical chlorination (EC) was demonstrated to concomitantly treat high concentrations of organic carbon and ammonium nitrogen (NH4+-N) in real anaerobically digested food wastewater (ADFW). The anodic Fenton process caused the rapid mineralization of phenol as a model substrate through the production of hydroxyl radical as the main oxidant. The electrochemical oxidation of NH4+ by a dimensionally stable anode (DSA) resulted in temporal concentration profiles of combined and free chlorine species that were analogous to those during the conventional breakpoint chlorination of NH4+. Together with the minimal production of nitrate, this confirmed that the conversion of NH4+ to nitrogen gas was electrochemically achievable. The monitoring of treatment performance with varying key parameters (e.g., current density, H2O2 feeding rate, pH, NaCl loading, and DSA type) led to the optimization of the two component systems. The comparative evaluation of the two sequentially combined systems (i.e., the E-Fenton-EC system versus the EC-E-Fenton system) using the mixture of phenol and NH4+ under the predetermined optimal conditions suggested the superiority of the E-Fenton-EC system in terms of treatment efficiency and energy consumption. Finally, the sequential E-Fenton-EC process effectively mineralized organic carbon and decomposed NH4+-N in the real ADFW without an external supply of NaCl.
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.
2017-07-01
Continuum-Generalized Method of Moments (C-GMM) covers the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, the computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with a standard shared-memory application programming interface, i.e., Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
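As a rough analogue of the outer-loop strategy (sketched here with Python's process pool rather than OpenMP threads, and with a placeholder objective standing in for the C-GMM criterion, both assumptions of this illustration), the loop over candidate regularization parameters maps directly onto a pool of workers:

```python
from multiprocessing import Pool

import numpy as np

def objective_for_tau(args):
    """Stand-in for one outer-loop iteration: evaluate the estimation
    criterion for a single candidate regularization parameter tau.
    The quadratic placeholder below is NOT the C-GMM objective."""
    tau, data = args
    return tau, float(np.sum((data - tau) ** 2))

if __name__ == "__main__":
    data = np.random.default_rng(0).normal(size=1000)
    taus = np.linspace(1e-4, 1.0, 64)   # outer loop over candidate taus
    with Pool() as pool:                # distribute iterations over workers
        results = pool.map(objective_for_tau, [(t, data) for t in taus])
    best_tau, _ = min(results, key=lambda r: r[1])
    print("selected regularization parameter:", best_tau)
```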
Exposure to toxic waste sites: an investigative approach.
Stehr-Green, P A; Lybarger, J A
1989-01-01
Improper dumping and storage of hazardous substances and whether these practices produce significant human exposure and health effects are growing concerns. A sequential approach has been used by the Centers for Disease Control and the Agency for Toxic Substances and Disease Registry in investigating potential exposure to and health effects resulting from environmental contamination with materials such as heavy metals, volatile organic compounds, and pesticide residues at sites throughout the United States. The strategy consists of four phases: site evaluation, pilot studies of exposure or health effects, analytic epidemiology studies, and public health surveillance. This approach offers a logical, phased strategy to use limited personnel and financial resources of local, State, national, or global health agency jurisdictions optimally in evaluating populations potentially exposed to hazardous materials in waste sites. Primarily, this approach is most helpful in identifying sites for etiologic studies and providing investigative leads to direct and focus these studies. The results of such studies provide information needed for making risk-management decisions to mitigate or eliminate human exposures and for developing interventions to prevent or minimize health problems resulting from exposures that already have occurred.
Leadership Strategies for Maintaining Profitability in a Volatile Crude Oil Market
NASA Astrophysics Data System (ADS)
Braimoh, Lucky Anderson
Volatile crude oil prices significantly affect the profitability of crude oil firms. The purpose of this single case study was to explore strategies some crude oil and gas business leaders used to remain profitable during periods of crude oil price volatility. The target population comprised 8 crude oil and gas business leaders located in Calgary, Canada, whose company remained profitable despite crude oil price volatility. The transformational leadership theory formed the conceptual framework for the study. Data were collected through the use of semistructured face-to-face interviews, company reports, and field notes. Data analysis involved a modified van Kaam method, which included descriptive coding, a sequential review of the interview transcripts, and member checking. Based on methodological triangulation and thematic analysis, 5 themes emerged from the study: communication and engagement; motivation and empowerment; measurement, monitoring, and control; self-awareness and humility; and efficiency and optimization. The implications for social change include the potential for crude oil and gas companies in Calgary, Canada to manage production costs, ensure earnings and profitability, and thus improve the socioeconomic well-being of Calgary residents through improved employment opportunities.
Hirsh, Vera
2018-01-01
Four epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKIs), erlotinib, gefitinib, afatinib and osimertinib, are currently available for the management of EGFR mutation-positive non-small-cell lung cancer (NSCLC), with others in development. Although tumors are exquisitely sensitive to these agents, acquired resistance is inevitable. Furthermore, emerging data indicate that first- (erlotinib and gefitinib), second- (afatinib) and third-generation (osimertinib) EGFR TKIs differ in terms of efficacy and tolerability profiles. Therefore, there is a strong imperative to optimize the sequence of TKIs in order to maximize their clinical benefit. Osimertinib has demonstrated striking efficacy as a second-line treatment option in patients with T790M-positive tumors, and also confers efficacy and tolerability advantages over first-generation TKIs in the first-line setting. However, while accrual of T790M is the most predominant mechanism of resistance to erlotinib, gefitinib and afatinib, resistance mechanisms to osimertinib have not been clearly elucidated, meaning that possible therapy options after osimertinib failure are not clear. At present, few data comparing sequential regimens in patients with EGFR mutation-positive NSCLC are available and prospective clinical trials are required. This article reviews the similarities and differences between EGFR TKIs, and discusses key considerations when assessing optimal sequential therapy with these agents for the treatment of EGFR mutation-positive NSCLC. PMID:29383041
ERIC Educational Resources Information Center
Cavanagh, Martine Odile; Langevin, Rene
2010-01-01
The object of this exploratory study was to test two hypotheses. The first was that a student's preferential cognitive style, sequential or simultaneous, can negatively affect the imaginative fiction texts that he or she produces. The second hypothesis was that students possessing a sequential or simultaneous preferential cognitive style would…
Allouche, G
2016-02-01
Among the therapeutic strategies for treatment-resistant depression, the use of sequential prescriptions is discussed here. A number of initially isolated observations and a few controlled studies, some large-scale, have shown a definite therapeutic effect of certain sequential treatment prescriptions in depression. The Sequenced Treatment Alternatives to Relieve Depression study (STAR*D) is to date the largest clinical trial exploring treatment strategies in non-psychotic resistant depression under real-life conditions with a sequential decision algorithm. Its main conclusions are the following: after two unsuccessful attempts, the chance of remission decreases considerably, and a 12-month follow-up showed that the more treatment steps patients had required, the more common relapses were during this period. Pharmacological differences between psychotropics did not translate into clinically significant differences. The positive effect of lithium in combination with antidepressants has been known since the work of De Montigny. Antidepressants allow readjustment of the physiological sequence involving the different monoaminergic systems. Studies combining tricyclic antidepressants with the thyroid hormone T3 rest on the observation that, in depression, decreased norepinephrine at synaptic receptors is believed to cause hypersensitivity of these receptors; thyroid hormones modulate the activity of adrenergic receptors, and a balance of activity between alpha- and beta-adrenergic receptors would depend on the bioavailability of thyroid hormones. ECT may in some cases promote a pharmacological response where there was previous resistance, or be effective in preventing relapse. Cognitive therapy and antidepressant medications likely act on different types of depression. Cognitive therapy may thus be of interest in a sequential pattern after effective antidepressant treatment, for treating residual symptoms, preventing relapses and recurrences, and during antidepressant maintenance. These data support the interest of therapeutic strategies based on the course of the illness. Sequential models inspired by statistical methods may incorporate the effects of a future treatment by measuring the current one. Copyright © 2015 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Stone, Orrin J; Biette, Kelly M; Murphy, Patrick J M
2014-01-01
Hydrophobic interaction chromatography (HIC) most commonly requires experimental determination (i.e., scouting) in order to select an optimal chromatographic medium for purifying a given target protein. Neither a two-step purification of untagged green fluorescent protein (GFP) from crude bacterial lysate using sequential HIC and size exclusion chromatography (SEC), nor HIC column scouting elution profiles of GFP, have been previously reported. Bacterial lysate expressing recombinant GFP was sequentially adsorbed to commercially available HIC columns containing butyl, octyl, and phenyl-based HIC ligands coupled to matrices of varying bead size. The lysate was fractionated using a linear ammonium phosphate salt gradient at constant pH. Collected HIC eluate fractions containing retained GFP were then pooled and further purified using high-resolution preparative SEC. Significant differences in presumptive GFP elution profiles were observed using in-line absorption spectrophotometry (A395) and post-run fluorimetry. SDS-PAGE and western blot demonstrated that fluorometric detection was the more accurate indicator of GFP elution in both HIC and SEC purification steps. Comparison of composite HIC column scouting data indicated that a phenyl ligand coupled to a 34 µm matrix produced the highest degree of target protein capture and separation. Conducting two-step protein purification using the preferred HIC medium followed by SEC resulted in a final, concentrated product with >98% protein purity. In-line absorbance spectrophotometry was not as precise an indicator of GFP elution as post-run fluorimetry. These findings demonstrate the importance of utilizing a combination of detection methods when evaluating purification strategies. GFP is a well-characterized model protein, used heavily in educational settings and by researchers with limited protein purification experience, and the data and strategies presented here may aid in the development of other HIC-compatible protein purification schemes.
Saldaña, Erick; Siche, Raúl; da Silva Pinto, Jair Sebastião; de Almeida, Marcio Aurélio; Selani, Miriam Mabel; Rios-Mera, Juan; Contreras-Castillo, Carmen J
2018-02-01
This study aims to optimize simultaneously the lipid profile and instrumental hardness of low-fat mortadella. For lipid mixture optimization, the overlapping of surface boundaries was used to select the quantities of canola, olive, and fish oils, in order to maximize PUFAs, specifically the long-chain n-3 fatty acids (eicosapentaenoic-EPA, docosahexaenoic acids-DHA) using the minimum content of fish oil. Increased quantities of canola oil were associated with higher PUFA/SFA ratios. The presence of fish oil, even in small amounts, was effective in improving the nutritional quality of the mixture, showing lower n-6/n-3 ratios and significant levels of EPA and DHA. Thus, the optimal lipid mixture comprised of 20, 30 and 50% fish, olive and canola oils, respectively, which present PUFA/SFA (2.28) and n-6/n-3 (2.30) ratios within the recommendations of a healthy diet. Once the lipid mixture was optimized, components of the pre-emulsion used as fat replacer in the mortadella, such as lipid mixture (LM), sodium alginate (SA), and milk protein concentrate (PC), were studied to optimize hardness and springiness to target ranges of 13-16 N and 0.86-0.87, respectively. Results showed that springiness was not significantly affected by these variables. However, as the concentration of the three components increased, hardness decreased. Through the desirability function, the optimal proportions were 30% LM, 0.5% SA, and 0.5% PC. This study showed that the pre-emulsion decreases hardness of mortadella. In addition, response surface methodology was efficient to model lipid mixture and hardness, resulting in a product with improved texture and lipid quality.
Taffe, Michael A.; Taffe, William J.
2011-01-01
Several nonhuman primate species have been reported to employ a distance-minimizing, traveling salesman-like, strategy during foraging as well as in experimental spatial search tasks involving lesser amounts of locomotion. Spatial sequencing may optimize performance by reducing reference or episodic memory loads, locomotor costs, competition or other demands. A computerized self-ordered spatial search (SOSS) memory task has been adapted from a human neuropsychological testing battery (CANTAB, Cambridge Cognition, Ltd) for use in monkeys. Accurate completion of a trial requires sequential responses to colored boxes in two or more spatial locations without repetition of a previous location. Marmosets have been reported to employ a circling pattern of search, suggesting spontaneous adoption of a strategy to reduce working memory load. In this study the SOSS performance of rhesus monkeys was assessed to determine if the use of a distance-minimizing search path enhances accuracy. A novel strategy score, independent of the trial difficulty and arrangement of boxes, has been devised. Analysis of the performance of 21 monkeys trained on SOSS over two years shows that a distance-minimizing search strategy is associated with improved accuracy. This effect is observed within individuals as they improve over many cumulative sessions of training on the task and across individuals at any given level of training. Erroneous trials were associated with a failure to deploy the strategy. It is concluded that the effect of utilizing the strategy on this locomotion-free, laboratory task is to enhance accuracy by reducing demands on spatial working memory resources. PMID:21840507
Cost Optimal Design of a Power Inductor by Sequential Gradient Search
NASA Astrophysics Data System (ADS)
Basak, Raju; Das, Arabinda; Sanyal, Amarnath
2018-05-01
Power inductors are used for compensating the VAR generated by long EHV transmission lines and in electronic circuits. For EHV lines, the rating of the inductor is decided by techno-economic considerations on the basis of the line susceptance. It is a high-voltage, high-current device, absorbing little active power and large reactive power. The cost is quite high; hence the design should be made cost-optimally. The 3-phase power inductor is similar in construction to a 3-phase core-type transformer with the exception that it has only one winding per phase and each limb is provided with an air-gap, the length of which is decided by the inductance required. In this paper, a design methodology based on the sequential gradient search technique and the corresponding algorithm leading to a cost-optimal design of a 3-phase EHV power inductor is presented. The case study has been made on a 220 kV long line of NHPC running from Chukha HPS to Birpara in Coochbihar.
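A bare-bones sequential gradient search can be sketched as follows; the cost function, bounds, and step-control constants are placeholders for illustration, not the paper's design model:

```python
import numpy as np

def sequential_gradient_search(cost, x0, lb, ub, step=0.1, tol=1e-6,
                               max_iter=500, h=1e-6):
    """Minimize cost(x) over box bounds by repeatedly stepping along the
    negative forward-difference gradient, halving the step whenever the
    move fails to improve the design."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    f = cost(x)
    for _ in range(max_iter):
        g = np.array([(cost(x + h * e) - f) / h for e in np.eye(x.size)])
        x_new = np.clip(x - step * g / (np.linalg.norm(g) + 1e-12), lb, ub)
        f_new = cost(x_new)
        if f_new < f:
            x, f = x_new, f_new
        else:
            step *= 0.5              # refine the search around the incumbent
            if step < tol:
                break
    return x, f

# Hypothetical usage: x could hold design variables such as core flux
# density and winding current density, with cost(x) the total material
# cost plus penalty terms for the inductance constraint.
```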
NASA Astrophysics Data System (ADS)
Vimmrová, Alena; Kočí, Václav; Krejsová, Jitka; Černý, Robert
2016-06-01
A method for lightweight-gypsum material design using waste stone dust as the foaming agent is described. The main objective is to reach several physical properties that are inversely related to each other; therefore, a linear optimization method is applied to handle this task systematically. The optimization process is based on sequential measurement of physical properties. The results are subsequently awarded points according to a composite point criterion, and a new composition is proposed. After 17 trials the final mixture is obtained, having a bulk density of (586 ± 19) kg/m3 and a compressive strength of (1.10 ± 0.07) MPa. According to a detailed comparative analysis with reference gypsum, the newly developed material can be used as an excellent thermally insulating interior plaster with a thermal conductivity of (0.082 ± 0.005) W/(m·K). In addition, its practical application can bring substantial economic and environmental benefits, as the material contains 25 % waste stone dust.
Applications of colored petri net and genetic algorithms to cluster tool scheduling
NASA Astrophysics Data System (ADS)
Liu, Tung-Kuan; Kuo, Chih-Jen; Hsiao, Yung-Chin; Tsai, Jinn-Tsong; Chou, Jyh-Horng
2005-12-01
In this paper, we propose a method which uses Coloured Petri Nets (CPN) and a genetic algorithm (GA) to obtain an optimal deadlock-free schedule and to solve the re-entrant problem for the flexible process of the cluster tool. The process of the cluster tool for producing a wafer can usually be classified into three types: 1) sequential process, 2) parallel process, and 3) sequential-parallel process. But these processes are not economical enough to produce a variety of wafers in small volume. Therefore, this paper proposes the flexible process, where the operations of fabricating wafers are randomly arranged to achieve the best utilization of the cluster tool. However, the flexible process may have deadlock and re-entrant problems, which can be detected by CPN. On the other hand, GAs have been applied to find the optimal schedule for many types of manufacturing processes. Therefore, we successfully integrate CPN and GAs to obtain an optimal schedule that handles the deadlock and re-entrant problems for the flexible process of the cluster tool.
Sequential ChIP Protocol for Profiling Bivalent Epigenetic Modifications (ReChIP).
Desvoyes, Bénédicte; Sequeira-Mendes, Joana; Vergara, Zaida; Madeira, Sofia; Gutierrez, Crisanto
2018-01-01
Identification of chromatin modifications, e.g., histone acetylation and methylation, is widely carried out using a chromatin immunoprecipitation (ChIP) strategy. The information obtained with these procedures is useful to gain an overall picture of the modifications present in all cells of the population under study. It also serves as a basis to figure out the mechanisms of chromatin organization and gene regulation at the population level. However, the ultimate goal is to understand gene regulation at the level of single chromatin fibers. This requires the identification of chromatin modifications that occur at a given genomic location and within the same chromatin fiber, which is achieved by following a sequential ChIP strategy using two antibodies to distinguish different chromatin modifications. Here, we describe a sequential ChIP protocol (Re-ChIP), paying special attention to the controls needed and the steps required to obtain meaningful and reproducible results. The protocol is developed for young Arabidopsis seedlings but could be adapted to other plant materials.
Reducing interaction in simultaneous paired stimulation with CI.
Vellinga, Dirk; Bruijn, Saskia; Briaire, Jeroen J; Kalkman, Randy K; Frijns, Johan H M
2017-01-01
In this study, simultaneous paired stimulation of electrodes in cochlear implants is investigated by psychophysical experiments in 8 post-lingually deaf subjects (and one extra subject who participated in only part of the experiments). Simultaneous and sequential monopolar stimulation modes are used as references and are compared to channel interaction compensation, partial tripolar stimulation, and a novel sequential stimulation strategy named phased array compensation. Psychophysical experiments are performed to investigate both the loudness integration during paired stimulation at the main electrodes and the interaction with the electrode contact located halfway between the stimulating pair. The study shows that simultaneous monopolar stimulation has more loudness integration on the main electrodes and more interaction in between the electrodes than sequential stimulation. Channel interaction compensation works to reduce the loudness integration at the main electrodes, but does not reduce the interaction in between the electrodes caused by paired stimulation. Partial tripolar stimulation uses much more current to reach the needed loudness, but shows the same interaction in between the electrodes as sequential monopolar stimulation. In phased array compensation, we used the individual impedance matrix of each subject to calculate the current needed on each electrode to exactly match the stimulation voltage along the array to that of sequential stimulation. The results show that the interaction in between the electrodes is the same as for monopolar stimulation. The strategy uses less current than partial tripolar stimulation, but more than monopolar stimulation. In conclusion, the paper shows that paired stimulation is possible if the interaction is compensated.
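At its core, the described phased-array compensation is a linear solve: given a measured transimpedance matrix Z, find currents i with Z i = v, where v is the voltage profile that sequential stimulation would produce. A toy sketch follows (the matrix values are illustrative, not subject data):

```python
import numpy as np

def phased_array_currents(Z, v_target):
    """Solve Z @ i = v_target for the per-electrode currents i so that the
    voltage profile along the array matches the sequential-stimulation
    target despite inter-electrode coupling."""
    return np.linalg.solve(Z, v_target)

# Toy 4-contact transimpedance matrix (illustrative values, not subject
# data): diagonal dominance mimics stronger self-coupling.
Z = np.array([[1.00, 0.30, 0.10, 0.05],
              [0.30, 1.00, 0.30, 0.10],
              [0.10, 0.30, 1.00, 0.30],
              [0.05, 0.10, 0.30, 1.00]])
v_target = np.array([0.0, 1.0, 0.0, 0.0])  # voltage peak at contact 2 only
print(phased_array_currents(Z, v_target))
```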
Dopamine reward prediction-error signalling: a two-component response
Schultz, Wolfram
2017-01-01
Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020
Wurth, Sophie M; Hargrove, Levi J
2014-05-30
Pattern recognition (PR) based strategies for the control of myoelectric upper limb prostheses are generally evaluated through offline classification accuracy, which is an admittedly useful metric, but insufficient to discuss functional performance in real time. Existing functional tests are laborious to set up and most fail to provide a challenging, objective framework to assess the strategy performance in real time. Nine able-bodied and two amputee subjects gave informed consent and participated in the local Institutional Review Board approved study. We designed a two-dimensional target acquisition task, based on the principles of Fitts' law for human motor control. Subjects were prompted to steer a cursor from the screen center into a series of subsequently appearing targets of different difficulties. Three cursor control systems were tested, corresponding to three electromyography-based prosthetic control strategies: 1) amplitude-based direct control (the clinical standard of care), 2) sequential PR control, and 3) simultaneous PR control, allowing for a concurrent activation of two degrees of freedom (DOF). We computed throughput (bits/second), path efficiency (%), reaction time (seconds), and overshoot (%), and used general linear models to assess significant differences between the strategies for each metric. We validated the proposed methodology by achieving very high coefficients of determination for Fitts' law. Both PR strategies significantly outperformed direct control in two-DOF targets and were more intuitive to operate. In one-DOF targets, the simultaneous approach was the least precise. The direct control was efficient in one-DOF targets but cumbersome to operate in two-DOF targets through a switch-dependent sequential cursor control. We designed a test capable of comprehensively describing prosthetic control strategies in real time. When implemented on control subjects, the test was able to capture statistically significant differences (p < 0.05) in control strategies when considering throughputs, path efficiencies and reaction times. Of particular note, we found statistically significant (p < 0.01) improvements in throughputs and path efficiencies with simultaneous PR when compared to direct control or sequential PR. Amputees could readily achieve the task; however a limited number of subjects was tested and a statistical analysis was not performed with that population.
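The throughput metric follows directly from Fitts' law. A small sketch, assuming the common Shannon formulation of the index of difficulty (the abstract does not spell out which variant was used):

```python
import numpy as np

def fitts_metrics(distance, width, movement_time):
    """Index of difficulty (bits) and throughput (bits/s) for one
    target-acquisition trial, Shannon formulation of Fitts' law."""
    index_of_difficulty = np.log2(distance / width + 1.0)
    return index_of_difficulty, index_of_difficulty / movement_time

# Example trial: cursor travels 10 cm to a 2 cm target in 1.3 s.
ID, throughput = fitts_metrics(10.0, 2.0, 1.3)
print(f"ID = {ID:.2f} bits, throughput = {throughput:.2f} bits/s")
```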
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for radioactive detection research. Compared with commonly adopted detection methods based on statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially with complex radionuclide components. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator using Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
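The flavor of such a sequential processor can be conveyed with a running Poisson log-likelihood-ratio test; this is a simplification of the event-mode Bayesian processor, and the rates and decision threshold below are illustrative assumptions:

```python
import numpy as np

def sequential_detector(counts, bkg_rate, src_rate, dt=1.0, log_thresh=4.6):
    """Running Poisson log-likelihood-ratio test: background-only versus
    background-plus-source rates; stop as soon as the accumulated
    evidence crosses the (illustrative) decision threshold."""
    lam0, lam1 = bkg_rate * dt, (bkg_rate + src_rate) * dt
    llr = 0.0
    for k, n in enumerate(counts, start=1):
        llr += n * np.log(lam1 / lam0) - (lam1 - lam0)  # per-interval LLR
        if abs(llr) > log_thresh:
            return ("source present" if llr > 0 else "background only"), k
    return "undecided", len(counts)

rng = np.random.default_rng(1)
counts = rng.poisson(7.0, size=100)  # simulated 5 cps background + 2 cps source
print(sequential_detector(counts, bkg_rate=5.0, src_rate=2.0))
```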
Bansal, A.; Kapoor, R.; Singh, S. K.; Kumar, N.; Oinam, A. S.; Sharma, S. C.
2012-01-01
Aims: Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma of the prostate: standard Three-Dimensional Conformal Radiotherapy (3DCRT) followed by an Intensity Modulated Radiotherapy (IMRT) boost (sequential-IMRT) versus Simultaneous Integrated Boost IMRT (SIB-IMRT). Material and Methods: Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison, a SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction; the dose to PTV LN, however, was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose–volume histograms. Also, the Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. Results: The volume of the rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCPs of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel were seen with the sequential-IMRT and SIB-IMRT plans, respectively. Conclusions: For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and therefore should be considered as the strategy for dose escalation. SIB-IMRT achieves a lower NTCP than sequential-IMRT. PMID:23204659
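For reference, a standard NTCP model used in such comparisons is the Lyman-Kutcher-Burman model; this is an assumption here, since the abstract does not state which model produced the quoted NTCP values:

```latex
\mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^{2}/2}\,\mathrm{d}u,
\qquad
t = \frac{D_{\mathrm{eff}} - TD_{50}}{m\,TD_{50}},
\qquad
D_{\mathrm{eff}} = \Bigl(\sum_{i} v_{i} D_{i}^{1/n}\Bigr)^{n},
```

where v_i is the fractional organ volume receiving dose D_i, TD50 is the uniform dose giving a 50% complication probability, and m and n are fitted organ-specific parameters.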
Formation of Onion-Like NiCo2S4 Particles via Sequential Ion-Exchange for Hybrid Supercapacitors.
Guan, Bu Yuan; Yu, Le; Wang, Xiao; Song, Shuyan; Lou, Xiong Wen David
2017-02-01
Onion-like NiCo2S4 particles with unique hollow structured shells are synthesized by a sequential ion-exchange strategy. With the structural and compositional advantages, these unique onion-like NiCo2S4 particles exhibit enhanced electrochemical performance as an electrode material for hybrid supercapacitors. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing
2013-03-01
Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were based on the combination of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and the bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.
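The bioheat model conventionally coupled to the KZK field is Pennes' equation, with the acoustic intensity entering through the heat source term; the symbols below are the standard ones and are an assumption, since the abstract does not write the equation out:

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \kappa \nabla^{2} T
  - w_{b} c_{b}\,(T - T_{a})
  + Q,
\qquad
Q = 2\,\alpha\, I,
```

where ρ and c are the tissue density and specific heat, κ the thermal conductivity, w_b the blood perfusion rate, c_b the blood specific heat, T_a the arterial temperature, α the acoustic absorption coefficient, and I the local intensity obtained from the KZK solution.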
A Thoroughly Validated Virtual Screening Strategy for Discovery of Novel HDAC3 Inhibitors.
Hu, Huabin; Xia, Jie; Wang, Dongmei; Wang, Xiang Simon; Wu, Song
2017-01-18
Histone deacetylase 3 (HDAC3) has been recently identified as a potential target for the treatment of cancer and other diseases, such as chronic inflammation, neurodegenerative diseases, and diabetes. Virtual screening (VS) is currently a routine technique for hit identification, but its success depends on rational development of VS strategies. To facilitate this process, we applied our previously released benchmarking dataset, i.e., MUBD-HDAC3 to the evaluation of structure-based VS (SBVS) and ligand-based VS (LBVS) combinatorial approaches. We have identified FRED (Chemgauss4) docking against a structural model of HDAC3, i.e., SAHA-3 generated by a computationally inexpensive "flexible docking", as the best SBVS approach and a common feature pharmacophore model, i.e., Hypo1 generated by Catalyst/HipHop as the optimal model for LBVS. We then developed a pipeline that was composed of Hypo1, FRED (Chemgauss4), and SAHA-3 sequentially, and demonstrated that it was superior to other combinations in terms of ligand enrichment. In summary, we present the first highly-validated, rationally-designed VS strategy specific to HDAC3 inhibitor discovery. The constructed pipeline is publicly accessible for the scientific community to identify novel HDAC3 inhibitors in a time-efficient and cost-effective way.
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
Leveraging Hypoxia-Activated Prodrugs to Prevent Drug Resistance in Solid Tumors.
Lindsay, Danika; Garvey, Colleen M; Mumenthaler, Shannon M; Foo, Jasmine
2016-08-01
Experimental studies have shown that one key factor in driving the emergence of drug resistance in solid tumors is tumor hypoxia, which leads to the formation of localized environmental niches where drug-resistant cell populations can evolve and survive. Hypoxia-activated prodrugs (HAPs) are compounds designed to penetrate to hypoxic regions of a tumor and release cytotoxic or cytostatic agents; several of these HAPs are currently in clinical trial. However, preliminary results have not shown a survival benefit in several of these trials. We hypothesize that the efficacy of treatments involving these prodrugs depends heavily on identifying the correct treatment schedule, and that mathematical modeling can be used to help design potential therapeutic strategies combining HAPs with standard therapies to achieve long-term tumor control or eradication. We develop this framework in the specific context of EGFR-driven non-small cell lung cancer, which is commonly treated with the tyrosine kinase inhibitor erlotinib. We develop a stochastic mathematical model, parametrized using clinical and experimental data, to explore a spectrum of treatment regimens combining a HAP, evofosfamide, with erlotinib. We design combination toxicity constraint models and optimize treatment strategies over the space of tolerated schedules to identify specific combination schedules that lead to optimal tumor control. We find that (i) combining these therapies delays resistance longer than any monotherapy schedule with either evofosfamide or erlotinib alone, (ii) sequentially alternating single doses of each drug leads to minimal tumor burden and maximal reduction in probability of developing resistance, and (iii) strategies minimizing the length of time after an evofosfamide dose and before erlotinib confer further benefits in reduction of tumor burden. These results provide insights into how hypoxia-activated prodrugs may be used to enhance therapeutic effectiveness in the clinic.
Optimal startup control of a jacketed tubular reactor.
NASA Technical Reports Server (NTRS)
Hahn, D. R.; Fan, L. T.; Hwang, C. L.
1971-01-01
The optimal startup policy of a jacketed tubular reactor, in which a first-order, reversible, exothermic reaction takes place, is presented. A distributed maximum principle is presented for determining weak necessary conditions for optimality of a diffusional distributed parameter system. A numerical technique is developed for practical implementation of the distributed maximum principle. This involves the sequential solution of the state and adjoint equations, in conjunction with a functional gradient technique for iteratively improving the control function.
The sequential structure of brain activation predicts skill.
Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa
2016-01-29
In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.
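The conflict idea at the heart of the path-level approach can be sketched simply: walk the path in order and cut a new batch whenever a node falls inside the search neighbourhood of a node already in the current batch, so the nodes within a batch can be simulated concurrently without changing the result. This is a simplified, isotropic-radius sketch (the paper's method handles general kriging neighbourhoods and implementation details beyond this illustration):

```python
import numpy as np

def conflict_free_batches(path, coords, radius):
    """Cut the simulation path into order-preserving batches: a new batch
    starts whenever a node falls within the search radius of a node
    already in the current batch, so nodes inside one batch can be
    simulated concurrently (their kriging systems do not involve each
    other, and all earlier path nodes live in earlier batches)."""
    batches, current = [], []
    for node in path:
        if any(np.linalg.norm(coords[node] - coords[m]) <= radius
               for m in current):
            batches.append(current)   # conflict found: close this batch
            current = [node]
        else:
            current.append(node)
    if current:
        batches.append(current)
    return batches

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 100.0, size=(200, 2))
path = rng.permutation(200)
batches = conflict_free_batches(path, coords, radius=15.0)
print(len(batches), "batches; largest:", max(len(b) for b in batches))
```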
Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J
2016-08-01
Researchers continue to develop visual prostheses towards safer and more efficacious systems. However limitations still exist in the number of stimulating channels that can be integrated. Therefore there is a need for spatial and time multiplexing techniques to provide improved performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was suprachoroidally implanted in three normally-sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing and thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.
Multiple Ordinal Regression by Maximizing the Sum of Margins
Hamsici, Onur C.; Martinez, Aleix M.
2016-01-01
Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms requires to learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a Support Vector Machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases correctly separating the ordered classes. Current algorithms are either required to solve a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed margin strategy) between a set of hyperplanes, which biases the solution to the closest margin. Another drawback of these strategies is that they are limited to order the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every consecutive class with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a Sequential Minimal Optimization procedure. We demonstrate the accuracy of our solutions in several datasets. In addition, we provide a key application of our algorithms in estimating human subjects’ ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary one typically employed in the literature. PMID:26529784
Sequential self-assembly of DNA functionalized droplets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yin; McMullen, Angus; Pontani, Lea-Laetitia
Complex structures and devices, both natural and manmade, are often constructed sequentially. From crystallization to embryogenesis, a nucleus or seed is formed and built upon. Sequential assembly allows for initiation, signaling, and logical programming, which are necessary for making enclosed, hierarchical structures. Though biology relies on such schemes, they have not been available in materials science. We demonstrate programmed sequential self-assembly of DNA functionalized emulsions. The droplets are initially inert because the grafted DNA strands are pre-hybridized in pairs. Active strands on initiator droplets then displace one of the paired strands and thus release its complement, which in turn activates the next droplet in the sequence, akin to living polymerization. This strategy provides time and logic control during the self-assembly process, and offers a new perspective on the synthesis of materials.
Decision Aids for Naval Air ASW
1980-03-15
(Algorithm for Zone Optimization Investigation) NADC: developing sonobuoy patterns for air ASW search. DAISY (Decision Aiding Information System) Wharton: ...sion making behavior. - Artificial intelligence sequential pattern recognition algorithm for reconstructing the decision maker's utility functions. - ...display presenting the uncertainty area of the target. 3.1.5 Algorithm for Zone Optimization Investigation (AZOI) -- Naval Air Development Center - A
Induction of simultaneous and sequential malolactic fermentation in durian wine.
Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan
2016-08-02
This study reports, for the first time, the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as the aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied with simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little differences among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters while sequential MLF had increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.
A technique for sequential segmental neuromuscular stimulation with closed loop feedback control.
Zonnevijlle, Erik D H; Abadia, Gustavo Perez; Somia, Naveen N; Kon, Moshe; Barker, John H; Koenig, Steven; Ewert, D L; Stremel, Richard W
2002-01-01
In dynamic myoplasty, dysfunctional muscle is assisted or replaced with skeletal muscle from a donor site. Electrical stimulation is commonly used to train and animate the skeletal muscle to perform its new task. Due to simultaneous tetanic contractions of the entire myoplasty, muscles are deprived of perfusion and fatigue rapidly, causing long-term problems such as excessive scarring and muscle ischemia. Sequential stimulation contracts part of the muscle while other parts rest, thus significantly improving blood perfusion. However, the muscle still fatigues. In this article, we report a test of the feasibility of using closed-loop control to economize the contractions of the sequentially stimulated myoplasty. A simple stimulation algorithm was developed and tested on a sequentially stimulated neo-sphincter designed from a canine gracilis muscle. Pressure generated in the lumen of the myoplasty neo-sphincter was used as feedback to regulate the stimulation signal via three control parameters, thereby optimizing the performance of the myoplasty. Additionally, we investigated and compared the efficiency of amplitude and frequency modulation techniques. Closed-loop feedback enabled us to maintain target pressures within 10% deviation using amplitude modulation and optimized control parameters (correction frequency = 4 Hz, correction threshold = 4%, and transition time = 0.3 s). The large-scale stimulation/feedback setup was unfit for chronic experimentation, but can be used as a blueprint for a small-scale version to unveil the theoretical benefits of closed-loop control in chronic experimentation.
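The control loop can be sketched with the three reported control parameters made explicit. Here read_pressure and set_amplitude are hypothetical hardware hooks, the amplitude step is an assumed constant, and the transition-time smoothing of amplitude changes is omitted; the original setup modulated the stimulation signal through dedicated hardware:

```python
import time

def regulate_amplitude(read_pressure, set_amplitude, target_pressure,
                       amplitude=1.0, correction_hz=4.0, threshold=0.04,
                       duration_s=10.0, step=0.05):
    """Closed-loop amplitude modulation: every 1/correction_hz seconds,
    nudge the stimulation amplitude whenever luminal pressure deviates
    from the target by more than the correction threshold (4% here)."""
    for _ in range(int(duration_s * correction_hz)):
        error = (read_pressure() - target_pressure) / target_pressure
        if abs(error) > threshold:           # dead-band keeps output steady
            amplitude -= step if error > 0 else -step
            amplitude = min(max(amplitude, 0.0), 1.0)  # clamp to safe range
            set_amplitude(amplitude)
        time.sleep(1.0 / correction_hz)      # wait out the correction period
    return amplitude
```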
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
ERIC Educational Resources Information Center
Ültay, Eser; Alev, Nedim
2017-01-01
The purpose of this study was to investigate the effect of explanation assisted REACT strategy which was based on context-based learning approach on prospective science teachers' (PSTs) learning in impulse, momentum and collisions topics. The sequential explanatory strategy within mixed methods design was employed in this study. The first phase of…
ERIC Educational Resources Information Center
Das, J. P.; Janzen, Troy; Georgiou, George K.
2007-01-01
Individual differences in reading and cognitive processing among a sample of generally poor readers were studied in order to answer two major questions: Do they have a specific cognitive style that favors global-simultaneous strategies and a weak sequential strategy? If they do not have a distinct cognitive style or strategy, but are merely poor…
Sequential Design of Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-Cook, Christine Michaela
2017-06-30
A sequential design of experiments strategy is being developed and implemented that allows for adaptive learning based on incoming results as the experiment is being run. The plan is to incorporate these strategies for the NCCC and TCM experimental campaigns to be run in the coming months. This strategy for experimentation has the advantage of allowing new data collected during the experiment to inform future experimental runs based on their projected utility for a particular goal. For example, the current effort for the MEA capture system at NCCC plans to focus on maximally improving the quality of prediction of CO2 capture efficiency, as measured by the width of the confidence interval for the underlying response surface, which is modeled as a function of 1) flue gas flowrate [1000-3000] kg/hr; 2) CO2 weight fraction [0.125-0.175]; 3) lean solvent loading [0.1-0.3]; and 4) lean solvent flowrate [3000-12000] kg/hr.
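One way to express "projected utility" is to run next the setting at which the current surrogate model is least certain. A minimal sketch with a Gaussian-process surrogate over the four scaled factors follows; the kernel, length-scale, and candidate grid are assumptions, and the campaign's actual criterion and model are richer:

```python
import numpy as np

def rbf(A, B, length_scale=0.2):
    """Squared-exponential kernel between two sets of scaled design points."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * length_scale ** 2))

def next_run(X_done, y_done, X_cand, noise=1e-4):
    """Return the candidate setting with the largest GP predictive variance,
    i.e. where one more run is expected to shrink the response-surface CI
    the most. (y_done is unused by pure variance-based selection but kept
    in the signature for clarity.)"""
    K = rbf(X_done, X_done) + noise * np.eye(len(X_done))
    Ks = rbf(X_cand, X_done)
    var = 1.0 - np.sum((Ks @ np.linalg.inv(K)) * Ks, axis=1)
    return X_cand[np.argmax(var)]

rng = np.random.default_rng(0)
X_done = rng.uniform(size=(5, 4))    # 5 completed runs, 4 scaled factors
y_done = rng.normal(size=5)          # measured CO2 capture efficiencies
X_cand = rng.uniform(size=(200, 4))  # candidate settings for the next run
print("next run (scaled factors):", next_run(X_done, y_done, X_cand))
```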
Online Graph Completion: Multivariate Signal Recovery in Computer Vision.
Kim, Won Hwa; Jalal, Mona; Hwang, Seongjae; Johnson, Sterling C; Singh, Vikas
2017-07-01
The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints, and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or, alternatively, to complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph, and describe how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.
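The selection machinery underlying such sequential request strategies is typically a (lazy) greedy rule over marginal gains. The Python sketch below shows that generic pattern, assuming the caller supplies a monotone submodular gain function; it is not the authors' Fourier-domain model.

```python
import heapq

def greedy_requests(ground_set, gain, budget):
    """Lazy greedy selection of sequential measurement requests. gain(x, S)
    is the marginal value of requesting x given the already-chosen set S;
    for monotone submodular gains this enjoys the classic (1 - 1/e) bound."""
    chosen = []
    # max-heap of (negated) possibly stale upper bounds on marginal gains
    heap = [(-gain(x, chosen), i, x) for i, x in enumerate(ground_set)]
    heapq.heapify(heap)
    while heap and len(chosen) < budget:
        neg_g, i, x = heapq.heappop(heap)
        fresh = gain(x, chosen)              # re-evaluate the stale bound
        if heap and fresh < -heap[0][0]:     # another item may now be better
            heapq.heappush(heap, (-fresh, i, x))
        else:
            chosen.append(x)
    return chosen
```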
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming increasingly important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and many different approaches to it have been proposed. The watershed transform is a well-known image segmentation tool, but it is also a very data-intensive task. To accelerate watershed algorithms toward real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. An efficient parallel implementation requires exploring different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on their performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on performance.
Li, Yanjiao; Zhang, Sen; Yin, Yixin; Xiao, Wendong; Zhang, Jie
2017-08-10
Gas utilization ratio (GUR) is an important indicator used to measure the operating status and energy consumption of blast furnaces (BFs). In this paper, we present a soft-sensor approach, a novel online sequential extreme learning machine (OS-ELM) named DU-OS-ELM, to establish a data-driven model for GUR prediction. In DU-OS-ELM, old data are gradually discarded and newly acquired data are given more attention through a novel dynamic forgetting factor (DFF) that depends on the estimation errors, enhancing the dynamic tracking ability. Furthermore, we develop an updated selection strategy (USS) to judge whether the model needs to be updated with newly arriving data, so that the proposed approach is more in line with the actual production situation. A convergence analysis of DU-OS-ELM is presented to ensure that the estimated output weights converge to the true values as new data arrive. DU-OS-ELM is then applied to build a soft-sensor model to predict GUR. Experimental results on real production data from a BF demonstrate that DU-OS-ELM obtains better generalization performance and higher prediction accuracy than a number of existing related approaches, and the resulting GUR prediction model can provide effective guidance for further optimization of operation.
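For readers unfamiliar with OS-ELM, the core of such methods is a recursive least-squares update of the output weights of a random-feature network. The Python sketch below uses a fixed scalar forgetting factor in place of the paper's dynamic forgetting factor and update-selection strategy, so it should be read as a simplified stand-in, not DU-OS-ELM itself.

```python
import numpy as np

class OSELM:
    """Generic online sequential ELM with a forgetting factor (the paper's
    DFF and USS rules are replaced here by a fixed lambda)."""
    def __init__(self, n_in, n_hidden, lam=0.98, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)
        self.beta = np.zeros(n_hidden)              # output weights (learned)
        self.P = np.eye(n_hidden) * 1e3             # inverse correlation matrix
        self.lam = lam

    def _hidden(self, x):
        return np.tanh(x @ self.W + self.b)

    def update(self, x, t):
        """One recursive least-squares step; lambda < 1 down-weights old data."""
        h = self._hidden(x)
        k = self.P @ h / (self.lam + h @ self.P @ h)   # gain vector
        self.beta += k * (t - h @ self.beta)
        self.P = (self.P - np.outer(k, h @ self.P)) / self.lam

    def predict(self, x):
        return self._hidden(x) @ self.beta
```

Setting lam below 1 geometrically discounts old samples, which is the mechanism the DFF adapts online from the estimation errors.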
NASA Astrophysics Data System (ADS)
Levi-Zada, Anat; Fefer, Daniela; David, Maayan; Eliyahu, Miriam; Franco, José Carlos; Protasov, Alex; Dunkelblum, Ezra; Mendel, Zvi
2014-08-01
The diel periodicity of sex pheromone release was monitored in two mealybug species, Planococcus citri and Planococcus ficus (Hemiptera: Pseudococcidae), using sequential SPME/GCMS analysis. A maximal release of 2 ng/h of pheromone by 9-12-day-old P. citri females occurred 1-2 h before the beginning of photophase. The highest release of pheromone by P. ficus females, 1-2 ng/2 h from 10-20-day-old females, occurred approximately 2 h after the beginning of photophase. Mating terminated pheromone release in both mealybug species. The temporal flight activity of the males was monitored in rearing chambers using pheromone-baited delta traps. Males of both P. citri and P. ficus displayed the same flight pattern, beginning to fly at 06:00 hours when the light was turned on and reaching a peak during the first and second hour of the photophase. Our results suggest that other biparental mealybug species also display diel periodicities of maximal pheromone release and response. Direct evaluation of the diel periodicity of pheromone release by automatic sequential analysis is convenient and will be very helpful in optimizing the airborne collection and identification of other unknown mealybug pheromones and in studying the calling behavior of females. Considering this behavior pattern may help to develop more effective pheromone-based management strategies against mealybugs.
Design of synthetic biological logic circuits based on evolutionary algorithm.
Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei
2013-08-01
The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits through topologically distinct connections. The resulting optimal designs of these logic gates and circuits are established via the RSGA, and the in silico modelling results verify the advantages of the approach for this purpose.
Missing observations in multiyear rotation sampling designs
NASA Technical Reports Server (NTRS)
Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)
1982-01-01
Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered both when missing segments are not replaced and when they are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.
Application of Adobe Flash media to optimize the jigsaw learning model on geometry material
NASA Astrophysics Data System (ADS)
Imam, P.; Imam, S.; Ikrar, P.
2018-05-01
This study aims to determine and describe the effectiveness of applying Adobe Flash media to the jigsaw learning model on geometry material. In this study, jigsaw learning modified with Adobe Flash media is called the jigsaw-flash model. This research was conducted in Surakarta using a mixed-methods design with an exploratory sequential strategy. The results indicate that students feel more comfortable and interested in studying geometry material taught with the jigsaw-flash model. In addition, students taught using the jigsaw-flash model are more active and motivated than students taught using the ordinary jigsaw model. This shows that use of the jigsaw-flash model can increase student participation and motivation, and it can be concluded that Adobe Flash media can serve as a solution to reduce the level of student abstraction in learning mathematics.
Mechanics of Re-Torquing in Bolted Flange Connections
NASA Technical Reports Server (NTRS)
Gordon, Ali P.; Drilling, Brian; Weichman, Kyle; Kammerer, Catherine; Baldwin, Frank
2010-01-01
It has been widely accepted that the phenomenon of time-dependent loosening of flange connections is largely a consequence of the viscous nature of the compression seal material. Characterizing the coupled interaction between gasket creep and elastic bolt stiffness has been useful in predicting conditions that facilitate leakage. Prior advances on this subclass of bolted joints have led to the development of (1) constitutive models for elastomerics and (2) initial tightening strategies, among others. The effect of re-torque, a major consideration for typical bolted flange seals used on the Space Shuttle fleet, has not been fully characterized, however. The current study presents a systematic approach to characterizing bolted joint behavior as the consequence of sequentially applied torques. Based on experimental and numerical results, optimal re-torquing parameters have been identified that allow for negligible load loss after pre-load application.
Sequential therapy in metastatic clear cell renal carcinoma: TKI-TKI vs TKI-mTOR.
Felici, Alessandra; Bria, Emilio; Tortora, Giampaolo; Cognetti, Francesco; Milella, Michele
2012-12-01
With seven targeted agents, directed against the VEGF/VEGF receptor (VEGFR) axis or the mTOR pathway, approved for the treatment of metastatic renal cell carcinoma and more active agents in advanced phase of clinical testing, questions have arisen with regard to their optimal use, either in combination or in sequence. One of the most compelling (and debated) issues is whether continued VEGF/VEGFR inhibition with agents hitting the same targets (TKI-TKI) affords better results than switching mechanisms of action by alternating VEGFR and mTOR inhibition (TKI-mTOR). In this article, the authors review the (little) available evidence coming from randomized Phase III clinical trials and try to fill in the (many) remaining gaps using evidence from small-size, single-arm Phase II studies and retrospective series, as well as reviewing preclinical evidence supporting either strategy.
2013-01-01
Although much advancement has been achieved in the treatment of chronic hepatitis B, antiviral resistance is still a challenging issue. Resistance to previous-generation antiviral agents has already developed in a number of patients, and these agents are still used, especially in resource-limited countries. Once antiviral resistance occurs, it predisposes to subsequent resistance, resulting in multidrug resistance. Therefore, prevention of initial antiviral resistance is the most important strategy, and appropriate choice and modification of therapy is the cornerstone of avoiding treatment failures. Until now, management of antiviral resistance has been evolving from sequential therapy to combination therapy. In the era of tenofovir, the paradigm shifts again, and we have to decide when to switch and when to combine on the basis of newly emerging clinical data. We expect future eradication of chronic hepatitis B virus infection through proper prevention and optimal management of antiviral resistance. PMID:24133659
Microscale Symmetrical Electroporator Array as a Versatile Molecular Delivery System
NASA Astrophysics Data System (ADS)
Ouyang, Mengxing; Hill, Winfield; Lee, Jung Hyun; Hur, Soojung Claire
2017-03-01
Successful development of new therapeutic strategies often relies on the ability to deliver exogenous molecules into the cytosol. We have developed a versatile on-chip vortex-assisted electroporation system engineered to conduct sequential intracellular delivery of multiple molecules into various cell types at low voltage in a dosage-controlled manner. Micro-patterned planar electrodes permit a substantial reduction in operational voltages and seamless integration with existing microfluidic technology. Equipped with real-time process visualization, the system enables on-chip optimization of electroporation parameters for cells with varying properties. Moreover, the system's dosage control and multi-molecular delivery capabilities facilitate intracellular delivery of various molecules as single agents or in combination, and its utility in biological research has been demonstrated through RNA interference assays. We envision the system to be a powerful tool for a wide range of applications requiring single-cell-level co-administration of multiple molecules with controlled dosages.
NASA Technical Reports Server (NTRS)
Howard, R. A.; North, D. W.; Pezier, J. P.
1975-01-01
A new methodology is proposed for integrating planetary quarantine objectives into space exploration planning. This methodology is designed to remedy the major weaknesses inherent in the current formulation of planetary quarantine requirements. Application of the methodology is illustrated by a tutorial analysis of a proposed Jupiter Orbiter mission. The proposed methodology reformulates planetary quarantine planning as a sequential decision problem. Rather than concentrating on a nominal plan, all decision alternatives and possible consequences are laid out in a decision tree. Probabilities and values are associated with the outcomes, including the outcome of contamination. The process of allocating probabilities, which could not be made perfectly unambiguous and systematic, is replaced by decomposition and optimization techniques based on principles of dynamic programming. Thus, the new methodology provides logical integration of all available information and allows selection of the best strategy consistent with quarantine and other space exploration goals.
Cost-effectiveness of allopurinol and febuxostat for the management of gout.
Jutkowitz, Eric; Choi, Hyon K; Pizzi, Laura T; Kuntz, Karen M
2014-11-04
Background: Gout is the most common inflammatory arthritis in the United States. Objective: To evaluate the cost-effectiveness of urate-lowering treatment strategies for the management of gout. Design: Markov model. Data Sources: Published literature and expert opinion. Target Population: Patients for whom allopurinol or febuxostat is a suitable initial urate-lowering treatment. Time Horizon: Lifetime. Perspective: Health care payer. Intervention: 5 urate-lowering treatment strategies were evaluated: no treatment; allopurinol- or febuxostat-only therapy; allopurinol-febuxostat sequential therapy; and febuxostat-allopurinol sequential therapy. Two dosing scenarios were investigated: fixed dose (80 mg of febuxostat daily, 0.80 success rate; 300 mg of allopurinol daily, 0.39 success rate) and dose escalation (≤120 mg of febuxostat daily, 0.82 success rate; ≤800 mg of allopurinol daily, 0.78 success rate). Outcome Measures: Discounted costs, discounted quality-adjusted life-years, and incremental cost-effectiveness ratios. Results: In both dosing scenarios, allopurinol-only therapy was cost-saving. Dose-escalation allopurinol-febuxostat sequential therapy was more costly but more effective than dose-escalation allopurinol therapy, with an incremental cost-effectiveness ratio of $39 400 per quality-adjusted life-year. The relative rankings of treatments did not change. Our results were relatively sensitive to several potential variations of model assumptions; however, the cost-effectiveness ratio of dose-escalation allopurinol-febuxostat sequential therapy remained below the willingness-to-pay threshold of $109 000 per quality-adjusted life-year. Limitation: Long-term outcome data for patients with gout, including medication adherence, are limited. Conclusion: Allopurinol single therapy is cost-saving compared with no treatment. Dose-escalation allopurinol-febuxostat sequential therapy is cost-effective compared with accepted willingness-to-pay thresholds. Primary Funding Source: Agency for Healthcare Research and Quality.
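The arithmetic behind such comparisons is the incremental cost-effectiveness ratio, the difference in cost divided by the difference in effectiveness between two strategies. A minimal Python sketch, with purely illustrative numbers rather than the paper's model outputs:

```python
def icer(ref, alt):
    """Incremental cost-effectiveness ratio of strategy `alt` versus `ref`,
    each given as (cost, qalys). Returns the extra cost per QALY gained;
    a negative value means `alt` is cheaper and more effective."""
    d_cost = alt[0] - ref[0]
    d_qaly = alt[1] - ref[1]
    if d_qaly == 0:
        raise ValueError("strategies are equally effective; compare costs")
    return d_cost / d_qaly

# Illustrative placeholder numbers, not the paper's model outputs:
no_treatment = (12000.0, 9.50)
allopurinol  = (10500.0, 9.80)   # cheaper and more effective: cost-saving
sequential   = (14000.0, 9.89)   # allopurinol-febuxostat sequential therapy
print(icer(no_treatment, allopurinol))  # negative: dominates no treatment
print(icer(allopurinol, sequential))    # ~38,900 $/QALY; compare to WTP
```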
Yoda, Satoshi; Lin, Jessica J; Lawrence, Michael S; Burke, Benjamin J; Friboulet, Luc; Langenbucher, Adam; Dardaei, Leila; Prutisto-Chang, Kylie; Dagogo-Jack, Ibiayi; Timofeevski, Sergei; Hubbeling, Harper; Gainor, Justin F; Ferris, Lorin A; Riley, Amanda K; Kattermann, Krystina E; Timonina, Daria; Heist, Rebecca S; Iafrate, A John; Benes, Cyril H; Lennerz, Jochen K; Mino-Kenudson, Mari; Engelman, Jeffrey A; Johnson, Ted W; Hata, Aaron N; Shaw, Alice T
2018-06-01
The cornerstone of treatment for advanced ALK-positive lung cancer is sequential therapy with increasingly potent and selective ALK inhibitors. The third-generation ALK inhibitor lorlatinib has demonstrated clinical activity in patients who failed previous ALK inhibitors. To define the spectrum of ALK mutations that confer lorlatinib resistance, we performed accelerated mutagenesis screening of Ba/F3 cells expressing EML4-ALK. Under comparable conditions, N-ethyl-N-nitrosourea (ENU) mutagenesis generated numerous crizotinib-resistant but no lorlatinib-resistant clones harboring single ALK mutations. In similar screens with EML4-ALK containing single ALK resistance mutations, numerous lorlatinib-resistant clones emerged harboring compound ALK mutations. To determine the clinical relevance of these mutations, we analyzed repeat biopsies from lorlatinib-resistant patients. Seven of 20 samples (35%) harbored compound ALK mutations, including two identified in the ENU screen. Whole-exome sequencing in three cases confirmed the stepwise accumulation of ALK mutations during sequential treatment. These results suggest that sequential ALK inhibitors can foster the emergence of compound ALK mutations, identification of which is critical to informing drug design and developing effective therapeutic strategies. Significance: Treatment with sequential first-, second-, and third-generation ALK inhibitors can select for compound ALK mutations that confer high-level resistance to ALK-targeted therapies. A more efficacious long-term strategy may be up-front treatment with a third-generation ALK inhibitor to prevent the emergence of on-target resistance. Cancer Discov; 8(6); 714-29. ©2018 AACR. This article is highlighted in the In This Issue feature, p. 663. ©2018 American Association for Cancer Research.
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
An efficient dynamic load balancing algorithm
NASA Astrophysics Data System (ADS)
Lagaros, Nikos D.
2014-01-01
In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take these sources of randomness and uncertainty into account. Such design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of available computing resources for this class of problems. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the physical parallelization feature of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, applied to computing the desired Pareto front. In such problems the computation of the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves near-linear speedup, with efficiency approaching 100% relative to the sequential procedure.
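A common realization of this idea is a dynamic work queue: each worker pulls the next structural analysis as soon as it finishes its current one, so uneven analysis runtimes do not leave processors idle. A minimal Python sketch using a process pool; the finite element analysis is a stand-in stub, not the paper's solver.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
import math, time

def evaluate_design(design):
    """Stand-in for one expensive finite element analysis; runtimes are
    deliberately uneven, which is what makes load balancing matter."""
    time.sleep(0.01 * (hash(design) % 7))
    return design, math.sin(design)

def dynamic_evaluate(designs, workers=4):
    """Dynamic load balancing via a shared work queue: the pool hands each
    worker a new design as soon as it finishes, instead of pre-partitioning
    the design list statically across processors."""
    results = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(evaluate_design, d) for d in designs]
        for fut in as_completed(futures):
            design, fitness = fut.result()
            results[design] = fitness
    return results

if __name__ == "__main__":   # guard required for process pools on some OSes
    print(dynamic_evaluate([0.1 * i for i in range(32)]))
```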
Chalasani, Pavani
2017-01-01
The treatment landscape for hormone receptor-positive metastatic breast cancer continues to evolve as the molecular mechanisms of this heterogeneous disease are better understood and targeted treatment strategies are developed. Patients are now living for extended periods of time with this disease as they progress through sequential lines of treatment. With a rapidly expanding therapeutic armamentarium, the prevalence of metastatic breast cancer patients with prolonged survival is expected to increase, as is the duration of survival. Practice guidelines recommend endocrine therapy alone as first-line therapy for the majority of patients with metastatic hormone receptor-positive, human epidermal growth factor receptor 2-negative breast cancer. The approval of new agents and expanded combination options has extended their use beyond first line, but endocrine therapy is not used as widely in clinical practice as recommended. As all treatments are palliative, even as survival is prolonged, optimizing and maintaining patient quality of life is crucial. This article surveys data relevant to the use of endocrine therapy in the setting of hormone receptor-positive metastatic breast cancer, including key clinical evidence regarding approved therapies and the impact of these therapies on patient quality of life. © 2017 S. Karger AG, Basel.
Strategies to Save 50% Site Energy in Grocery and General Merchandise Stores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirsch, A.; Hale, E.; Leach, M.
2011-03-01
This paper summarizes the methodology and main results of two recently published Technical Support Documents. These reports explore the feasibility of designing general merchandise and grocery stores that use half the energy of a minimally code-compliant building, as measured on a whole-building basis. We used an optimization algorithm to trace out a minimum cost curve and identify designs that satisfy the 50% energy savings goal. We started from baseline building energy use and progressed to more energy-efficient designs by sequentially adding energy design measures (EDMs). Certain EDMs figured prominently in reaching the 50% energy savings goal for both building types: (1) reduced lighting power density; (2) optimized area fraction and construction of view glass or skylights, or both, as part of a daylighting system tuned to 46.5 fc (500 lux); (3) reduced infiltration with a main entrance vestibule or an envelope air barrier, or both; and (4) energy recovery ventilators, especially in humid and cold climates. In grocery stores, the most effective EDM, which was chosen for all climates, was replacing baseline medium-temperature refrigerated cases with high-efficiency models that have doors.
Altered behavioral and neural responsiveness to counterfactual gains in the elderly.
Tobia, Michael J; Guo, Rong; Gläscher, Jan; Schwarze, Ulrike; Brassen, Stefanie; Büchel, Christian; Obermayer, Klaus; Sommer, Tobias
2016-06-01
Counterfactual information processing refers to the consideration of events that did not occur, in comparison to those actually experienced, in order to determine optimal actions; it can be formulated in terms of computational learning signals referred to as fictive prediction errors. Decision making and the neural circuitry for counterfactual processing are altered in healthy elderly adults. This experiment investigated age differences in neural systems for decision making with knowledge of counterfactual outcomes. Two groups of healthy adult participants, young (N = 30; ages 19-30 years) and elderly (N = 19; ages 65-80 years), were scanned with fMRI during 240 trials of a strategic sequential investment task in which a particular strategy of differentially weighting counterfactual gains and losses during valuation is associated with more optimal performance. Elderly participants earned significantly less than young adults, weighted counterfactual consequences and exploited task knowledge differently, and exhibited altered activity in a fronto-striatal circuit while making choices. The degree to which task knowledge was exploited was positively correlated with modulation of neural activity by expected value in the vmPFC for young adults, but not in the elderly. These findings demonstrate that elderly participants' poor task performance may be related to different counterfactual processing.
Potential economic value of drought information to support early warning in Africa
NASA Astrophysics Data System (ADS)
Quiroga, S.; Iglesias, A.; Diz, A.; Garrote, L.
2012-04-01
We present a methodology to estimate the economic value of advanced climate information for food production in Africa under climate change scenarios. The results aim to facilitate better choices in water resources management. The methodology includes four sequential steps. First, two contrasting management strategies (with and without early warning) are defined. Second, the associated impacts of the management actions are estimated by calculating the effect of drought on crop productivity under climate change scenarios. Third, the optimal management option is calculated as a function of the drought information and the risk aversion of potential information users. Finally, we use these optimal management simulations to compute the economic value of enhanced water allocation rules to support stable food production in Africa. Our results show how a timely response to climate variations can help reduce losses in food production. The proposed framework is developed within the Dewfora project (Early warning and forecasting systems to predict climate related drought vulnerability and risk in Africa), which aims to improve knowledge on drought forecasting, warning and mitigation, to advance the understanding of climate-related vulnerability to drought, and to develop a prototype operational forecasting system.
Chuprom, Julalak; Bovornreungroj, Preeyanuch; Ahmad, Mehraj; Kantachote, Duangporn; Dueramae, Sawitree
2016-06-01
A new potent halophilic protease producer, Halobacterium sp. strain LBU50301, was isolated from salt-fermented fish samples (budu) and identified by phenotypic analysis and 16S rDNA gene sequencing. Thereafter, a sequential statistical strategy was used to optimize halophilic protease production from Halobacterium sp. strain LBU50301 by shake-flask fermentation. The classical one-factor-at-a-time (OFAT) approach determined that gelatin was the best nitrogen source. Based on a Plackett-Burman (PB) experimental design, gelatin, MgSO4·7H2O, NaCl and pH significantly influenced halophilic protease production. A central composite design (CCD) determined the optimum levels of the medium components. Subsequently, an 8.78-fold increase in halophilic protease yield (156.22 U/mL) was obtained, compared with that produced in the original medium (17.80 U/mL). Validation experiments proved the adequacy and accuracy of the model, with predicted values agreeing well with experimental values. An overall 13-fold increase in halophilic protease yield (231.33 U/mL) was achieved using a 3 L laboratory fermenter and the optimized medium.
Mahoney, J. Matthew; Titiz, Ali S.; Hernan, Amanda E.; Scott, Rod C.
2016-01-01
Hippocampal neural systems consolidate multiple complex behaviors into memory. However, the temporal structure of neural firing supporting complex memory consolidation is unknown. Replay of hippocampal place cells during sleep supports the view that a simple repetitive behavior modifies sleep firing dynamics, but does not explain how multiple episodes could be integrated into associative networks for recollection during future cognition. Here we decode sequential firing structure within spike avalanches of all pyramidal cells recorded in sleeping rats after running in a circular track. We find that short sequences that combine into multiple long sequences capture the majority of the sequential structure during sleep, including replay of hippocampal place cells. The ensemble, however, is not optimized for maximally producing the behavior-enriched episode. Thus, behavioral programming of sequential correlations occurs at the level of short-range interactions, not whole behavioral sequences, and these short sequences are assembled into a large and complex milieu that could support complex memory consolidation. PMID:26866597
Dynamics of Sequential Decision Making
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail I.; Huerta, Ramón; Afraimovich, Valentin
2006-11-01
We suggest a new paradigm for intelligent decision-making suitable for the dynamical sequential activity of animals or artificial autonomous devices that depends on the characteristics of the internal and external world. To do so, we introduce a new class of dynamical models that are described by ordinary differential equations with a finite number of possibilities at the decision points, together with rules resolving this uncertainty. Our approach is based on the competition between possible cognitive states using their stable transient dynamics. The model controls the order of successive steps in a sequential activity according to the environment and the decision-making criteria. Two strategies (high-risk and risk-aversion conditions) that move the system out of an erratic environment are analyzed.
Chang, Shih-Sheng; Shih, Che-Hao; Lai, Kwun-Cheng; Mong, Kwok-Kong Tony
2010-05-03
The beta-selectivity of mannosylation has been found to depend on the addition rate of the mannosyl trichloroacetimidate donor in an inverse-addition (I-A) procedure. This rate-dependent I-A procedure can improve the selectivity of direct beta-mannosylation and is applicable to orthogonal glycosylations of thioglycoside acceptors. Further elaboration of this novel procedure enables the development of a contiguous sequential glycosylation strategy, which streamlines the preparation of oligosaccharides involving beta-mannosidic bond formation. The synthetic utility of the contiguous glycosylation strategy was demonstrated by the preparation of the trisaccharide core of human N-linked glycoproteins and the trisaccharide repeating unit of the O-specific polysaccharide found in the cellular capsule of Salmonella bacteria.
Rossi, G P; Seccia, T M; Miotto, D; Zucchetta, P; Cecchin, D; Calò, L; Puato, M; Motta, R; Caielli, P; Vincenzi, M; Ramondo, G; Taddei, S; Ferri, C; Letizia, C; Borghi, C; Morganti, A; Pessina, A C
2012-08-01
It is unclear whether revascularization of renal artery stenosis (RAS) by means of percutaneous renal angioplasty and stenting (PTRAS) is advantageous over optimal medical therapy. Hence, we designed a randomized clinical trial based on an optimized patient selection strategy and hard experimental endpoints. The primary objective of this study is to determine whether PTRAS is superior or equivalent to optimal medical treatment for preserving glomerular filtration rate (GFR) in the ischemic kidney, as assessed by sequential 99mTc-DTPA renal scintigraphy. Secondary objectives are to establish whether the two treatments are equivalent in lowering blood pressure, preserving overall renal function, regressing target organ damage, preventing cardiovascular events and improving quality of life. The study is designed as a prospective multicentre randomized, un-blinded two-arm study. Eligible patients will have clinical and angio-CT evidence of RAS. The inclusion criterion is RAS affecting the main renal artery or its major branches, either >70% or, if <70%, with post-stenotic dilatation. Renal function will be assessed with 99mTc-DTPA renal scintigraphy. Patients will be randomized to either arm considering both the resistance index value in the ischemic kidney and the presence of unilateral/bilateral stenosis. The primary experimental endpoint will be the GFR of the ischemic kidney, assessed as a quantitative variable by 99mTc-DTPA, and the loss of the ischemic kidney, defined as a categorical variable.
Engine With Regression and Neural Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2001-01-01
At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
Rochau, Ursula; Sroczynski, Gaby; Wolf, Dominik; Schmidt, Stefan; Jahn, Beate; Kluibenschaedl, Martina; Conrads-Frank, Annette; Stenehjem, David; Brixner, Diana; Radich, Jerald; Gastl, Günther; Siebert, Uwe
2015-01-01
Several tyrosine kinase inhibitors (TKIs) are approved for chronic myeloid leukemia (CML) therapy. We evaluated the long-term cost-effectiveness of seven sequential therapy regimens for CML in Austria. A cost-effectiveness analysis was performed using a state-transition Markov model. As model parameters, we used published trial data, clinical, epidemiological and economic data from the Austrian CML registry and national databases. We performed a cohort simulation over a life-long time-horizon from a societal perspective. Nilotinib without second-line TKI yielded an incremental cost-utility ratio of 121,400 €/quality-adjusted life year (QALY) compared to imatinib without second-line TKI after imatinib failure. Imatinib followed by nilotinib after failure resulted in 131,100 €/QALY compared to nilotinib without second-line TKI. Nilotinib followed by dasatinib yielded 152,400 €/QALY compared to imatinib followed by nilotinib after failure. Remaining strategies were dominated. The sequential application of TKIs is standard-of-care, and thus, our analysis points toward imatinib followed by nilotinib as the most cost-effective strategy.
Optimal integer resolution for attitude determination using global positioning system signals
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn
1998-01-01
In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. This transformation converts the integer ambiguities to vectorized biases, reducing the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. The formulation is also re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.
Imbs, Diane-Charlotte; El Cheikh, Raouf; Boyer, Arnaud; Ciccolini, Joseph; Mascaux, Céline; Lacarelle, Bruno; Barlesi, Fabrice; Barbolosi, Dominique; Benzekry, Sébastien
2018-01-01
Concomitant administration of bevacizumab and pemetrexed-cisplatin is a common treatment for advanced nonsquamous non-small cell lung cancer (NSCLC). Vascular normalization following bevacizumab administration may transiently enhance drug delivery, suggesting improved efficacy with sequential administration. To investigate optimal scheduling, we conducted a study in NSCLC-bearing mice. First, experiments demonstrated improved efficacy with sequential versus concomitant scheduling of bevacizumab and chemotherapy. Combining these data with a mathematical model of tumor growth under therapy that accounts for the normalization effect, we predicted an optimal delay of 2.8 days between bevacizumab and chemotherapy. This prediction was confirmed experimentally, with tumor growth reduced by 38% compared to concomitant scheduling, and survival prolonged (74 vs. 70 days). An alternative 8-day gap failed to achieve a similar increase in efficacy, emphasizing the utility of modeling support in identifying optimal scheduling. The model could also be a useful tool in the clinic for tailoring regimen sequences to individual patients. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
NASA Astrophysics Data System (ADS)
Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori
2017-12-01
It is very important, from the standpoint of saving energy, to design electrical machinery with high efficiency. Topology optimization (TO) is therefore occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a much higher degree of structural freedom, it can derive novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding problem, which has many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
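The SLP-with-move-limit mechanism is easy to illustrate in the abstract: linearize the objective at the current design, step within a trust box, and adapt the box size according to whether the step improved the objective. The Python sketch below shows this generic pattern on a toy bound-constrained problem; the paper's adaptive-relaxation rule is not reproduced.

```python
import numpy as np

def slp(f, grad, x0, lower, upper, move=0.1, iters=50, shrink=0.7, grow=1.2):
    """Sequential linear programming with an adaptive move limit: linearize
    f at the current point, step to the boundary of a trust box, then relax
    or tighten the box depending on whether the step actually improved f."""
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(iters):
        g = grad(x)
        # minimizing a linear objective over a box means stepping to a corner
        step = -np.sign(g) * move * (upper - lower)
        x_new = np.clip(x + step, lower, upper)
        f_new = f(x_new)
        if f_new < fx:
            x, fx, move = x_new, f_new, min(grow * move, 0.5)  # accept, relax
        else:
            move *= shrink                                     # reject, tighten
        if move < 1e-6:
            break
    return x, fx

# Toy usage on a smooth bowl with bounds (hypothetical problem):
f = lambda x: float((x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2)
grad = lambda x: np.array([2 * (x[0] - 0.3), 2 * (x[1] + 0.2)])
print(slp(f, grad, [1.0, 1.0], np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```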
Parry, Gareth; Malbut, Katie; Dark, John H; Bexton, Rodney S
1992-01-01
Objective—To investigate the response of the transplanted heart to different pacing modes and to synchronisation of the recipient and donor atria in terms of cardiac output at rest. Design—Doppler derived cardiac output measurements at three pacing rates (90/min, 110/min and 130/min) in five pacing modes: right ventricular pacing, donor atrial pacing, recipient-donor synchronous pacing, donor atrial-ventricular sequential pacing, and synchronous recipient-donor atrial-ventricular sequential pacing. Patients—11 healthy cardiac transplant recipients with three pairs of epicardial leads inserted at transplantation. Results—Donor atrial pacing (+11% overall) and donor atrial-ventricular sequential pacing (+8% overall) were significantly better than right ventricular pacing (p < 0·001) at all pacing rates. Synchronised pacing of recipient and donor atrial segments did not confer additional benefit in either atrial or atrial-ventricular sequential modes of pacing in terms of cardiac output at rest at these fixed rates. Conclusions—Atrial pacing or atrial-ventricular sequential pacing appear to be appropriate modes in cardiac transplant recipients. Synchronisation of recipient and donor atrial segments in this study produced no additional benefit. Chronotropic competence in these patients may, however, result in improved exercise capacity and deserves further investigation. PMID:1389737
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials in which patients are allocated to the two arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can attain greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes; the proposed method further reduces the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
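For a two-arm comparison of means, the variance-minimizing allocation is the classical Neyman rule, r = sigma1/(sigma1 + sigma2), and an adaptive design re-estimates the standard deviations as outcomes accrue. A small Python sketch of that frequentist skeleton; the paper's Bayesian machinery would replace the plug-in standard deviations with posterior quantities.

```python
import math

def optimal_allocation(sigma1, sigma2):
    """Neyman allocation: the fraction of new patients sent to arm 1 that
    minimizes Var(mean1 - mean2) = s1^2/(n*r) + s2^2/(n*(1 - r))."""
    return sigma1 / (sigma1 + sigma2)

def adaptive_rate(outcomes1, outcomes2):
    """Re-estimate arm standard deviations from accrued outcomes and update
    the randomization rate (a plug-in stand-in for the posterior update)."""
    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / max(len(xs) - 1, 1))
    return optimal_allocation(sd(outcomes1) or 1.0, sd(outcomes2) or 1.0)
```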
SeGRAm - A practical and versatile tool for spacecraft trajectory optimization
NASA Technical Reports Server (NTRS)
Rishikof, Brian H.; Mccormick, Bernell R.; Pritchard, Robert E.; Sponaugle, Steven J.
1991-01-01
An implementation of the Sequential Gradient/Restoration Algorithm, SeGRAm, is presented along with selected examples. This spacecraft trajectory optimization and simulation program uses variational calculus to solve problems of spacecraft flying under the influence of one or more gravitational bodies. It produces a series of feasible solutions to problems involving a wide range of vehicles, environments and optimization functions, until an optimal solution is found. The examples included highlight the various capabilities of the program and emphasize in particular its versatility over a wide spectrum of applications from ascent to interplanetary trajectories.
Procedures for shape optimization of gas turbine disks
NASA Technical Reports Server (NTRS)
Cheu, Tsu-Chien
1989-01-01
Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks subject to geometric and stress constraints. The coordinates of selected points on the disk contours are used as the design variables. Structural weight, stresses and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal design of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of the two procedures.
Metal Big Area Additive Manufacturing: Process Modeling and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W
Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology for printing large-scale 3D objects. mBAAM is based on the gas metal arc welding process and uses a continuous feed of welding wire to manufacture an object. An electric arc forms between the wire and the substrate, which melts the wire and deposits a bead of molten metal along the predetermined path. In general, the welding process parameters and local conditions determine the shape of the deposited bead. The sequence of bead deposition and the corresponding thermal history of the manufactured object determine the long-range effects, such as thermally induced distortions and residual stresses. Therefore, the resulting performance or final properties of the manufactured object depend on its geometry and the deposition path, in addition to the basic welding process parameters. Physical testing is critical for gaining the necessary knowledge for quality prints, but traversing the process parameter space to develop an optimized build strategy for each new design is impractical by purely experimental means. Computational modeling and optimization may accelerate the development of a build process strategy and save time and resources. To provide these opportunities, we have developed a physics-based Finite Element Method (FEM) simulation framework and numerical models to support the development and design of the mBAAM process. In this paper, we performed a sequentially coupled heat transfer and stress analysis to predict the final deformation of a small rectangular structure printed using mild steel welding wire. Using the new simulation technologies, material was progressively added into the FEM simulation as the arc weld traversed the build path. In the sequentially coupled analysis, the heat transfer simulation calculates the temperature evolution, which is then used in a stress analysis to evaluate the residual stresses and distortions. In this formulation, we assume the physics is directionally coupled, i.e., the effect of the component's stress on the temperatures is negligible. The experiment instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with experimental measurements. Ongoing modeling work is also briefly discussed.
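The phrase "sequentially coupled" means the thermal problem is solved first and its temperature history then drives the mechanical problem, with no feedback from stress to temperature. A toy one-dimensional Python illustration of that one-way coupling follows; the material constants are generic mild-steel-like placeholders, not the paper's FEM model.

```python
import numpy as np

def coupled_bead_cooldown(n=50, dt=1e-3, steps=2000, alpha=1.0e-5,
                          E=200e9, a_th=1.2e-5, T_dep=1500.0, T_amb=25.0):
    """One-way (sequentially) coupled toy model on a 1-D bar: explicit heat
    conduction gives the temperature history, which then drives a thermal
    stress computation. All numbers are illustrative placeholders."""
    dx = 1.0 / n
    T = np.full(n, T_dep)            # bar starts at deposition temperature
    T[0] = T[-1] = T_amb             # clamped ends act as heat sinks
    r = alpha * dt / dx**2           # explicit scheme stable for r <= 0.5
    for _ in range(steps):           # step 1: thermal solve
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[0] = T[-1] = T_amb
    # step 2: stress solve driven by the temperature field;
    # for a fully restrained bar, sigma = -E * a_th * (T - T_amb)
    sigma = -E * a_th * (T - T_amb)  # compressive where still hot
    return T, sigma
```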
Sathish, T; Uppuluri, K B; Veera Bramha Chari, P; Kezia, D
There is a growing worldwide market for l-glutaminase due to its relevant industrial applications. Salt-tolerant l-glutaminases play a vital role in enhancing the flavor of foods such as soya sauce and tofu. This chapter presents economically viable l-glutaminase production in solid-state fermentation (SSF) by Aspergillus flavus MTCC 9972 as a case study. Enzyme production was improved through a three-step optimization process. Initially, a mixture design (MD; augmented simplex lattice design) was employed to optimize the solid substrate mixture; a 59:41 mixture of wheat bran and Bengal gram husk gave higher amounts of l-glutaminase. Glucose and l-glutamine were screened as the best additional carbon and nitrogen sources for l-glutaminase production with the help of a Plackett-Burman design (PBD). l-Glutamine also acts as an inducer of l-glutaminase secretion from A. flavus MTCC 9972, in addition to serving as a nitrogen source. In the final optimization step, various environmental and nutritive parameters, such as pH, temperature, moisture content, inoculum concentration, and glucose and l-glutamine levels, were optimized using hybrid feed-forward neural networks (FFNNs) and a genetic algorithm (GA). Through the sequential optimization methods MD-PBD-FFNN-GA, l-glutaminase production in SSF was improved by 2.7-fold (453-1690 U/g). © 2016 Elsevier Inc. All rights reserved.
Privatization and subsidization in a leadership duopoly
NASA Astrophysics Data System (ADS)
Ferreira, Fernanda A.
2017-07-01
In this paper, we consider competition in both mixed and privatized markets in which the firms set prices sequentially. We study the effects of optimal production subsidies in both the mixed and the privatized duopoly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shusharina, N; Khan, F; Sharp, G
Purpose: To determine the dose level and timing of the boost in locally advanced lung cancer patients with confirmed tumor recurrence, by comparing the impact of different dose escalation strategies on improvement of the therapeutic ratio. Methods: We selected eighteen patients with advanced NSCLC and confirmed recurrence. For each patient, a base IMRT plan delivering 60 Gy to the PTV was created. We then compared three dose escalation strategies: a uniform escalation to the original PTV, and an escalation to a PET-defined target planned either sequentially or concurrently. The PET-defined targets were delineated as biologically weighted regions on a pre-treatment 18F-FDG PET. The maximal achievable dose, without violating the OAR constraints, was identified for each boosting method. The EUD for the target, spinal cord, combined lung, and esophagus was compared for each plan. Results: The average prescribed dose was 70.4±13.9 Gy for the uniform boost, 88.5±15.9 Gy for the sequential boost and 89.1±16.5 Gy for the concurrent boost. The size of the boost planning volume was 12.8% (range: 1.4-27.9%) of the PTV. The most prescription-limiting dose constraint was the V70 of the esophagus. The EUD within the target increased by 10.6 Gy for the uniform boost, by 31.4 Gy for the sequential boost and by 38.2 Gy for the concurrent boost. The EUD for OARs increased by the following amounts: spinal cord, 3.1 Gy for the uniform boost, 2.8 Gy for the sequential boost, 5.8 Gy for the concurrent boost; combined lung, 1.6 Gy for uniform, 1.1 Gy for sequential, 2.8 Gy for concurrent; esophagus, 4.2 Gy for uniform, 1.3 Gy for sequential, 5.6 Gy for concurrent. Conclusion: Dose escalation to a biologically weighted gross tumor volume defined on a pre-treatment 18F-FDG PET may improve the therapeutic ratio without breaching predefined OAR constraints. The sequential boost provides better sparing of OARs than the concurrent boost.
Sequential estimation and satellite data assimilation in meteorology and oceanography
NASA Technical Reports Server (NTRS)
Ghil, M.
1986-01-01
The central theme of this review article is the role that dynamics plays in estimating the state of the atmosphere and of the ocean from incomplete and noisy data. Objective analysis and inverse methods represent an attempt at relying mostly on the data and minimizing the role of dynamics in the estimation. Four-dimensional data assimilation tries to balance properly the roles of dynamical and observational information. Sequential estimation is presented as the proper framework for understanding this balance, and the Kalman filter as the ideal, optimal procedure for data assimilation. The optimal filter computes forecast error covariances of a given atmospheric or oceanic model exactly, and hence data assimilation should be closely connected with predictability studies. This connection is described, and consequences drawn for currently active areas of the atmospheric and oceanic sciences, namely, mesoscale meteorology, medium and long-range forecasting, and upper-ocean dynamics.
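For reference, one forecast/analysis cycle of the Kalman filter described here can be written in a few lines. The Python sketch below is the textbook linear-Gaussian update, where F, H, Q, and R are the model dynamics, the observation operator, and the model and observation noise covariances; it is a pedagogical sketch, not an operational assimilation code.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One forecast/analysis cycle of the Kalman filter: propagate the state
    estimate and its error covariance, then blend in the new observation z
    with the optimal gain. This is the sequential-estimation ideal the
    review describes for atmospheric and oceanic data assimilation."""
    # forecast
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # analysis (assimilation of observation z)
    S = H @ P_f @ H.T + R                     # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f      # analysis error covariance
    return x_a, P_a
```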
Optimal medication dosing from suboptimal clinical examples: a deep reinforcement learning approach.
Nemati, Shamim; Ghassemi, Mohammad M; Clifford, Gari D
2016-08-01
Misdosing medications with sensitive therapeutic windows, such as heparin, can place patients at unnecessary risk, increase length of hospital stay, and lead to wasted hospital resources. In this work, we present a clinician-in-the-loop sequential decision making framework, which provides an individualized dosing policy adapted to each patient's evolving clinical phenotype. We employed retrospective data from the publicly available MIMIC II intensive care unit database, and developed a deep reinforcement learning algorithm that learns an optimal heparin dosing policy from sample dosing trials and their associated outcomes in large electronic medical records. Using separate training and testing datasets, our model was observed to be effective in proposing heparin doses that resulted in better expected outcomes than the clinical guidelines. Our results demonstrate that a sequential modeling approach, learned from retrospective data, could potentially be used at the bedside to derive individualized patient dosing policies.
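To make the sequential decision framing concrete, here is a toy tabular Q-learning loop over discretized patient states and dose bins. It is a deliberately simplified stand-in for the paper's deep RL agent; the environment function and the discretization are hypothetical, with env_step assumed to replay retrospective dosing trajectories.

```python
import random

def q_learning_dosing(env_step, n_states, n_doses, episodes=5000,
                      alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning over discretized patient states and dose bins,
    a much-simplified stand-in for a deep RL dosing agent. env_step(s, a)
    must return (next_state, reward, done)."""
    Q = [[0.0] * n_doses for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection over dose bins
            a = (random.randrange(n_doses) if random.random() < eps
                 else max(range(n_doses), key=lambda d: Q[s][d]))
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    # greedy dosing policy: best dose bin for each patient state
    return [max(range(n_doses), key=lambda d: Q[s][d]) for s in range(n_states)]
```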
Optimal trajectories for aeroassisted orbital transfer
NASA Technical Reports Server (NTRS)
Miele, A.; Venkataraman, P.
1983-01-01
Consideration is given to classical and minimax problems involved in aeroassisted transfer from high earth orbit (HEO) to low earth orbit (LEO). The transfer is restricted to coplanar operation, with trajectory control effected by means of lift modulation. The performance of the maneuver is indexed to the energy expenditure or, alternatively, the time integral of the heating rate. First-order optimality conditions are defined for the classical approach, as are a sequential gradient-restoration algorithm and a combined gradient-restoration algorithm. Minimization techniques are presented for the aeroassisted transfer energy consumption and the time integral of the heating rate, as well as for minimization of the pressure. It is shown from the eigenvalues of the Jacobian matrix that the differential system is both stiff and unstable, implying that the sequential gradient-restoration algorithm in its present version is unsuitable. A new method, involving a multipoint approach to the two-point boundary value problem, is recommended.
Sequential vs. simultaneous photokilling by mitochondrial and lysosomal photodamage
NASA Astrophysics Data System (ADS)
Kessel, David
2017-02-01
We previously reported that a low level of lysosomal photodamage can markedly promote the subsequent efficacy of PDT directed at mitochondria. This involves release of Ca2+ from photodamaged lysosomes, cleavage of the autophagy-associated protein ATG5 after activation of calpain, and an interaction between the ATG5 fragment and mitochondria resulting in enhanced apoptosis. Inhibition of calpain activity abolished this effect. We examined permissible irradiation sequences. Lysosomal photodamage must occur first, with the 'enhancement' effect showing a short half-life (~15 min), presumably reflecting the survival of the ATG5 fragment. Simultaneous photodamage to both loci was found to be as effective as the sequential protocol. Since Photofrin can target both lysosomes and mitochondria for photodamage, this broad spectrum of photodamage may explain the efficacy of this photosensitizing agent in spite of an absorbance profile at a wavelength that is sub-optimal for tissue transparency.
Learning in Noise: Dynamic Decision-Making in a Variable Environment
Gureckis, Todd M.; Love, Bradley C.
2009-01-01
In engineering systems, noise is a curse, obscuring important signals and increasing the uncertainty associated with measurement. However, the negative effects of noise and uncertainty are not universal. In this paper, we examine how people learn sequential control strategies given different sources and amounts of feedback variability. In particular, we consider people’s behavior in a task where short- and long-term rewards are placed in conflict (i.e., the best option in the short-term is worst in the long-term). Consistent with a model based on reinforcement learning principles (Gureckis & Love, in press), we find that learners differentially weight information predictive of the current task state. In particular, when cues that signal state are noisy and uncertain, we find that participants’ ability to identify an optimal strategy is strongly impaired relative to equivalent amounts of uncertainty that obscure the rewards/valuations of those states. In other situations, we find that noise and uncertainty in reward signals may paradoxically improve performance by encouraging exploration. Our results demonstrate how experimentally-manipulated task variability can be used to test predictions about the mechanisms that learners engage in dynamic decision making tasks.
Song, Xinxin; Wu, Yanjie; Wu, Lin; Hu, Yufang; Li, Wenrou; Guo, Zhiyong; Su, Xiurong; Jiang, Xiaohua
2017-01-01
A Christmas-tree-derived immunosensor based on a gold label silver stain (GLSS) technique was fabricated for highly sensitive analysis of Vibrio parahaemolyticus (VP). In this strategy, a capture VP antibody (cAb) was immobilized on a solid substrate; the VPs were then sequentially tagged with a signal probe by incubating the assay with a detection VP antibody (dAb) conjugated to gold nanoparticle (AuNP)-labeled graphite-like carbon nitride (g-C3N4). Finally, the attached signal probe could generate a visible signal via silver metal deposition, with grey values acquired using a homebrew Matlab 6.0 routine. In addition, the overall design of the biosensor was established with abundant AuNPs and two-dimensional g-C3N4, affording a bulb-decorated Christmas-tree model. Moreover, under the optimized conditions, the detection limit of the proposed biosensor is as low as 10^2 CFU (colony-forming units) mL^-1, an improvement of two orders of magnitude compared with the traditional immunogold method. Additionally, the developed visible immunosensor was also successfully applied to the analysis of complicated samples.
Schulze, Christin; Newell, Ben R
2016-07-01
Cognitive load has previously been found to have a positive effect on strategy selection in repeated risky choice. Specifically, whereas inferior probability matching often prevails under single-task conditions, optimal probability maximizing sometimes dominates when a concurrent task competes for cognitive resources. We examined the extent to which this seemingly beneficial effect of increased task demands hinges on the effort required to implement each of the choice strategies. Probability maximizing typically involves a simple repeated response to a single option, whereas probability matching requires choice proportions to be tracked carefully throughout a sequential choice task. Here, we flipped this pattern by introducing a manipulation that made the implementation of maximizing more taxing and, at the same time, allowed decision makers to probability match via a simple repeated response to a single option. The results from two experiments showed that increasing the implementation effort of probability maximizing resulted in decreased adoption rates of this strategy. This was the case both when decision makers simultaneously learned about the outcome probabilities and responded to a dual task (Exp. 1) and when these two aspects were procedurally separated in two distinct stages (Exp. 2). We conclude that the effort involved in implementing a choice strategy is a key factor in shaping repeated choice under uncertainty. Moreover, highlighting the importance of implementation effort casts new light on the sometimes surprising and inconsistent effects of cognitive load that have previously been reported in the literature.
NASA Astrophysics Data System (ADS)
Elliott, John R.; Baxter, Stephen
2012-09-01
D.I.S.C: Decipherment Impact of a Signal's Content. The authors present a numerical method to characterise the significance of the receipt of a complex and potentially decipherable signal from extraterrestrial intelligence (ETI). The purpose of the scale is to facilitate the public communication of work on any such claimed signal, as such work proceeds, and to assist in its discussion and interpretation. Building on the rationale of a "position" paper, this paper examines the proposed DISC quotient and develops the algorithmic steps and constituent measures that form this post-detection strategy for information dissemination, based on prior work on message detection and decipherment. As argued, we require a robust and incremental strategy to disseminate timely, accurate and meaningful information to the scientific community and the general public, in the event we receive an "alien" signal that displays decipherable information. This post-detection strategy is to serve as a stepwise algorithm for a logical approach to information extraction and a vehicle for sequential information dissemination, to manage societal impact. The "DISC Quotient", which is based on signal-analysis processing stages, includes factors based on the signal's data quantity, structure, affinity to known human languages, and likely decipherment times. Comparisons with human and other phenomena are included as a guide to assessing likely societal impact. It is submitted that the development, refinement and implementation of DISC as an integral strategy, during the complex processes involved in post-detection and decipherment, is essential if we wish to minimize disruption and optimize dissemination.
NASA Astrophysics Data System (ADS)
Borhan, Hoseinali
Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility higher than that of their individual components. The power or energy management control, the "brain" of these "hybrid" systems, adaptively determines, based on the power demand, the power split between multiple subsystems and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (aka Model Predictive Control) approach can be a natural and systematic framework for formulating this type of power management controls. More importantly, the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real-time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focus is on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "if-then-else" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy and there exists a gap between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solution of this optimal control problem in real-time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; this computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, mixed-integer switching modes of their operation, and time-varying and nonlinear hard constraints that system variables should satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic and step-by-step improvements in fuel economy while maintaining the algorithmic computational requirements in a real-time implementable framework. More specifically, a linear time-varying model predictive control approach is employed first, which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next, the objective function is further refined and broken into a short- and a long-horizon segment, the latter approximated as a function of the state using the connection between the Pontryagin minimum principle and Hamilton-Jacobi-Bellman equations. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver and the fuel economy is further improved.
Typical simplifying academic assumptions are minimal throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains. To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing techniques. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demand from an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real-time in the framework of Model Predictive Control (MPC).
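A minimal receding-horizon sketch of the power-split idea described above, assuming a toy battery model soc_{k+1} = soc_k - dt*P_batt/E_cap and a convex surrogate fuel cost (both hypothetical, far simpler than the dissertation's high-fidelity powertrain models): optimize over a short horizon, apply only the first move, then re-solve at the next step.

    import numpy as np
    from scipy.optimize import minimize

    def mpc_power_split(soc0, demand, horizon=10, dt=1.0, e_cap=3600.0):
        def cost(p_batt):
            p_eng = demand[:horizon] - p_batt      # engine covers the rest
            fuel = np.sum(0.02 * p_eng**2 + 0.5 * np.abs(p_eng))
            soc = soc0 - dt * np.cumsum(p_batt) / e_cap
            barrier = 1e3 * np.sum(np.clip(0.2 - soc, 0, None)**2 +
                                   np.clip(soc - 0.9, 0, None)**2)
            return fuel + barrier                  # soft SOC constraints
        res = minimize(cost, np.zeros(horizon), method="L-BFGS-B",
                       bounds=[(-20.0, 20.0)] * horizon)
        return res.x[0]                            # apply the first move only

    p_first = mpc_power_split(0.6, np.full(20, 8.0))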
Advances in stable isotope assisted labeling strategies with information science.
Kigawa, Takanori
2017-08-15
Stable-isotope (SI) labeling of proteins is an essential technique to investigate their structures, interactions or dynamics by nuclear magnetic resonance (NMR) spectroscopy. The assignment of the main-chain signals, which is the fundamental first step in these analyses, is usually achieved by a sequential assignment method based on triple resonance experiments. Independently of the triple resonance experiment-based sequential assignment, amino acid-selective SI labeling is beneficial for discriminating the amino acid type of each signal; therefore, it is especially useful for the signal assignment of difficult targets. Various combinatorial selective labeling schemes have been developed as more sophisticated labeling strategies. In these strategies, amino acids are represented by combinations of SI labeled samples, rather than simply assigning one amino acid to one SI labeled sample as in the case of conventional amino acid-selective labeling. These strategies have proven to be useful for NMR analyses of difficult proteins, such as those in large complex systems, in living cells, attached or integrated into membranes, or with poor solubility. In this review, recent advances in stable isotope assisted labeling strategies will be discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
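A small sketch of the combinatorial idea above, under the simplifying assumption that each amino acid type is encoded by a binary labeled/unlabeled pattern across samples, so k samples can in principle distinguish up to 2**k - 1 types:

    from itertools import product

    # Assign each amino acid type a unique on/off labeling pattern across
    # n_samples SI-labeled samples (the all-zeros pattern is unusable).
    def assign_codes(amino_acids, n_samples):
        codes = [c for c in product([0, 1], repeat=n_samples) if any(c)]
        if len(amino_acids) > len(codes):
            raise ValueError("need more samples to encode all types")
        return dict(zip(amino_acids, codes))

    # e.g. three samples suffice for up to seven types (2**3 - 1 = 7):
    scheme = assign_codes(["Ala", "Gly", "Leu", "Ser", "Val"], 3)

Real combinatorial schemes are more subtle (labeling patterns must respect isotope scrambling and cost), so this conveys only the coding intuition.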
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff" elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues on the external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
ERIC Educational Resources Information Center
Mohammadipour, Mohammad; Rashid, Sabariah Md; Rafik-Galea, Shameem; Thai, Yap Ngee
2018-01-01
Emotions are an indispensable part of second language learning. The aim of this study is to determine the relationship between the use of language learning strategies and positive emotions. The present study adopted a sequential mixed methods design. The participants were 300 Malaysian ESL undergraduates selected through stratified random sampling…
Effects of an Elementary Strategy on Operations of Exclusion.
ERIC Educational Resources Information Center
Lawton, Joseph T.
Effects of an advance organizer lesson (containing high-order science concepts relating to the law of capillary attraction, and an elementary problem-solving strategy for determining causal relations) were evaluated for a sample of 80 urban 6- and 10-year-old children. Significant sequential transfer effects were established from the lesson.…
Effective Teaching Strategies for Gifted/Learning-Disabled Students with Spatial Strengths
ERIC Educational Resources Information Center
Mann, Rebecca L.
2006-01-01
This study sought to determine effective teaching strategies for use with high-ability students who have spatial strengths and sequential weaknesses. Gifted students with spatial strengths and weak verbal skills often struggle in the traditional classroom. Their learning style enables them to grasp complex systems and excel at higher levels of…
Thinking Style, Browsing Primes and Hypermedia Navigation
ERIC Educational Resources Information Center
Fiorina, Lorenzo; Antonietti, Alessandro; Colombo, Barbara; Bartolomeo, Annella
2007-01-01
There is a common assumption that hypermedia navigation is influenced by a learner's style of thinking, so people who are inclined to apply sequential and analytical strategies (left-thinkers) are thought to browse hypermedia in a linear way, whereas those who prefer holistic and intuitive strategies (right-thinkers) tend towards non-linear paths.…
Salter-Venzon, Dawna; Kazlova, Valentina; Izzy Ford, Samantha; Intra, Janjira; Klosner, Allison E; Gellenbeck, Kevin W
2017-05-01
Despite the notable health benefits of carotenoids, the majority of human diets worldwide are repeatedly shown to be inadequate in intake of carotenoid-rich fruits and vegetables relative to current health recommendations. To address this deficit, strategies designed to increase dietary intakes and subsequent plasma levels of carotenoids are warranted. When mixed carotenoids are delivered into the intestinal tract simultaneously, competition occurs for micelle formation and absorption, affecting carotenoid bioavailability. Previously, we tested the in vitro viability of a carotenoid mix designed to deliver individual carotenoids sequentially spaced from one another over the 6 hr transit time of the human upper gastrointestinal system. We hypothesized that temporally and spatially separating the individual carotenoids would reduce competition for micelle formation, improve uptake, and maximize efficacy. Here, we test this hypothesis in a double-blind, repeated-measure, cross-over human study with 12 subjects by comparing the change of plasma carotenoid levels for 8 hr after oral doses of a sequentially spaced carotenoid mix to a matched mix without sequential spacing. We find that the carotenoid change from baseline, measured as area under the curve, is increased following consumption of the sequentially spaced mix compared to concomitant carotenoid delivery. These results demonstrate reduced interaction and regulation between the sequentially spaced carotenoids, suggesting improved bioavailability from a novel sequentially spaced carotenoid mix.
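A minimal sketch of the summary statistic used above (change from baseline summarized as area under the curve, by trapezoidal integration); the times and plasma values below are hypothetical examples, not study data.

    import numpy as np

    def auc_change_from_baseline(times_h, conc, baseline):
        delta = np.asarray(conc, float) - baseline   # change from baseline
        return np.trapz(delta, times_h)              # trapezoidal AUC

    t = [0, 1, 2, 4, 6, 8]                           # hours post-dose
    auc_spaced = auc_change_from_baseline(
        t, [0.50, 0.58, 0.66, 0.71, 0.68, 0.63], 0.50)
    auc_mixed = auc_change_from_baseline(
        t, [0.50, 0.55, 0.60, 0.63, 0.60, 0.57], 0.50)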
Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li
2015-10-01
We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrated the feasibility and reliability of integrating the OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
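As a sketch of how the Michaelis constant can be recovered from monitored rates, the snippet below fits the Michaelis-Menten model to hypothetical initial-rate data with scipy's curve_fit; it is not the study's actual analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def mm_rate(s, vmax, km):
        return vmax * s / (km + s)                   # Michaelis-Menten model

    s = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5])  # substrate (mM)
    v = np.array([0.9, 1.6, 2.9, 4.0, 4.9, 5.8])     # initial rate (a.u.)
    (vmax, km), _ = curve_fit(mm_rate, s, v, p0=(6.0, 0.1))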
León-López, Liliana; Dávila-Ortiz, Gloria; Jiménez-Martínez, Cristian; Hernández-Sánchez, Humberto
2013-01-01
Jatropha curcas seed cake is a protein-rich byproduct of oil extraction which could be used to produce protein isolates. The purpose of this study was the optimization of the protein isolation process from the seed cake of an edible provenance of J. curcas by an alkaline extraction followed by isoelectric precipitation method via a sequentially integrated optimization approach. The influence of four different factors (solubilization pH, extraction temperature, NaCl addition, and precipitation pH) on the protein and antinutritional compounds content of the isolate was evaluated. The estimated optimal conditions were an extraction temperature of 20°C, a precipitation pH of 4, and an amount of NaCl in the extraction solution of 0.6 M for a predicted protein content of 93.3%. Under these conditions, it was possible to obtain experimentally a protein isolate with 93.21% of proteins, 316.5 mg 100 g(-1) of total phenolics, 2891.84 mg 100 g(-1) of phytates and 168 mg 100 g(-1) of saponins. The protein content of this isolate was higher than the content reported by other authors.
Laird, Robert A
2018-09-07
Cooperation is a central topic in evolutionary biology because (a) it is difficult to reconcile why individuals would act in a way that benefits others if such action is costly to themselves, and (b) it underpins many of the 'major transitions of evolution', making it essential for explaining the origins of successively higher levels of biological organization. Within evolutionary game theory, the Prisoner's Dilemma and Snowdrift games are the main theoretical constructs used to study the evolution of cooperation in dyadic interactions. In single-shot versions of these games, wherein individuals play each other only once, players typically act simultaneously rather than sequentially. Allowing one player to respond to the actions of its co-player-in the absence of any possibility of the responder being rewarded for cooperation or punished for defection, as in simultaneous or sequential iterated games-may seem to invite more incentive for exploitation and retaliation in single-shot games, compared to when interactions occur simultaneously, thereby reducing the likelihood that cooperative strategies can thrive. To the contrary, I use lattice-based, evolutionary-dynamical simulation models of single-shot games to demonstrate that under many conditions, sequential interactions have the potential to enhance unilaterally or mutually cooperative outcomes and increase the average payoff of populations, relative to simultaneous interactions-benefits that are especially prevalent in a spatially explicit context. This surprising result is attributable to the presence of conditional strategies that emerge in sequential games that can't occur in the corresponding simultaneous versions. Copyright © 2018 Elsevier Ltd. All rights reserved.
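A toy sketch in the spirit of the lattice simulations above: single-shot Snowdrift interactions on a ring, comparing simultaneous play with sequential play in which a conditional responder reacts to the mover's observed action. The payoffs, ring topology, and conditional rule are illustrative choices, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(1)
    b, c = 4.0, 2.0                                 # benefit, shared cost

    def pay(me, other):                             # Snowdrift payoff to `me`
        if me and other: return b - c / 2
        if me:           return b - c
        if other:        return b
        return 0.0

    def mean_payoff(strats, sequential):
        # strats: 0 = always defect, 1 = always cooperate, 2 = conditional
        total = np.zeros(len(strats))
        for i in range(len(strats)):
            j = (i + 1) % len(strats)               # right-hand neighbour
            mi = 1 if strats[i] == 1 else 0         # mover acts first
            if sequential and strats[j] == 2:
                mj = 1 - mi                         # respond to observed move
            else:
                mj = 1 if strats[j] == 1 else 0     # conditional defaults to D
            total[i] += pay(mi, mj)
            total[j] += pay(mj, mi)
        return total.mean()

    strats = rng.integers(0, 3, 60)
    gain = mean_payoff(strats, True) - mean_payoff(strats, False)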
Wei, Meng; Chen, Jiajun; Wang, Xingwei
2016-08-01
Sequential soil washing was tested in triplicate using a typical chelating agent (Na2EDTA), an organic acid (oxalic acid) and an inorganic weak acid (phosphoric acid) to remediate soil contaminated by heavy metals close to a mining area. The aim of the testing was to improve removal efficiency and reduce the mobility of heavy metals. The sequential extraction procedure and further speciation analysis of heavy metals demonstrated that the primary components of arsenic and cadmium in the soil were residual As (O-As) and the exchangeable fraction, which accounted for 60% and 70% of total arsenic and cadmium, respectively. It was determined that soil washing agents and their washing order were critical to the removal efficiencies of metal fractions and to metal bioavailability and potential mobility, due to different levels of dissolution of residual fractions and inter-transformation of metal fractions. The optimal soil washing option for arsenic and cadmium was identified as the phosphoric-oxalic acid-Na2EDTA sequence (POE), based on the high removal efficiency (41.9% for arsenic and 89.6% for cadmium) and the minimal harmful effects on the mobility and bioavailability of the remaining heavy metals. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel
2017-01-01
Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
OPTIMIZING NIST SEQUENTIAL EXTRACTION METHOD FOR LAKE SEDIMENT (SRM4354)
Traditionally, measurements of radionuclides in the environment have focused on the determination of total concentration. It is clear, however, that total concentration does not describe the bioavailability of contaminating radionuclides. The environmental behavior depends on spe...
Sequential state discrimination and requirement of quantum dissonance
NASA Astrophysics Data System (ADS)
Pang, Chao-Qian; Zhang, Fu-Lin; Xu, Li-Fang; Liang, Mai-Lin; Chen, Jing-Ling
2013-11-01
We study the procedure for sequential unambiguous state discrimination. A qubit is prepared in one of two possible states and measured by two observers Bob and Charlie sequentially. A necessary condition for the state to be unambiguously discriminated by Charlie is the absence of entanglement between the principal qubit, prepared by Alice, and Bob's auxiliary system. In general, the procedure for both Bob and Charlie to conclusively distinguish between the two nonorthogonal states relies on the availability of quantum discord, which is precisely the quantum dissonance when entanglement is absent. In Bob's measurement, the left discord is positively correlated with the information extracted by Bob, and the right discord enhances the information left to Charlie. When their product achieves its maximum, the probability for both Bob and Charlie to identify the state achieves its optimal value.
IMPROVED ALGORITHMS FOR RADAR-BASED RECONSTRUCTION OF ASTEROID SHAPES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, Adam H.; Margot, Jean-Luc
We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.
Energy optimization in mobile sensor networks
NASA Astrophysics Data System (ADS)
Yu, Shengwei
Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially when mobility (i.e., locomotion control), routing (i.e., communications) and sensing are unique characteristics of mobile robots for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from stations to stations in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for mobility cost. For the second problem, the problem is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel procedure of modified sequential convex approximation that has fast convergence speed. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve the optimal moving trajectories for robotic nodes and optimal network links, which are not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications.
We investigate the joint design of mobility, data routing, and encoding power to help improving the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
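A minimal sketch of sequential convex approximation of the kind invoked above, shown on a one-dimensional difference-of-convex toy problem (minimize x**4 - 2*x**2 by linearizing the concave part -2*x**2 at each iterate); the thesis's network-lifetime formulations are far richer.

    from scipy.optimize import minimize

    # Each iteration minimizes a convex surrogate: keep g(x) = x**4 and
    # linearize h(x) = 2*x**2 at the current iterate x_k (f = g - h).
    def sca(x0, iters=30):
        x = x0
        for _ in range(iters):
            x_k = x
            surrogate = lambda z: z[0]**4 - (2*x_k**2 + 4*x_k*(z[0] - x_k))
            x = minimize(surrogate, [x_k]).x[0]
        return x

    x_star = sca(3.0)   # iterates map x -> x**(1/3), converging to x = 1

Each surrogate upper-bounds the true objective and touches it at x_k, so the iterates monotonically decrease f and settle at a stationary point, which is the usual convergence argument for this family of methods.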
Coordinated control of a space manipulator tested by means of an air bearing free floating platform
NASA Astrophysics Data System (ADS)
Sabatini, Marco; Gasbarri, Paolo; Palmerini, Giovanni B.
2017-10-01
A typical approach studied for the guidance of next generation space manipulators (satellites with robotic arms aimed at autonomously performing on-orbit operations) is to decouple the platform and the arm maneuvers, which are supposed to happen sequentially, mainly because of safety concerns. This control is implemented in this work as a two-stage Sequential control, where a first stage calls for the motion of the platform and the second stage calls for the motion of the manipulator. A second, novel strategy is proposed, considering the platform and the manipulator as a single multibody system subject to a Coordinated control, with the goal of approaching and grasping a target spacecraft. To this end, a region that the end effector can reach by means of the arm motion with limited reactions on the platform is identified (the so-called Reaction Null workspace). The Coordinated control algorithm performs a gain modulation (aimed at a balanced contribution of the platform and arm motion) as a function of the target position within this Reaction Null map. The result is a coordinated maneuver in which the end effector moves thanks to the platform motion, predominant in a first phase, and to the arm motion, predominant when the Reaction Null workspace is reached. In this way the collision avoidance and attitude over-control issues are automatically considered, without the need of splitting the mission into independent (and overall sub-optimal) segments. The guidance and control algorithms are first simulated by means of a multibody code, and successively tested in the lab by means of a free floating platform equipped with a robotic arm, moving frictionless on a flat granite table thanks to air bearings and on-off thrusters; the results are discussed in terms of optimality of the fuel consumption and final accuracy.
Preliminary Analysis of Optimal Round Trip Lunar Missions
NASA Astrophysics Data System (ADS)
Gagg Filho, L. A.; da Silva Fernandes, S.
2015-10-01
A study of optimal bi-impulsive trajectories of round trip lunar missions is presented in this paper. The optimization criterion is the total velocity increment. The dynamical model utilized to describe the motion of the space vehicle is a full lunar patched-conic approximation, which embraces the lunar patched-conic of the outgoing trip and the lunar patched-conic of the return mission. Each one of these parts is considered separately to solve an optimization problem of two degrees of freedom. The Sequential Gradient Restoration Algorithm (SGRA) is employed to achieve the optimal solutions, which show good agreement with those provided in the literature and prove to be consistent with the image trajectories theorem.
The effect of code expanding optimizations on instruction cache design
NASA Technical Reports Server (NTRS)
Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.
1991-01-01
It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haman, R.L.; Kerry, T.G.; Jarc, C.A.
1996-12-31
A technology provided by Ultramax Corporation and EPRI, based on sequential process optimization (SPO), is being used as a cost-effective tool to gain improvements prior to decisions for capital-intensive solutions. This empirical method of optimization, called the ULTRAMAX® Method, can determine the best boiler capabilities and help delay, or even avoid, expensive retrofits or repowering. SPO can serve as a least-cost way to attain the right degree of compliance with current and future phases of CAAA. Tuning ensures a staged strategy to stay ahead of emissions regulations, but not so far ahead as to cause regret for taking actions that ultimately are not mandated or warranted. One large utility investigating SPO as a tool to lower NOx emissions and to optimize boiler performance is Detroit Edison. The company has applied SPO to tune two coal-fired units at its River Rouge Power Plant to evaluate the technology for possible system-wide usage. Following the successful demonstration in reducing NOx from these units, SPO is being considered for use in other Detroit Edison fossil-fired plants. Tuning first will be used as a least-cost option to drive NOx to its lowest level with operating adjustment. In addition, optimization shows the true capability of the units and the margins available when the Phase 2 rules become effective in 2000. This paper includes a case study of the second tuning process and discusses the opportunities the technology affords.
Wasser, Tobias; Pollard, Jessica; Fisk, Deborah; Srihari, Vinod
2017-10-01
In first-episode psychosis there is a heightened risk of aggression and subsequent criminal justice involvement. This column reviews the evidence pointing to these heightened risks and highlights opportunities, using a sequential intercept model, for collaboration between mental health services and existing diversionary programs, particularly for patients whose behavior has already brought them to the attention of the criminal justice system. Coordinating efforts in these areas across criminal justice and clinical spheres can decrease the caseload burden on the criminal justice system and optimize clinical and legal outcomes for this population.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
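A hedged sketch of the nested idea above: an outer optimizer adjusts a design variable to trade off the sensitivity of the inner optimum to an uncertain parameter against a nominal design target. The structural model is replaced by an illustrative quadratic, and the optimum sensitivity derivative is approximated by finite differences rather than Lagrange multipliers.

    from scipy.optimize import minimize

    def inner_optimum(d, p):
        # Inner problem: best response x for design d and parameter p.
        obj = lambda x: (x[0] - p * d)**2 + 0.1 * x[0]**2
        return minimize(obj, [0.0]).fun

    def sensitivity(d, p0=1.0, eps=1e-4):
        # |d(inner optimum)/dp| via central differences.
        return abs(inner_optimum(d, p0 + eps)
                   - inner_optimum(d, p0 - eps)) / (2 * eps)

    # Outer problem: balance insensitivity against a nominal target d = 2.
    best = minimize(lambda d: sensitivity(d[0]) + (d[0] - 2.0)**2, [1.0],
                    method="Nelder-Mead")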
ERIC Educational Resources Information Center
Brenner, Aimee M.; Brill, Jennifer M.
2016-01-01
The purpose of this study was to identify instructional technology integration strategies and practices in preservice teacher education that contribute to the transfer of technology integration knowledge and skills to the instructional practices of early career teachers. This study used a two-phase, sequential explanatory strategy. Data were…
ERIC Educational Resources Information Center
Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.
2010-01-01
Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…
NASA Astrophysics Data System (ADS)
Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.
2017-05-01
We employ an adaptive measurement system, based on a sequential hypotheses testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
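A minimal sketch of the sequential-hypothesis-testing machinery underlying the adaptive design above, using Wald's sequential probability ratio test with illustrative Gaussian measurement models standing in for the X-ray projection models.

    import numpy as np

    # Accumulate log-likelihood ratios measurement by measurement and stop
    # at the Wald thresholds set by target error rates alpha and beta.
    def sprt(measurements, mu0=0.0, mu1=1.0, sigma=1.0,
             alpha=0.01, beta=0.01):
        upper = np.log((1 - beta) / alpha)        # decide H1 (threat)
        lower = np.log(beta / (1 - alpha))        # decide H0 (benign)
        llr = 0.0
        for k, y in enumerate(measurements, 1):
            llr += ((y - mu0)**2 - (y - mu1)**2) / (2 * sigma**2)
            if llr >= upper: return "threat", k
            if llr <= lower: return "benign", k
        return "undecided", len(measurements)

In the adaptive system, the analogue of choosing the next measurement is selecting the source/view expected to move the likelihood ratio fastest; a fixed scan order over all views corresponds to the static baseline.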
Analysis of filter tuning techniques for sequential orbit determination
NASA Technical Reports Server (NTRS)
Lee, T.; Yee, C.; Oza, D.
1995-01-01
This paper examines filter tuning techniques for a sequential orbit determination (OD) covariance analysis. Recently, there has been a renewed interest in sequential OD, primarily due to the successful flight qualification of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) using Doppler data extracted onboard the Extreme Ultraviolet Explorer (EUVE) spacecraft. TONS computes highly accurate orbit solutions onboard the spacecraft in realtime using a sequential filter. As a result of the successful TONS-EUVE flight qualification experiment, the Earth Observing System (EOS) AM-1 Project has selected TONS as the prime navigation system. In addition, sequential OD methods can be used successfully for ground OD. Whether data are processed onboard or on the ground, a sequential OD procedure is generally favored over a batch technique when a realtime automated OD system is desired. Recently, OD covariance analyses were performed for the TONS-EUVE and TONS-EOS missions using the sequential processing options of the Orbit Determination Error Analysis System (ODEAS). ODEAS is the primary covariance analysis system used by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The results of these analyses revealed a high sensitivity of the OD solutions to the state process noise filter tuning parameters. The covariance analysis results show that the state estimate error contributions from measurement-related error sources, especially those due to the random noise and satellite-to-satellite ionospheric refraction correction errors, increase rapidly as the state process noise increases. These results prompted an in-depth investigation of the role of the filter tuning parameters in sequential OD covariance analysis. This paper analyzes how the spacecraft state estimate errors due to dynamic and measurement-related error sources are affected by the process noise level used. This information is then used to establish guidelines for determining optimal filter tuning parameters in a given sequential OD scenario for both covariance analysis and actual OD. Comparisons are also made with corresponding definitive OD results available from the TONS-EUVE analysis.
Lin, Carol Y; Li, Ling
2016-11-07
HPV DNA diagnostic tests for epidemiology monitoring (research purpose) or cervical cancer screening (clinical purpose) have often been considered separately. Women with positive Linear Array (LA) polymerase chain reaction (PCR) research test results typically are neither informed nor referred for colposcopy. Recently, a sequential testing by using Hybrid Capture 2 (HC2) HPV clinical test as a triage before genotype by LA has been adopted for monitoring HPV infections. Also, HC2 has been reported as a more feasible screening approach for cervical cancer in low-resource countries. Thus, knowing the performance of testing strategies incorporating HPV clinical test (i.e., HC2-only or using HC2 as a triage before genotype by LA) compared with LA-only testing in measuring HPV prevalence will be informative for public health practice. We conducted a Monte Carlo simulation study. Data were generated using mathematical algorithms. We designated the reported HPV infection prevalence in the U.S. and Latin America as the "true" underlying type-specific HPV prevalence. Analytical sensitivity of HC2 for detecting 14 high-risk (oncogenic) types was considered to be less than LA. Estimated-to-true prevalence ratios and percentage reductions were calculated. When the "true" HPV prevalence was designated as the reported prevalence in the U.S., with LA genotyping sensitivity and specificity of (0.95, 0.95), estimated-to-true prevalence ratios of 14 high-risk types were 2.132, 1.056, 0.958 for LA-only, HC2-only, and sequential testing, respectively. Estimated-to-true prevalence ratios of two vaccine-associated high-risk types were 2.359 and 1.063 for LA-only and sequential testing, respectively. When designated type-specific prevalence of HPV16 and 18 were reduced by 50 %, using either LA-only or sequential testing, prevalence estimates were reduced by 18 %. Estimated-to-true HPV infection prevalence ratios using LA-only testing strategy are generally higher than using HC2-only or using HC2 as a triage before genotype by LA. HPV clinical testing can be incorporated to monitor HPV prevalence or vaccine effectiveness. Caution is needed when comparing apparent prevalence from different testing strategies.
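A simplified Monte Carlo sketch in the spirit of the simulation above: draw true infection status, apply a test's sensitivity and specificity, optionally triage before genotyping, and form the estimated-to-true prevalence ratio. All parameter values are hypothetical, and this scalar toy ignores the type-specific structure of the study.

    import numpy as np

    rng = np.random.default_rng(7)

    def estimated_to_true(prev, sens, spec, n=100_000, triage=None):
        truth = rng.random(n) < prev
        pos = np.where(truth, rng.random(n) < sens, rng.random(n) > spec)
        if triage is not None:                    # sequential: triage first
            t_sens, t_spec = triage
            pos &= np.where(truth, rng.random(n) < t_sens,
                            rng.random(n) > t_spec)
        return pos.mean() / prev

    ratio_single = estimated_to_true(0.10, sens=0.95, spec=0.95)
    ratio_sequential = estimated_to_true(0.10, sens=0.95, spec=0.95,
                                         triage=(0.90, 0.95))

The sequential ratio sits closer to 1 because the triage step filters out most false positives before genotyping, which is the qualitative effect the study reports.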
Sequential programmable self-assembly: Role of cooperative interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonathan D. Halverson; Tkachenko, Alexei V.
2016-03-04
Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenient platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.
Luo, Jianquan; Meyer, Anne S; Mateiu, R V; Pinelo, Manuel
2015-05-25
Facile co-immobilization of enzymes is highly desirable for bioconversion methods involving multi-enzymatic cascade reactions. Here we show for the first time that three enzymes can be immobilized in flat-sheet polymeric membranes simultaneously or separately by simple pressure-driven filtration (i.e. by directing membrane fouling formation), without any addition of organic solvent. Such co-immobilization and sequential immobilization systems were examined for the production of methanol from CO2 with formate dehydrogenase (FDH), formaldehyde dehydrogenase (FaldDH) and alcohol dehydrogenase (ADH). Enzyme activity was fully retained by this non-covalent immobilization strategy. The two immobilization systems had similar catalytic efficiencies because the second reaction (formic acid→formaldehyde) catalyzed by FaldDH was found to be the cascade bottleneck (a threshold substrate concentration was required). Moreover, the trade-off between the mitigation of product inhibition and low substrate concentration for the adjacent enzymes probably made the co-immobilization meaningless. Thus, sequential immobilization could be used for multi-enzymatic cascade reactions, as it allowed the operational conditions for each single step to be optimized, not only during the enzyme immobilization but also during the reaction process, and the pressure-driven mass transfer (flow-through mode) could overcome the diffusion resistance between enzymes. This study not only offers a green and facile immobilization method for multi-enzymatic cascade systems, but also reveals the reaction bottleneck and provides possible solutions for the bioconversion of CO2 to methanol. Copyright © 2015 Elsevier B.V. All rights reserved.
Hampson, Lisa V; Fisch, Roland; Van, Linh M; Jaki, Thomas
2017-02-10
Extrapolating from information available on one patient group to support conclusions about another is common in clinical research. For example, the findings of clinical trials, often conducted in highly selective patient cohorts, are routinely extrapolated to wider populations by policy makers. Meanwhile, the results of adult trials may be used to support conclusions about the effects of a medicine in children. For example, if the effective concentration of a drug can be assumed to be similar in adults and children, an appropriate paediatric dosing rule may be found by 'bridging', that is, by matching the adult effective concentration. However, this strategy may result in children receiving an ineffective or hazardous dose if, in fact, effective concentrations differ between adults and children. When there is uncertainty about the equality of effective concentrations, some pharmacokinetic-pharmacodynamic data may be needed in children to verify that differences are small. In this paper, we derive optimal group sequential tests that can be used to verify this assumption efficiently. Asymmetric inner wedge tests are constructed that permit early stopping to accept or reject an assumption of similar effective drug concentrations in adults and children. Asymmetry arises because the consequences of under- and over-dosing may differ. We show how confidence intervals can be obtained on termination of these tests and illustrate the small sample operating characteristics of designs using simulation. Copyright © 2016 John Wiley & Sons, Ltd.
The Effects of Evidence Bounds on Decision-Making: Theoretical and Empirical Developments
Zhang, Jiaxiang
2012-01-01
Converging findings from behavioral, neurophysiological, and neuroimaging studies suggest an integration-to-boundary mechanism governing decision formation and choice selection. This mechanism is supported by sequential sampling models of choice decisions, which can implement statistically optimal decision strategies for selecting between multiple alternative options on the basis of sensory evidence. This review focuses on recent developments in understanding the evidence boundary, an important component of decision-making raised by experimental findings and models. The article starts by reviewing the neurobiology of perceptual decisions and several influential sequential sampling models, in particular the drift-diffusion model, the Ornstein–Uhlenbeck model and the leaky-competing-accumulator model. In the second part, the article examines how the boundary may affect a model’s dynamics and performance and to what extent it may improve a model’s fits to experimental data. In the third part, the article examines recent findings that support the presence and site of boundaries in the brain. The article considers two questions: (1) whether the boundary is a spontaneous property of neural integrators, or is controlled by dedicated neural circuits; (2) if the boundary is variable, what could be the driving factors behind boundary changes? The review brings together studies using different experimental methods in seeking answers to these questions, highlights psychological and physiological factors that may be associated with the boundary and its changes, and further considers the evidence boundary as a generic mechanism to guide complex behavior.
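A minimal simulation sketch of the integration-to-boundary mechanism reviewed above, as a single drift-diffusion trial; the drift, bound, noise, and time step are illustrative values.

    import numpy as np

    rng = np.random.default_rng(3)

    # Noisy evidence integrates until it hits +bound or -bound (or times out).
    def ddm_trial(drift=0.3, bound=1.0, sigma=1.0, dt=0.001, t_max=5.0):
        x, t = 0.0, 0.0
        while abs(x) < bound and t < t_max:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return x >= bound, t                      # choice and decision time

    choices, times = zip(*(ddm_trial() for _ in range(1000)))
    accuracy = np.mean(choices)   # raising `bound` trades speed for accuracy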
Sequential Auctions with Partially Substitutable Goods
NASA Astrophysics Data System (ADS)
Vetsikas, Ioannis A.; Jennings, Nicholas R.
In this paper, we examine a setting in which a number of partially substitutable goods are sold in sequential single-unit auctions. Each bidder needs to buy exactly one of these goods. In previous work, this setting has been simplified by assuming that bidders do not know their valuations for all items a priori, but rather are informed of their true valuation for each item right before the corresponding auction takes place. This assumption simplifies the strategies of bidders, as the expected revenue from future auctions is the same for all bidders due to the complete lack of private information. In our analysis, we do not make this assumption. This complicates the computation of the equilibrium strategies significantly. We examine this setting for both first- and second-price auction variants, initially when the closing prices are not announced, for which we prove that sequential first- and second-price auctions are revenue equivalent. Then we assume that the prices are announced; because of the asymmetry in the announced prices between the two auction variants, revenue equivalence does not hold in this case. We conclude by giving some initial results for the case when free disposal is allowed, and therefore a bidder can purchase more than one item.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms, among them the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS, which casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach, which is then compared with the existing penalty functions.
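As a minimal sketch of the static-penalty idea described above (not of COMETBOARDS itself), the example below converts a toy constrained minimization into an unconstrained one via a quadratic penalty and solves it with a bare-bones GA. The toy objective, the constraint, the penalty weight r, and all GA settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):            # minimize f(x) = x0^2 + x1^2
    return x[0]**2 + x[1]**2

def constraint_violation(x): # feasible iff g(x) = x0 + x1 - 1 >= 0
    return max(0.0, -(x[0] + x[1] - 1.0))

def penalized_fitness(x, r=100.0):
    # Static penalty: infeasible points pay r * violation^2.
    return objective(x) + r * constraint_violation(x)**2

# A bare-bones GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(-2, 2, size=(60, 2))
for gen in range(200):
    fit = np.array([penalized_fitness(ind) for ind in pop])
    # Tournament selection of parents (lower penalized fitness wins).
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Blend crossover followed by Gaussian mutation.
    mates = parents[rng.permutation(len(parents))]
    alpha = rng.uniform(size=(len(parents), 1))
    pop = alpha * parents + (1 - alpha) * mates
    pop += rng.normal(scale=0.05, size=pop.shape)

best = min(pop, key=penalized_fitness)
print("best point:", best, "objective:", objective(best))
# The constrained optimum is x = (0.5, 0.5) with f = 0.5.
```

The penalty weight is the delicate part: too small and the GA converges to infeasible points, too large and the fitness landscape flattens near the feasible region, which is what motivates the statistical comparison of penalty functions in this study.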
Roemhild, Roderich; Barbosa, Camilo; Beardmore, Robert E; Jansen, Gunther; Schulenburg, Hinrich
2015-01-01
Antibiotic resistance is a growing concern to public health. New treatment strategies may alleviate the situation by slowing down the evolution of resistance. Here, we evaluated sequential treatment protocols using two fully independent laboratory-controlled evolution experiments with the human pathogen Pseudomonas aeruginosa PA14 and two pairs of clinically relevant antibiotics (doripenem/ciprofloxacin and cefsulodin/gentamicin). Our results consistently show that the sequential application of two antibiotics decelerates resistance evolution relative to monotherapy. Sequential treatment enhanced population extinction although we applied antibiotics at sublethal dosage. In both experiments, we identified an order effect of the antibiotics used in the sequential protocol, leading to significant variation in the long-term efficacy of the tested protocols. These variations appear to be caused by asymmetric evolutionary constraints, whereby adaptation to one drug slowed down adaptation to the other drug, but not vice versa. An understanding of such asymmetric constraints may help future development of evolutionary robust treatments against infectious disease. PMID:26640520
Heterobimetallic Pd–K carbene complexes via one-electron reductions of palladium radical carbenes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Peng; Hoffbauer, Melissa R.; Vyushkova, Mariya
2016-03-24
An unprecedented sequential substitution/reduction synthetic strategy applied to Pd radical carbenes afforded heterobimetallic Pd–K carbene complexes featuring novel Pd–C(carbene)–K structural moieties.
Murphy, Patrick J. M.
2014-01-01
Background Hydrophobic interaction chromatography (HIC) most commonly requires experimental determination (i.e., scouting) in order to select an optimal chromatographic medium for purifying a given target protein. Neither a two-step purification of untagged green fluorescent protein (GFP) from crude bacterial lysate using sequential HIC and size exclusion chromatography (SEC), nor HIC column scouting elution profiles of GFP, has been previously reported. Methods and Results Bacterial lysate expressing recombinant GFP was sequentially adsorbed to commercially available HIC columns containing butyl, octyl, and phenyl-based HIC ligands coupled to matrices of varying bead size. The lysate was fractionated using a linear ammonium phosphate salt gradient at constant pH. Collected HIC eluate fractions containing retained GFP were then pooled and further purified using high-resolution preparative SEC. Significant differences in presumptive GFP elution profiles were observed using in-line absorption spectrophotometry (A395) and post-run fluorimetry. SDS-PAGE and western blot demonstrated that fluorometric detection was the more accurate indicator of GFP elution in both HIC and SEC purification steps. Comparison of composite HIC column scouting data indicated that a phenyl ligand coupled to a 34 µm matrix produced the highest degree of target protein capture and separation. Conclusions Conducting two-step protein purification using the preferred HIC medium followed by SEC resulted in a final, concentrated product with >98% protein purity. In-line absorbance spectrophotometry was not as precise an indicator of GFP elution as post-run fluorimetry. These findings demonstrate the importance of utilizing a combination of detection methods when evaluating purification strategies. GFP is a well-characterized model protein, used heavily in educational settings and by researchers with limited protein purification experience, and the data and strategies presented here may aid in the development of other HIC-compatible protein purification schemes. PMID:25254496
Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Leyland, Jane
2014-01-01
In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.
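The NLPQLP/NLP10x10 codes themselves are not shown here, but the flavor of SQP-based constrained minimization can be sketched with SciPy's SLSQP (another SQP implementation) on a stand-in quadratic load metric with a deflection-style constraint. The matrix H, the vector b, the bounds, and the constraint below are all invented for illustration and do not come from the SMART rotor study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for hub-load minimization: x holds flap harmonic amplitudes,
# the objective is a quadratic metric of hub loads, and the constraints
# bound the flap deflections (cf. the practical flap limits in the paper).
H = np.array([[2.0, 0.3], [0.3, 1.0]])    # illustrative load sensitivity matrix
b = np.array([1.0, -0.5])

def hub_load_metric(x):
    return 0.5 * x @ H @ x + b @ x

constraints = [
    # Nonlinear inequality (>= 0 form): total deflection magnitude below a limit.
    {'type': 'ineq', 'fun': lambda x: 4.0 - np.sum(x**2)},
]
bounds = [(-2.0, 2.0)] * 2                 # per-harmonic deflection limits

res = minimize(hub_load_metric, x0=np.zeros(2), method='SLSQP',
               bounds=bounds, constraints=constraints)
print("optimal amplitudes:", res.x, "load metric:", res.fun)
```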
Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo
2015-10-06
We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycle of chemotherapy (sequential schedule B). The primary endpoint was the pathological complete tumor regression (TRG1) rate. Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 out of 16 patients) was statistically inconsistent with the hypothesis of activity (30%) to be tested. Conversely, the endpoint was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥ 3 toxicity with both schedules, but it was less pronounced with the sequential than the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up, the probability of PFS and OS was 80% (95% CI, 66%-89%) and 85% (95% CI, 69%-93%), respectively, for the sequential schedule. These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
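A compact illustration of the Bayesian step described above: with a normal prior on the linear coefficients of an overcomplete basis and Gaussian noise of known variance, the posterior is itself Gaussian, so exact posterior draws can stand in for the paper's MCMC samples when forming credible intervals for predictions. The basis, noise level, and prior variance below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete-ish basis for a 1-D response: more columns than strictly needed.
x = np.linspace(0, 1, 40)
Phi = np.column_stack([np.exp(-(x - c)**2 / 0.02) for c in np.linspace(0, 1, 25)])
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

# Normal prior beta ~ N(0, tau2 I) and Gaussian noise with known sigma2 give
# a Gaussian posterior, sampled exactly here in place of the MCMC step.
tau2, sigma2 = 1.0, 0.01
A = Phi.T @ Phi / sigma2 + np.eye(Phi.shape[1]) / tau2
cov = np.linalg.inv(A)
mean = cov @ Phi.T @ y / sigma2

samples = rng.multivariate_normal(mean, cov, size=2000)
pred = samples @ Phi.T                     # noise-free posterior predictive draws
lo, hi = np.percentile(pred, [2.5, 97.5], axis=0)
print("95% credible band at x=0.5:", lo[20].round(3), hi[20].round(3))
```

For sequential design, each candidate infill point would be scored on such posterior samples (e.g., by predictive variance) and the best one added before re-fitting.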
System training and assessment in simultaneous proportional myoelectric prosthesis control
2014-01-01
Background Pattern recognition control of prosthetic hands takes inputs from one or more myoelectric sensors and controls one or more degrees of freedom. However, most systems created allow only sequential control of one motion class at a time. Additionally, only recently have researchers demonstrated proportional myoelectric control in such systems, an option that is believed to make fine control easier for the user. Recent developments suggest improved reliability if the user follows a so-called prosthesis guided training (PGT) scheme. Methods In this study, a system for simultaneous proportional myoelectric control has been developed for a hand prosthesis with two motor functions (hand open/close, and wrist pro-/supination). The prosthesis has been used with a prosthesis socket equivalent designed for normally-limbed subjects. An extended version of PGT was developed for use with proportional control. The control system’s performance was tested for two subjects in the Clothespin Relocation Task and the Southampton Hand Assessment Procedure (SHAP). Simultaneous proportional control was compared with three other control strategies implemented on the same prosthesis: mutex proportional control (the same system but with simultaneous control disabled), mutex on-off control, and a more traditional, sequential proportional control system with co-contractions for state switching. Results The practical tests indicate that the simultaneous proportional control strategy and the two mutex-based pattern recognition strategies performed equally well, and superiorly to the more traditional sequential strategy according to the chosen outcome measures. Conclusions This is the first simultaneous proportional myoelectric control system demonstrated on a prosthesis affixed to the forearm of a subject. The study illustrates that PGT is a promising system training method for proportional control. Due to the limited number of subjects in this study, no definite conclusions can be drawn. PMID:24775602
Redmond, Niamh M; Hollinghurst, Sandra; Costelloe, Céire; Montgomery, Alan A; Fletcher, Margaret; Peters, Tim J; Hay, Alastair D
2013-08-01
Recruitment to primary care trials, particularly those involving young children, is known to be difficult. There are limited data available to inform researchers about the effectiveness of different trial recruitment strategies and their associated costs. To describe, evaluate, and investigate the costs of three strategies for recruiting febrile children to a community-based randomised trial of antipyretics. The three recruitment strategies used in the trial were termed as follows: (1) 'local', where paediatric research nurses stationed in primary care sites invited parents of children to participate; (2) 'remote', where clinicians at primary care sites faxed details of potentially eligible children to the trial office; and (3) 'community', where parents, responding to trial publicity, directly contacted the trial office when their child was unwell. Recruitment rates increased in response to the sequential introduction of three recruitment strategies, which were supplemented by additional recruiting staff, flexible staff work patterns, and improved clinician reimbursement schemes. The three strategies yielded different randomisation rates. They also appeared to be interdependent and highly effective together. Strategy-specific costs varied from £297 to £857 per randomised participant and represented approximately 10% of the total trial budget. Because the recruitment strategies were implemented sequentially, it was difficult to measure their independent effects. The cost analysis was performed retrospectively. Trial recruiter expertise and deployment of several interdependent, illness-specific strategies were key factors in achieving rapid recruitment of young children to a community-based randomised controlled trial (RCT). The 'remote' recruitment strategy was shown to be more cost-effective compared to 'community' and 'local' strategies in the context of this trial. Future trialists should report recruitment costs to facilitate a transparent evaluation of recruitment strategy cost-effectiveness.
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g., a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
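Under the conditional-independence assumption the paper scrutinizes, the operating characteristics of the joint strategies reduce to closed forms, sketched below; the sensitivities and specificities are made-up values. A sequential implementation of the "both-positive" rule gives the same accuracy as the simultaneous one while testing fewer individuals, which is the cost argument made above.

```python
def joint_or(se1, sp1, se2, sp2):
    """Simultaneous 'either-positive' rule: gains sensitivity, loses specificity."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def joint_and(se1, sp1, se2, sp2):
    """'Both-positive' rule (run sequentially: test 2 only after a positive
    test 1, same accuracy at lower cost): gains specificity, loses sensitivity."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

se, sp = joint_or(0.90, 0.95, 0.85, 0.90)
print(f"OR rule:  Se={se:.3f}, Sp={sp:.3f}")   # Se=0.985, Sp=0.855
se, sp = joint_and(0.90, 0.95, 0.85, 0.90)
print(f"AND rule: Se={se:.3f}, Sp={sp:.3f}")   # Se=0.765, Sp=0.995
```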
What do men want? Re-examining whether men benefit from higher fertility than is optimal for women
Sear, Rebecca
2016-01-01
Several empirical observations suggest that when women have more autonomy over their reproductive decisions, fertility is lower. Some evolutionary theorists have interpreted this as evidence for sexual conflicts of interest, arguing that higher fertility is more adaptive for men than women. We suggest the assumptions underlying these arguments are problematic: assuming that women suffer higher costs of reproduction than men neglects the (different) costs of reproduction for men; the assumption that men can repartner is often false. We use simple models to illustrate that (i) men or women can prefer longer interbirth intervals (IBIs), and (ii) if men can only partner with wives sequentially, they may favour shorter IBIs than women, but such a strategy would only be optimal for the few men who can repartner. This suggests that an evolved universal male preference for higher fertility than women prefer is implausible and is unlikely to fully account for the empirical data. This further implies that if women have more reproductive autonomy, populations should grow, not decline. More precise theoretical explanations with clearly stated assumptions, and data that better address both ultimate fitness consequences and proximate psychological motivations, are needed to understand under which conditions sexual conflict over reproductive timing should arise. PMID:27022076
Bis-reaction-trigger as a strategy to improve the selectivity of fluorescent probes.
Li, Dan; Cheng, Juan; Wang, Cheng-Kun; Ying, Huazhou; Hu, Yongzhou; Han, Feng; Li, Xin
2018-06-01
By the strategy of equipping a fluorophore with two reaction triggers that are tailored to the specific chemistry of peroxynitrite, we have developed a highly selective probe for detecting peroxynitrite in live cells. Sequential response by the two triggers enabled the probe to reveal various degrees of nitrosative stress in live cells via a sensitive emission colour change.
Ungoverned Areas and Threats from Safe Havens
2008-01-01
reasonably well developed transportation and communication infrastructures tend to be more attractive to illicit actors than undeveloped places, for...into a broader UGA/SH strategy — or sequentially, to help strategists, planners, and regional or country teams develop a comprehensive UGA/SH strategy...need a reference for developing or revising existing products such as: a country report on counterterrorism, drug enforcement, stabilization
Multidisciplinary optimization for engineering systems - Achievements and potential
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. The sequential optimization method selects, at each step, the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
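The core selection loop can be sketched as follows: given a prior space-time covariance matrix (here an invented exponential model rather than one fitted to the Querétaro variogram), each step applies a scalar Kalman update for every candidate measurement and keeps the one that most reduces the total variance. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative prior covariance over n candidate space-time monitoring points
# (in the paper this comes from a geostatistical space-time variogram).
n = 30
pts = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
P = np.exp(-d / 3.0)                       # exponential covariance model
r = 0.1                                    # measurement error variance

selected = []
for _ in range(5):                         # pick 5 monitoring points
    best, best_trace = None, np.inf
    for j in range(n):
        if j in selected:
            continue
        k = P[:, j] / (P[j, j] + r)        # Kalman gain for measuring point j
        trace = np.trace(P - np.outer(k, P[j]))
        if trace < best_trace:
            best, best_trace = j, trace
    selected.append(best)
    k = P[:, best] / (P[best, best] + r)
    P = P - np.outer(k, P[best])           # covariance update after the measurement
print("selected points:", selected, "remaining total variance:", np.trace(P).round(2))
```

Because the Kalman covariance update does not depend on the measured values themselves, the whole design can be computed before any data are collected, which is what makes this kind of sequential optimal design practical.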
Continuous performance measurement in flight systems. [sequential control model
NASA Technical Reports Server (NTRS)
Connelly, E. M.; Sloan, N. A.; Zeskind, R. M.
1975-01-01
The desired response of many man-machine control systems can be formulated as a solution to an optimal control synthesis problem where the cost index is given and the resulting optimal trajectories correspond to the desired trajectories of the man-machine system. Optimal control synthesis provides the reference criteria and the significance of error information required for performance measurement. The synthesis procedure described provides a continuous performance measure (CPM) which is independent of the mechanism generating the control action. Therefore, the technique provides a meaningful method for online evaluation of man's control capability in terms of total man-machine performance.
Constrained Burn Optimization for the International Space Station
NASA Technical Reports Server (NTRS)
Brown, Aaron J.; Jones, Brandon A.
2017-01-01
In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints rather than simultaneously to meet all constraints; the current process does not employ gradient-based search techniques and is not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L
2011-12-01
To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). A total of 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
Shteingart, Hanan; Loewenstein, Yonatan
2016-01-01
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonously with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonous decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogenous across the population. These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
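The logistic-regression analysis of sequential effects can be sketched as below: each choice is regressed on the previous few choices, so the fitted weights quantify how strongly recent history drives the next response. The synthetic alternation bias built into the data is purely for demonstration; nothing here reproduces the paper's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "random" binary sequence with a built-in alternation tendency:
# repeats are less likely after a repeat, mimicking a common human bias.
seq = [1, 0]
for _ in range(2000):
    p_repeat = 0.35 if seq[-1] == seq[-2] else 0.55
    seq.append(seq[-1] if rng.random() < p_repeat else 1 - seq[-1])
seq = np.array(seq)

# Predict each choice from the previous 5 choices (lag 1 = most recent).
lags = 5
X = np.column_stack([seq[lags - k - 1:-k - 1] for k in range(lags)])
y = seq[lags:]
model = LogisticRegression().fit(X, y)
print("weights on lags 1..5:", model.coef_.round(2))
```

Fitting this per participant rather than on pooled data is what reveals the heterogeneity reported above: individual recency effects can cancel out in a population average.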
An integrated reactor system has been developed to remediate pentachlorophenol (PCP) containing wastes using sequential anaerobic and aerobic biodegradation. Anaerobically, PCP was degraded to approximately equimolar concentrations (>99%) of chlorophenol (CP) in a granular activa...
SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Z; Shi, F; Jia, X
Purpose: GPU has been employed to speed up VMAT optimizations from hours to minutes. However, its limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of beam angles and is only responsible for calculations related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries during optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC-matrix-related calculation onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method, and the optimal combined data assimilation-initialization method is a modified version of the KB filter.
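A discrete-time analogue of such sequential filtering is easy to sketch: forecast with the model, then update with the observation, weighting each by its uncertainty. The two-variable "atmosphere" below, its dynamics, and the noise levels are all illustrative stand-ins, not a meteorological model.

```python
import numpy as np

rng = np.random.default_rng(2)

# A slowly rotating two-variable state, observed noisily in its first
# component only; the filter assimilates each observation sequentially.
F = np.array([[0.999, -0.05], [0.05, 0.999]])  # model dynamics
H = np.array([[1.0, 0.0]])                     # observe first component only
Q, R = 1e-4 * np.eye(2), np.array([[0.05]])

x_true = np.array([1.0, 0.0])
x, P = np.zeros(2), np.eye(2)                  # incomplete, inaccurate initial values
for t in range(200):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.multivariate_normal([0], R)
    # Forecast step: propagate mean and covariance with the model.
    x, P = F @ x, F @ P @ F.T + Q
    # Analysis step: blend forecast and observation via the Kalman gain.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
print("final truth:", x_true.round(3), "analysis:", x.round(3))
```

Note that the unobserved second component is still recovered, because the model dynamics couple it to the observed one; this propagation of information is what assimilation exploits.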
Gasser, Christoph A; Čvančarová, Monika; Ammann, Erik M; Schäffer, Andreas; Shahgaldian, Patrick; Corvini, Philippe F-X
2017-03-01
Lignin, a complex three-dimensional amorphous polymer, is considered to be a potential natural renewable resource for the production of low-molecular-weight aromatic compounds. In the present study, a novel sequential lignin treatment method consisting of a biocatalytic oxidation step followed by a formic acid-induced lignin depolymerization step was developed and optimized using response surface methodology. The biocatalytic step employed a laccase mediator system using the redox mediator 1-hydroxybenzotriazole. Laccases were immobilized on superparamagnetic nanoparticles using a sorption-assisted surface conjugation method allowing easy separation and reuse of the biocatalysts after treatment. Under optimized conditions, as much as 45 wt% of lignin could be solubilized either in aqueous solution after the first treatment or in ethyl acetate after the second (chemical) treatment. The solubilized products were found to be mainly low-molecular-weight aromatic monomers and oligomers. The process might be used for the production of low-molecular-weight soluble aromatic products that can be purified and/or upgraded applying further downstream processes.
Methodological Issues in Research on Web-Based Behavioral Interventions
Danaher, Brian G; Seeley, John R
2013-01-01
Background Web-based behavioral intervention research is rapidly growing. Purpose We review methodological issues shared across Web-based intervention research to help inform future research in this area. Methods We examine measures and their interpretation using exemplar studies and our research. Results We report on research designs used to evaluate Web-based interventions and recommend newer, blended designs. We review and critique methodological issues associated with recruitment, engagement, and social validity. Conclusions We suggest that there is value to viewing this burgeoning realm of research from the broader context of behavior change research. We conclude that many studies use blended research designs, that innovative designs such as the Multiphase Optimization Strategy and Sequential Multiple Assignment Randomized Trial methods hold considerable promise and should be used more widely, and that Web-based controls should be used instead of usual care or no-treatment controls in public health research. We recommend topics for future research that address participant recruitment, engagement, and social validity. PMID:19806416
Short Term Gains, Long Term Pains: How Cues About State Aid Learning in Dynamic Environments
Gureckis, Todd M.; Love, Bradley C.
2009-01-01
Successful investors seeking returns, animals foraging for food, and pilots controlling aircraft all must take into account how their current decisions will impact their future standing. One challenge facing decision makers is that options that appear attractive in the short-term may not turn out best in the long run. In this paper, we explore human learning in a dynamic decision-making task which places short- and long-term rewards in conflict. Our goal in these studies was to evaluate how people’s mental representation of a task affects their ability to discover an optimal decision strategy. We find that perceptual cues that readily align with the underlying state of the task environment help people overcome the impulsive appeal of short-term rewards. Our experimental manipulations, predictions, and analyses are motivated by current work in reinforcement learning which details how learners value delayed outcomes in sequential tasks and the importance that “state” identification plays in effective learning. PMID:19427635
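A minimal reinforcement-learning sketch of such a task: one action pays an immediate bonus but degrades the state on which both actions' payoffs depend, so the myopically attractive choice is the long-run loser. The payoff numbers and Q-learning settings are invented; with a discount factor near 1, the learner discovers the far-sighted policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# State s in 0..10 tracks how often the "long" action was chosen recently.
# "Short" (a=0) pays a bonus now but lowers s; "long" (a=1) raises it.
# Myopically, short always looks better (s + 3 > s in every state).
def step(s, a):
    reward = s + (3 if a == 0 else 0)
    s_next = max(s - 1, 0) if a == 0 else min(s + 1, 10)
    return s_next, reward

Q = np.zeros((11, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1
s = 5
for t in range(50000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# With enough credit assignment (gamma near 1), the long-term action wins:
# always-long settles at s=10 earning 10 per step vs. 3 per step for always-short.
print("greedy action per state:", np.argmax(Q, axis=1))  # expect mostly 1s
```

In the framing of the paper, a learner whose state representation does not distinguish these situations cannot assign credit across trials, which is why perceptual cues that reveal the underlying state help people escape the short-term lure.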
Robson, Scott A; Takeuchi, Koh; Boeszoermenyi, Andras; Coote, Paul W; Dubey, Abhinav; Hyberts, Sven; Wagner, Gerhard; Arthanari, Haribabu
2018-01-24
Backbone resonance assignment is a critical first step in the investigation of proteins by NMR. This is traditionally achieved with a standard set of experiments, most of which are not optimal for large proteins. Of these, HNCA is the most sensitive experiment that provides sequential correlations. However, this experiment suffers from chemical shift degeneracy problems during the assignment procedure. We present a strategy that increases the effective resolution of HNCA and enables near-complete resonance assignment using this single HNCA experiment. We utilize a combination of 2-¹³C and 3-¹³C pyruvate as the carbon source for isotope labeling, which suppresses the one-bond (¹Jαβ) coupling, providing enhanced resolution for the Cα resonance and amino acid-specific peak shapes that arise from the residual coupling. Using this approach, we can obtain near-complete (>85%) backbone resonance assignment of a 42 kDa protein using a single HNCA experiment.
Cisplatin Cross-Linked Multifunctional Nanodrugplexes for Combination Therapy.
Zhang, Weiqi; Tung, Ching-Hsuan
2017-03-15
Combination therapy efficiently tackles cancer by hitting multiple action mechanisms. However, drugs administered, simultaneously or sequentially, may not reach the targeted sites with the desired dose and ratio. The outcomes of combination therapy could be improved with a polymeric nanoparticle, which can simultaneously transport an optimal combination of drugs. We have demonstrated a simple one-pot strategy to formulate nanomedicines based on platinum coordination and the noncovalent interactions of the drugs. A naturally occurring polymer, hyaluronan (HA), was chosen as the building scaffold to form a nanodrugplex with cisplatin and aromatic-cationic drugs. The platinum coordination between cisplatin and HA induces the formation of a nanocomplex. The aromatic-cationic drugs are tightly packed by an electrostatic interaction and π-π stacking. The nanodrugplex bears excellent flexibility in drug combination and size control. It is stable in storage and has favorable release kinetics and targeting capabilities toward CD44, a receptor for HA that is highly expressed on many types of cancer cells.
Efficient multitasking: parallel versus serial processing of multiple tasks
Fischer, Rico; Plessow, Franziska
2015-01-01
In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742
Direct Synthesis of Medium-Bridged Twisted Amides via a Transannular Cyclization Strategy
Szostak, Michal; Aubé, Jeffrey
2009-01-01
Sequential ring-closing metathesis (RCM) to construct a challenging medium-sized ring, followed by transannular cyclization across that ring, delivers previously unattainable twisted amides from simple acyclic precursors. PMID:19708701
Empty tracks optimization based on Z-Map model
NASA Astrophysics Data System (ADS)
Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao
2017-12-01
For parts with many features, machining involves many empty (non-cutting) tool tracks. If these tracks are not optimized, machining efficiency is seriously affected. In this paper, the characteristics of empty tracks are studied in detail. Building on an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and the Z-Map model simulation technique is used to analyze the order constraints between the unit segments. The empty-track optimization problem is thereby transformed into a TSP with sequential (precedence) constraints, which is then solved with a genetic algorithm. This optimization method can handle not only simple structural parts but also complex ones, effectively planning the empty tracks and greatly improving processing efficiency.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
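The sequential kernel that makes parallelization delicate can be sketched in a few lines for the Gaussian case: nodes are visited along a random path, each is assigned a draw from its simple-kriging conditional distribution, and that draw immediately becomes conditioning data for later nodes. The 1-D grid and exponential covariance are illustrative; production codes like those described above restrict kriging to a search neighbourhood rather than using all simulated points as done here.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D sequential Gaussian simulation with simple kriging (zero mean,
# unit-sill exponential covariance).
n, corr_len = 100, 15.0
x = np.arange(n, dtype=float)
cov = lambda h: np.exp(-np.abs(h) / corr_len)

path = rng.permutation(n)      # random visiting order
z = np.full(n, np.nan)
known = []                     # indices already simulated
for i in path:
    if known:
        C = cov(x[known][:, None] - x[known][None, :])   # data-data covariance
        c0 = cov(x[known] - x[i])                        # data-target covariance
        w = np.linalg.solve(C + 1e-9 * np.eye(len(known)), c0)
        mean = w @ z[known]
        var = max(1.0 - w @ c0, 1e-9)                    # simple-kriging variance
    else:
        mean, var = 0.0, 1.0
    z[i] = mean + np.sqrt(var) * rng.standard_normal()   # conditional draw
    known.append(i)                                      # becomes data for later nodes
print("simulated field mean/std:", z.mean().round(2), z.std().round(2))
```

The dependence of each node on all previously simulated nodes is exactly the ordering constraint a parallel implementation must respect or carefully relax.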
Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum
USDA-ARS?s Scientific Manuscript database
We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...
University of Iowa at TREC 2008 Legal and Relevance Feedback Tracks
2008-11-01
Sawada, Kota; Yamaguchi, Sachi; Iwasa, Yoh
2017-05-21
Among animals living in groups with reproductive skew associated with a dominance hierarchy, subordinates may do best by using various alternative tactics. Sequential hermaphrodites or sex changers adopt a unique solution, that is, being the sex with weaker skew when they are small and subordinate, and changing sex when they become larger. In bi-directionally sex-changing fishes, although most are haremic and basically protogynous, subordinate males can change sex to being females. We study a mathematical model to examine when and why such reversed sex change is more adaptive than dispersal to take over another harem. We attempt to examine previously proposed hypotheses that the risk of dispersal and low density favor reversed sex change, and to specify an optimal decision-making strategy for subordinates. As a result, while the size-dependent conditional strategy in which smaller males tend to change sex is predicted, even large males are predicted to change sex under low density and/or high risk of dispersal, supporting both previous hypotheses. The importance of spatiotemporal variation of social and ecological conditions is also suggested. We discuss a unified framework to understand hermaphroditic and gonochoristic societies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Phytoremediation and innovative strategies for specialized remedial actions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alleman, B.C.; Leeson, A.
1999-01-01
Phytoremediation is a site remediation strategy whose time seems to have come in the past few years, with field implementations taking place in a host of applications. From laboratory studies on plant uptake to full-scale phytoremediation treatment strategies, this volume covers the use of plants to treat contaminants such as hydrocarbons, metals, pesticides, perchlorate, and chlorinated solvents. In addition to the phytoremediation studies, this volume also covers specialized remediation approaches such as sequential anaerobic/aerobic in situ treatment, membrane bioreactors, and Fenton's reagent oxidation.
NASA Astrophysics Data System (ADS)
Wright, Robert; Abraham, Edo; Parpas, Panos; Stoianov, Ivan
2015-12-01
The operation of water distribution networks (WDN) with a dynamic topology is a recently pioneered approach for the advanced management of District Metered Areas (DMAs) that integrates novel developments in hydraulic modeling, monitoring, optimization, and control. A common practice for leakage management is the sectorization of WDNs into small zones, called DMAs, by permanently closing isolation valves. This facilitates water companies to identify bursts and estimate leakage levels by measuring the inlet flow for each DMA. However, by permanently closing valves, a number of problems have been created including reduced resilience to failure and suboptimal pressure management. By introducing a dynamic topology to these zones, these disadvantages can be eliminated while still retaining the DMA structure for leakage monitoring. In this paper, a novel optimization method based on sequential convex programming (SCP) is outlined for the control of a dynamic topology with the objective of reducing average zone pressure (AZP). A key attribute for control optimization is reliable convergence. To achieve this, the SCP method we propose guarantees that each optimization step is strictly feasible, resulting in improved convergence properties. By using a null space algorithm for hydraulic analyses, the computations required are also significantly reduced. The optimized control is actuated on a real WDN operated with a dynamic topology. This unique experimental program incorporates a number of technologies set up with the objective of investigating pioneering developments in WDN management. Preliminary results indicate AZP reductions for a dynamic topology of up to 6.5% over optimally controlled fixed topology DMAs.
Sequential Reactions of Surface-Tethered Glycolytic Enzymes
Mukai, Chinatsu; Bergkvist, Magnus; Nelson, Jacquelyn L.; Travis, Alexander J.
2014-01-01
The development of complex hybrid organic-inorganic devices faces several challenges, including how they can generate energy. Cells face similar challenges regarding local energy production. Mammalian sperm solve this problem by generating ATP down the flagellar principal piece by means of glycolytic enzymes, several of which are tethered to a cytoskeletal support via germ cell-specific targeting domains. Inspired by this design, we have produced recombinant hexokinase type 1 and glucose-6-phosphate isomerase capable of oriented immobilization on a nickel-nitrilotriacetic acid modified surface. Specific activities of enzymes tethered via this strategy were substantially higher than when randomly adsorbed. Furthermore, these enzymes showed sequential activities when tethered onto the same surface. This is the first demonstration of surface-tethered pathway components showing sequential enzymatic activities, and it provides a first step toward reconstitution of glycolysis on engineered hybrid devices. PMID:19778729
Ultra-Wide Band Non-reciprocity through Sequentially-Switched Delay Lines
Biedka, Mathew M.; Zhu, Rui; Xu, Qiang Mark; Wang, Yuanxun Ethan
2017-01-01
Achieving non-reciprocity through unconventional methods without the use of magnetic material has recently become a subject of great interest. Towards this goal a time switching strategy known as the Sequentially-Switched Delay Line (SSDL) is proposed. The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and receiving of signals through the device. Preliminary experimental results with commercial off the shelf parts are presented which demonstrated non-reciprocal behavior with greater than 40 dB isolation from 200 KHz to 200 MHz. The theory and experimental results demonstrated that the SSDL concept may lead to future on-chip circulators over multi-octaves of frequency. PMID:28059132
NASA Astrophysics Data System (ADS)
Chen, Yizhong; Lu, Hongwei; Li, Jing; Ren, Lixia; He, Li
2017-05-01
This study presents the mathematical formulation and implementation of a synergistic optimization framework based on an understanding of water availability and reliability together with the characteristics of multiple water demands. This framework simultaneously integrates a set of leader-followers-interactive objectives established by different decision makers during the synergistic optimization. The upper-level model (the leader's) determines the optimal pollutant discharge to satisfy the environmental target. The lower-level model (the follower's) accepts the dispatch requirement from the upper-level one and determines the optimal water-allocation strategy to maximize economic benefits, representing the regional authority. The complicated bi-level model significantly improves upon conventional programming methods through the mutual influence and restriction between the upper- and lower-level decision processes, particularly when limited water resources are available for multiple competing users. To solve the problem, a bi-level interactive solution algorithm based on satisfactory degree is introduced into the decision-making process, measuring to what extent the constraints are met and the objective reaches its optimum. The capabilities of the proposed model are illustrated through a real-world case study of the water resources management system in the district of Fengtai, located in Beijing, China. Feasible decisions associated with water resources allocation, wastewater emission and pollutant discharge would be sequentially generated to balance the objectives subject to the given water-related constraints, enabling stakeholders to grasp the inherent conflicts and trade-offs between environmental and economic interests. The performance of the developed bi-level model is demonstrated by comparison with single-level models. Moreover, in consideration of the uncertainty in water demand and availability, sensitivity analysis and policy analysis are employed to identify their impacts on the final decisions and improve the practical applications.
Maternal Distancing Strategies toward Twin Sons, One with Mild Hearing Loss: A Case Study
ERIC Educational Resources Information Center
Munoz-Silva, Alicia; Sanchez-Garcia, Manuel
2004-01-01
The authors apply descriptive and sequential analyses to a mother's distancing strategies toward her 3-year-old twin sons in puzzle assembly and book reading tasks. One boy had normal hearing and the other a mild hearing loss (threshold: 30 dB). The results show that the mother used more distancing behaviors with the son with a hearing loss, and…
ERIC Educational Resources Information Center
Davidson, Maaike T.
2013-01-01
This sequential, mixed method, QUAN-QUAL study redefines the craft of teaching into the science ("what"), art ("how"), and the business of teaching to assess and prepare preservice teachers. It also measures the effectiveness of using theatrical elements as teaching strategies to effectively develop preservice teachers in the…
Gekas, Jean; Gagné, Geneviève; Bujold, Emmanuel; Douillard, Daniel; Forest, Jean-Claude; Reinharz, Daniel; Rousseau, François
2009-02-13
To assess and compare the cost effectiveness of three different strategies for prenatal screening for Down's syndrome (integrated test, sequential screening, and contingent screening) and to determine the most useful cut-off values for risk. Computer simulations were used to study integrated, sequential, and contingent screening strategies with various cut-offs, leading to 19 potential screening algorithms. The computer simulation was populated with data from the Serum Urine and Ultrasound Screening Study (SURUSS), real unit costs for healthcare interventions, and a population of 110,948 pregnancies from the province of Québec for the year 2001. Outcome measures were cost effectiveness ratios, incremental cost effectiveness ratios, and screening outcomes. The contingent screening strategy dominated all other screening options: it had the best cost effectiveness ratio ($C26,833 per case of Down's syndrome) with fewer procedure-related euploid miscarriages and unnecessary terminations (respectively, 6 and 16 per 100,000 pregnancies). It also outperformed serum screening at the second trimester. In terms of the incremental cost effectiveness ratio, contingent screening was still dominant: compared with screening based on maternal age alone, the savings were $C30,963 per additional birth with Down's syndrome averted. Contingent screening was the only screening strategy that offered early reassurance to the majority of women (77.81%) in the first trimester and minimised costs by limiting retesting during the second trimester (21.05%). For the contingent and sequential screening strategies, the choice of cut-off value for risk in the first trimester test significantly affected the cost effectiveness ratios (respectively, from $C26,833 to $C37,260 and from $C35,215 to $C45,314 per case of Down's syndrome), the number of procedure-related euploid miscarriages (from 6 to 46 and from 6 to 45 per 100,000 pregnancies), and the number of unnecessary terminations (from 16 to 26 and from 16 to 25 per 100,000 pregnancies). Contingent screening, with a first trimester cut-off value for high risk of 1 in 9, is the preferred option for prenatal screening of women for pregnancies affected by Down's syndrome.
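The cost-effectiveness arithmetic behind such comparisons is compact. The sketch below computes a cost effectiveness ratio (cost per case detected) and an incremental cost effectiveness ratio between two strategies; all figures are hypothetical placeholders, not the SURUSS-derived values used in the study.

```python
# Minimal sketch of screening cost-effectiveness arithmetic.
# All totals below are made-up placeholder values.

def cer(total_cost, cases_detected):
    """Cost-effectiveness ratio: cost per case detected."""
    return total_cost / cases_detected

def icer(cost_new, cases_new, cost_ref, cases_ref):
    """Incremental cost-effectiveness ratio of a new strategy vs. a reference."""
    return (cost_new - cost_ref) / (cases_new - cases_ref)

contingent = {"cost": 4_000_000, "cases": 149}   # hypothetical totals
sequential = {"cost": 5_200_000, "cases": 150}   # hypothetical totals

print(f"CER contingent: ${cer(contingent['cost'], contingent['cases']):,.0f} per case")
print(f"ICER sequential vs contingent: "
      f"${icer(sequential['cost'], sequential['cases'], contingent['cost'], contingent['cases']):,.0f} per extra case")
```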
Children's sequential information search is sensitive to environmental probabilities.
Nelson, Jonathan D; Divjak, Bojana; Gudmundsdottir, Gudny; Martignon, Laura F; Meder, Björn
2014-01-01
We investigated 4th-grade children's search strategies on sequential search tasks in which the goal is to identify an unknown target object by asking yes-no questions about its features. We used exhaustive search to identify the most efficient question strategies and evaluated the usefulness of children's questions accordingly. Results show that children have good intuitions regarding questions' usefulness and search adaptively, relative to the statistical structure of the task environment. Search was especially efficient in a task environment that was representative of real-world experiences. This suggests that children may use their knowledge of real-world environmental statistics to guide their search behavior. We also compared different related search tasks. We found positive transfer effects from first doing a number search task on a later person search task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
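A standard yardstick for the usefulness of a yes-no question in such sequential search tasks is expected information gain, the expected reduction in uncertainty about the target. The sketch below is a minimal illustration of that metric on a toy person-search environment with non-uniform target probabilities, not the authors' exhaustive-search analysis.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def expected_information_gain(priors, feature):
    """Expected entropy reduction from asking a yes-no question about
    `feature`; priors[obj] is the probability that obj is the target,
    feature[obj] is True if obj has the feature."""
    yes = {o: p for o, p in priors.items() if feature[o]}
    no = {o: p for o, p in priors.items() if not feature[o]}
    p_yes = sum(yes.values())
    p_no = 1.0 - p_yes
    h_before = entropy(list(priors.values()))
    h_after = 0.0
    if p_yes > 0:
        h_after += p_yes * entropy([p / p_yes for p in yes.values()])
    if p_no > 0:
        h_after += p_no * entropy([p / p_no for p in no.values()])
    return h_before - h_after

# Toy person-search environment (hypothetical probabilities and feature).
priors = {"Ann": 0.4, "Bob": 0.3, "Cal": 0.2, "Dee": 0.1}
wears_hat = {"Ann": True, "Bob": False, "Cal": False, "Dee": True}
print(f"EIG of 'Does the person wear a hat?': "
      f"{expected_information_gain(priors, wears_hat):.3f} bits")
```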
Clarke, Charlotte H; Yip, Christine; Badgwell, Donna; Fung, Eric T; Coombes, Kevin R; Zhang, Zhen; Lu, Karen H; Bast, Robert C
2011-09-01
The low prevalence of ovarian cancer demands both high sensitivity (>75%) and specificity (99.6%) to achieve a positive predictive value of 10% for successful early detection. Utilizing a two-stage strategy where serum marker(s) prompt the performance of transvaginal sonography (TVS) in a limited number (2%) of women could reduce the requisite specificity for serum markers to 98%. We have attempted to improve sensitivity by combining CA125 with proteomic markers. Sera from 41 patients with early stage (I/II) and 51 with late stage (III/IV) epithelial ovarian cancer, 40 with benign disease and 99 healthy individuals, were analyzed to measure 7 proteins [Apolipoprotein A1 (Apo-A1), truncated transthyretin (TT), transferrin, hepcidin, β-2-microglobulin (β2M), Connective Tissue Activating Protein III (CTAPIII), and Inter-alpha-trypsin inhibitor heavy chain 4 (ITIH4)]. Statistical models were fit by logistic regression, with the factors retained in the models determined by optimizing the Akaike Information Criterion. A validation set included 136 stage I ovarian cancers, 140 benign pelvic masses and 174 healthy controls. In a training set analysis, the 3 most effective biomarkers (Apo-A1, TT and CTAPIII) exhibited 54% sensitivity at 98% specificity, CA125 alone produced 68% sensitivity, and the combination increased sensitivity to 88%. In a validation set, the marker panel plus CA125 produced a sensitivity of 84% at 98% specificity (P=0.015, McNemar's test). Combining a panel of proteomic markers with CA125 could provide a first step in a sequential two-stage strategy with TVS for early detection of ovarian cancer. Copyright © 2011. Published by Elsevier Inc.
Clarke, Charlotte H.; Yip, Christine; Badgwell, Donna; Fung, Eric T.; Coombes, Kevin R.; Zhang, Zhen; Lu, Karen H.; Bast, Robert C.
2011-01-01
Objective: The low prevalence of ovarian cancer demands both high sensitivity (>75%) and specificity (99.6%) to achieve a positive predictive value of 10% for successful early detection. Utilizing a two-stage strategy where serum marker(s) prompt the performance of transvaginal sonography (TVS) in a limited number (2%) of women could reduce the requisite specificity for serum markers to 98%. We have attempted to improve sensitivity by combining CA125 with proteomic markers. Methods: Sera from 41 patients with early stage (I/II) and 51 with late stage (III/IV) epithelial ovarian cancer, 40 with benign disease and 99 healthy individuals, were analyzed to measure 7 proteins [Apolipoprotein A1 (Apo-A1), truncated transthyretin (TT), transferrin, hepcidin, β-2-microglobulin (β2M), Connective Tissue Activating Protein III (CTAPIII), and Inter-alpha-trypsin inhibitor heavy chain 4 (ITIH4)]. Statistical models were fit by logistic regression, with the factors retained in the models determined by optimizing the Akaike Information Criterion. A validation set included 136 stage I ovarian cancers, 140 benign pelvic masses and 174 healthy controls. Results: In a training set analysis, the 3 most effective biomarkers (Apo-A1, TT and CTAPIII) exhibited 54% sensitivity at 98% specificity, CA125 alone produced 68% sensitivity, and the combination increased sensitivity to 88%. In a validation set, the marker panel plus CA125 produced a sensitivity of 84% at 98% specificity (P=0.015, McNemar's test). Conclusion: Combining a panel of proteomic markers with CA125 could provide a first step in a sequential two-stage strategy with TVS for early detection of ovarian cancer. PMID:21708402
Parke, Tom; Marchenko, Olga; Anisimov, Vladimir; Ivanova, Anastasia; Jennison, Christopher; Perevozskaya, Inna; Song, Guochen
2017-01-01
Designing an oncology clinical program is more challenging than designing a single study. The standard approaches have proven not very successful during the last decade; the failure rate of Phase 2 and Phase 3 trials in oncology remains high. Improving a development strategy by applying innovative statistical methods is one of the major objectives of a drug development process. The oncology sub-team on Adaptive Program under the Drug Information Association Adaptive Design Scientific Working Group (DIA ADSWG) evaluated hypothetical oncology programs with two competing treatments and published the work in the Therapeutic Innovation and Regulatory Science journal in January 2014. Five oncology development programs based on different Phase 2 designs, including adaptive designs, and a standard two-parallel-arm Phase 3 design were simulated and compared in terms of the probability of clinical program success and expected net present value (eNPV). In this article, we consider eight Phase 2/Phase 3 development programs based on selected combinations of five Phase 2 study designs and three Phase 3 study designs. We again used the probability of program success and eNPV to compare simulated programs. Among the development strategies considered, the eNPV showed robust improvement for each successive strategy, the highest being for a three-arm response-adaptive randomization design in Phase 2 and a group sequential design with 5 analyses in Phase 3.
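The two comparison metrics can be estimated with a simple Monte Carlo sketch. The success probabilities, costs, and approval revenue below are made-up assumptions, not values from the DIA ADSWG simulations.

```python
import random

def simulate_program(n_runs=100_000, seed=0):
    """Monte Carlo sketch of a Phase 2 -> Phase 3 program.
    All probabilities and cash flows are illustrative placeholders."""
    rng = random.Random(seed)
    p2_success, p3_success = 0.45, 0.60   # assumed phase success probabilities
    cost_p2, cost_p3 = 30.0, 150.0        # assumed costs ($M)
    revenue = 900.0                       # assumed NPV of revenue if approved ($M)
    npvs, successes = [], 0
    for _ in range(n_runs):
        npv = -cost_p2
        if rng.random() < p2_success:     # Phase 2 succeeds, fund Phase 3
            npv -= cost_p3
            if rng.random() < p3_success:
                npv += revenue
                successes += 1
        npvs.append(npv)
    return successes / n_runs, sum(npvs) / n_runs

p_success, enpv = simulate_program()
print(f"P(program success) ~ {p_success:.3f}, eNPV ~ ${enpv:.1f}M")
```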
Optimism, coping and long-term recovery from coronary artery surgery in women.
King, K B; Rowe, M A; Kimble, L P; Zerwic, J J
1998-02-01
Optimism, coping strategies, and psychological and functional outcomes were measured in 55 women undergoing coronary artery surgery. Data were collected in-hospital and at 1, 6, and 12 months after surgery. Optimism was related to positive moods and life satisfaction, and inversely related to negative moods. Few relationships were found between optimism and functional ability. Cognitive coping strategies accounted for a mediating effect between optimism and negative mood. Optimists were more likely to accept their situation, and less likely to use escapism. In turn, these coping strategies were inversely related to negative mood and mediated the relationship between optimism and this outcome. Optimism was not related to problem-focused coping strategies; thus, these coping strategies cannot explain the relationship between optimism and outcomes.
Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs
NASA Technical Reports Server (NTRS)
Miele, Angelo; Lee, W. Y.; Wu, G. D.
1990-01-01
Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.
Paliwoda, Rebecca E; Li, Feng; Reid, Michael S; Lin, Yanwen; Le, X Chris
2014-06-17
Functionalizing nanomaterials for diverse analytical, biomedical, and therapeutic applications requires determination of surface coverage (or density) of DNA on nanomaterials. We describe a sequential strand displacement beacon assay that is able to quantify specific DNA sequences conjugated or coconjugated onto gold nanoparticles (AuNPs). Unlike the conventional fluorescence assay that requires the target DNA to be fluorescently labeled, the sequential strand displacement beacon method is able to quantify multiple unlabeled DNA oligonucleotides using a single (universal) strand displacement beacon. This unique feature is achieved by introducing two short unlabeled DNA probes for each specific DNA sequence and by performing sequential DNA strand displacement reactions. Varying the relative amounts of the specific DNA sequences and spacing DNA sequences during their coconjugation onto AuNPs results in different densities of the specific DNA on AuNP, ranging from 90 to 230 DNA molecules per AuNP. Results obtained from our sequential strand displacement beacon assay are consistent with those obtained from the conventional fluorescence assays. However, labeling of DNA with some fluorescent dyes, e.g., tetramethylrhodamine, alters DNA density on AuNP. The strand displacement strategy overcomes this problem by obviating direct labeling of the target DNA. This method has broad potential to facilitate more efficient design and characterization of novel multifunctional materials for diverse applications.
Gaudrain, Etienne; Carlyon, Robert P
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.
Gaudrain, Etienne; Carlyon, Robert P.
2013-01-01
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish target and masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed. PMID:23297922
ERIC Educational Resources Information Center
Kidwell, Kelley M.; Hyde, Luke W.
2016-01-01
Heterogeneity between and within people necessitates the need for sequential personalized interventions to optimize individual outcomes. Personalized or adaptive interventions (AIs) are relevant for diseases and maladaptive behavioral trajectories when one intervention is not curative and success of a subsequent intervention may depend on…
A Sequential Quadratic Programming Algorithm Using an Incomplete Solution of the Subproblem
1990-09-01
Murray, W.; Prieto, Francisco J. Systems Optimization Laboratory, Department of Operations Research, Stanford University; Dept. de Automática, Ingeniería Electrónica e Informática Industrial, E.T.S. Ingenieros Industriales, Universidad Politécnica, Madrid. Technical Report SOL 90-12, September 1990.
USDA-ARS's Scientific Manuscript database
The performance of conventional filtering methods can be degraded by ignoring the time lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data assimilation...
Dinglas, Victor D; Huang, Minxuan; Sepulveda, Kristin A; Pinedo, Mariela; Hopkins, Ramona O; Colantuoni, Elizabeth; Needham, Dale M
2015-01-09
Effective strategies for contacting and recruiting study participants are critical in conducting clinical research. In this study, we conducted two sequential randomized controlled trials of mail- and telephone-based strategies for contacting and recruiting participants, and evaluated participant-related variables' association with time to survey completion and survey completion rates. Subjects eligible for this study were survivors of acute lung injury who had been previously enrolled in a 12-month observational follow-up study evaluating their physical, cognitive and mental health outcomes, with their last study visit completed at a median of 34 months previously. Eligible subjects were contacted to complete a new research survey as part of two randomized trials, initially using a randomized mail-based contact strategy, followed by a randomized telephone-based contact strategy for non-responders to the mail strategy. Both strategies focused on using either a personalized versus a generic approach. In addition, 18 potentially relevant subject-related variables (e.g., demographics, last known physical and mental health status) were evaluated for association with time to survey completion. Of 308 eligible subjects, 67% completed the survey with a median (IQR) of 3 (2, 5) contact attempts required. There was no significant difference in the time to survey completion for either randomized trial of mail- or phone-based contact strategy. Among all subject-related variables, age ≤40 years and minority race were independently associated with a longer time to survey completion. We found that age ≤40 years and minority race were associated with a longer time to survey completion, but personalized versus generic approaches to mail- and telephone-based contact strategies had no significant effect. Repeating both mail and telephone contact attempts was important for increasing survey completion rate. NCT00719446.
Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid
2016-01-01
In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedure is used to transform the TFE differential equation into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean square sense and is formulated as a minimization problem. Optimization of the system parameters is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of the scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices over a sufficiently large number of independent runs to establish its significance.
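A minimal sketch of the overall recipe, under simplifying assumptions: the Thomas-Fermi equation y'' = y^(3/2)/sqrt(x) is discretized by finite differences on a truncated interval, the mean squared residual serves as the fitness, a crude random search stands in for the GA global phase, and scipy's SLSQP plays the role of the SQP refinement. Grid size, interval, and solver settings are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Discretize y'' = y^{3/2} / sqrt(x) on (0, b] with y(0) = 1, y(b) = 0,
# and turn the FDS residuals into a scalar fitness (mean squared residual).
b, n = 20.0, 60
x = np.linspace(0.0, b, n + 1)
h = x[1] - x[0]

def fitness(y_int):
    y = np.concatenate(([1.0], y_int, [0.0]))        # boundary conditions
    res = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2 \
          - np.clip(y[1:-1], 0.0, None) ** 1.5 / np.sqrt(x[1:-1])
    return np.mean(res ** 2)

# Global phase: crude random search as a stand-in for the GA.
rng = np.random.default_rng(0)
best = min((rng.uniform(0.0, 1.0, n - 1) * np.linspace(1, 0, n - 1)
            for _ in range(200)), key=fitness)

# Local phase: SQP-type refinement (scipy's SLSQP) of the best candidate.
sol = minimize(fitness, best, method="SLSQP",
               options={"maxiter": 500, "ftol": 1e-12})
print(f"fitness after refinement: {sol.fun:.3e}")
```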
Gu, Xiaosi; Kirk, Ulrich; Lohrenz, Terry M; Montague, P Read
2014-08-01
Computational models of reward processing suggest that foregone or fictive outcomes serve as important information sources for learning and augment those generated by experienced rewards (e.g. reward prediction errors). An outstanding question is how these learning signals interact with top-down cognitive influences, such as cognitive reappraisal strategies. Using a sequential investment task and functional magnetic resonance imaging, we show that the reappraisal strategy selectively attenuates the influence of fictive, but not reward prediction error signals on investment behavior; this behavioral effect is accompanied by changes in neural activity and connectivity in the anterior insular cortex, a brain region thought to integrate subjective feelings with high-order cognition. Furthermore, individuals differ in the extent to which their behaviors are driven by fictive errors versus reward prediction errors, and the reappraisal strategy interacts with such individual differences; a finding also accompanied by distinct underlying neural mechanisms. These findings suggest that the variable interaction of cognitive strategies with two important classes of computational learning signals (fictive errors and reward prediction errors) represents one contributing substrate for the variable capacity of individuals to control their behavior based on foregone rewards. These findings also expose important possibilities for understanding the lack of control in addiction based on possibly foregone rewarding outcomes. Copyright © 2013 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
Dyslexia, an Imbalance in Cerebral Information-Processing Strategies.
ERIC Educational Resources Information Center
Aaron, P. G.
1978-01-01
Twenty-eight reading disabled children (in grades 2-4) were divided (on the basis of the nature of errors made in a writing from dictation task), into two groups--analytic-sequential deficient and holistic-simultaneous deficient. (Author/PHR)
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2002-01-01
In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including circular-to-circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.
NASA Astrophysics Data System (ADS)
Haapasalo, Erkka; Pellonpää, Juha-Pekka
2017-12-01
Various forms of optimality for quantum observables described as normalized positive-operator-valued measures (POVMs) are studied in this paper. We give characterizations for observables that determine the values of the measured quantity with probabilistic certainty or a state of the system before or after the measurement. We investigate observables that are free from noise caused by classical post-processing, mixing, or pre-processing of quantum nature. Especially, a complete characterization of pre-processing and post-processing clean observables is given, and necessary and sufficient conditions are imposed on informationally complete POVMs within the set of pure states. We also discuss joint and sequential measurements of optimal quantum observables.
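As a concrete anchor for the POVM formalism, the sketch below numerically checks the two defining properties (positive semidefinite effects that sum to the identity) for an unsharp qubit measurement, a standard example of an observable degraded by classical noise. The sharpness parameter s is an illustrative choice.

```python
import numpy as np

def is_povm(effects, tol=1e-9):
    """Check that a list of matrices forms a POVM: each effect is
    positive semidefinite and the effects sum to the identity."""
    d = effects[0].shape[0]
    psd = all(np.min(np.linalg.eigvalsh((E + E.conj().T) / 2)) >= -tol
              for E in effects)
    complete = np.allclose(sum(effects), np.eye(d), atol=tol)
    return psd and complete

# Unsharp (noisy) qubit measurement in the Z basis:
# E_pm = (I +/- s * sigma_z) / 2 with sharpness s in [0, 1].
s = 0.8
sz = np.diag([1.0, -1.0])
E_plus = (np.eye(2) + s * sz) / 2
E_minus = (np.eye(2) - s * sz) / 2
print(is_povm([E_plus, E_minus]))  # True; s < 1 models classical noise
```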
Impact of Diurnal Variations of Precursors on the Prediction of Ozone
NASA Astrophysics Data System (ADS)
Hamer, P. D.; Bowman, K. W.; Henze, D. K.; Singh, K.
2009-12-01
Using a photochemical box model and its adjoint, constructed using the Kinetic Pre-Processor, we investigate the impacts of changing observational capacity, observation frequency and quality upon the ability to both understand and predict the nature of peak ozone events within a variety of polluted environments. The model consists of a chemical mechanism based on the Master Chemical Mechanism utilising 171 chemical species and 524 chemical reactions interacting with emissions, dry deposition and mixing schemes. The model was run under a variety of conditions designed to simulate a range of summertime polluted environments spanning a range of NOx and volatile organic compound regimes (VOCs). Using the forward model we were able to generate simulated atmospheric conditions representative of a particular polluted environment, which could in turn be used to generate a set of pseudo observations of key photochemical constituents. The model was then run under somewhat less polluted conditions to generate a background and then perturbed back towards the polluted trajectory using sequential data assimilation and the pseudo observations. Using a combination of the adjoint sensitivity analysis and the sequential data assimilation described here we assess the optimal time of observation and the diversity of observed chemical species required to provide acceptable forecast estimates of ozone concentrations. As the photochemical regime changes depending on NOx and VOC concentrations different observing strategies become favourable. The impact of using remote sensing based observations of the free tropospheric photochemical state are investigated to demonstrate the advantage of gaining knowledge of atmospheric trace gases away from the immediate photochemical environment.
Risk-aware multi-armed bandit problem with application to portfolio selection
Huo, Xiaoguang
2017-01-01
Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct a portfolio. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return. PMID:29291122
Risk-aware multi-armed bandit problem with application to portfolio selection.
Huo, Xiaoguang; Fu, Feng
2017-11-01
Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct a portfolio. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return.
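One simple way to fold risk awareness into a bandit policy is a mean-variance variant of UCB, sketched below. The paper combines the optimal bandit policy with a coherent risk measure and a topology-based asset filter; this sketch keeps only the mean-variance idea, with the arm parameters and risk weight lam as illustrative assumptions.

```python
import numpy as np

def risk_aware_ucb(means, stds, horizon=5000, lam=1.0, c=2.0, seed=0):
    """Sketch of a mean-variance flavored UCB policy: each arm is scored
    by an optimistic estimate of (mean reward - lam * variance), plus a
    standard UCB1-style exploration bonus."""
    rng = np.random.default_rng(seed)
    k = len(means)
    n = np.zeros(k)                 # pull counts
    sum_x = np.zeros(k)             # running reward sums
    sum_x2 = np.zeros(k)            # running squared-reward sums
    rewards = []
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1             # pull every arm once to initialize
        else:
            mu = sum_x / n
            var = np.maximum(sum_x2 / n - mu**2, 0.0)
            bonus = np.sqrt(c * np.log(t) / n)
            arm = int(np.argmax(mu - lam * var + bonus))
        x = rng.normal(means[arm], stds[arm])
        n[arm] += 1; sum_x[arm] += x; sum_x2[arm] += x * x
        rewards.append(x)
    return n, np.mean(rewards)

# Arm 1 has the higher mean but is far riskier; with lam = 1 the policy
# concentrates on the safer arm 0.
counts, avg = risk_aware_ucb(means=[0.05, 0.07], stds=[0.05, 0.50])
print(counts, f"avg reward {avg:.4f}")
```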
Sequential desorption energy of hydrogen from nickel clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deepika; Kumar, Rakesh, E-mail: rakesh@iitrpr.ac.in; Kamal Raj, R.
2015-06-24
We report reversible hydrogen adsorption on nickel clusters, which act as a catalyst for solid-state storage of hydrogen on a substrate. A first-principles technique is employed to investigate the maximum number of chemically adsorbed hydrogen molecules on a nickel cluster. We observe a maximum of four hydrogen molecules adsorbed per nickel atom, but the average number of hydrogen molecules adsorbed per nickel atom decreases with cluster size. The dissociative chemisorption energy per hydrogen molecule and the sequential desorption energy per hydrogen atom on the nickel cluster are found to decrease with the number of adsorbed hydrogen molecules, which on optimization may help in the economical storage and regeneration of hydrogen as a clean energy carrier.
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, W.; Prieto, F.J.
1993-05-01
We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.
Distributed Wireless Power Transfer With Energy Feedback
NASA Astrophysics Data System (ADS)
Lee, Seunghyun; Zhang, Rui
2017-04-01
Energy beamforming (EB) is a key technique for achieving efficient radio-frequency (RF) transmission enabled wireless energy transfer (WET). By optimally designing the waveforms from multiple energy transmitters (ETs) over the wireless channels, they can be constructively combined at the energy receiver (ER) to achieve an EB gain that scales with the number of ETs. However, the optimal design of EB waveforms requires accurate channel state information (CSI) at the ETs, which is challenging to obtain practically, especially in a distributed system with ETs at separate locations. In this paper, we study practical and efficient channel training methods to achieve optimal EB in a distributed WET system. We propose two protocols with and without centralized coordination, respectively, where distributed ETs either sequentially or in parallel adapt their transmit phases based on a low-complexity energy feedback from the ER. The energy feedback only depends on the received power level at the ER, where each feedback indicates one particular transmit phase that results in the maximum harvested power over a set of previously used phases. Simulation results show that the two proposed training protocols converge very fast in practical WET systems even with a large number of distributed ETs, while the protocol with sequential ET phase adaptation is also analytically shown to converge to the optimal EB design with perfect CSI by increasing the training time. Numerical results are also provided to evaluate the performance of the proposed distributed EB and training designs as compared to other benchmark schemes.
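The sequential protocol is easy to simulate. In the sketch below, each ET in turn tries a small codebook of transmit phases while the others hold theirs fixed, and the ER feeds back only which phase maximized received power. The channel model and the 4-phase codebook are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

# Sequential phase adaptation from energy feedback (illustrative sketch).
rng = np.random.default_rng(1)
n_et = 8
h = (rng.normal(size=n_et) + 1j * rng.normal(size=n_et)) / np.sqrt(2)  # flat fading
phases = np.zeros(n_et)                       # initial transmit phases
codebook = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])

def received_power(phases):
    return np.abs(np.sum(np.exp(1j * phases) * h)) ** 2

for sweep in range(4):                        # a few rounds over all ETs
    for i in range(n_et):
        # The ER measures power for each candidate phase of ET i and
        # feeds back the best index (the only information returned).
        trial_powers = []
        for p in codebook:
            trial = phases.copy(); trial[i] = p
            trial_powers.append(received_power(trial))
        phases[i] = codebook[int(np.argmax(trial_powers))]

optimum = np.sum(np.abs(h)) ** 2              # perfect-CSI EB benchmark
print(f"trained/optimal power ratio: {received_power(phases) / optimum:.3f}")
```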
NASA Astrophysics Data System (ADS)
Yang, R. B.; Liang, W. F.; Wu, C. H.; Chen, C. C.
2016-05-01
Radar absorbing materials (RAMs), also known as microwave absorbers, which can absorb and dissipate incident electromagnetic waves, are widely used in the fields of radar cross-section reduction, electromagnetic interference (EMI) reduction and human health protection. In this study, the synthesis of a functionally graded material (FGM) (CI/polyurethane composite), fabricated with semi-sequentially varied composition along the thickness, is combined with a genetic algorithm (GA) to optimize the microwave absorption efficiency and bandwidth of the FGM. For impedance matching and broadband design, the GA was used to calculate the layer thicknesses of the original 8-layered FGM, stacked sequentially from 20, 30, 40, 50, 60, 65, 70 and 75 wt% CI filler. The reflection loss of the original 8-layered FGM stays below -10 dB over the frequency range of 5.12-18 GHz with a total thickness of 9.66 mm. Further optimization reduces the number of layers; the stacking sequence of the optimized 4-layered FGM is 20, 30, 65 and 75 wt% with thicknesses of 0.8, 1.6, 0.6 and 1.0 mm, respectively. The synthesis and measurement of the optimized 4-layered FGM with a total thickness of 4 mm reveal a minimum reflection loss of -25.2 dB at 6.64 GHz and a -10 dB bandwidth larger than 12.8 GHz.
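The forward model inside such GA loops is typically the metal-backed multilayer reflection-loss calculation, sketched below via recursive impedance transformation. The complex permittivities, permeabilities, and thicknesses are illustrative placeholders, not measured CI/polyurethane data.

```python
import numpy as np

c = 2.998e8
Z0 = 376.73  # free-space impedance (ohm)

def reflection_loss_db(layers, f):
    """layers: list of (eps_r, mu_r, thickness_m), first entry on the metal.
    Recursive impedance transformation for a metal-backed stack at normal
    incidence; RL = 20 log10 |(Z_in - Z0)/(Z_in + Z0)|."""
    z = 0.0 + 0.0j                       # perfect-conductor backing
    for eps_r, mu_r, d in layers:
        eta = Z0 * np.sqrt(mu_r / eps_r)
        gamma = 1j * 2 * np.pi * f / c * np.sqrt(mu_r * eps_r)
        t = np.tanh(gamma * d)
        z = eta * (z + eta * t) / (eta + z * t)
    return 20 * np.log10(np.abs((z - Z0) / (z + Z0)))

# Hypothetical 4-layer stack (highest filler loading next to the metal).
stack = [(12 - 3j, 2.0 - 1.2j, 1.0e-3),
         (9 - 2j, 1.6 - 0.8j, 0.6e-3),
         (5 - 1j, 1.2 - 0.4j, 1.6e-3),
         (3 - 0.5j, 1.05 - 0.1j, 0.8e-3)]
for f_ghz in (4, 8, 12, 16):
    print(f"{f_ghz:2d} GHz: RL = {reflection_loss_db(stack, f_ghz * 1e9):6.1f} dB")
```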
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
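A minimal sketch of the cascade pattern: several optimizers run in a fixed sequence, with a pseudorandom perturbation of the design variables between stages. The Rosenbrock test function and the Nelder-Mead/SLSQP pairing are stand-ins for the 10 optimizers evaluated in the project, not the actual aircraft and engine design problems.

```python
import numpy as np
from scipy.optimize import minimize, rosen

def cascade(f, x0, methods=("Nelder-Mead", "SLSQP"), cycles=3,
            jitter=0.05, seed=0):
    """Run optimizers one after another, jittering the design variables
    between stages so each optimizer starts from a displaced point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(cycles):
        for m in methods:
            x = minimize(f, x, method=m).x
            # Pseudorandom perturbation before the next stage.
            x = x * (1.0 + jitter * rng.uniform(-1.0, 1.0, size=x.shape))
    return minimize(f, x, method=methods[-1]).x   # final clean polish

x_star = cascade(rosen, x0=[-1.2, 1.0, -0.5, 2.0])
print(np.round(x_star, 4))   # expect ~[1, 1, 1, 1] for the Rosenbrock test
```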
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
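The shooting formulation of trim reduces to a root-finding problem F(x0) = x(T; x0) - x0 = 0, which the sketch below solves by damped Newton iteration with a finite-difference Jacobian. A forced Duffing oscillator stands in for the rotor dynamics (an illustrative choice, not a helicopter model), and the damping parameter lam is fixed rather than optimally selected as in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi   # forcing period

def flow(x0):
    """Integrate the periodically forced Duffing oscillator over one period."""
    rhs = lambda t, x: [x[1], -0.2 * x[1] - x[0] - 0.5 * x[0] ** 3 + np.cos(t)]
    return solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]

def F(x0):
    return flow(x0) - x0    # periodicity defect

x0, lam, eps = np.zeros(2), 0.8, 1e-6     # lam is the Newton damping parameter
for it in range(20):
    r = F(x0)
    if np.linalg.norm(r) < 1e-10:
        break
    # Finite-difference Jacobian of the periodicity defect.
    J = np.column_stack([(F(x0 + eps * e) - r) / eps for e in np.eye(2)])
    x0 = x0 - lam * np.linalg.solve(J, r)  # damped Newton step
print(f"periodic initial state: {x0}, residual {np.linalg.norm(F(x0)):.2e}")
```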
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line by line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the range of application of the SPSE metric under theoretical conditions and show that the aperture size affects the metric performance of the observation system. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, estimate the spot size itself. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications to help the system optimize its focusing performance.
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly within a solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation.
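A bootstrap particle filter for the nonlinear/non-Gaussian bearings-only problem can be sketched in a few lines. This minimal version handles a single always-present target with no clutter (the data-association and track-initiation layers are the paper's contribution); the geometry and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps, n_p = 1.0, 30, 2000
sigma_b, sigma_a = 0.05, 0.05            # bearing noise (rad), accel noise

# Truth: constant-velocity target; observer fixed at the origin.
truth = np.array([10.0, 5.0, -0.3, 0.2])  # [x, y, vx, vy]
F = np.eye(4); F[0, 2] = F[1, 3] = dt

# Particles drawn from a broad prior; uniform weights.
particles = truth + rng.normal(0, [3, 3, 0.5, 0.5], size=(n_p, 4))
weights = np.full(n_p, 1.0 / n_p)

for k in range(steps):
    truth = F @ truth
    z = np.arctan2(truth[1], truth[0]) + rng.normal(0, sigma_b)  # bearing
    # Propagate with random accelerations, then weight by the bearing
    # likelihood (Gaussian on the wrapped angular error).
    particles = particles @ F.T
    particles[:, 2:] += rng.normal(0, sigma_a, size=(n_p, 2))
    pred = np.arctan2(particles[:, 1], particles[:, 0])
    err = np.angle(np.exp(1j * (z - pred)))                      # wrap to [-pi, pi]
    weights *= np.exp(-0.5 * (err / sigma_b) ** 2)
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_p / 2:
        u = (rng.random() + np.arange(n_p)) / n_p
        particles = particles[np.searchsorted(np.cumsum(weights), u)]
        weights = np.full(n_p, 1.0 / n_p)

est = weights @ particles
print(f"true pos {truth[:2]}, estimated pos {est[:2]}")
```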
Hsieh, Tsung-Yu; Huang, Chi-Kai; Su, Tzu-Sen; Hong, Cheng-You; Wei, Tzu-Chien
2017-03-15
Crystal morphology and structure are important for improving the organic-inorganic lead halide perovskite semiconductor properties in optoelectronic, electronic, and photovoltaic devices. In particular, crystal growth and dissolution are the two major phenomena determining the morphology of methylammonium lead iodide perovskite in the sequential deposition method for fabricating a perovskite solar cell. In this report, the effect of immersion time in the second step (methylammonium iodide immersion) on the morphological, structural, optical, and photovoltaic evolution is extensively investigated. Supported by experimental evidence, a five-staged, time-dependent evolution of the morphology of methylammonium lead iodide perovskite crystals is established and is well connected to the photovoltaic performance. This result is beneficial for engineering the optimal methylammonium iodide immersion time and converging the solar cell performance in the sequential deposition route. Meanwhile, our results suggest that large, well-faceted methylammonium lead iodide perovskite single crystals may be incubated by a solution process, offering a low-cost route for synthesizing perovskite single crystals.
"Shotgun" versus sequential testing. Cost-effectiveness of diagnostic strategies for vaginitis.
Carr, Phyllis L; Rothberg, Michael B; Friedman, Robert H; Felsenstein, Donna; Pliskin, Joseph S
2005-09-01
Although vaginitis is a common outpatient problem, only 60% of patients can be diagnosed at the initial office visit of a primary care provider using the office procedures of pH testing, whiff tests, normal saline, and potassium hydroxide preps. To determine the most cost-effective diagnostic and treatment approach for the medical management of vaginitis. Decision and cost-effectiveness analyses. Healthy women with symptoms of vaginitis undiagnosed after an initial pelvic exam, wet mount preparations, pH, and the four criteria to diagnose bacterial vaginosis. General office practice. We evaluated 28 diagnostic strategies comprising combinations of pH testing, vaginal cultures for yeast and Trichomonas vaginalis, Gram's stain for bacterial vaginosis, and DNA probes for Neisseria gonorrhoeae and Chlamydia. Data sources for the study were confined to English language literature. The outcome measures were symptom-days and costs. The least expensive strategy was to perform yeast culture and gonorrhoeae and Chlamydia probes at the initial visit, and Gram's stain and Trichomonas culture only when the vaginal pH exceeded 4.9 (330 dollars, 7.30 symptom-days). Other strategies cost 8 dollars to 76 dollars more and increased the duration of symptoms by up to 1.3 days. In probabilistic sensitivity analysis, this strategy was always the most effective strategy and was also the least expensive 58% of the time. For patients with vaginitis symptoms undiagnosed by pelvic examination, wet mount preparations and related office tests, a comprehensive, pH-guided testing strategy at the initial office visit is less expensive and more effective than ordering tests sequentially.
Statistical and optimal learning with applications in business analytics
NASA Astrophysics Data System (ADS)
Han, Bin
Statistical learning is widely used in business analytics to discover structure or exploit patterns from historical data, and to build models that capture relationships between an outcome of interest and a set of variables. Optimal learning, on the other hand, solves the operational side of the problem by iterating between decision making and data acquisition/learning. All too often the two problems go hand in hand, exhibiting a feedback loop between statistics and optimization. We apply this statistical/optimal learning concept to a fundraising marketing campaign problem arising in many non-profit organizations. Many such organizations use direct-mail marketing to cultivate one-time donors and convert them into recurring contributors. Cultivated donors generate much more revenue than new donors, but also lapse with time, making it important to steadily draw in new cultivations. The direct-mail budget is limited, but better-designed mailings can improve success rates without increasing costs. We first apply statistical learning to analyze the effectiveness of several design approaches used in practice, based on a massive dataset covering 8.6 million direct-mail communications with donors to the American Red Cross during 2009-2011. We find evidence that mailed appeals are more effective when they emphasize disaster preparedness and training efforts over post-disaster cleanup. Including small cards that affirm donors' identity as Red Cross supporters is an effective strategy, while including gift items such as address labels is not. Finally, very recent acquisitions are more likely to respond to appeals that ask them to contribute an amount similar to their most recent donation, but this approach has an adverse effect on donors with a longer history. We show via simulation that a simple design strategy based on these insights has the potential to improve success rates from 5.4% to 8.1%. When a new scenario arises, however, new data need to be acquired to update the model and decisions; this is studied under the optimal learning framework. The goal becomes discovering a sequential information collection strategy that learns the best campaign design alternative as quickly as possible. Regression structure is used to learn about a set of unknown parameters, alternating with optimization to design new data points. Such problems have been extensively studied in the ranking and selection (R&S) community, but traditional R&S procedures incur high computational costs when the decision space grows combinatorially. We present a value of information procedure for simultaneously learning unknown regression parameters and unknown sampling noise. We then develop an approximate version of the procedure, based on semi-definite programming relaxation, that retains good performance and scales better to large problems. We also prove the asymptotic consistency of the algorithm in the parametric model, a result that has not previously been available even for the known-variance case.
Design and protocol of a randomized multiple behavior change trial: Make Better Choices 2 (MBC2).
Pellegrini, Christine A; Steglitz, Jeremy; Johnston, Winter; Warnick, Jennifer; Adams, Tiara; McFadden, H G; Siddique, Juned; Hedeker, Donald; Spring, Bonnie
2015-03-01
Suboptimal diet and inactive lifestyle are among the most prevalent preventable causes of premature death. Interventions that target multiple behaviors are potentially efficient; however, the optimal way to initiate and maintain multiple health behavior changes is unknown. The Make Better Choices 2 (MBC2) trial aims to examine whether sustained healthful diet and activity change are best achieved by targeting diet and activity behaviors simultaneously or sequentially. Study design: approximately 250 inactive adults with poor quality diet will be randomized to 3 conditions examining the best way to prescribe healthy diet and activity change. The 3 intervention conditions prescribe: 1) an increase in fruit and vegetable consumption (F/V+), decrease in sedentary leisure screen time (Sed-), and increase in physical activity (PA+) simultaneously (Simultaneous); 2) F/V+ and Sed- first, and then sequentially add PA+ (Sequential); or 3) a Stress Management Control that addresses stress, relaxation, and sleep. All participants will receive a smartphone application to self-monitor behaviors and regular coaching calls to help facilitate behavior change during the 9-month intervention. Healthy lifestyle change in fruit/vegetable and saturated fat intakes, sedentary leisure screen time, and physical activity will be assessed at 3, 6, and 9 months. MBC2 is a randomized m-Health intervention examining methods to maximize initiation and maintenance of multiple healthful behavior changes. Results from this trial will provide insight about an optimal technology-supported approach to promote improvement in diet and physical activity. Copyright © 2015 Elsevier Inc. All rights reserved.
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for a manned environmental control system (ECS) to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
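At the core of NSGA-II style optimization is non-dominated sorting. The sketch below extracts the Pareto-optimal set from candidate emergency strategies scored on the two objectives (lifetime to be maximized, power consumption to be minimized); the candidate values are illustrative placeholders.

```python
import numpy as np

def pareto_front(lifetime, power):
    """Indices of strategies not dominated by any other strategy:
    dominated means another is at least as good in both objectives
    and strictly better in one."""
    n = len(lifetime)
    keep = []
    for i in range(n):
        dominated = any(
            lifetime[j] >= lifetime[i] and power[j] <= power[i]
            and (lifetime[j] > lifetime[i] or power[j] < power[i])
            for j in range(n))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidates: lifetime (hours, maximize), power (W, minimize).
lifetime = np.array([40, 55, 55, 70, 80, 80])
power = np.array([900, 950, 940, 1050, 1200, 1190])
print("non-dominated strategies:", pareto_front(lifetime, power))
```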
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, S; Lu, WG; Chen, YP
2015-03-11
A unique strategy, sequential linker installation (SLI), has been developed to construct multivariate MOFs with functional groups precisely positioned. PCN-700, a Zr-MOF with eight-connected Zr6O4(OH)8(H2O)4 clusters, has been judiciously designed; the Zr6 clusters in this MOF are arranged in such a fashion that, by replacement of terminal OH-/H2O ligands, subsequent insertion of linear dicarboxylate linkers is achieved. We demonstrate that linkers with distinct lengths and functionalities can be sequentially installed into PCN-700. Single-crystal to single-crystal transformation is realized so that the positions of the subsequently installed linkers are pinpointed via single-crystal X-ray diffraction analyses. This methodology provides a powerful tool to construct multivariate MOFs with precisely positioned functionalities in the desired proximity, which would otherwise be difficult to achieve.
Peng, Shu; Pan, Yu‐Chen; Wang, Yaling; Xu, Zhe; Chen, Chao
2017-01-01
The introduction of controlled self-assembly into living organisms opens up desired biomedical applications in wide areas including bioimaging/assays, drug delivery, and tissue engineering. Besides the enzyme-activated examples reported before, controlled self-assembly under integrated stimuli, especially in the form of sequential input, is unprecedented and ultimately challenging. This study reports a programmable self-assembling strategy in living cells under sequentially integrated control of both endogenous and exogenous stimuli. Fluorescent polymerized vesicles are constructed by using cholinesterase conversion followed by photopolymerization and thermochromism. Furthermore, as a proof-of-principle application, the cell apoptosis involved in the overexpression of cholinesterase is monitored by virtue of the generated fluorescence, showing potential in screening apoptosis-inducing drugs. The approach exhibits multiple advantages for bioimaging in living cells, including specificity to cholinesterase, red emission, wash-free operation, and a high signal-to-noise ratio. PMID:29201625
Peng, Shu; Pan, Yu-Chen; Wang, Yaling; Xu, Zhe; Chen, Chao; Ding, Dan; Wang, Yongjian; Guo, Dong-Sheng
2017-11-01
The introduction of controlled self-assembly into living organisms opens up desired biomedical applications in wide areas including bioimaging/assays, drug delivery, and tissue engineering. Besides the enzyme-activated examples reported before, controlled self-assembly under integrated stimuli, especially in the form of sequential input, is unprecedented and ultimately challenging. This study reports a programmable self-assembling strategy in living cells under sequentially integrated control of both endogenous and exogenous stimuli. Fluorescent polymerized vesicles are constructed by using cholinesterase conversion followed by photopolymerization and thermochromism. Furthermore, as a proof-of-principle application, the cell apoptosis involved in the overexpression of cholinesterase is monitored by virtue of the generated fluorescence, showing potential in screening apoptosis-inducing drugs. The approach exhibits multiple advantages for bioimaging in living cells, including specificity to cholinesterase, red emission, wash-free operation, and a high signal-to-noise ratio.
Mirza, Bilal; Lin, Zhiping
2016-08-01
In this paper, a meta-cognitive online sequential extreme learning machine (MOS-ELM) is proposed for class imbalance and concept drift learning. In MOS-ELM, meta-cognition is used to self-regulate the learning by selecting suitable learning strategies for class imbalance and concept drift problems. MOS-ELM is the first sequential learning method to alleviate the imbalance problem for both binary-class and multi-class data streams with concept drift. In MOS-ELM, a new adaptive window approach is proposed for concept drift learning. A single output update equation is also proposed which unifies various application-specific OS-ELM methods. The performance of MOS-ELM is evaluated under different conditions and compared with methods each specific to some of the conditions. On most of the datasets compared, MOS-ELM outperforms the competing methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
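MOS-ELM builds on the standard OS-ELM recursion, sketched below: after an initial batch least-squares solve, each arriving chunk updates the output weights by recursive least squares without revisiting old data. The hidden-layer size, sigmoid feature map, and toy regression target are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, n_hidden = 3, 20
W = rng.normal(size=(d_in, n_hidden))     # fixed random input weights
b = rng.normal(size=n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer

# Initial batch: beta from least squares, P = (H0^T H0)^{-1}.
X0 = rng.normal(size=(50, d_in))
T0 = np.sin(X0.sum(axis=1, keepdims=True))      # toy regression target
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))  # small ridge for stability
beta = P @ H0.T @ T0

def os_elm_update(P, beta, X, T):
    """One sequential OS-ELM update with chunk (X, T):
    P <- P - P H^T (I + H P H^T)^{-1} H P,  beta <- beta + P H^T (T - H beta)."""
    H = hidden(X)
    K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (T - H @ beta)
    return P, beta

for _ in range(100):                       # stream of data chunks
    X = rng.normal(size=(10, d_in))
    P, beta = os_elm_update(P, beta, X, np.sin(X.sum(axis=1, keepdims=True)))

Xt = rng.normal(size=(200, d_in))
err = np.mean((hidden(Xt) @ beta - np.sin(Xt.sum(axis=1, keepdims=True))) ** 2)
print(f"test MSE: {err:.4f}")
```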
Guided particle swarm optimization method to solve general nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr
2018-04-01
The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique for hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization treats the PSO algorithm and the NM algorithm as one heuristic, rather than combining them in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested on 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
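One way to realize the pattern described above is to polish the incumbent global best with a short Nelder-Mead run inside every PSO iteration, rather than running the two methods back to back. The sketch below does that on the Rosenbrock function; the swarm coefficients are conventional textbook values, not the article's settings.

```python
import numpy as np
from scipy.optimize import minimize, rosen

def pso_nm(f, dim, n_particles=30, iters=60, seed=0):
    """PSO with an embedded Nelder-Mead polish of the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        # NM polish inside the PSO loop (the hybrid step).
        res = minimize(f, pbest[np.argmin(pbest_f)], method="Nelder-Mead",
                       options={"maxiter": 20 * dim})
        g = res.x if res.fun < np.min(pbest_f) else pbest[np.argmin(pbest_f)]
    return g, f(g)

x_best, f_best = pso_nm(rosen, dim=4)
print(np.round(x_best, 3), f"f = {f_best:.2e}")
```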
Eda, Yasuyuki; Takizawa, Mari; Murakami, Toshio; Maeda, Hiroaki; Kimachi, Kazuhiko; Yonemura, Hiroshi; Koyanagi, Satoshi; Shiosaki, Kouichi; Higuchi, Hirofumi; Makizumi, Keiichi; Nakashima, Toshihiro; Osatomi, Kiyoshi; Tokiyoshi, Sachio; Matsushita, Shuzo; Yamamoto, Naoki; Honda, Mitsuo
2006-06-01
An antibody response capable of neutralizing not only homologous but also heterologous forms of the CXCR4-tropic human immunodeficiency virus type 1 (HIV-1) MNp and CCR5-tropic primary isolate HIV-1 JR-CSF was achieved through sequential immunization with a combination of synthetic peptides representing HIV-1 Env V3 sequences from field and laboratory HIV-1 clade B isolates. In contrast, repeated immunization with a single V3 peptide generated antibodies that neutralized only type-specific laboratory-adapted homologous viruses. To determine whether the cross-neutralization response could be attributed to a cross-reactive antibody in the immunized animals, we isolated a monoclonal antibody, C25, which neutralized the heterologous primary viruses of HIV-1 clade B. Furthermore, we generated a humanized monoclonal antibody, KD-247, by transferring the genes of the complementary determining region of C25 into genes of the human V region of the antibody. KD-247 bound with high affinity to the "PGR" motif within the HIV-1 Env V3 tip region, and, among the established reference antibodies, it most effectively neutralized primary HIV-1 field isolates possessing the matching neutralization sequence motif, suggesting its promise for clinical applications involving passive immunizations. These results demonstrate that sequential immunization with B-cell epitope peptides may contribute to a humoral immune-based HIV vaccine strategy. Indeed, they help lay the groundwork for the development of HIV-1 vaccine strategies that use sequential immunization with biologically relevant peptides to overcome difficulties associated with otherwise poorly immunogenic epitopes.
Constrained simultaneous multi-state reconfigurable wing structure configuration optimization
NASA Astrophysics Data System (ADS)
Snyder, Matthew
A reconfigurable aircraft is capable of in-flight shape change to increase mission performance or provide multi-mission capability. Reconfigurability has always been a consideration in aircraft design, from the Wright Flyer to the F-14 and, most recently, the Lockheed-Martin folding wing concept. The Wright Flyer used wing-warping for roll control, the F-14 had a variable-sweep wing to improve supersonic flight capabilities, and the Lockheed-Martin folding wing demonstrated radical in-flight shape change. This dissertation will examine two questions that aircraft reconfigurability raises, especially as reconfiguration increases in complexity. First, is there an efficient method to develop a lightweight structure which supports all the loads generated by each configuration? Second, can this method include the capability to propose a sub-structure topology that weighs less than other considered designs? The first question requires a method that will design and optimize multiple configurations of a reconfigurable aerostructure. Three options exist; this dissertation will show that one is better than the others. Simultaneous optimization considers all configurations and their respective load cases and constraints at the same time. Another method is sequential optimization, which considers each configuration of the vehicle one after the other, with the optimum design variable values from the first configuration becoming the lower bounds for subsequent configurations. This process repeats for each considered configuration and the lower bounds update as necessary. The third approach is aggregate combination: this method keeps the thickness or area of each member for the most critical configuration, the configuration that requires the largest cross-section. This research will show that simultaneous optimization produces a lower weight and a different topology for the considered structures when compared to the sequential and aggregate techniques. To answer the second question, the developed optimization algorithm combines simultaneous optimization with a new method for determining the optimum location of the structural members of the sub-structure. The method proposed here considers an over-populated structural model, one in which there are initially more members than necessary. Using a unique iterative process, the optimization algorithm removes members from the design if they do not carry enough load to justify their presence. The initial set of members includes ribs, spars and a series of cross-members that diagonally connect the ribs and spars. The final result is a different structure, which is lower weight than one developed from sequential optimization or aggregate combination, and suggests the primary load paths. Chapter 1 contains background information on reconfigurable aircraft and a description of the new reconfigurable air vehicle being considered by the Air Vehicles Directorate of the Air Force Research Laboratory. This vehicle serves as a platform to test the proposed optimization process. Chapters 2 and 3 overview the optimization method and Chapter 4 provides some background analysis which is unique to this particular reconfigurable air vehicle. Chapter 5 contains the results of the optimizations and demonstrates how changing constraints or initial configuration impacts the final weight and topology of the wing structure.
The final chapter contains conclusions and comments on some future work which would further enhance the effectiveness of the simultaneous reconfigurable structural topology optimization process developed and used in this dissertation.
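The contrast between the simultaneous and sequential strategies can be made concrete with a toy sizing problem; this sketch is not the dissertation's wing model: the member lengths, per-configuration loads, and a compliance-style coupling constraint are invented to show how sequential optimization carries earlier optima forward as lower bounds.

    import numpy as np
    from scipy.optimize import minimize

    lengths = np.array([1.0, 1.4, 1.0])    # member lengths (assumed)
    loads = [np.array([2.0, 1.0, 0.5]),    # member forces, configuration A (assumed)
             np.array([0.5, 1.5, 2.5])]    # member forces, configuration B (assumed)
    delta_max = 1.0                        # allowable "compliance" per configuration

    weight = lambda a: float(lengths @ a)                 # objective: structural weight
    disp = lambda a, F: float(np.sum(F * lengths / a))    # coupled compliance proxy

    def solve(configs, a0, lower):
        cons = [{"type": "ineq", "fun": lambda a, F=F: delta_max - disp(a, F)}
                for F in configs]
        return minimize(weight, a0, method="SLSQP", constraints=cons,
                        bounds=[(lb, None) for lb in lower])

    # Simultaneous: one optimization sees every configuration's constraints.
    sim = solve(loads, a0=np.full(3, 5.0), lower=np.full(3, 1e-2))

    # Sequential: one configuration at a time; each optimum becomes the next lower bound.
    lower = np.full(3, 1e-2)
    for F in loads:
        lower = np.maximum(lower, solve([F], np.maximum(lower, 5.0), lower).x)

    print(f"simultaneous weight: {weight(sim.x):.3f}   sequential weight: {weight(lower):.3f}")

Because the simultaneous problem searches the full design space rather than a space restricted by earlier lower bounds, its weight is never worse than the sequential result, which mirrors the dissertation's claim.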
An efficiency study of the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw
1995-01-01
The efficiency of the Simultaneous Analysis and Design (SAND) approach in the minimum-weight optimization of structural systems subject to strength and displacement constraints, as well as size side constraints, is investigated. SAND allows an optimization to take place in one single operation, as opposed to the more traditional and sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization, retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present a possible cure for any inherent deficiencies.
NASA Astrophysics Data System (ADS)
Li, Shuang; Zhu, Yongsheng; Wang, Yukai
2014-02-01
Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and a genetic algorithm (GA) are adopted to perform the preliminary design, which provides a reasonable initial guess for the subsequent accurate optimization. Secondly, the Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.
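The transcribe-then-solve step can be illustrated on a far simpler system than an asteroid mission: a 1-D double-integrator rendezvous discretized by trapezoidal collocation and handed to SciPy's SLSQP solver (an SQP method). The node count, transfer time, and boundary conditions are assumptions for the sketch, and the GA-based preliminary design step is replaced by a simple linear initial guess.

    import numpy as np
    from scipy.optimize import minimize

    N, T = 20, 10.0                       # collocation intervals and transfer time
    dt = T / N
    n = N + 1                             # nodes; z packs [x, v, u], each length n

    def unpack(z):
        return z[:n], z[n:2 * n], z[2 * n:]

    def objective(z):
        _, _, u = unpack(z)
        return float(dt * np.sum(u ** 2))            # control-effort fuel proxy

    def constraints(z):
        x, v, u = unpack(z)
        dx = x[1:] - x[:-1] - 0.5 * dt * (v[1:] + v[:-1])   # trapezoidal defects
        dv = v[1:] - v[:-1] - 0.5 * dt * (u[1:] + u[:-1])
        bc = [x[0], v[0], x[-1] - 1.0, v[-1]]               # rendezvous at x=1, v=0
        return np.concatenate([dx, dv, bc])

    z0 = np.concatenate([np.linspace(0, 1, n), np.full(n, 0.1), np.zeros(n)])
    res = minimize(objective, z0, method="SLSQP",
                   constraints={"type": "eq", "fun": constraints},
                   options={"maxiter": 300})
    print(res.success, objective(res.x))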
Optical design of system for a lightship
NASA Astrophysics Data System (ADS)
Chirkov, M. A.; Tsyganok, E. A.
2017-06-01
This article presents the results of the optical design of an illuminating optical system for a lightship using a freeform surface. It describes an algorithm for the optical design of a side-emitting lens for a point source using the Freeform Z function in Zemax non-sequential mode, the optimization of the calculation results, and the testing of the optical system with a real diode.
Neumann, Patricio; González, Zenón; Vidal, Gladys
2017-06-01
The influence of sequential ultrasound and low-temperature (55°C) thermal pretreatment on sewage sludge solubilization, enzyme activity and anaerobic digestion was assessed. The pretreatment led to significant increases of 427-1030% and 230-674% in the soluble concentrations of carbohydrates and proteins, respectively, and 1.6-4.3 times higher enzymatic activities in the soluble phase of the sludge. Optimal conditions for chemical oxygen demand solubilization were determined at 59.3 kg/L total solids (TS) concentration, 30,500 kJ/kg TS specific energy and 13 h thermal treatment time using response surface methodology. The methane yield after pretreatment increased up to 50% compared with the raw sewage sludge, whereas the maximum methane production rate was 1.3-1.8 times higher. An energy assessment showed that the increased methane yield compensated for energy consumption only under conditions where 500 kJ/kg TS specific energy was used for ultrasound, with up to 24% higher electricity recovery. Copyright © 2017 Elsevier Ltd. All rights reserved.
Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R.; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo
2015-01-01
Background: We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. Patients and methods: This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycle of chemotherapy (sequential schedule B). The primary endpoint was the pathological complete tumor regression (TRG1) rate. Results: Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 of 16 patients) was statistically inconsistent with the hypothesized activity (30%) to be tested. Conversely, the endpoint was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥3 toxicity with both schedules, but it was less pronounced with the sequential than the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up the probability of PFS and OS was 80% (95% CI 66%-89%) and 85% (95% CI 69%-93%), respectively, for the sequential schedule. Conclusions: These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC. PMID:26320185
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial improvement in the complexity of optimization problems that can be efficiently handled.
Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian
2017-01-01
The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previously proposed nodule segmentation algorithms cannot entirely segment cavitary nodules, and their segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodules positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, which is optimized by the strategy of only clustering the lung nodules and adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
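A generic superpixel-then-cluster pipeline of this kind can be sketched with off-the-shelf tools; here SLIC stands in for the paper's HMSLIC oversegmentation, a stock grayscale photo stands in for CT slices, and the DBSCAN eps/min_samples values are illustrative rather than the paper's adaptively fitted threshold (assumes scikit-image >= 0.19 for the channel_axis argument).

    import numpy as np
    from skimage import data, segmentation
    from sklearn.cluster import DBSCAN

    img = data.camera().astype(float) / 255.0
    labels = segmentation.slic(img, n_segments=400, compactness=0.1,
                               channel_axis=None, start_label=0)

    # One feature vector per superpixel: normalized centroid and mean intensity.
    feats = []
    for s in range(labels.max() + 1):
        rr, cc = np.nonzero(labels == s)
        feats.append([rr.mean() / img.shape[0], cc.mean() / img.shape[1],
                      img[rr, cc].mean()])
    feats = np.array(feats)

    # Cluster superpixels into connected regions of similar appearance.
    clusters = DBSCAN(eps=0.08, min_samples=3).fit_predict(feats)
    print("superpixels:", len(feats), "clusters:", clusters.max() + 1)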
Structure-activity studies and therapeutic potential of host defense peptides of human thrombin.
Kasetty, Gopinath; Papareddy, Praveen; Kalle, Martina; Rydengård, Victoria; Mörgelin, Matthias; Albiger, Barbara; Malmsten, Martin; Schmidtchen, Artur
2011-06-01
Peptides of the C-terminal region of human thrombin are released upon proteolysis and identified in human wounds. In this study, we wanted to investigate minimal determinants, as well as structural features, governing the antimicrobial and immunomodulating activity of this peptide region. Sequential amino acid deletions of the peptide GKYGFYTHVFRLKKWIQKVIDQFGE (GKY25), as well as substitutions at strategic and structurally relevant positions, were followed by analyses of antimicrobial activity against the Gram-negative bacteria Escherichia coli and Pseudomonas aeruginosa, the Gram-positive bacterium Staphylococcus aureus, and the fungus Candida albicans. Furthermore, peptide effects on lipopolysaccharide (LPS)-, lipoteichoic acid-, or zymosan-induced macrophage activation were studied. The thrombin-derived peptides displayed length- and sequence-dependent antimicrobial as well as immunomodulating effects. A peptide length of at least 20 amino acids was required for effective anti-inflammatory effects in macrophage models, as well as optimal antimicrobial activity as judged by MIC assays. However, shorter (>12 amino acids) variants also displayed significant antimicrobial effects. A central K14 residue was important for optimal antimicrobial activity. Finally, one peptide variant, GKYGFYTHVFRLKKWIQKVI (GKY20) exhibiting improved selectivity, i.e., low toxicity and a preserved antimicrobial as well as anti-inflammatory effect, showed efficiency in mouse models of LPS shock and P. aeruginosa sepsis. The work defines structure-activity relationships of C-terminal host defense peptides of thrombin and delineates a strategy for selecting peptide epitopes of therapeutic interest.
Efficiency of parallel direct optimization
NASA Technical Reports Server (NTRS)
Janies, D. A.; Wheeler, W. C.
2001-01-01
Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
Computer-Based Career Interventions.
ERIC Educational Resources Information Center
Mau, Wei-Cheng
The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…
F-16 Training System Media Report
1981-03-01
practice items. 4.1.3 Use/Procedure This strategy requires the learner to apply a set of sequential steps designed to accomplish a specific task which needs...information. 6. Feedback: Provides the student with the correct answers for the practice items. 4.1.5 Use/Rule This strategy requires the learner to...provide the background and rationale for selecting and/or modifying instructional media to best meet the needs of the F-16 training program. The
Raji Reddy, Chada; Kumaraswamy, Paridala; Singarapu, Kiran K
2014-09-05
An efficient approach for the construction of novel bicyclic fused cyclopentenones starting from Morita-Baylis-Hillman (MBH) acetates of acetylenic aldehydes with flexible scaffold diversity has been achieved using a two-step reaction sequence involving allylic substitution and the Pauson-Khand reaction. This strategy provided facile access to various bicyclic cyclopentenones fused with either a carbocyclic or a heterocyclic ring system in good yield.
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick; Wendt, Fabian; Musial, Walter
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
Galletly, Cherrie A; Carnell, Benjamin L; Clarke, Patrick; Gill, Shane
2017-03-01
A great deal of research has established the efficacy of repetitive transcranial magnetic stimulation (rTMS) in the treatment of depression. However, questions remain about the optimal method to deliver treatment. One area requiring consideration is the difference in efficacy between bilateral and unilateral treatment protocols. This study aimed to compare the effectiveness of sequential bilateral rTMS and right unilateral rTMS. A total of 135 patients participated in the study, receiving either bilateral rTMS (N = 57) or right unilateral rTMS (N = 78). Treatment response was assessed using the Hamilton depression rating scale. Sequential bilateral rTMS had a higher response rate than right unilateral (43.9% vs 30.8%), but this difference was not statistically significant. This was also the case for remission rates (33.3% vs 21.8%, respectively). Controlling for pretreatment severity of depression, the results did not indicate a significant difference between the protocols with regard to posttreatment Hamilton depression rating scale scores. The current study found no statistically significant differences in response and remission rates between sequential bilateral rTMS and right unilateral rTMS. Given the shorter treatment time and the greater safety and tolerability of right unilateral rTMS, this may be a better choice than bilateral treatment in clinical settings.
Quarello, Paola; Tandoi, Francesco; Carraro, Francesca; Vassallo, Elena; Pinon, Michele; Romagnoli, Renato; David, Ezio; Dell Olio, Dominic; Salizzoni, Mauro; Fagioli, Franca; Calvo, Pier Luigi
2018-05-01
Hematopoietic stem cell transplantation (HSCT) is curative in patients with primary immunodeficiencies. However, pre-HSCT conditioning entails unacceptably high risks if the liver is compromised. The presence of a recurrent opportunistic infection affecting the biliary tree and determining liver cirrhosis with portal hypertension posed particular decisional difficulties in a 7-year-old child with X-linked CD40-ligand deficiency. We aim to add to the scanty experience available on such rare cases, as successful management with sequential liver transplantation (LT) and HSCT has been reported in detail in only 1 young adult to date. A closely sequential strategy, with a surgical complication-free LT, followed by reduced-intensity conditioning, allowed HSCT to be performed only one month after LT, preventing Cryptosporidium parvum recolonization of the liver graft. Combined sequential LT and HSCT resolved the cirrhotic evolution and corrected the immunodeficiency so that the infection responsible for the progressive sclerosing cholangitis did not recur. Hopefully, this report of the successful resolution of a potentially fatal combination of immunodeficiency and chronic opportunistic infection with end-stage organ damage in a child will encourage others to adapt a sequential transplant approach to this highly complex pathology. However, caution is to be exercised to carefully balance the risks intrinsic to transplant surgery and immunosuppression in primary immunodeficiencies.
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as for consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existing probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
Christenson, Stuart D; Chareonthaitawee, Panithaya; Burnes, John E; Hill, Michael R S; Kemp, Brad J; Khandheria, Bijoy K; Hayes, David L; Gibbons, Raymond J
2008-02-01
Cardiac resynchronization therapy (CRT) can improve left ventricular (LV) hemodynamics and function. Recent data suggest the energy cost of such improvement is favorable. The effects of sequential CRT on myocardial oxidative metabolism (MVO2) and efficiency have not been previously assessed. Eight patients with NYHA class III heart failure were studied 196 ± 180 days after CRT implant. Dynamic [11C]acetate positron emission tomography (PET) and echocardiography were performed after 1 hour of: 1) AAI pacing, 2) simultaneous CRT, and 3) sequential CRT. MVO2 was calculated using the monoexponential clearance rate of [11C]acetate (k_mono). Myocardial efficiency was expressed in terms of the work metabolic index (WMI). P values represent overall significance from repeated-measures analysis. Global LV and right ventricular (RV) MVO2 were not significantly different between pacing modes, but the septal/lateral MVO2 ratio differed significantly with the change in pacing mode (AAI pacing = 0.696 ± 0.094 min⁻¹, simultaneous CRT = 0.975 ± 0.143 min⁻¹, and sequential CRT = 0.938 ± 0.189 min⁻¹; overall P = 0.001). Stroke volume index (SVI) (AAI pacing = 26.7 ± 10.4 mL/m², simultaneous CRT = 30.6 ± 11.2 mL/m², sequential CRT = 33.5 ± 12.2 mL/m²; overall P < 0.001) and WMI (AAI pacing = 3.29 ± 1.34 mmHg·mL/m²·10⁶, simultaneous CRT = 4.29 ± 1.72 mmHg·mL/m²·10⁶, sequential CRT = 4.79 ± 1.92 mmHg·mL/m²·10⁶; overall P = 0.002) also differed between pacing modes. Compared with simultaneous CRT, additional changes in septal/lateral MVO2, SVI, and WMI with sequential CRT were not statistically significant on post hoc analysis. In this small selected population, CRT increases LV SVI without increasing MVO2, resulting in improved myocardial efficiency. Additional improvements in LV work, oxidative metabolism, and efficiency from simultaneous to sequential CRT were not significant.
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from the sequence database. Its initial motivation was to discover the laws of customer purchasing in a time section by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to the business database, having extended to new data sources such as the Web and advanced science fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics as a whole. Taking the traits mentioned above into account and combining them with parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequent concept and search-space partition theory, and the second task is to structure frequent sequences using the depth-first search method at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces the access time and improves the mining efficiency. Based on a random data generation procedure and the different information structures designed, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
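The pattern-reduction, divide-and-conquer idea can be sketched with a minimal prefix-projection miner: the search space is partitioned by frequent prefix item, and each projected sub-database could be handed to a separate processor (shown sequentially here for clarity; this is a generic simplification, not the SPP implementation).

    from collections import Counter

    def frequent_items(db, minsup):
        # Document frequency of each item across the sequence database.
        c = Counter()
        for seq in db:
            c.update(set(seq))
        return {i for i, n in c.items() if n >= minsup}

    def project(db, item):
        # Suffixes after the first occurrence of `item`: the projected sub-database.
        return [seq[seq.index(item) + 1:] for seq in db if item in seq]

    def mine(db, minsup, prefix=()):
        # Each frequent item spawns an independent sub-problem (parallelizable).
        patterns = []
        for item in sorted(frequent_items(db, minsup)):
            pat = prefix + (item,)
            patterns.append(pat)
            patterns += mine(project(db, item), minsup, pat)
        return patterns

    db = [list("abcd"), list("acbd"), list("abd"), list("bcd")]
    print(mine(db, minsup=3))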
Leontaridou, Maria; Gabbert, Silke; Van Ierland, Ekko C; Worth, Andrew P; Landsiedel, Robert
2016-07-01
This paper offers a Bayesian Value-of-Information (VOI) analysis for guiding the development of non-animal testing strategies, balancing information gains from testing with the expected social gains and costs from the adoption of regulatory decisions. Testing is assumed to have value if, and only if, the information revealed from testing triggers a welfare-improving decision on the use (or non-use) of a substance. As an illustration, our VOI model is applied to a set of five individual non-animal prediction methods used for skin sensitisation hazard assessment, seven battery combinations of these methods, and 236 sequential 2-test and 3-test strategies. Their expected values are quantified and compared to the expected value of the local lymph node assay (LLNA) as the animal method. We find that battery and sequential combinations of non-animal prediction methods reveal a significantly higher expected value than the LLNA. This holds for the entire range of prior beliefs. Furthermore, our results illustrate that the testing strategy with the highest expected value does not necessarily have to follow the order of key events in the sensitisation adverse outcome pathway (AOP). 2016 FRAME.
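The core preposterior calculation behind such a VOI analysis fits in a few lines; the prior, test sensitivity/specificity, and social payoffs below are made-up numbers, not the paper's, and a single test stands in for batteries and sequential strategies.

    # Testing has value only if its outcome can flip the use/non-use decision.
    p_haz = 0.3                       # prior probability the substance is hazardous
    sens, spec = 0.85, 0.80           # assumed test sensitivity / specificity
    # payoff[decision][truth]: social gain of allow/ban given hazardous or not
    payoff = {"allow": {True: -100.0, False: 50.0},
              "ban":   {True: 0.0,    False: -20.0}}

    def expected(decision, p):
        return p * payoff[decision][True] + (1 - p) * payoff[decision][False]

    best_no_test = max(expected(d, p_haz) for d in payoff)

    p_pos = sens * p_haz + (1 - spec) * (1 - p_haz)      # P(test positive)
    post_pos = sens * p_haz / p_pos                      # posterior if positive
    post_neg = (1 - sens) * p_haz / (1 - p_pos)          # posterior if negative
    ev_test = (p_pos * max(expected(d, post_pos) for d in payoff)
               + (1 - p_pos) * max(expected(d, post_neg) for d in payoff))

    print("expected value of testing:", ev_test - best_no_test)  # test only if > cost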
Impact of a Sequential Intervention on Albumin Utilization in Critical Care.
Lyu, Peter F; Hockenberry, Jason M; Gaydos, Laura M; Howard, David H; Buchman, Timothy G; Murphy, David J
2016-07-01
Literature generally finds no advantages in mortality risk for albumin over cheaper alternatives in many settings. Few studies have combined financial and nonfinancial strategies to reduce albumin overuse. We evaluated the effect of a sequential multifaceted intervention on decreasing albumin use in the ICU and explored the effects of the different strategies. Prospective pre-post cohort study. Eight ICUs at two hospitals in an academic healthcare system. Adult patients admitted to study ICUs from September 2011 to August 2014 (n = 22,004). Over 2 years, providers in study ICUs participated in an intervention to reduce albumin use involving monthly feedback and explicit financial incentives in the first year and internal guidelines and order process changes in the second year. Outcomes measured were albumin orders per ICU admission, direct albumin costs, and mortality. Mean (SD) utilization decreased 37% from 2.7 orders (6.8) per admission during the baseline to 1.7 orders (4.6) during the intervention (p < 0.001). Regression analysis revealed that the intervention was independently associated with 0.9 fewer orders per admission, a 42% relative decrease. This adjusted effect consisted of an 18% reduction in the probability of using any albumin (p < 0.001) and a 29% reduction in the number of orders per admission among patients receiving any (p < 0.001). Secondary analysis revealed that probability reductions were concurrent with internal guidelines and order process modification, while reductions in quantity occurred largely during the financial incentives and feedback period. Estimated cost savings totaled $2.5M during the 2-year intervention. There was no significant difference in ICU or hospital mortality between baseline and intervention. A sequential intervention achieved significant reductions in ICU albumin use and cost savings without changes in patient outcomes, supporting the combination of financial and nonfinancial strategies to align providers with evidence-based practices.
Mertz, Joseph; Tan, Haiyan; Pagala, Vishwajeeth; Bai, Bing; Chen, Ping-Chung; Li, Yuxin; Cho, Ji-Hoon; Shaw, Timothy; Wang, Xusheng; Peng, Junmin
2015-01-01
The mind bomb 1 (Mib1) ubiquitin ligase is essential for controlling metazoan development by Notch signaling and possibly the Wnt pathway. It is also expressed in postmitotic neurons and regulates neuronal morphogenesis and synaptic activity by mechanisms that are largely unknown. We sought to comprehensively characterize the Mib1 interactome and study its potential function in neuron development utilizing a novel sequential elution strategy for affinity purification, in which Mib1 binding proteins were eluted under different stringency and then quantified by the isobaric labeling method. The strategy identified the Mib1 interactome with both deep coverage and the ability to distinguish high-affinity partners from low-affinity partners. A total of 817 proteins were identified during the Mib1 affinity purification, including 56 high-affinity partners and 335 low-affinity partners, whereas the remaining 426 proteins are likely copurified contaminants or extremely weak binding proteins. The analysis detected all previously known Mib1-interacting proteins and revealed a large number of novel components involved in Notch and Wnt pathways, endocytosis and vesicle transport, the ubiquitin-proteasome system, cellular morphogenesis, and synaptic activities. Immunofluorescence studies further showed colocalization of Mib1 with five selected proteins: the Usp9x (FAM) deubiquitinating enzyme, alpha-, beta-, and delta-catenins, and CDKL5. Mutations of CDKL5 are associated with early infantile epileptic encephalopathy-2 (EIEE2), a severe form of mental retardation. We found that the expression of Mib1 down-regulated the protein level of CDKL5 by ubiquitination, and antagonized CDKL5 function during the formation of dendritic spines. Thus, the sequential elution strategy enables biochemical characterization of protein interactomes; and Mib1 analysis provides a comprehensive interactome for investigating its role in signaling networks and neuronal development. PMID:25931508
Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.
Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C; López, Vladimir; Mery, Domingo; Cosmelli, Diego
2017-01-01
Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
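The model-comparison step can be illustrated with a hand-rolled HMM forward pass: each stereotyped strategy is an HMM over observed choices, and the best-fitting "expert" is the one with the highest forward log-likelihood. The two strategy matrices below are illustrative assumptions, not the paper's fitted models.

    import numpy as np

    def forward_loglik(obs, pi, A, B):
        # Scaled forward algorithm: log P(obs) under an HMM with start
        # distribution pi, transition matrix A, and emission matrix B.
        alpha = pi * B[:, obs[0]]
        ll = np.log(alpha.sum()); alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            ll += np.log(alpha.sum()); alpha = alpha / alpha.sum()
        return ll

    # Two stereotyped 2-state strategies over binary left/right choices.
    experts = {
        "perseverative": (np.array([0.5, 0.5]),
                          np.array([[0.9, 0.1], [0.1, 0.9]]),
                          np.array([[0.95, 0.05], [0.05, 0.95]])),
        "exploratory":   (np.array([0.5, 0.5]),
                          np.array([[0.5, 0.5], [0.5, 0.5]]),
                          np.array([[0.6, 0.4], [0.4, 0.6]])),
    }
    choices = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0])   # one observed run
    for name, (pi, A, B) in experts.items():
        print(name, round(forward_loglik(choices, pi, A, B), 3))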
Optimal nonlinear filtering using the finite-volume method
NASA Astrophysics Data System (ADS)
Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.
2018-01-01
Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
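A minimal sketch of the idea in one dimension: a conservative upwind finite-volume discretization of the continuity equation ∂ρ/∂t + ∂(fρ)/∂x = 0 with a CFL-limited time step, so that total probability is conserved and the density stays nonnegative. The drift and grid are toy choices, not the paper's pendulum model.

    import numpy as np

    f = lambda x: -np.sin(x)                  # toy drift for dx/dt = f(x)
    L, M = 2 * np.pi, 200                     # domain length and number of cells
    x = np.linspace(-L / 2, L / 2, M, endpoint=False)
    dx = L / M
    rho = np.exp(-((x - 1.0) ** 2) / 0.1)
    rho /= rho.sum() * dx                     # normalized initial density

    v_face = f(x + dx / 2)                    # velocity at each cell's right face
    dt = 0.5 * dx / np.abs(v_face).max()      # CFL-type step keeps rho nonnegative

    def step(rho):
        # Upwind flux through right faces; periodic boundary via np.roll.
        flux = np.where(v_face > 0, rho, np.roll(rho, -1)) * v_face
        return rho - dt / dx * (flux - np.roll(flux, 1))

    for _ in range(500):
        rho = step(rho)
    print("mass conserved:", np.isclose(rho.sum() * dx, 1.0),
          "| min rho >= 0:", rho.min() >= 0)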
DE and NLP Based QPLS Algorithm
NASA Astrophysics Data System (ADS)
Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo
As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. The simulation results, based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit, demonstrate the superiority of the proposed algorithm over linear PLS and QPLS based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational costs.
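The DE-for-the-inner-relation step can be sketched with SciPy's differential evolution fitting a quadratic mapping between synthetic latent score vectors t and u (stand-ins for PLS scores, not plant data):

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(1)
    t = rng.uniform(-2, 2, 100)                  # input latent scores
    u = 0.5 + 1.2 * t - 0.8 * t ** 2 + 0.05 * rng.standard_normal(100)

    # Sum of squared errors of the quadratic inner relation u ~ c0 + c1 t + c2 t^2.
    sse = lambda c: float(np.sum((u - (c[0] + c[1] * t + c[2] * t ** 2)) ** 2))
    res = differential_evolution(sse, bounds=[(-5, 5)] * 3, seed=0, tol=1e-8)
    print(res.x)   # recovers approximately (0.5, 1.2, -0.8)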
Integrated Controls-Structures Design Methodology for Flexible Spacecraft
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Joshi, S. M.; Price, D. B.
1995-01-01
This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform, and to a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to the conventional approach, wherein the structural design and control design are performed sequentially.
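A toy version of the integrated formulation, assuming a crude single-mode model in which a stiffness-like structural variable k and a feedback gain g are chosen together against one combined objective (all terms are invented proxies, not the paper's spacecraft model). The same code also runs the conventional sequential design for contrast.

    from scipy.optimize import minimize

    mass = lambda k: 0.5 * k                 # structural mass grows with stiffness
    deflection = lambda k, g: 4.0 / (k + g)  # passive + active stiffness resist the load
    effort = lambda g: 0.1 * g ** 2          # control effort penalty

    def total(z):
        k, g = z
        if k <= 0 or g < 0:
            return 1e9                       # stay in the physical region
        return mass(k) + deflection(k, g) + effort(g)

    # Integrated: choose k and g simultaneously.
    integrated = minimize(total, x0=[1.0, 1.0], method="Nelder-Mead")

    # Conventional: size the structure alone first, then add the controller.
    k_seq = minimize(lambda z: 1e9 if z[0] <= 0 else mass(z[0]) + deflection(z[0], 0.0),
                     x0=[1.0], method="Nelder-Mead").x[0]
    g_seq = minimize(lambda z: 1e9 if z[0] < 0 else deflection(k_seq, z[0]) + effort(z[0]),
                     x0=[1.0], method="Nelder-Mead").x[0]

    print("integrated cost :", round(integrated.fun, 3))          # ~2.20 here
    print("sequential cost :", round(total([k_seq, g_seq]), 3))   # ~2.55 here

Because structure and control both contribute to the same disturbance-rejection term, the joint optimization finds a cheaper split between mass and control effort than the sequential procedure, mirroring the paper's conclusion in miniature.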
Optimal landing of a helicopter in autorotation
NASA Technical Reports Server (NTRS)
Lee, A. Y. N.
1985-01-01
Gliding descent in autorotation is a maneuver used by helicopter pilots in case of engine failure. The landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem. The OH-58A helicopter was used. Helicopter vertical and horizontal velocities, vertical and horizontal displacements, and the rotor angular speed were modeled. An empirical approximation for the induced velocity in the vortex-ring state was provided. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resulting two-point boundary-value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.
Analysis and optimization of population annealing
NASA Astrophysics Data System (ADS)
Amey, Christopher; Machta, Jonathan
2018-03-01
Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.
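The algorithmic skeleton of population annealing (anneal, reweight/resample, then a few Monte Carlo sweeps per temperature step) can be sketched on a tiny 1-D Ising chain; the paper studies the 3-D Edwards-Anderson model, and the population size, schedule, and sweep count here are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    N, R, J = 16, 500, 1.0                      # spins, population size, coupling
    energy = lambda s: -J * np.sum(s * np.roll(s, 1, axis=-1), axis=-1)

    pop = rng.choice([-1, 1], size=(R, N))      # replicas initialized at infinite T
    betas = np.linspace(0.1, 1.5, 30)           # annealing schedule (assumed)

    for b0, b1 in zip(betas[:-1], betas[1:]):
        # Reweight by exp(-(b1 - b0) E) and resample the population back to size R.
        w = np.exp(-(b1 - b0) * energy(pop))
        pop = np.repeat(pop, rng.multinomial(R, w / w.sum()), axis=0)
        # Equilibrate: a few Metropolis sweeps at the new inverse temperature.
        for _ in range(5):
            for i in range(N):
                dE = 2 * J * pop[:, i] * (pop[:, (i - 1) % N] + pop[:, (i + 1) % N])
                flip = rng.random(R) < np.exp(-b1 * dE)
                pop[flip, i] *= -1

    print("E per spin ~", round(float(energy(pop).mean()) / N, 3),
          "(infinite-chain value ~", round(-np.tanh(betas[-1] * J), 3), ")")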
NASA Astrophysics Data System (ADS)
Hao, Ping
2017-10-01
The potential for sequential hydrogen bioproduction from sugary wastewater treatment was investigated using a continuous stirred tank reactor (CSTR) at various substrate COD concentrations and HRTs. At the optimum substrate concentration of 6 g COD/L, hydrogen could be efficiently produced from the CSTR with the highest production rate of 3.00 (±0.04) L/L reactor d at an HRT of 6 h. The upflow anaerobic sludge bed (UASB) reactor was used for continuous methane bioproduction from the effluents of hydrogen bioproduction. At the optimal HRT of 12 h, methane could be produced with a production rate of 2.27 (±0.08) L/L reactor d, and the COD removal efficiency reached a maximum of 82.3%.
Management practices affect soil nutrients and bacterial populations in backgrounding beef feedlot
USDA-ARS?s Scientific Manuscript database
Contaminants associated with manure in animal production sites are of significant concern. Unless properly managed, high soil nutrient concentrations in feedlots can deteriorate soil and water quality. This three year study tested a nutrient management strategy with three sequentially imposed manage...
A Strategy for Understanding Noise-Induced Annoyance
1988-08-01
Estimation by Sequential Testing (PEST) (Taylor and Creelman, 1967) can be used to efficiently establish the indifference point for each such pair of...population on applicability of noise rating procedures". Noise Control Engineering, 4, 65-70. Taylor, M. M. & Creelman, C. D. "PEST: Efficient
A Simulation-Optimization Model for the Management of Seawater Intrusion
NASA Astrophysics Data System (ADS)
Stanko, Z.; Nishikawa, T.
2012-12-01
Seawater intrusion is a common problem in coastal aquifers where excessive groundwater pumping can lead to chloride contamination of a freshwater resource. Simulation-optimization techniques have been developed to determine optimal management strategies while mitigating seawater intrusion. The simulation models are often density-independent groundwater-flow models that may assume a sharp interface and/or use equivalent freshwater heads. The optimization methods are often linear-programming (LP) based techniques that require simplifications of the real-world system. However, seawater intrusion is a highly nonlinear, density-dependent flow and transport problem, which requires the use of nonlinear-programming (NLP) or global-optimization (GO) techniques. NLP approaches are difficult because of the need for gradient information; therefore, we have chosen a GO technique for this study. Specifically, we have coupled a multi-objective genetic algorithm (GA) with a density-dependent groundwater-flow and transport model to simulate and identify strategies that optimally manage seawater intrusion. GA is a heuristic approach, often chosen when seeking optimal solutions to highly complex and nonlinear problems where LP or NLP methods cannot be applied. The GA utilized in this study is the Epsilon-Nondominated Sorted Genetic Algorithm II (ɛ-NSGAII), which can approximate a Pareto-optimal front between competing objectives. This algorithm has several key features: real and/or binary variable capabilities; an efficient sorting scheme; preservation and diversity of good solutions; dynamic population sizing; constraint handling; parallelizable implementation; and user-controlled precision for each objective. The simulation model is SEAWAT, the USGS model that couples MODFLOW with MT3DMS for variable-density flow and transport. ɛ-NSGAII and SEAWAT were efficiently linked together through a C-Fortran interface. The simulation-optimization model was first tested by using a published density-independent flow model test case that was originally solved using a sequential LP method with the USGS's Ground-Water Management Process (GWM). For the problem formulation, the objective is to maximize net groundwater extraction, subject to head and head-gradient constraints. The decision variables are pumping rates at fixed wells and the system's state is represented with freshwater hydraulic head. The results of the proposed algorithm were similar to the published results (within 1%); discrepancies may be attributed to differences in the simulators and inherent differences between LP and GA. The GWM test case was then extended to a density-dependent flow and transport version. As formulated, the optimization problem is infeasible because of the density effects on hydraulic head. Therefore, the sum of the squared constraint violation (SSC) was used as a second objective. The result is a Pareto curve showing optimal pumping rates versus the SSC. Analysis of this curve indicates that a similar net-extraction rate to the test case can be obtained with a minor violation in vertical head-gradient constraints. This study shows that a coupled ɛ-NSGAII/SEAWAT model can be used for the management of groundwater seawater intrusion. In the future, the proposed methodology will be applied to a real-world seawater intrusion and resource management problem for Santa Barbara, CA.
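The two-objective output described above, maximize net extraction versus minimize the SSC, reduces to a non-dominated sort over candidate pumping schedules; here is a toy sketch with random stand-ins for SEAWAT-evaluated candidates (not the study's model or data):

    import numpy as np

    rng = np.random.default_rng(2)
    pump = rng.uniform(0, 10, 200)                              # extraction per candidate
    ssc = np.maximum(0, pump - 6) ** 2 + 0.1 * rng.random(200)  # mock violation scores

    def pareto_front(f1, f2):
        # Indices of non-dominated points when maximizing f1 and minimizing f2:
        # sweep in order of decreasing f1, keeping points that improve on f2.
        order = np.argsort(-f1)
        front, best = [], np.inf
        for i in order:
            if f2[i] < best:
                front.append(i)
                best = f2[i]
        return np.array(front)

    front = pareto_front(pump, ssc)
    print(len(front), "non-dominated candidates of", len(pump))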
A multiple imputation strategy for sequential multiple assignment randomized trials
Shortreed, Susan M.; Laber, Eric; Stroup, T. Scott; Pineau, Joelle; Murphy, Susan A.
2014-01-01
Sequential multiple assignment randomized trials (SMARTs) are increasingly being used to inform clinical and intervention science. In a SMART, each patient is repeatedly randomized over time. Each randomization occurs at a critical decision point in the treatment course. These critical decision points often correspond to milestones in the disease process or other changes in a patient’s health status. Thus, the timing and number of randomizations may vary across patients and depend on evolving patient-specific information. This presents unique challenges when analyzing data from a SMART in the presence of missing data. This paper presents the first comprehensive discussion of missing data issues typical of SMART studies: we describe five specific challenges, and propose a flexible imputation strategy to facilitate valid statistical estimation and inference using incomplete data from a SMART. To illustrate these contributions, we consider data from the Clinical Antipsychotic Trial of Intervention and Effectiveness (CATIE), one of the most well-known SMARTs to date. PMID:24919867
Dong, Angang; Ye, Xingchen; Chen, Jun; Kang, Yijin; Gordon, Thomas; Kikkawa, James M; Murray, Christopher B
2011-02-02
The ability to engineer surface properties of nanocrystals (NCs) is important for various applications, as many of the physical and chemical properties of nanoscale materials are strongly affected by the surface chemistry. Here, we report a facile ligand-exchange approach, which enables sequential surface functionalization and phase transfer of colloidal NCs while preserving the NC size and shape. Nitrosonium tetrafluoroborate (NOBF4) is used to replace the original organic ligands attached to the NC surface, stabilizing the NCs in various polar, hydrophilic media such as N,N-dimethylformamide for years, with no observed aggregation or precipitation. This approach is applicable to various NCs (metal oxides, metals, semiconductors, and dielectrics) of different sizes and shapes. The hydrophilic NCs obtained can subsequently be further functionalized using a variety of capping molecules, imparting different surface functionalization to NCs depending on the molecules employed. Our work provides a versatile ligand-exchange strategy for NC surface functionalization and represents an important step toward controllably engineering the surface properties of NCs.
Manasse, N J; Hux, K; Snell, J
2005-08-10
Recalling names in real-world contexts is often difficult for survivors of traumatic brain injury despite successful completion of face-name association training programmes. This small-number study utilized a sequential treatment approach in which a traditional training programme preceded real-world training. The traditional training component was identical across programmes: one-on-one intervention using visual imagery and photographs to assist in mastery of face-name associations. The real-world training component compared the effectiveness of three cueing strategies (name restating, phonemic cueing, and visual imagery) and was conducted by the actual to-be-named people. Results revealed improved name learning and use by the participants regardless of cueing strategy. After treatment targeting six names, four of five participants consistently used two or more names spontaneously and consistently knew three or more names in response to questioning. In addition to documenting the effectiveness of real-world treatment paradigms, the findings call into question the necessity of preliminary traditional intervention.
Mir-Tutusaus, J A; Sarrà, M; Caminal, G
2016-11-15
Hospital wastewaters have a high load of pharmaceutical active compounds (PhACs). Fungal treatments could be appropriate for source treatment of such effluents, but the transition to non-sterile conditions proved to be difficult due to competition with indigenous microorganisms, resulting in very short-duration operations. In this article, coagulation-flocculation and UV-radiation processes were studied as pretreatments to a fungal reactor treating non-sterile hospital wastewater in sequential batch operation and continuous operation modes. The influent was spiked with ibuprofen and ketoprofen, and both compounds were successfully degraded by over 80%. UV pretreatment did not extend the fungal activity after coagulation-flocculation, measured as laccase production and pellet integrity. Sequential batch operation did not reduce bacterial competition during fungal treatment. The best strategy was the addition of a coagulation-flocculation pretreatment to a continuous reactor, which led to an operation of 28 days without biomass renovation. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
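For orientation, the classical Uzawa iteration that the proved theorem regularizes has the standard form below (a generic equality-constrained statement, not the paper's regularized variant or its exact notation):

    \[
      x^{k} \in \operatorname*{arg\,min}_{x \in D} L(x, \lambda^{k}),
      \qquad
      \lambda^{k+1} = \lambda^{k} + \alpha_{k} \bigl( A x^{k} - h \bigr),
    \]
    \[
      \text{where } L(x, \lambda) = f(x) + \langle \lambda,\, A x - h \rangle
      \text{ is the Lagrangian of } \min \{ f(x) : A x = h,\ x \in D \}.
    \]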
Sequential ultrasound-microwave assisted acid extraction (UMAE) of pectin from pomelo peels.
Liew, Shan Qin; Ngoh, Gek Cheng; Yusoff, Rozita; Teoh, Wen Hui
2016-12-01
This study aims to optimize sequential ultrasound-microwave assisted extraction (UMAE) of pectin from pomelo peel using citric acid. The effects of pH, sonication time, microwave power and irradiation time on the yield and the degree of esterification (DE) of pectin were investigated. Under the optimized conditions of pH 1.80 and 27.52 min sonication followed by 6.40 min microwave irradiation at 643.44 W, the yield and the DE value of the pectin obtained were 38.00% and 56.88%, respectively. Based upon the optimized UMAE conditions, pectin from microwave-ultrasound assisted extraction (MUAE), ultrasound assisted extraction (UAE) and microwave assisted extraction (MAE) was studied. The yield of pectin from UMAE was higher than that from all other techniques, in the order UMAE>MUAE>MAE>UAE. The galacturonic acid content of pectin obtained from the combined extraction techniques was higher than that obtained from the sole extraction techniques, and the pectin gels produced by the various techniques exhibited pseudoplastic behaviour. The morphological structures of pectin extracted from MUAE and MAE closely resemble each other. The pectin extracted from UMAE, with a smaller and more regular surface, differs greatly from that of UAE. This substantiated the highest pectin yield of 36.33% from UMAE and further signified their compatibility and potentiality in pectin extraction. Copyright © 2016 Elsevier B.V. All rights reserved.
Hairy root biotechnology--indicative timeline to understand missing links and future outlook.
Mehrotra, Shakti; Srivastava, Vikas; Ur Rahman, Laiq; Kukreja, A K
2015-09-01
Agrobacterium rhizogenes-mediated hairy roots (HR) were developed in the laboratory to mimic the natural phenomenon of bacterial gene transfer and the occurrence of the disease syndrome. The timeline analysis revealed that during the 1990s, the research expanded to hairy root-based secondary metabolite production and different yield enhancement strategies like media optimization, up-scaling, metabolic engineering, etc. An outlook indicates that much emphasis has been given to the strategies that are helpful in making this technology more practical in terms of high productivity at low cost. However, a sequential analysis of the literature shows that this technique has been upgraded to a biotechnology platform where different intra- and interdisciplinary work areas were established, progressed, and diverged to provide the scientific benefits of various hairy root-based applications like phytoremediation, molecular farming, biotransformation, etc. In the present scenario, this biotechnology research platform includes (a) elemental research like hairy root-mediated secondary metabolite production coupled with productivity enhancement strategies and (b) HR-based functional research. The latter comprises hairy root-based applied aspects such as the generation of agro-economical traits in plants and the production of high-value as well as less hazardous molecules through biotransformation/farming and remediation, respectively. This review presents an indicative timeline portrayal of hairy root research reflected by a chronology of research outputs. The timeline also reveals a progressive trend in the state-of-the-art global advances in hairy root biotechnology. Furthermore, the review also discusses ideas to explore missing links and to deal with the challenges in the future progression and prospects of research in all related fields of this important area of plant biotechnology.
Jakobi, Annika; Stützer, Kristin; Bandurska-Luque, Anna; Löck, Steffen; Haase, Robert; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniel; Perez, Damien; Lühr, Armin; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Perrin, Rosalind; Richter, Christian
2015-01-01
To determine, by treatment plan comparison, the differences in toxicity risk reduction for patients with head and neck squamous cell carcinoma (HNSCC) when proton therapy is used either for the complete treatment or for the sequential boost only. For 45 HNSCC patients, intensity-modulated photon (IMXT) and proton (IMPT) treatment plans were created, including a dose escalation via simultaneous integrated boost with a one-step adaptation strategy after 25 fractions for the sequential boost treatment. Dose accumulation was performed for pure IMXT treatment, pure IMPT treatment and a mixed modality treatment with IMXT for the elective target followed by a sequential boost with IMPT. Treatment plan evaluation was based on modern normal tissue complication probability (NTCP) models for mucositis, xerostomia, aspiration, dysphagia, larynx edema and trismus. Individual NTCP differences between IMXT and IMPT (ΔNTCP(IMXT-IMPT)) as well as between IMXT and the mixed modality treatment (ΔNTCP(IMXT-Mix)) were calculated. Target coverage was similar in all three scenarios. NTCP values were reduced in all patients using IMPT treatment. However, ΔNTCP(IMXT-Mix) values were a factor of 2-10 smaller than ΔNTCP(IMXT-IMPT). Assuming a threshold of ≥10% NTCP reduction in xerostomia or dysphagia risk as the criterion for patient assignment to IMPT, fewer than 15% of the patients would be selected for a proton boost, while about 50% would be assigned to pure IMPT treatment. For mucositis and trismus, ΔNTCP ≥10% occurred in six and four patients, respectively, with pure IMPT treatment, while no such difference was identified with the proton boost. The use of IMPT generally reduces the expected toxicity risk while maintaining good tumor coverage in the examined HNSCC patients. A mixed modality treatment using IMPT solely for a sequential boost reduces the risk by ≥10% only in rare cases. In contrast, pure IMPT treatment may be reasonable for about half of the examined patient cohort considering the toxicities xerostomia and dysphagia, provided a feasible strategy for handling patient anatomy changes is implemented.
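A minimal sketch of the ΔNTCP-based assignment rule described above, assuming a generic logistic dose-response NTCP model; the TD50 and γ50 values, and the use of organ mean dose as the dosimetric input, are illustrative assumptions rather than the published toxicity models used in the study.

```python
import numpy as np

def ntcp_logistic(mean_dose, td50=40.0, gamma50=1.5):
    """Generic logistic NTCP versus organ mean dose (Gy); td50 and gamma50
    are hypothetical, not the published model parameters of the study."""
    return 1.0 / (1.0 + np.exp(4.0 * gamma50 * (1.0 - mean_dose / td50)))

def assign_modality(dose_imxt, dose_impt, dose_mix, threshold=0.10):
    """Apply the >= 10% NTCP-reduction criterion from the abstract."""
    ntcp_x, ntcp_p, ntcp_m = (ntcp_logistic(d)
                              for d in (dose_imxt, dose_impt, dose_mix))
    return {
        "dNTCP_IMXT-IMPT": ntcp_x - ntcp_p,
        "dNTCP_IMXT-Mix": ntcp_x - ntcp_m,
        "assign_pure_IMPT": ntcp_x - ntcp_p >= threshold,
        "assign_proton_boost": ntcp_x - ntcp_m >= threshold,
    }

# Example: hypothetical parotid mean doses (Gy) under the three plans.
print(assign_modality(dose_imxt=35.0, dose_impt=22.0, dose_mix=31.0))
```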
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability, they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
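The apparent conditional bias of the sample average is easy to reproduce in simulation. The sketch below uses a minimal two-stage design with an assumed stopping boundary (not taken from the paper) and shows the pattern the authors explain: a clear bias conditional on the realized sample size, but only a small, asymptotically vanishing bias marginally.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, n1, n2, boundary, reps = 0.0, 50, 100, 0.2, 100_000

# Stage 1: observe n1 outcomes; stop if the interim mean exceeds the boundary.
stage1 = rng.normal(mu, 1.0, size=(reps, n1))
interim_mean = stage1.mean(axis=1)
stopped = interim_mean > boundary          # deterministic stopping rule

# Final sample average: interim mean if stopped at n1, else overall mean at n2.
final_mean = np.empty(reps)
final_mean[stopped] = interim_mean[stopped]
cont = ~stopped
stage2 = rng.normal(mu, 1.0, size=(cont.sum(), n2 - n1))
final_mean[cont] = (stage1[cont].sum(axis=1) + stage2.sum(axis=1)) / n2

print("marginal bias:         ", final_mean.mean() - mu)           # small
print("bias | stopped at n1:  ", final_mean[stopped].mean() - mu)  # clearly > 0
print("bias | continued to n2:", final_mean[cont].mean() - mu)     # < 0
```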
Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation
Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk
2015-01-01
A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important to prevent current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings so as to shorten the breaking time. To validate the optimization results, computational results were compared with physical tests recorded by a high-speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, as confirmed by the experiments. The switch mechanism design process, including the contact-latch system, can be improved by using this procedure.
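The outer design loop implied above can be sketched as a bound-constrained minimization of the simulated breaking time over the latch geometry. Since the multibody simulation lives inside the MBD tool, `simulate_breaking_time` below is a hypothetical stand-in with an assumed smooth response; the parameter names and bounds are likewise illustrative.

```python
from scipy.optimize import minimize

def simulate_breaking_time(params):
    """Hypothetical stand-in for the MBD co-simulation: returns the
    current-breaking time (ms) for a given latch geometry. The synthetic
    quadratic response below simply has a minimum inside the bounds."""
    latch1_len, latch2_len, pivot_x = params
    return (20.0
            + 0.30 * (latch1_len - 42.0) ** 2
            + 0.20 * (latch2_len - 18.0) ** 2
            + 1.50 * (pivot_x - 5.0) ** 2)

# Assumed design variables (mm) and ranges; not taken from the paper.
bounds = [(35.0, 50.0), (12.0, 25.0), (2.0, 8.0)]
res = minimize(simulate_breaking_time, x0=[40.0, 20.0, 4.0],
               bounds=bounds, method="L-BFGS-B")
print("optimal geometry:", res.x, "breaking time (ms):", res.fun)
```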
Research on design method of the full form ship with minimum thrust deduction factor
NASA Astrophysics Data System (ADS)
Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin
2015-04-01
In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor was developed, combining potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was proposed, with the minimum thrust deduction factor as the objective function. An appropriate displacement serves as the basic constraint condition, and the avoidance of boundary layer separation as an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50,000 DWT product oil tanker is provided, which indicates that the propulsion efficiency is improved distinctly by this optimal design method.
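For readers unfamiliar with SUMT, the idea is to replace the constrained problem by a sequence of unconstrained problems whose barrier term keeps iterates strictly feasible while its weight is driven to zero. A minimal sketch on a toy problem follows; the objective and constraint are stand-ins, not the hull-form thrust-deduction model.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):            # toy stand-in for the thrust-deduction objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def constraint(x):           # feasible iff g(x) >= 0 (displacement-like bound)
    return 1.5 - x[0] - x[1]

def barrier_obj(x, r):
    g = constraint(x)
    if g <= 0.0:             # outside the strict interior: reject the point
        return np.inf
    return objective(x) - r * np.log(g)

x, r = np.array([0.0, 0.0]), 1.0   # strictly feasible starting point
for _ in range(12):                # SUMT outer loop: shrink the barrier weight
    x = minimize(barrier_obj, x, args=(r,), method="Nelder-Mead").x
    r *= 0.2
print("constrained optimum ~", x)  # tends to the true optimum (0.25, 1.25)
```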
Ramsay, Jonathan E; Yang, Fang; Pang, Joyce S; Lai, Ching-Man; Ho, Roger Cm; Mak, Kwok-Kei
2015-07-01
Previous research has indicated that both cognitive and behavioral variables mediate the positive effect of optimism on quality of life, yet few attempts have been made to accommodate these constructs within a single explanatory framework. Adopting Fredrickson's broaden-and-build perspective, we examined the relationships between optimism, self-rated health, resilience, exercise, and quality of life in 365 Chinese university students using path analysis. For physical quality of life, a two-stage model, in which the effects of optimism were sequentially mediated first by cognitive and then by behavioral variables, provided the best fit. A one-stage model, with full mediation by cognitive variables, provided the best fit for mental quality of life. This suggests that optimism influences physical and mental quality of life via different pathways. © The Author(s) 2013.
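The two-stage structure (optimism to a cognitive mediator, then a behavioral mediator, then physical quality of life) can be illustrated as a chain of regressions. The sketch below estimates such sequential paths on simulated data; the variable roles follow the abstract, but the data and path coefficients are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 365                                    # matches the reported sample size
optimism     = rng.normal(size=n)
resilience   = 0.5 * optimism + rng.normal(size=n)    # cognitive stage (assumed)
exercise     = 0.4 * resilience + rng.normal(size=n)  # behavioral stage (assumed)
physical_qol = 0.6 * exercise + rng.normal(size=n)

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    return sm.OLS(y, sm.add_constant(x)).fit().params[1]

a = slope(resilience, optimism)       # optimism -> cognitive mediator
b = slope(exercise, resilience)       # cognitive -> behavioral mediator
c = slope(physical_qol, exercise)     # behavioral -> physical QoL
print("sequentially mediated effect a*b*c =", a * b * c)
```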
NASA Astrophysics Data System (ADS)
Ohdaira, Tetsushi
2014-07-01
Previous studies discussing cooperation employ the best decision, in which every player knows all information regarding the payoff matrix and selects the strategy with the highest payoff. They therefore do not discuss cooperation based on an altruistic decision made with limited information (the bounded rational altruistic decision). In addition, they do not cover the case where every player can submit his/her strategy several times in a match of the game. This paper builds on Ohdaira's reconsideration of the bounded rational altruistic decision and also employs the framework of the prisoner's dilemma game (PDG) with sequential strategy. The distinction between this study and Ohdaira's reconsideration is that the former covers a model of multiple groups, whereas the latter deals with a model of only two groups. Ohdaira's reconsideration shows that the bounded rational altruistic decision facilitates much more cooperation in the PDG with sequential strategy than Ohdaira and Terano's bounded rational second-best decision does. However, the details of cooperation between multiple groups based on the bounded rational altruistic decision have not yet been resolved. This study therefore shows how randomness in a network composed of multiple groups affects the increase in the average frequency of mutual cooperation (cooperation between groups) based on the bounded rational altruistic decision of multiple groups. We also discuss the results of the model in comparison with related studies that employ the best decision.
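As a rough illustration of how network randomness can modulate cooperation, the sketch below runs a networked prisoner's dilemma on a ring whose edges are rewired with probability p, using the classic imitate-the-best update as a deliberately simplified stand-in; it does not implement the bounded rational altruistic decision of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
R, S, T, P = 3.0, 0.0, 3.5, 1.0     # weak-dilemma payoffs (T > R > P > S)

def neighbors(n, p):
    """Adjacency lists for a ring (2 neighbors each side) rewired with prob p."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for d in (1, 2):
            j = (i + d) % n
            if rng.random() < p:            # rewire to a random partner
                j = int(rng.integers(n))
            if j != i:
                adj[i].add(j); adj[j].add(i)
    return [list(s) for s in adj]

def coop_fraction(n=200, p=0.1, rounds=100):
    adj = neighbors(n, p)
    coop = rng.random(n) < 0.5              # random initial strategies
    for _ in range(rounds):
        pay = np.zeros(n)
        for i in range(n):                  # accumulate PDG payoffs
            for j in adj[i]:
                pay[i] += (R if coop[j] else S) if coop[i] else (T if coop[j] else P)
        new = coop.copy()
        for i in range(n):                  # imitate the best-earning neighbor
            best = max(adj[i], key=lambda j: pay[j], default=i)
            if pay[best] > pay[i]:
                new[i] = coop[best]
        coop = new
    return coop.mean()

for p in (0.0, 0.05, 0.2, 1.0):
    print(f"rewiring p = {p:4.2f}: cooperator fraction ~ {coop_fraction(p=p):.2f}")
```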
Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.
Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge
2017-02-22
Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
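The two post-acquisition normalizations named above have compact textbook forms, sketched below in NumPy: MSTUS divides each sample by its total useful signal, and PQN (Dieterle et al., 2006) divides each sample by the median of its feature-wise quotients against a reference spectrum. This is the generic form, not the study's exact pipeline.

```python
import numpy as np

def mstus_normalize(X):
    """MS total useful signal: divide each sample (row) by its summed intensity."""
    return X / X.sum(axis=1, keepdims=True)

def pqn_normalize(X, reference=None):
    """Probabilistic quotient normalization (Dieterle et al., 2006)."""
    X = mstus_normalize(X)                   # customary total-signal pre-scaling
    if reference is None:
        reference = np.median(X, axis=0)     # median spectrum as reference
    quotients = X / reference                # feature-wise quotients
    dilution = np.median(quotients, axis=1, keepdims=True)
    return X / dilution                      # remove per-sample dilution factor

# Example: 3 urine samples x 5 features; the second sample is twice as dilute.
X = np.array([[10., 20., 5.,  8., 2.],
              [ 5., 10., 2.5, 4., 1.],
              [12., 18., 6.,  9., 2.]])
print(pqn_normalize(X))
```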