Science.gov

Sample records for experimental design optimization

  1. Optimal experimental design strategies for detecting hormesis.

    PubMed

    Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee

    2011-12-01

    Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations.
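The maximin idea in this record can be sketched with a few lines of code: given several candidate designs scored under several criteria, pick the design whose worst-case efficiency is largest. The design names and efficiency values below are invented for illustration only.

```python
# Hypothetical efficiencies of three candidate designs under three criteria
# (D-optimality for the parameters, c-optimality for the threshold dose,
# power for the hormesis test).  All numbers are illustrative.
efficiencies = {
    "design_A": {"D": 0.92, "c_threshold": 0.55, "hormesis_test": 0.70},
    "design_B": {"D": 0.80, "c_threshold": 0.78, "hormesis_test": 0.75},
    "design_C": {"D": 0.65, "c_threshold": 0.90, "hormesis_test": 0.60},
}

def maximin_design(effs):
    # Pick the design whose minimum efficiency across criteria is largest.
    return max(effs, key=lambda d: min(effs[d].values()))

best = maximin_design(efficiencies)   # "design_B": worst case 0.75
```

Design B wins even though it is best under no single criterion, which is exactly the robustness the maximin criterion buys.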

  2. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  3. Optimizing Experimental Designs: Finding Hidden Treasure.

    USDA-ARS's Scientific Manuscript database


    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  4. Optimal experimental design for diffusion kurtosis imaging.

    PubMed

    Poot, Dirk H J; den Dekker, Arnold J; Achten, Eric; Verhoye, Marleen; Sijbers, Jan

    2010-03-01

    Diffusion kurtosis imaging (DKI) is a new magnetic resonance imaging (MRI) model that describes the non-Gaussian diffusion behavior in tissues. It has recently been shown that DKI parameters, such as the radial or axial kurtosis, are more sensitive to brain physiology changes than the well-known diffusion tensor imaging (DTI) parameters in several white and gray matter structures. In order to estimate either DTI or DKI parameters with maximum precision, the diffusion weighting gradient settings that are applied during the acquisition need to be optimized. Indeed, it has been shown previously that optimizing the set of diffusion weighting gradient settings can have a significant effect on the precision with which DTI parameters can be estimated. In this paper, we focus on the optimization of DKI gradient settings. Commonly, DKI data are acquired using a standard set of diffusion weighting gradients with fixed directions and with regularly spaced gradient strengths. In this paper, we show that such gradient settings are suboptimal with respect to the precision with which DKI parameters can be estimated. Furthermore, the gradient directions and strengths of the diffusion-weighted MR images are optimized by minimizing the Cramér-Rao lower bound of the DKI parameters. The impact of the optimized gradient settings is evaluated on both simulated and experimentally recorded datasets. It is shown that the precision with which the kurtosis parameters can be estimated increases substantially when the gradient settings are optimized.
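The Cramér-Rao logic of this record can be illustrated on a deliberately simplified model (a one-parameter mono-exponential signal with known amplitude, not the full kurtosis tensor): the CRLB is the reciprocal of the Fisher information, and b-value placement changes it. All numeric values are illustrative.

```python
import math

# Toy CRLB comparison for S(b) = S0 * exp(-b * D), with S0 assumed known
# (an assumed simplification of the DKI setting).  Lower CRLB = higher
# attainable precision for the diffusivity estimate D_hat.
def crlb_diffusivity(bvals, S0=1.0, D=1.0e-3, sigma=0.02):
    # Fisher information: sum of squared signal derivatives dS/dD over
    # the noise variance; dS/dD = -S0 * b * exp(-b * D).
    info = sum((S0 * b * math.exp(-b * D)) ** 2 for b in bvals) / sigma**2
    return 1.0 / info   # Cramér-Rao lower bound on var(D_hat)

regular = [0.0, 500.0, 1000.0, 1500.0, 2000.0]  # regularly spaced b-values
optimized = [1000.0] * 5                        # all at b = 1/D, the optimum here
```

For this one-parameter toy, concentrating all measurements at b = 1/D beats the regularly spaced set; the paper's point is the multi-parameter analogue of this comparison.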

  5. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  6. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density and strength, and how under- and over-firing affect ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay-based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
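Objective (5) above rests on the defining property of an orthogonal array, which is easy to show concretely. The smallest useful example, L4(2^3), covers three two-level factors in four runs; the array and check below are generic, not taken from this paper.

```python
from itertools import combinations, product

# A minimal orthogonal array, L4(2^3): four runs for three two-level factors,
# balanced so that every pair of columns contains each level combination
# exactly once -- which is what lets main effects be estimated from few runs.
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

# Verify the pairwise balance property that makes the array "orthogonal".
ok = all(
    {(row[i], row[j]) for row in L4} == set(product([0, 1], repeat=2))
    for i, j in combinations(range(3), 2)
)
```

A full factorial for three factors would need 8 runs; the orthogonal array halves that while keeping main effects estimable.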

  7. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

    Memo No. 88-442, March 1988. "Process Model Construction and Optimization Using Statistical Experimental Design," by Emanuel Sachs, Assistant Professor, and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic...

  8. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments with an existing experimental procedure for the following two reasons. First, existing experimental procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment; however, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle, yet existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is...

  9. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  10. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    Sponsor: Jean-Luc Cambier, Program Officer, Computational Mathematics, AFOSR/RTA. Existing computational tools have been inadequate; the goal has been to develop new mathematical formulations, estimation approaches, and approximation strategies improving on previous suboptimal approaches. Subject terms: computational mathematics; optimal experimental design; uncertainty quantification; Bayesian inference.

  11. Optimal active vibration absorber - Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1993-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  12. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  13. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    Some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response are unified. The techniques have the potential for reducing the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.
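For the low-order polynomial setting in this record, designs can be compared by the determinant of the information matrix X'X (D-optimality): a larger determinant means a smaller generalized variance of the coefficient estimates. The two six-run designs below are illustrative; the classical result for a quadratic on [-1, 1] puts the runs at {-1, 0, 1}.

```python
# Compare two 6-run designs for y = b0 + b1*x + b2*x^2 by det(X'X).
def info_det(xs):
    rows = [[1.0, x, x * x] for x in xs]          # model matrix X, row per run
    m = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    # 3x3 determinant by cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d_optimal = [-1.0, -1.0, 0.0, 0.0, 1.0, 1.0]      # mass at {-1, 0, 1}
equispaced = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]    # naive spacing
```

Here `info_det(d_optimal)` is 32 versus roughly 16 for the equispaced design, i.e. the same six measurements buy about twice the information.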

  14. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of one given design via the objective function) and stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (the northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations has the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
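The core of a multi-objective search like the one this record describes is Pareto dominance: a design survives if no other design is at least as good on every objective and strictly better on one. The survey names and objective values below are invented for illustration.

```python
# Pareto-front extraction, the selection step inside a multi-objective
# design search.  Higher is better for both (invented) objectives:
# (overall target resolution, reservoir-layer resolution).
designs = {
    "survey_1": (0.9, 0.2),
    "survey_2": (0.6, 0.6),
    "survey_3": (0.3, 0.9),
    "survey_4": (0.5, 0.5),   # dominated by survey_2 on both objectives
}

def pareto_front(ds):
    def dominated(a):
        # b dominates a if b is >= a everywhere and not identical to a.
        return any(all(y >= x for x, y in zip(ds[a], ds[b])) and ds[b] != ds[a]
                   for b in ds if b != a)
    return sorted(d for d in ds if not dominated(d))

front = pareto_front(designs)
```

Only the non-dominated surveys remain, which is why such a search returns a trade-off set rather than a single "best" design.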

  15. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of an experimental design multivariate strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximization of the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data of the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug anaesthetic effect with respect to the starting reference liposomal formulation, thus demonstrating its superior therapeutic effectiveness.

  16. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2012-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for source and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has proved to be an efficient technique to examine a wide range of possible surveys and select the one that gives superior resolution. Each configuration is associated with one value of the objective function that characterizes the quality of this particular design. Here, we describe the method used to optimize an experimental design. Then, we validate this new technique and explore the different issues of experimental design by simulating a CSEM survey with a realistic 1D layered model.

  17. A fundamental experimental approach for optimal design of speed bumps.

    PubMed

    Lav, A Hakan; Bilgin, Ertugrul; Lav, A Hilmi

    2017-06-02

    Speed bumps and humps are utilized as means of calming traffic and controlling vehicular speed. Needless to say, bumps and humps of large length and width force drivers to reduce their driving speeds significantly so as to avoid large vertical vehicle acceleration. This experimental study was therefore conducted with the aim of determining a speed bump design that performs optimally in leading drivers to reduce their vehicles' speed to safe levels. The first step of the investigation starts from the following question: "What is the optimal design of a speed bump that will, at the same time, reduce the velocity of an incoming vehicle significantly and to a speed at which the resulting vertical acceleration does not jeopardize road safety?" The experiment was designed to study the dependent variables and collect data in order to propose an optimal speed bump design. To achieve this, a 1:6 scale model was created to simulate the interaction between a car wheel and a speed bump. During the course of the experiment, a wheel was accelerated down an inclined plane onto a horizontal plane of motion, where it was allowed to collide with a speed bump. The speed of the wheel and the vertical acceleration at the speed bump were captured by means of a Vernier Motion Detector.

  18. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
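The optimization at the heart of prediction deviation can be sketched in miniature (my reading of the abstract, with invented data): among parameter values that fit the observations acceptably well, find the pair whose predictions at an unobserved point differ the most.

```python
# Toy prediction deviation for the one-parameter model y = a * x.
data = [(1.0, 1.1), (2.0, 1.9)]            # (x, y) observations, invented
x_new = 10.0                               # unobserved condition of interest

def sse(a):
    # Sum of squared errors of slope a against the observed data.
    return sum((y - a * x) ** 2 for x, y in data)

grid = [i / 100 for i in range(50, 151)]   # candidate slopes 0.50 .. 1.50
best_fit = min(sse(a) for a in grid)
good = [a for a in grid if sse(a) <= best_fit + 0.05]   # acceptable fits
# Largest disagreement between two acceptable models at x_new:
deviation = max(abs(a1 - a2) * x_new for a1 in good for a2 in good)
```

A large `deviation` means the data have not pinned down the prediction at `x_new`; the paper's design step then picks the new experiment that shrinks this quantity the most.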

  19. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.

  20. Optimal experimental design to position transducers in ultrasound breast imaging

    NASA Astrophysics Data System (ADS)

    Korta Martiartu, Naiara; Boehm, Christian; Vinard, Nicolas; Jovanović Balic, Ivana; Fichtner, Andreas

    2017-03-01

    We present methods to optimize the setup of a 3D ultrasound tomography scanner for breast cancer detection. This approach provides a systematic and quantitative tool to evaluate different designs and to optimize the configuration with respect to predefined design parameters. We consider both time-of-flight inversion using straight rays and time-domain waveform inversion governed by the acoustic wave equation for imaging the sound speed. In order to compare different designs, we measure their quality by extracting properties from the Hessian operator of the time-of-flight or waveform differences defined in the inverse problem, i.e., the second derivatives with respect to the sound speed. Spatial uncertainties and resolution can be related to the eigenvalues of the Hessian, which provide a good indication of the information contained in the data that is acquired with a given design. However, the complete spectrum is often prohibitively expensive to compute, so suitable approximations have to be developed and analyzed. We use the trace of the Hessian operator as the design criterion, which is equivalent to the sum of all eigenvalues and requires less computational effort. In addition, we suggest taking advantage of spatial symmetry to extrapolate the 3D experimental design from a set of 2D configurations. In order to maximize the quality criterion, we use a genetic algorithm to explore the space of possible design configurations. Numerical results show that the proposed strategies are capable of improving an initial configuration with uniformly distributed transducers, clustering them around regions with poor illumination and improving the ray coverage of the domain of interest.
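The computational shortcut this record relies on is the identity trace(H) = sum of eigenvalues: the trace comes from the diagonal alone, no eigendecomposition needed. A tiny symmetric matrix (entries invented) makes the point.

```python
import math

# trace(H) equals the eigenvalue sum but costs only a diagonal read-off.
H = [[5.0, 2.0], [2.0, 3.0]]              # small symmetric "Hessian", invented
trace = H[0][0] + H[1][1]                 # cheap design criterion

# Closed-form eigenvalues of a symmetric 2x2 matrix, for comparison only.
mean = (H[0][0] + H[1][1]) / 2.0
dist = math.sqrt(((H[0][0] - H[1][1]) / 2.0) ** 2 + H[0][1] ** 2)
eigs = (mean + dist, mean - dist)
# sum(eigs) == trace, while the full spectrum already needed a square root
# here and becomes far more expensive for large discretized operators.
```

For a Hessian discretized over a 3D imaging domain, this difference is what makes the trace usable inside a genetic-algorithm search loop.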

  21. Optimal experimental design for a nonlinear response in environmental toxicology.

    PubMed

    Wright, Stephen E; Bailer, A John

    2006-09-01

    A start-stop experiment in environmental toxicology provides a backdrop for this design discussion. The basic problem is to decide when to sample a nonlinear response in order to minimize the generalized variance of the estimated parameters. An easily coded heuristic optimization strategy can be applied to this problem to obtain optimal or nearly optimal designs. The efficiency of the heuristic approach allows a straightforward exploration of the sensitivity of the suggested design with respect to such problem-specific concerns as variance heterogeneity, time-grid resolution, design criteria, and interval specification of planning values for parameters. A second illustration of design optimization is briefly presented in the context of concentration spacing for a reproductive toxicity study.
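An "easily coded heuristic" of the kind this record mentions can be sketched as a coordinate-exchange search: repeatedly swap one sampling time for the grid point that most increases the Fisher determinant (equivalently, decreases the generalized variance). The model, planning values, and grid below are invented; this is a generic sketch, not the authors' algorithm.

```python
import math

# Coordinate-exchange heuristic for sampling times of y(t) = A * exp(-k * t),
# maximizing det(F) for the two parameters (A, k).
A, k = 1.0, 0.5                            # planning values (invented)

def fisher_det(times):
    # Gradient of the response w.r.t. (A, k) at each sampling time.
    g = [(math.exp(-k * t), -A * t * math.exp(-k * t)) for t in times]
    f11 = sum(a * a for a, _ in g)
    f12 = sum(a * b for a, b in g)
    f22 = sum(b * b for _, b in g)
    return f11 * f22 - f12 * f12           # det of the 2x2 Fisher matrix

grid = [i * 0.25 for i in range(41)]       # candidate times on [0, 10]
design = [0.0, 5.0, 10.0, 2.5]             # arbitrary starting design
improved = True
while improved:                            # exchange passes until no gain
    improved = False
    for i in range(len(design)):
        trial = lambda t: fisher_det(design[:i] + [t] + design[i + 1:])
        best = max(grid, key=trial)
        if trial(best) > fisher_det(design) + 1e-12:
            design[i] = best
            improved = True
```

Because the determinant strictly increases on every accepted swap and the grid is finite, the loop terminates; the efficiency of such a heuristic is what makes the sensitivity studies in the abstract cheap to run.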

  22. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, especially because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called locally optimal experimental design, is described together with a lesser-known approach that takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models for sediment concentration in seawater and sediment accretion on salt marshes of different complexity served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.

  23. Optimal experimental design with the sigma point method.

    PubMed

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

    Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED) as a numerical optimisation method is used to reduce the parameter uncertainty by minimising the parameter variances iteratively. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit on OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model.
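The sigma-point idea can be shown in one dimension: propagate a handful of deterministically chosen points through the nonlinearity and recombine them with weights. For y = exp(p) with Gaussian p, the sigma points recover the bias (mean above 1) that a first-order linearization misses. This is a generic unscented-transform sketch, not the authors' code; all numbers are illustrative.

```python
import math

# Sigma-point (unscented) propagation of a 1-D parameter through y = exp(p).
p_mean, p_var = 0.0, 0.1 ** 2              # parameter statistics (invented)
kappa = 2.0                                # standard scaling parameter, n = 1
spread = math.sqrt((1 + kappa) * p_var)
pts = [p_mean, p_mean + spread, p_mean - spread]
w = [kappa / (1 + kappa), 1 / (2 * (1 + kappa)), 1 / (2 * (1 + kappa))]

ys = [math.exp(p) for p in pts]
y_mean = sum(wi * yi for wi, yi in zip(w, ys))
y_var = sum(wi * (yi - y_mean) ** 2 for wi, yi in zip(w, ys))
# Linearization would give y_mean = 1.0 exactly; the sigma points reproduce
# the true lognormal mean exp(p_var / 2) to high accuracy.
```

Three function evaluations, no derivatives: this is why the SP method also approximates the estimator bias that the Fisher-matrix approach neglects.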

  24. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  25. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X. A.

    2011-12-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for source and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has proved to be an efficient technique to examine a wide range of possible surveys and select the one that gives superior resolution. Each particular design needs to be quantified. Different quantities have been used to estimate the "goodness" of a model, most of them being sensitive to the eigenvalues of the corresponding inversion problem. Here we show a comparison of results obtained using different objective functions. Then, we simulate a CSEM survey with a realistic 1D structure and discuss the optimum recording parameters determined by our method.

  6. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τ_pulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (outputs), using Design of Experiment (DOE) methods: Taguchi methodology and response surface methodology (RSM). A design is first created using the MINITAB program, and the laser marking process is then performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  7. Optimization of fast disintegration tablets using pullulan as diluent by central composite experimental design.

    PubMed

    Patel, Dipil; Chauhan, Musharraf; Patel, Ravi; Patel, Jayvadan

    2012-03-01

    The objective of this work was to apply central composite experimental design to investigate the main and interaction effects of formulation parameters in optimizing a novel fast disintegration tablet formulation using pullulan as diluent. A face-centered central composite experimental design was employed to optimize the fast disintegration tablet formulation. The variables studied were the concentrations of diluent (pullulan, X(1)), superdisintegrant (sodium starch glycolate, X(2)), and direct compression aid (spray-dried lactose, X(3)). Tablets were characterized for weight variation, thickness, disintegration time (Y(1)) and hardness (Y(2)). Good correlation between the predicted values and the experimental data of the optimized formulation demonstrated the suitability of this methodology for optimizing fast disintegrating tablets using pullulan as a diluent.
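
    In coded units, the face-centered central composite design named in the abstract is easy to write down: the 2³ factorial corners, six axial points at ±1 on each axis, and center replicates. Three center runs is an illustrative choice, not taken from the paper.

```python
from itertools import product

# Face-centered central composite design (alpha = 1) for three factors,
# in coded units -1 / 0 / +1.
def full_factorial():
    return [list(p) for p in product((-1, 1), repeat=3)]

def face_centered_ccd(center_runs=3):
    corners = full_factorial()                 # 8 factorial corners
    axial = []
    for i in range(3):                         # 6 axial (face-center) points
        for a in (-1, 1):
            pt = [0, 0, 0]
            pt[i] = a
            axial.append(pt)
    centers = [[0, 0, 0] for _ in range(center_runs)]
    return corners + axial + centers

ccd = face_centered_ccd()
print(len(ccd), ccd.count([0, 0, 0]))
```

    Each row maps to a run by rescaling the coded levels to the actual low/mid/high concentrations of X(1), X(2) and X(3).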

  8. Optimization of the intravenous glucose tolerance test in T2DM patients using optimal experimental design.

    PubMed

    Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O

    2009-06-01

    Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities for a more informative and/or less laborious study design of the insulin-modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin-modified IVGTT were evaluated: (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia, and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement of an optimal design over the basic design. The results showed that the design of the insulin-modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement, followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty (CV) of parameter estimates was low in all tested cases, despite the reduction in the number of samples per subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvements can be made to the design of the insulin-modified IVGTT and that the most important design factor was the placement of sample times, followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
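
    The key finding, that sample-time placement matters most, can be illustrated with a deliberately reduced model. The mono-exponential decay below is an invented stand-in for the full glucose-insulin model (which the authors optimize in PopED), and both sampling schedules are made up; only the D-criterion computation on the Fisher information matrix is standard.

```python
import math

# Toy model y = A*exp(-k*t) with parameters (A, k); the D-criterion is the
# determinant of the 2x2 Fisher information matrix built from the
# sensitivities dy/dA and dy/dk at the chosen sample times.
def fim_det(times, A=10.0, k=0.1, sigma=1.0):
    f11 = f12 = f22 = 0.0
    for t in times:
        e = math.exp(-k * t)
        sA, sk = e, -A * t * e        # sensitivities dy/dA and dy/dk
        f11 += sA * sA / sigma ** 2
        f12 += sA * sk / sigma ** 2
        f22 += sk * sk / sigma ** 2
    return f11 * f22 - f12 * f12      # det of the 2x2 FIM

clustered = [1, 2, 3, 4, 5, 6]        # all samples taken early
spread = [1, 3, 6, 10, 15, 25]        # samples span the whole decay
print(round(fim_det(clustered)), round(fim_det(spread)))
```

    Spreading the same number of samples across the decay yields a larger determinant, i.e. smaller joint parameter uncertainty, which is the effect the optimized IVGTT schedules exploit.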

  9. Design and Experimental Implementation of Optimal Spacecraft Antenna Slews

    DTIC Science & Technology

    2013-12-01

    Various software suites were used to perform thorough validation and verification of the Newton-Euler formulation developed herein, applicable to any spacecraft antenna configuration. The antenna model was then utilized to solve an optimal control problem for a geostationary…

  10. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDSs. However, DDSs cannot stop large, faster-moving fragments. We have experimentally demonstrated one shrapnel mitigation technique, showing that it is possible to direct fast-moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDSs. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  11. End-point controller design for an experimental two-link flexible manipulator using convex optimization

    NASA Technical Reports Server (NTRS)

    Oakley, Celia M.; Barratt, Craig H.

    1990-01-01

    Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.

  12. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.

  13. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of required measurements and subjects to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem so far prevents widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation.
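
    The grid-search idea behind a locally D-optimal design can be sketched for a reduced two-parameter log-logistic curve (the paper treats four-parameter models, but the mechanics are the same). The prior guesses e = 10, b = 2 and the dose grid are arbitrary illustrations.

```python
import math
from itertools import combinations

# Reduced log-logistic response y = 1 / (1 + (x/e)**b), with prior
# parameter guesses e = 10 (ED50) and b = 2 (slope).
def sensitivities(x, e=10.0, b=2.0):
    r = (x / e) ** b
    dyde = b * r / (e * (1.0 + r) ** 2)            # dy/de
    dydb = -r * math.log(x / e) / (1.0 + r) ** 2   # dy/db
    return dyde, dydb

def d_criterion(doses):
    # det of the 2x2 Fisher information matrix for an equal-weight design
    f11 = f12 = f22 = 0.0
    for x in doses:
        s1, s2 = sensitivities(x)
        f11 += s1 * s1
        f12 += s1 * s2
        f22 += s2 * s2
    return f11 * f22 - f12 * f12

grid = [0.5 * i for i in range(1, 81)]             # candidate doses 0.5..40
best = max(combinations(grid, 2), key=d_criterion)
print(best, d_criterion(best))
```

    With four parameters the FIM becomes 4x4 and the design needs at least four support points (control plus three dose levels, as the paper reports), but the same exhaustive or exchange-type search applies.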

  14. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    PubMed

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-10-16

    The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components (pitaya seed oil, 10%-25% w/w; virgin coconut oil, 25%-45% w/w; beeswax, 5%-25% w/w; candelilla wax, 1%-5% w/w; and carnauba wax, 1%-5% w/w) was investigated with respect to the melting point of the lipstick formulation. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factors, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point), owing to its role in heat endurance. The quadratic polynomial model fit the experimental data sufficiently well.
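
    The prediction step of such a quadratic mixture model can be sketched with a Scheffé-type polynomial. The model here is reduced to three components with made-up coefficients (the paper fits five components from data); only the 46 C target comes from the abstract.

```python
# Hypothetical Scheffe quadratic coefficients for a reduced 3-component
# blend; singleton keys are pure-component terms, pairs are interactions.
BETA = {
    ("oil",): 35.0, ("wax",): 80.0, ("coco",): 25.0,
    ("oil", "wax"): -20.0, ("oil", "coco"): 5.0, ("wax", "coco"): -10.0,
}

def melting_point(x):
    comps = list(x)
    mp = sum(BETA[(c,)] * x[c] for c in comps)
    for i, a in enumerate(comps):
        for b in comps[i + 1:]:
            mp += BETA[(a, b)] * x[a] * x[b]
    return mp

# exhaustive search over a grid on the simplex oil + wax + coco = 1
target, n = 46.0, 20
candidates = []
for i in range(n + 1):
    for j in range(n + 1 - i):
        x = {"oil": i / n, "wax": j / n, "coco": (n - i - j) / n}
        candidates.append((abs(melting_point(x) - target), x))
err, blend = min(candidates, key=lambda c: c[0])
print(blend, round(melting_point(blend), 2))
```

    In practice the coefficients come from regressing measured melting points on the D-optimal design runs, and the search respects the per-component range constraints given in the abstract.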

  15. Optimization of experimental design in fMRI: a general framework using a genetic algorithm.

    PubMed

    Wager, Tor D; Nichols, Thomas E

    2003-02-01

    This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously.
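
    Of the three fitness measures, counterbalancing is the simplest to sketch: score a candidate event sequence by how evenly each first-order transition (consecutive pair of trial types) occurs. The scoring rule below is a toy illustration, not the paper's exact formula.

```python
from collections import Counter
from itertools import product

# Deviation of first-order transition counts from the uniform ideal;
# a GA would maximize this (alongside efficiency measures) over sequences.
def counterbalance_score(seq, n_types):
    transitions = Counter(zip(seq, seq[1:]))
    ideal = (len(seq) - 1) / n_types ** 2          # uniform transition count
    dev = sum(abs(transitions.get(p, 0) - ideal)
              for p in product(range(n_types), repeat=2))
    return -dev                                    # higher = better balanced

balanced = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0]
blocked = [0] * 8 + [1] * 9
print(counterbalance_score(balanced, 2), counterbalance_score(blocked, 2))
```

    Inside the GA, scores like this are combined with contrast and HRF estimation efficiency into a single fitness used for selection.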

  16. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
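
    The sensitivity-maximization idea can be illustrated with the classic semi-infinite solid under constant surface heat flux, for which the temperature field is analytic. The property values, flux, sensor depths and time below are illustrative stand-ins for the paper's composite experiments.

```python
import math

# T(x,t) = (2*q/k) * sqrt(alpha*t) * ierfc(x / (2*sqrt(alpha*t))),
# the textbook solution for a semi-infinite solid heated by a constant
# surface flux q, with diffusivity alpha = k/C.
def temperature(x, t, k=1.0, C=2.0e6, q=5000.0):
    alpha = k / C
    u = x / (2.0 * math.sqrt(alpha * t))
    ierfc = math.exp(-u * u) / math.sqrt(math.pi) - u * math.erfc(u)
    return (2.0 * q / k) * math.sqrt(alpha * t) * ierfc

def sensitivity_k(x, t, dk=1e-4):
    # central finite-difference sensitivity dT/dk around k = 1
    return (temperature(x, t, k=1.0 + dk)
            - temperature(x, t, k=1.0 - dk)) / (2 * dk)

near, far, t = 0.005, 0.05, 600.0                  # depths [m], time [s]
print(round(sensitivity_k(near, t), 2), round(sensitivity_k(far, t), 2))
```

    The sensitivity even changes sign with depth (shallow readings fall as k rises, deep readings grow), which is why sensor location is one of the parameters worth optimizing: a location where the sensitivities to k and to C are large and not proportional tightens both confidence intervals.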

  17. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  19. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed, combining well-established response surface methodology with time series modeling, to facilitate the formulation development process for hydrophilic matrix tablets incorporating magnesium stearate. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors, x₁ and x₂: a formulation factor (the amount of magnesium stearate) and a processing factor (mixing time). Two batch sizes (100 and 500 tablets) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The optimal settings for gelation were 0.46 g magnesium stearate with 2.76 min mixing time for the 100-tablet batch and 1.54 g with 6.51 min for the 500-tablet batch. The optimal settings for drug release were 0.33 g with 7.99 min for the 100-tablet batch and 1.54 g with 6.51 min for the 500-tablet batch. The exact ratio and mixing time of magnesium stearate could thus be chosen according to the desired hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing the significant factors and hence obtaining optimum formulations in a systematic and reliable way.

  20. Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Cheng, W.; Yeh, W. W.

    2012-12-01

    An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem that employs a maximal information criterion to choose among competing designs subject to specified design constraints. Because of the combinatorial search required, given a realistic, large-scale groundwater model the dimensionality of the optimal design problem becomes very large and can be difficult, if not impossible, to solve using mathematical programming techniques such as integer programming or the Simplex method with relaxation. Global search techniques, such as Genetic Algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls to the groundwater model, this approach may still be infeasible for finding the optimal design in a realistic groundwater model. Proper Orthogonal Decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. Results for a one-dimensional test case show identical results for GA, integer programming, and an exhaustive search, demonstrating that GA is a valid method for a global optimum search and has potential for solving large-scale optimal design problems. Additionally, the algorithm using GA with POD model reduction is several orders of magnitude faster than the same algorithm without POD model reduction, in terms of the time required to find the optimal solution. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.
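
    The POD step can be sketched with synthetic data. The snapshots below lie (up to small noise) on one spatial mode, and power iteration on the snapshot covariance recovers that mode; this is a minimal stand-in for the SVD-based POD applied to real groundwater models, with all numbers invented.

```python
import math
import random

random.seed(1)
n = 50                                             # full state dimension
mode = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
snapshots = [[a * m + 0.01 * random.gauss(0, 1) for m in mode]
             for a in (0.5, 1.0, 1.5, 2.0, 3.0)]   # 5 noisy snapshots

def apply_cov(v):                                  # C v with C = sum s s^T
    out = [0.0] * n
    for s in snapshots:
        dot = sum(si * vi for si, vi in zip(s, v))
        for i in range(n):
            out[i] += s[i] * dot
    return out

v = [1.0] * n
for _ in range(100):                               # power iteration
    v = apply_cov(v)
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]

mnorm = math.sqrt(sum(m * m for m in mode))
alignment = abs(sum(vi * mi for vi, mi in zip(v, mode))) / mnorm
print(round(alignment, 4))
```

    Projecting the full model onto the few leading modes found this way turns each GA fitness evaluation into a small reduced-order solve, which is the source of the reported speed-up.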

  1. KL-optimal experimental design for discriminating between two growth models applied to a beef farm.

    PubMed

    Campos-Barreiro, Santiago; López-Fidalgo, Jesús

    2016-02-01

    The body mass growth of organisms is usually represented in terms of what are known as ontogenetic growth models, which represent the dependence of body mass on time. The paper is concerned with the problem of finding an optimal experimental design for discriminating between two competing mass growth models applied to a beef farm. T-optimality was first introduced for discrimination between models, but in this paper KL-optimality, based on the Kullback-Leibler distance, is used to deal with correlated observations, since in this case observations on a particular animal are not independent.

  2. Chemometric experimental design based optimization techniques in capillary electrophoresis: a critical review of modern applications.

    PubMed

    Hanrahan, Grady; Montes, Ruthy; Gomez, Frank A

    2008-01-01

    A critical review of recent developments in the use of chemometric experimental design based optimization techniques in capillary electrophoresis applications is presented. Current advances have led to enhanced separation capabilities of a wide range of analytes in such areas as biological, environmental, food technology, pharmaceutical, and medical analysis. Significant developments in design, detection methodology and applications from the last 5 years (2002-2007) are reported. Furthermore, future perspectives in the use of chemometric methodology in capillary electrophoresis are considered.

  3. Experimental design for optimizing drug release from silicone elastomer matrix and investigation of transdermal drug delivery.

    PubMed

    Snorradóttir, Bergthóra S; Gudnason, Pálmar I; Thorsteinsson, Freygardur; Másson, Már

    2011-04-18

    Silicone elastomers are commonly used for medical devices and external prostheses. Recently, there has been growing interest in silicone-based medical devices with enhanced function that release drugs from the elastomer matrix. In the current study, an experimental design approach was used to optimize the release properties of the model drug diclofenac from a medical silicone elastomer matrix, including a combination of four permeation enhancers as additives and allowing for constraints on the properties of the material. The D-optimal design included six factors and five responses describing material properties and release of the drug. The first experimental objective was screening, to investigate the main and interaction effects, based on 29 experiments. All excipients had a significant effect and were therefore included in the optimization, which also allowed for possible quadratic terms in the model and was based on 38 experiments. Screening and optimization of release and material properties resulted in the production of two optimized silicone membranes, which were tested for transdermal delivery. The results confirmed the validity of the model for the optimized membranes, which were used for further testing of transdermal drug delivery through heat-separated human skin. The optimization resulted in an excipient/drug/silicone composition that gave a cured elastomer with good tensile strength and a 4- to 7-fold increase in transdermal delivery relative to elastomer without excipients.

  4. Optimal experimental design in an epidermal growth factor receptor signalling and down-regulation model.

    PubMed

    Casey, F P; Baird, D; Feng, Q; Gutenkunst, R N; Waterfall, J J; Myers, C R; Brown, K S; Cerione, R A; Sethna, J P

    2007-05-01

    We apply the methods of optimal experimental design to a differential equation model for epidermal growth factor receptor signalling, trafficking and down-regulation. The model incorporates the role of a recently discovered protein complex made up of the E3 ubiquitin ligase Cbl, the guanine exchange factor (GEF) Cool-1 (β-Pix) and the Rho family G protein Cdc42. The complex has been suggested to be important in disrupting receptor down-regulation. We demonstrate that the model interactions can accurately reproduce the experimental observations, that they can be used to make predictions with accompanying uncertainties, and that we can apply ideas of optimal experimental design to suggest new experiments that reduce the uncertainty on unmeasurable components of the system.

  5. Optimal experimental design for assessment of enzyme kinetics in a drug discovery screening environment.

    PubMed

    Sjögren, Erik; Nyberg, Joakim; Magnusson, Mats O; Lennernäs, Hans; Hooker, Andrew; Bredberg, Ulf

    2011-05-01

    A penalized expectation of determinant (ED)-optimal design with a discrete parameter distribution was used to find an optimal experimental design for assessment of enzyme kinetics in a screening environment. A data set of enzyme kinetic parameters (V(max) and K(m)) was collected from previously reported studies, and every V(max)/K(m) pair (n = 76) was taken to represent a unique drug compound. The design was restricted to 15 samples, an incubation time of up to 40 min, and starting concentrations (C(0)) for the incubation between 0.01 and 100 μM. The optimization was performed by finding the sample times and C(0) returning the lowest uncertainty (S.E.) of the model parameter estimates. Individual optimal designs, one general optimal design, and one pragmatic optimal design (OD) suitable for laboratory practice were obtained. In addition, a standard design (STD-D), representing a commonly applied approach for metabolic stability investigations, was constructed. Simulations were performed for OD and STD-D by using the Michaelis-Menten (MM) equation, and enzyme kinetic parameters were estimated with both MM and a monoexponential decay. OD generated a better result (relative standard error) for 99% of the compounds and an equal or better result (root mean square error, RMSE) for 78% of the compounds in estimation of metabolic intrinsic clearance. Furthermore, high-quality estimates (RMSE < 30%) of both V(max) and K(m) could be obtained for a considerable number (26%) of the investigated compounds by using the suggested OD. The results presented in this study demonstrate that the output could generally be improved compared with that obtained from the standard approaches used today.
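
    The comparison between full Michaelis-Menten estimation and a mono-exponential fit can be sketched end to end: simulate substrate depletion, then recover intrinsic clearance CLint = Vmax/Km from a log-linear regression. Vmax, Km, C0 and the sampling schedule are invented illustrative values, not the paper's data.

```python
import math

VMAX, KM = 2.0, 20.0                               # e.g. uM/min and uM

def simulate(c0, times, dt=0.01):
    # deplete substrate with dC/dt = -Vmax*C/(Km + C) via explicit Euler
    conc, c, t = [], c0, 0.0
    for target in times:
        while t < target - 1e-9:
            c += -VMAX * c / (KM + c) * dt
            t += dt
        conc.append(c)
    return conc

def fitted_rate(times, conc):
    # least-squares slope of log(C) versus t, negated (elimination rate)
    n = len(times)
    tm = sum(times) / n
    ym = sum(math.log(c) for c in conc) / n
    num = sum((t - tm) * (math.log(c) - ym) for t, c in zip(times, conc))
    den = sum((t - tm) ** 2 for t in times)
    return -num / den

times = [5, 10, 20, 30, 40]
k_est = fitted_rate(times, simulate(0.5, times))   # C0 = 0.5 uM << Km
print(round(k_est, 4), VMAX / KM)
```

    At a start concentration well below Km the fitted rate approximates Vmax/Km closely; the design question the paper answers is which C(0) values and sample times keep Vmax and Km separately identifiable across 76 very different compounds.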

  6. An Artificial Intelligence Technique to Generate Self-Optimizing Experimental Designs.

    DTIC Science & Technology

    1983-02-01

    …a pattern or a binary chopping technique in the space of decision variables while carrying out a sequence of controlled experiments on the strategy… (Arizona State Univ., Tempe, Group for Computer Studies; author Nicholas V.…)

  7. Time-Domain Optimal Experimental Design in Human Seated Postural Control Testing.

    PubMed

    Cody Priess, M; Choi, Jongeun; Radcliffe, Clark; Popovich, John M; Cholewicki, Jacek; Peter Reeves, N

    2015-05-01

    We are developing a series of systems science-based clinical tools that will assist in modeling, diagnosing, and quantifying postural control deficits in human subjects. In line with this goal, we have designed and constructed a seated balance device and an associated experimental task for identification of the human seated postural control system. In this work, we present a quadratic programming (QP) technique for optimizing a time-domain experimental input signal for this device. The goal of this optimization is to maximize the information present in the experiment, and therefore its ability to produce accurate estimates of several desired seated postural control parameters. To achieve this, we formulate the problem as a nonconvex QP and attempt to locally maximize a measure (T-optimality condition) of the experiment's Fisher information matrix (FIM) under several constraints. These constraints include limits on the input amplitude, physiological output magnitude, subject control amplitude, and input signal autocorrelation. Because the autocorrelation constraint takes the form of a quadratic constraint (QC), we replace it with a conservative linear relaxation about a nominal point, which is iteratively updated during the course of optimization. We show that this iterative descent algorithm generates a convergent suboptimal solution that guarantees a monotonically nonincreasing cost function value while satisfying all constraints during the iterations. Finally, we present successful experimental results using an optimized input sequence.

  8. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1987-10-31

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis.

  9. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  10. Optimization study on the formulation of roxithromycin dispersible tablet using experimental design.

    PubMed

    Weon, K Y; Lee, K T; Seo, S H

    2000-10-01

    This study set out to improve the physical and pharmaceutical characteristics of the present formulation using an appropriate experimental design. The work described here concerns the formulation of a dispersible tablet, prepared by the direct compression method, containing roxithromycin in the form of coated granules. A 2³ factorial design was used as the screening model, and a central composite design (CCD) associated with response surface methodology was used as the optimization model to develop and optimize the formulation of the roxithromycin dispersible tablet. The three independent variables investigated were the functional excipients binder (X1), disintegrant (X2) and lubricant (X3). The effects of these variables were investigated on the following responses: hardness (Y1), friability (Y2) and disintegration time (Y3) of the tablet. Three replicates at the center levels of each design were used to independently estimate the experimental error and to detect any curvature in the response surface. This enabled the best formulations to be selected objectively. The order of effect of each term on all response variables was X3 > X2 > X1 > X1*X2 > X2*X2 > X2*X3 > X3*X3 > X1*X3 > X1*X1, and model equations for each response variable were generated. Optimized compositions were computed using these model equations and confirmed by a subsequent demonstration study. As a result, this study demonstrated the efficiency and effectiveness of a systematic formulation optimization process in developing the roxithromycin dispersible tablet with limited experiments.

  11. Near-optimal experimental design for model selection in systems biology

    PubMed Central

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M.

    2013-01-01

    Motivation: Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. Results: We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Availability: Toolbox ‘NearOED’ available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org). Contact: busettoa@inf.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23900189
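    The greedy-selection idea behind such near-optimal design methods can be sketched as follows: when the information measure is monotone submodular, repeatedly adding the readout with the largest marginal gain comes within a factor (1 - 1/e) of the optimal selection. The readouts and the time points each one informs are hypothetical:

```python
# Hypothetical candidate readouts and the time points each one informs.
candidates = {
    "glucose":   {"t1", "t2"},
    "biomass":   {"t2", "t3", "t4"},
    "byproduct": {"t5", "t6"},
    "pH":        {"t1"},
}

def coverage(selection):
    # Monotone submodular objective: number of distinct time points
    # informed by the selected readouts.
    covered = set()
    for name in selection:
        covered |= candidates[name]
    return len(covered)

# Greedy rule: under a measurement budget, always add the readout
# with the largest marginal information gain.
selected = []
for _ in range(2):  # budget of two readouts
    remaining = candidates.keys() - set(selected)
    best = max(remaining, key=lambda c: coverage(selected + [c]))
    selected.append(best)
print(selected, coverage(selected))
```

    The actual method in the paper works with an information measure for model discrimination rather than simple coverage; this sketch only illustrates the polynomial-time greedy structure.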

  12. Optimal design and experimental analyses of a new micro-vibration control payload-platform

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen

    2016-07-01

    This paper presents a new payload-platform for precision devices that is capable of isolating complex space micro-vibrations in the low-frequency range below 5 Hz. The novel payload-platform, equipped with smart material actuators, is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic circuit nonlinearity. The dynamic model of the driving element is then established using the Lagrange method, and the performance of the designed payload-platform is further examined by combining a controlled auto-regressive moving average (CARMA) model with a modified generalized predictive control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform shows strong potential for micro-vibration isolation.

  13. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

    The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FL067 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free-stream Mach number, M(sub infinity), of 2.55 as well as at the design Mach number, M(sub infinity)=2.4. Data over a Mach number range of 1.8 to 2.4 were taken in UPWT section #1. Transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were covered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identified the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.

  15. Experimental characterization and multidisciplinary conceptual design optimization of a bendable load stiffened unmanned air vehicle wing

    NASA Astrophysics Data System (ADS)

    Jagdale, Vijay Narayan

    Demand for deployable MAVs and UAVs with wings designed to reduce aircraft storage volume led to the development of a bendable wing concept at the University of Florida (UF). The wing load stiffens in the flight load direction while remaining compliant in the opposite direction, enabling UAV storage inside smaller packing volumes. From the design perspective, when the wing shape parameters are treated as design variables, the performance requirements (high aerodynamic efficiency, structural stability under aggressive flight loads, and the compliance needed to avoid breaking while stored) generally conflict with each other. Creep deformation induced by long-term storage and its effect on the wing flight characteristics are additional considerations. Experimental characterization of candidate bendable UAV wings is performed in order to demonstrate and understand the aerodynamic and structural behavior of the bendable load-stiffened wing under flight loads and while the wings are stored inside a canister for long durations, in the process identifying some important wing shape parameters. A multidisciplinary, multiobjective design optimization approach is used for the conceptual design of a bendable wing with a 24 inch span and 7 inch root chord. Aerodynamic performance of the wing is studied using the extended vortex lattice method of the Athena Vortex Lattice (AVL) program. An arc-length-method-based nonlinear FEA routine in ABAQUS is used to evaluate the structural performance of the wing and to determine the maximum flight velocity that the wing can withstand without buckling or failing under aggressive flight loads. An analytical approach is used to study the stresses developed in the composite wing during storage, and the Tsai-Wu criterion is used to check for failure of the composite wing due to rolling stresses and to determine the minimum safe storage diameter. Multidisciplinary wing shape and layup optimization is performed using an elitist non-dominated sorting

  16. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information obtained about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model represents a simple but realistic scenario in the context of the CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design has the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. To improve these results, we show how combining two OFs with the multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments, exploring the influence of noise, specific site characteristics and its potential for reservoir monitoring.
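    The core of the multi-objective search mentioned above is the Pareto-dominance test NSGA-II uses to rank candidate designs. A minimal sketch with hypothetical (information, cost) values, where information is maximized and cost minimized:

```python
# Hypothetical candidate survey designs scored on two objectives:
# information (maximize) and acquisition cost (minimize).
designs = {
    "A": (0.9, 10.0),
    "B": (0.8, 4.0),
    "C": (0.5, 5.0),   # worse than B on both objectives
    "D": (0.6, 2.0),
}

def dominates(p, q):
    # p dominates q if it is no worse on both objectives and strictly
    # better on at least one (the NSGA-II ranking primitive).
    no_worse = p[0] >= q[0] and p[1] <= q[1]
    strictly_better = p[0] > q[0] or p[1] < q[1]
    return no_worse and strictly_better

# The Pareto front: designs not dominated by any other design.
front = [name for name, obj in designs.items()
         if not any(dominates(other, obj)
                    for other_name, other in designs.items()
                    if other_name != name)]
print(front)
```

    NSGA-II iterates this ranking, with crowding-distance selection and genetic operators, over a whole population of candidate surveys; the sketch only shows the dominance test itself.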

  17. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    PubMed

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (kH) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of kH of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the gas-to-liquid volume ratio of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and the dependent variable (the GC(-1) peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design that estimates kH with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate the performance improvements of our design with an example measuring the kH of naphthalene in aqueous solution, as well as with simulations based on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the linear phase-ratio-to-GC(-1) region (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open-source software optDesignSlopeInt, an R package on CRAN.
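    The PRV calculation itself reduces to a simple regression; as a sketch with synthetic, noise-free numbers (not measured data), kH is recovered as the slope-to-intercept ratio:

```python
import numpy as np

# Gas/liquid volume ratios of the vials (the design variable) and a
# synthetic inverse GC response following the linear PRV model
# 1/A = b0 + b1*r, so that kH = b1 / b0. Coefficients are hypothetical.
ratios = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
b0, b1 = 2.0, 0.04
inv_response = b0 + b1 * ratios

# Linear regression of inverse response on phase ratio; np.polyfit
# returns the slope first for a degree-1 fit.
slope, intercept = np.polyfit(ratios, inv_response, 1)
kH = slope / intercept
print(kH)
```

    The abstract's point is that the choice of volume ratios (the design) controls the variance of this slope/intercept ratio estimate, and that uniform spacing is generally not the minimum-variance choice.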

  18. Medium optimization of antifungal activity production by Bacillus amyloliquefaciens using statistical experimental design.

    PubMed

    Mezghanni, Héla; Khedher, Saoussen Ben; Tounsi, Slim; Zouari, Nabil

    2012-01-01

    In order to overproduce biofungicide agents from Bacillus amyloliquefaciens BLB371, a suitable culture medium was optimized using response surface methodology. A Plackett-Burman design and a central composite design were employed for experimental design and analysis of the results. Peptone, sucrose, and yeast extract were found to significantly influence antifungal activity production, and their optimal concentrations were 20 g/L, 25 g/L, and 4.5 g/L, respectively. The corresponding biofungicide production was 250 AU/mL, a 56% improvement in antifungal component production over a previously used medium (160 AU/mL). Moreover, our results indicated that omitting the minerals CuSO(4), FeCl(3) · 6H(2)O, Na(2)MoO(4), KI, ZnSO(4) · 7H(2)O, H(3)BO(3), and C(6)H(8)O(7) from the optimized culture medium was not detrimental to biofungicide production by Bacillus amyloliquefaciens BLB371, which is interesting from a practical point of view, particularly for low-cost production and use of the biofungicide for the control of agricultural fungal pests.
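    The Plackett-Burman screening step can be sketched by constructing the standard 12-run design matrix from its cyclic generator row; columns not assigned to real factors serve as dummy factors for error estimation. The construction below is the generic textbook matrix, not the specific layout used in the study:

```python
import numpy as np

# Standard cyclic generator row of the 12-run Plackett-Burman design.
gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

# Eleven cyclic shifts of the generator plus a final all-minus row.
rows = [gen[i:] + gen[:i] for i in range(11)]
rows.append([-1] * 11)
pb12 = np.array(rows)

# Each of the 11 columns can carry one factor (e.g. peptone, sucrose,
# yeast extract); the columns are mutually orthogonal and balanced,
# which is what makes main effects separable in only 12 runs.
print(pb12.shape)
```

    Significant columns from such a screen are then carried into a central composite design for the response surface fit, as in the abstract.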

  19. Experimental design, modeling and optimization of polyplex formation between DNA oligonucleotides and branched polyethylenimine.

    PubMed

    Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana

    2015-09-28

    The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from the gel electrophoresis assays. The central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as concentrations of dsDNA and B-PEI as well as the initial pH of solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic-scale, a molecular dynamic simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between macromolecules, stabilizing the polyplex structure.

  20. Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.

    PubMed

    Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P

    2017-03-01

    We present an integrated framework for online optimal experimental re-design applied to parallel nonlinear dynamic processes, which aims to precisely estimate the parameter set of macro-kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model for new strains, mutants, or products. In the biosciences this is especially important, as model identification is a long and laborious process that continues to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully automated liquid handling robots: one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running, using the information generated by periodic parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by an approximately 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards more efficient computer-aided bioprocess development. Biotechnol. Bioeng. 2017;114: 610-619. © 2016 Wiley Periodicals, Inc.

  1. Experimental design of an optimal phase duration control strategy used in batch biological wastewater treatment.

    PubMed

    Pavgelj, N B; Hvala, N; Kocijan, J; Ros, M; Subelj, M; Music, G; Strmcnik, S

    2001-01-01

    The paper presents the design of an algorithm used in the control of a sequencing batch reactor (SBR) for wastewater treatment. The algorithm performs on-line optimization of the batch phase durations, which must be adjusted because of the variable influent wastewater. Compared to operation with fixed batch phase times, this control strategy improves treatment quality and reduces energy consumption. The designed control algorithm is based on following the course of simple indirect process variables (i.e., redox potential, dissolved oxygen concentration and pH) and automatically recognizing characteristic patterns in their time profiles. The algorithm acts on filtered on-line signals and is based on heuristic rules. The control strategy was developed and tested on a laboratory pilot plant. To facilitate the experimentation, the pilot plant was complemented by a computer-supported experimental environment that enabled: (i) easy access to all data (on-line signals, laboratory measurements, batch parameters) needed for the design of the algorithm, and (ii) the immediate application of the algorithm designed off-line in the Matlab package to real-time control. When tested on the pilot plant, the control strategy demonstrated good agreement between the proposed completion times and the actual terminations of the desired biodegradation processes.
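    The pattern-recognition idea (detecting, on a filtered signal, the plateau that marks phase completion) can be sketched as a slope-threshold test. The signal shape, filter window and threshold below are all hypothetical:

```python
import numpy as np

# Synthetic on-line signal: rises and then plateaus, like a dissolved
# oxygen or redox profile approaching completion of a biodegradation
# phase, plus a mild periodic disturbance.
t = np.arange(60)
signal = 8.0 - 4.0 * np.exp(-t / 10.0) + 0.01 * np.sin(t)

# Heuristic rule: filter the signal (moving average), then declare the
# phase complete at the first sample whose slope magnitude falls below
# a threshold, i.e. where the profile flattens.
window = 5
filtered = np.convolve(signal, np.ones(window) / window, mode="valid")
slope = np.diff(filtered)

threshold = 0.02
completion = int(np.argmax(np.abs(slope) < threshold))
print(completion)
```

    The real algorithm layers several such rules over redox, dissolved oxygen and pH profiles; this sketch shows only the filter-then-threshold skeleton.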

  2. Effect of an experimental design for evaluating the nonlinear optimal formulation of theophylline tablets using a bootstrap resampling technique.

    PubMed

    Arai, Hiroaki; Suzuki, Tatsuya; Kaseda, Chosei; Takayama, Kozo

    2009-06-01

    The optimal solutions of theophylline tablet formulations based on datasets from four experimental designs (Box-Behnken design, central composite design, D-optimal design, and full factorial design) were calculated by the response surface method incorporating multivariate spline interpolation (RSM(S)). The reliability of these solutions was evaluated by a bootstrap (BS) resampling technique. The optimal solutions derived from the Box-Behnken, D-optimal, and full factorial design datasets were similar, and the distributions of the BS optimal solutions calculated for these datasets were symmetrical. Thus, the accuracy and reproducibility of the optimal solutions could be evaluated quantitatively from the deviations of these distributions. However, the distribution of the BS optimal solutions calculated for the central composite design dataset was largely asymmetrical, and basic statistics for this distribution could not be computed. The cause of this problem was considered to be the mixing of global and local optima. Therefore, self-organizing map (SOM) clustering was applied to identify the global optimal solutions. The BS optimal solutions were divided into four clusters by SOM clustering, the accuracy and reproducibility of the optimal solutions in each cluster were quantitatively evaluated, and the cluster containing the global optima was identified. SOM clustering was thus considered to reinforce the BS resampling method for evaluating the reliability of optimal solutions irrespective of the dataset style.
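    The bootstrap evaluation described above can be sketched on a one-factor example: resample the dataset with replacement, refit a response surface each time, and inspect the distribution of the refit optima. The quadratic response and noise level are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-factor response surface with its optimum near x = 0.2,
# observed with noise.
x = np.linspace(-1.0, 1.0, 15)
y = 1.0 - (x - 0.2) ** 2 + rng.normal(0.0, 0.05, x.size)

# Bootstrap: resample the dataset with replacement, refit a quadratic
# model, and record the location of the refit optimum each time.
optima = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)
    c2, c1, c0 = np.polyfit(x[idx], y[idx], 2)
    optima.append(-c1 / (2.0 * c2))     # vertex of the fitted parabola
optima = np.array(optima)

# A tight, symmetric distribution of bootstrap optima indicates an
# accurate, reproducible optimal solution; a skewed or multi-modal one
# signals the mixing of global and local optima the abstract describes.
print(round(optima.mean(), 3), round(optima.std(), 3))
```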

  3. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application.

    PubMed

    Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar

    2014-01-01

    Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows a poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the drug's poor bioavailability. A response surface methodology approach using a Box-Behnken experimental design was applied for the optimization of the transfersomes. Phospholipon(®) 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as the barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, an entrapment efficiency of 91.00%±4.90%, and a transdermal flux of 6.5±1.1 μg/cm(2)/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of the amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes and an ethanolic phosphate-buffered saline. A differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy showed enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride

  4. Doehlert experimental design applied to optimization of light emitting textile structures

    NASA Astrophysics Data System (ADS)

    Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.

    2016-07-01

    A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) in the treatment of dermatologic diseases such as actinic keratosis (AK). A successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into the textile so as to emit the injected light laterally. Previous studies confirmed that the light power can be locally controlled by modifying the radius of the POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm2) and to guarantee the best possible light delivery homogeneity over the LEF; these two objectives are often contradictory. Fifteen experiments were carried out with a Doehlert experimental design involving response surface methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal setup of the warp yarn tensions.

  5. Multi-objective optimization design and experimental investigation of centrifugal fan performance

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Songling; Hu, Chenxing; Zhang, Qian

    2013-11-01

    Current studies of fan performance optimization mainly focus on two aspects: one is to improve the blade profile, and the other is to consider the influence of a single impeller structural parameter on fan performance. However, there are few studies on the comprehensive effect of key parameters such as blade number, blade exit stagger angle and impeller outlet width on fan performance. The G4-73 backward centrifugal fan widely used in power plants is selected as the research object. Based on orthogonal design and a BP neural network, a model for predicting the centrifugal fan performance parameters is established; the maximum relative errors of the total pressure and efficiency predictions are 0.974% and 0.333%, respectively. Multi-objective optimization of the total pressure and efficiency of the fan is conducted with a genetic algorithm, and the optimum combination of impeller structural parameters is proposed. The optimized blade number, blade exit stagger angle and impeller outlet width are 14, 43.9°, and 21 cm, respectively. Experiments on centrifugal fan performance and noise were conducted before and after the installation of the new impeller. The experimental results show that with the new impeller the total pressure of the fan increases significantly over the whole flow-rate range, the fan efficiency is improved when the relative flow is above 75%, and the high-efficiency area is broadened. Additionally, in the 65%-100% relative flow range, the fan noise is reduced. Under the design operating condition, the total pressure and efficiency of the fan are improved by 6.91% and 0.5%, respectively. This research highlights the comprehensive effect of impeller structural parameters on fan performance, and shows that a new impeller can be designed to satisfy engineering demands such as energy saving, noise reduction or solving air pressure insufficiency in power plants.

  6. Optimization of primaquine diphosphate tablet formulation for controlled drug release using the mixture experimental design.

    PubMed

    Duque, Marcelo Dutra; Kreidel, Rogério Nepomuceno; Taqueda, Maria Elena Santos; Baby, André Rolim; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Consiglieri, Vladi Olga

    2013-01-01

    A tablet formulation based on a hydrophilic matrix with controlled drug release was developed, and the effect of polymer concentrations on the release of primaquine diphosphate was evaluated. To achieve this purpose, a 20-run, four-factor mixture design with multiple constraints on the proportions of the components was employed to obtain the tablet compositions. Drug release was determined by an in vitro dissolution study in phosphate buffer solution at pH 6.8. The fitted polynomial functions described the behavior of the mixture on simplex coordinate systems, allowing the effect of each factor (polymer) on tablet characteristics to be studied. Based on the response surface methodology, a tablet composition was optimized with the purpose of obtaining a primaquine diphosphate release profile close to zero-order kinetics. This formulation released 85.22% of the drug over 8 h, and its release kinetics were analyzed using the Korsmeyer-Peppas model (Adj-R(2) = 0.99295), which confirmed that both diffusion and erosion contributed to the drug release mechanism. The data from the optimized formulation were very close to the predictions from the statistical analysis, demonstrating that a mixture experimental design can be used to optimize primaquine diphosphate dissolution from hydroxypropylmethyl cellulose and polyethylene glycol matrix tablets.
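    A practical detail of such mixture designs is the constraint that component proportions sum to one while respecting individual bounds. A sketch that enumerates feasible candidate compositions on a simplex lattice (the bounds and grid step are hypothetical, not those of the study):

```python
import itertools
import numpy as np

# Hypothetical lower/upper bounds on the proportions of four
# formulation components (e.g. drug, two polymers, filler).
lower = np.array([0.10, 0.05, 0.05, 0.00])
upper = np.array([0.60, 0.40, 0.40, 0.30])

# Enumerate a simplex lattice: choose the first three proportions on
# a grid and let the fourth close the mixture to sum exactly to 1.
step = 0.05
levels = np.arange(0.0, 1.0 + 1e-9, step)
designs = []
for p in itertools.product(levels, repeat=3):
    x = np.append(p, 1.0 - sum(p))
    if x[3] < -1e-9:
        continue  # the first three already exceed the whole mixture
    if np.all(x >= lower - 1e-9) and np.all(x <= upper + 1e-9):
        designs.append(x)
designs = np.array(designs)
print(len(designs))
```

    Design software then picks a small, well-spread subset (here 20 runs) from such a feasible region, and Scheffé-type polynomials are fitted over the simplex coordinates.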

  7. Optimization and enhancement of soil bioremediation by composting using the experimental design technique.

    PubMed

    Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni

    2010-06-01

    The objective of this study was the application of the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting, using a low-cost material (compost from the organic fraction of municipal solid waste) as the amendment and pyrene as the model pollutant. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) organic matter h(-1)). Stable compost permitted almost complete degradation of pyrene in a short time (10 days). The results indicated that compost stability is a key parameter for optimizing PAH biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of the process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) organic matter h(-1)) and soil:compost mixing ratio, respectively.

  8. Formulation and optimization by experimental design of eco-friendly emulsions based on d-limonene.

    PubMed

    Pérez-Mosqueda, Luis M; Trujillo-Cayado, Luis A; Carrillo, Francisco; Ramírez, Pablo; Muñoz, José

    2015-04-01

    d-Limonene is a naturally occurring solvent that can replace more polluting chemicals in agrochemical formulations. In the present work, a comprehensive study of the influence of the dispersed phase mass fraction, ϕ, and of the surfactant/oil ratio, R, on the emulsion stability and droplet size distribution of d-limonene-in-water emulsions stabilized by a non-ionic triblock copolymer surfactant was carried out. A full factorial 3(2) experimental design was conducted in order to optimize the emulsion formulation. The independent variables ϕ and R were studied in the ranges 10-50 wt% and 0.02-0.1, respectively. The emulsions studied were destabilized mainly by both creaming and Ostwald ripening. Therefore, the initial droplet size and an overall destabilization parameter, the so-called Turbiscan stability index, were used as dependent variables. The optimal formulation, with minimum droplet size and maximum stability, was achieved at ϕ=50 wt% and R=0.062. Furthermore, the response surface methodology allowed us to obtain a formulation yielding sub-micron emulsions using a single-step rotor/stator homogenization process instead of the more commonly used two-step emulsification methods. In addition, the optimal formulation was further stabilized against Ostwald ripening by adding silicone oil to the dispersed phase. The combination of these experimental findings allowed us to gain a deeper insight into the stability of these emulsions, which can be applied to the rational development of new formulations with potential application in agrochemicals. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Mixed culture optimization for marigold flower ensilage via experimental design and response surface methodology.

    PubMed

    Navarrete-Bolaños, José Luis; Jiménez-Islas, Hugo; Botello-Alvarez, Enrique; Rico-Martínez, Ramiro

    2003-04-09

    Endogenous microorganisms isolated from the marigold flower (Tagetes erecta) were studied to understand the events taking place during its ensilage. Studies of cellulase enzymatic activity and of the ensilage process were undertaken. In both studies, the use of approximate second-order models and multiple linear regression, within the context of an experimental mixture design using response surface methodology as the optimization strategy, determined that the microorganisms Flavobacterium IIb, Acinetobacter anitratus, and Rhizopus nigricans are the most significant in marigold flower ensilage and exhibit high cellulase activity. A mixed culture comprising 9.8% Flavobacterium IIb, 41% A. anitratus, and 49.2% R. nigricans used during ensilage resulted in an increased yield of total extracted xanthophylls of 24.94 g/kg of dry weight, compared with 12.92 g/kg for the uninoculated control ensilage.

  10. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    NASA Astrophysics Data System (ADS)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the data used and on the criterion used to measure the model-to-data misfit. The statistical properties of the data (such as mean values, variances and covariances) should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (even for unmeasured quantities) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional, time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with the time and location of the measurements. We compared the resulting estimates of model output and parameters for different criteria. A further question is whether (and which) additional measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as to the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and in the model output itself with respect to the uncertainty in the measurement data, using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
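
A hedged sketch of the Fisher-information machinery mentioned above: for a model with Gaussian errors, the information matrix is J'C⁻¹J, where J holds parameter sensitivities and C the data covariance, and its inverse approximates the parameter covariance. The toy decay model and all numbers below are illustrative assumptions, not the biogeochemical model.

```python
import numpy as np

def model(p, t):
    # toy exponential-decay "tracer" model, standing in for the real one
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t, eps=1e-6):
    """Central finite-difference sensitivities df/dp at each observation time."""
    J = np.empty((t.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p); dp[k] = eps
        J[:, k] = (model(p + dp, t) - model(p - dp, t)) / (2 * eps)
    return J

t = np.linspace(0.0, 5.0, 8)           # candidate measurement times
p = np.array([2.0, 0.7])               # current parameter estimate
C = np.diag(np.full(t.size, 0.05**2))  # measurement-error covariance

J = jacobian(p, t)
FIM = J.T @ np.linalg.inv(C) @ J       # Fisher information matrix
param_cov = np.linalg.inv(FIM)         # Cramer-Rao-style parameter uncertainty
```

Candidate measurement times can then be ranked by how much they shrink the diagonal of `param_cov`, which is the sense in which extra measurements "reduce uncertainty".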

  11. Development and optimization of quercetin-loaded PLGA nanoparticles by experimental design

    PubMed Central

    TEFAS, LUCIA RUXANDRA; TOMUŢĂ, IOAN; ACHIM, MARCELA; VLASE, LAURIAN

    2015-01-01

    Background and aims: Quercetin is a flavonoid with good antioxidant activity, and exhibits various important pharmacological effects. The aim of the present work was to study the influence of formulation factors on the physicochemical properties of quercetin-loaded polymeric nanoparticles in order to optimize the formulation. Materials and methods: The nanoparticles were prepared by the nanoprecipitation method. A 3-factor, 3-level Box-Behnken design was employed in this study, considering poly(D,L-lactic-co-glycolic) acid (PLGA) concentration, polyvinyl alcohol (PVA) concentration and the stirring speed as independent variables. The responses were particle size, polydispersity index, zeta potential and encapsulation efficiency. Results: The PLGA concentration seemed to be the most important factor influencing quercetin-nanoparticle characteristics. Increasing PLGA concentration led to an increase in particle size, as well as encapsulation efficiency. On the other hand, it exhibited a negative influence on the polydispersity index and zeta potential. The PVA concentration and the stirring speed had only a slight influence on particle size and polydispersity index. However, PVA concentration had an important negative effect on the encapsulation efficiency. Based on the results obtained, an optimized formulation was prepared, and the experimental values were comparable to the predicted ones. Conclusions: The overall results indicated that PLGA concentration was the main factor influencing particle size, while entrapment efficiency was predominantly affected by the PVA concentration. PMID:26528074
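
A minimal sketch of the 3-factor Box-Behnken design used above: all ±1 combinations of each factor pair with the third factor held at its centre level, plus replicate centre points. The construction is generic; mapping the coded levels onto PLGA concentration, PVA concentration and stirring speed is left to the experimenter.

```python
import itertools
import numpy as np

def box_behnken_3(n_center=3):
    """3-factor Box-Behnken design in coded units (-1, 0, +1)."""
    rows = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1.0, 1.0), repeat=2):
            r = [0.0, 0.0, 0.0]
            r[i], r[j] = a, b
            rows.append(r)
    rows += [[0.0, 0.0, 0.0]] * n_center  # replicate centre points
    return np.array(rows)

design = box_behnken_3()
# 12 edge-midpoint runs + 3 centre runs = 15 runs, the usual BBD size
print(design.shape)  # (15, 3)
```

Unlike a full 3^3 factorial (27 runs), the Box-Behnken design estimates the same quadratic model in 15 runs and avoids the extreme corners of the factor space.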

  12. Optimizing indomethacin-loaded chitosan nanoparticle size, encapsulation, and release using Box-Behnken experimental design.

    PubMed

    Abul Kalam, Mohd; Khan, Abdul Arif; Khan, Shahanavaj; Almalik, Abdulaziz; Alshamsan, Aws

    2016-06-01

    Indomethacin chitosan nanoparticles (NPs) were developed by ionotropic gelation and optimized for chitosan concentration, tripolyphosphate (TPP) concentration and stirring time using a 3-factor, 3-level Box-Behnken experimental design. The optimal concentrations of chitosan (A) and TPP (B) were found to be 0.6 mg/mL and 0.4 mg/mL, with a stirring time (C) of 120 min, under the applied constraints of minimizing particle size (R1) and maximizing encapsulation efficiency (R2) and drug release (R3). Based on the 3D response surface plots obtained, factors A, B and C had a synergistic effect on R1, while factor A had a negative impact on R2 and R3. The AB interaction had a negative effect on R1 and R2 but a positive effect on R3. The AC interaction had a synergistic effect on R1 and R3, while having a negative effect on R2. The BC interaction was positive for all responses. NPs were found in the size range of 321-675 nm with zeta potentials of +25 to +32 mV after 6 months of storage. Encapsulation, drug release and drug content were in the ranges of 56-79%, 48-73% and 98-99%, respectively. In vitro drug release data were fitted to different kinetic models, and the pattern of drug release followed Higuchi matrix-type kinetics.
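
A hypothetical sketch of the constrained multiresponse optimization described above (minimize R1, maximize R2 and R3) using Derringer-type desirability functions. The response surfaces below are invented linear stand-ins over coded factors, not the fitted models from the study.

```python
import numpy as np

def d_min(y, lo, hi):
    """Desirability for a minimized response: 1 at lo, 0 at hi."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def d_max(y, lo, hi):
    """Desirability for a maximized response: 0 at lo, 1 at hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

# Toy response surfaces over coded factors in [-1, 1] (not fitted to data)
def size(x):    return 500 + 80*x[0] + 40*x[1] - 30*x[2]   # R1, nm
def encaps(x):  return 65 + 8*x[0] - 5*x[1] + 3*x[2]       # R2, %
def release(x): return 60 - 6*x[0] + 4*x[1] + 5*x[2]       # R3, %

grid = np.linspace(-1, 1, 11)
best, best_D = None, -1.0
for a in grid:
    for b in grid:
        for c in grid:
            x = (a, b, c)
            # overall desirability: geometric mean of the three
            D = (d_min(size(x), 300, 700)
                 * d_max(encaps(x), 50, 80)
                 * d_max(release(x), 40, 80)) ** (1/3)
            if D > best_D:
                best, best_D = x, D
```

The geometric mean forces a compromise: a candidate that fails any one response (desirability 0) is rejected outright, mirroring the "minimize R1 while maximizing R2 and R3" constraint set.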

  13. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application

    PubMed Central

    Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar

    2014-01-01

    Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows a poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the drug's poor bioavailability. A response surface methodology approach was applied for the optimization of the transfersomes, using a Box-Behnken experimental design. Phospholipon® 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as the barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, with an entrapment efficiency of 91.00%±4.90%, and a transdermal flux of 6.5±1.1 μg/cm2/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of the amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes and an ethanolic phosphate-buffered saline. A differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy showed enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, as compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride

  14. Optimization of fluid bed formulations of metoprolol granules and tablets using an experimental design.

    PubMed

    Tomuţă, I; Alecu, C; Rus, L L; Leuçuta, S E

    2009-09-01

    The granulation process of a metoprolol tartrate formulation (a very difficult to process active pharmaceutical ingredient) in laboratory-scale fluid bed equipment was studied. The aim was to study the influence of two formulation factors (binder solution concentration and silicon dioxide ratio) and three process parameters (atomizing pressure, length of the final drying phase, and inlet air temperature) on the technological and pharmaceutical properties of the granules and, subsequently, of the tablets, in the fluid bed granulation of a powder mix containing metoprolol tartrate. A resolution V+ fractional factorial experimental design with five factors at two levels was used. A high atomizing pressure yields fine granules with a large polydispersity index, granules with high tapped and untapped density, and tablets with a short disintegration time, a short mean dissolution time, and a high percentage of metoprolol tartrate released in the first 15 minutes. A lower binder solution concentration yields granules with very good flow properties and tablets with no tendency to stick to the punch set of the tableting machine and no capping. The final drying time of the granules influences only the granules' relative humidity and tapped and untapped density, without any influence on the granules' flow properties. The practical experimental results for the formulation processed under optimal working conditions were close to those predicted by the Modde 6.0 software.

  15. Determination of pharmaceuticals in drinking water by CD-modified MEKC: separation optimization using experimental design.

    PubMed

    Drover, Vincent J; Bottaro, Christina S

    2008-12-01

    A suite of 12 widely used pharmaceuticals (ibuprofen, diclofenac, naproxen, bezafibrate, gemfibrozil, ofloxacin, norfloxacin, carbamazepine, primidone, sulphamethazine, sulphadimethoxine and sulphamethoxazole) commonly found in environmental waters were separated by highly sulphated CD-modified MEKC (CD-MEKC) with UV detection. An experimental design method, face-centred composite design, was employed to minimize run time without sacrificing resolution. Using an optimized BGE composed of 10 mM ammonium hydrogen phosphate, pH 11.5, 69 mM SDS, 6 mg/mL sulphated beta-CD and 8.5% v/v isopropanol, a separation voltage of 30 kV and a 48.5 cm x 50 microm id bare silica capillary at 30 degrees C allowed baseline separation of the 12 analytes in a total analysis time of 6.7 min. Instrument LODs in the low milligram per litre range were obtained, and when combined with offline preconcentration by SPE, LODs were between 4 and 30 microg/L.

  16. Design and optimization of an experimental bioregenerative life support system with higher plants and silkworms

    NASA Astrophysics Data System (ADS)

    Hu, Enzhu; Bartsev, Sergey I.; Zhao, Ming; Liu, Hong

    The conceptual scheme of an experimental bioregenerative life support system (BLSS) for planetary exploration was designed, consisting of four elements: human metabolism, higher plants, silkworms and waste treatment. Fifteen kinds of higher plants, such as wheat, rice, soybean, lettuce and mulberry, were selected as the regenerative component of the BLSS, providing the crew with air, water, and vegetable food. Silkworms, which provide animal nutrition for the crew, were fed mulberry leaves during the first three instars and lettuce leaves during the last two. The inedible biomass of the higher plants, human wastes and silkworm feces were composted into a soil-like substrate, which can be reused for higher-plant cultivation. Salt, sugar and some household materials such as soap and shampoo would be provided from outside. To keep the BLSS at steady state, the same amount and elemental composition of dehydrated wastes were removed periodically. The balance of matter flows between BLSS components was described by a system of algebraic equations. The mass flows between the components were optimized in Excel spreadsheets using Solver. The numerical method used in this study was Newton's method.
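
A hedged sketch of solving a small steady-state mass-balance system with Newton's method, as the abstract describes; the two toy balance equations below are placeholders for illustration, not the BLSS model's actual flows.

```python
import numpy as np

def residual(x):
    # toy steady-state balances: production - consumption = 0
    f1 = 2.0 * x[0] + x[1] - 10.0   # e.g. a gas-exchange balance
    f2 = x[0] * x[1] - 8.0          # e.g. a nonlinear biomass coupling
    return np.array([f1, f2])

def jac(x):
    """Analytic Jacobian of the residuals."""
    return np.array([[2.0, 1.0],
                     [x[1], x[0]]])

x = np.array([1.0, 1.0])            # initial guess for the two flow rates
for _ in range(20):
    step = np.linalg.solve(jac(x), -residual(x))
    x = x + step
    if np.linalg.norm(step) < 1e-10:
        break
```

Each Newton iteration solves a linearized balance for a correction step; for mildly nonlinear flow networks like this, convergence to machine precision typically takes only a few iterations.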

  17. Use of a statistically designed experimental approach to optimize the propylketal derivatization of barbiturates.

    PubMed

    Kushnir, M M; Urry, F M

    2001-04-01

    The derivatization of barbiturates with dimethylformamide dipropylacetal and dimethylformamide diisopropylacetal is studied with respect to the optimization of reaction recovery and reliability. A second-order orthogonal experimental design is utilized in order to obtain regression equations for the reaction recovery dependence on the derivatization solution composition, incubation temperature, and time for amobarbital, butalbital, pentobarbital, phenobarbital, and secobarbital. Regression equations for the effect of incubation temperature and time on the derivative recovery and the optimum conditions for derivatization recoveries are obtained. Differences in the phenomena of the derivative formation are evaluated between the two derivatizing reagents and the barbiturates. Based on the analysis of the obtained equations, it is concluded that the dipropylketal derivative of barbiturates is superior in comparison with diisopropylketal when considering the milder conditions of the reaction, absence of sudden changes in the recovery with a variation in the derivatization parameters, and reliability for the simultaneous testing of the barbiturates. A method for the routine testing of the barbiturates by gas chromatography-mass spectrometry in urine specimens is included.

  18. Degradation of caffeine by photo-Fenton process: optimization of treatment conditions using experimental design.

    PubMed

    Trovó, Alam G; Silva, Tatiane F S; Gomes, Oswaldo; Machado, Antonio E H; Neto, Waldomiro Borges; Muller, Paulo S; Daniel, Daniela

    2013-01-01

    The degradation of caffeine in different kinds of effluents, via the photo-Fenton process, was investigated at lab scale and in a solar pilot plant. The treatment conditions (caffeine, Fe(2+) and H(2)O(2) concentrations) were defined by experimental design. The optimized conditions for each variable, obtained using the response factor (% mineralization), were: 52.0 mg L(-1) caffeine, 10.0 mg L(-1) Fe(2+) and 42.0 mg L(-1) H(2)O(2) (replaced in kinetic experiments). Under these conditions, in ultrapure water (UW), the caffeine concentration reached the quantitation limit (0.76 mg L(-1)) after 20 min, and 78% mineralization was obtained after 120 min of reaction. Using the same conditions, the matrix influence (surface water - SW and sewage treatment plant effluent - STP) on caffeine degradation was also evaluated. Total removal of caffeine in SW was reached in the same time as in UW (after 20 min), while 40 min were necessary in STP. Although lower mineralization rates were observed for higher organic loads, under the same operational conditions less H(2)O(2) was necessary to mineralize the dissolved organic carbon as the initial organic load increased. A high efficiency of the photo-Fenton process was also observed in caffeine degradation by solar photocatalysis using a CPC reactor, along with intermediates of low toxicity, demonstrating that the photo-Fenton process can be a viable alternative for caffeine removal from wastewater.

  19. Numerical and Experimental Approach for the Optimal Design of a Dual Plate Under Ballistic Impact

    NASA Astrophysics Data System (ADS)

    Yoo, Jeonghoon; Chung, Dong-Teak; Park, Myung Soo

    To predict the behavior of a dual plate composed of 5052-aluminum and 1002-cold rolled steel under ballistic impact, numerical and experimental approaches are attempted. For an accurate numerical simulation of the impact phenomena, the appropriate selection of the key parameter values based on numerical or experimental tests is critical. This study focuses not only on the optimization technique using the numerical simulation but also on the numerical and experimental procedures used to obtain the required parameter values for the simulation. The Johnson-Cook model is used to simulate the mechanical behaviors, and simplified experimental and numerical approaches are performed to obtain the material properties of the model. An element erosion scheme for robust simulation of the ballistic impact problem is applied by adjusting the element erosion criteria of each material based on numerical and experimental results. The adequate mesh size and aspect ratio are chosen based on parametric studies. Plastic energy is suggested as a response representing the strength of the plate for optimization under dynamic loading. The optimized thickness of the dual plate is obtained to resist the ballistic impact without penetration as well as to minimize the total weight.

  20. Optimization of scaffold design for bone tissue engineering: A computational and experimental study.

    PubMed

    Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R

    2014-04-01

    In bone tissue engineering, the scaffold has not only to allow the diffusion of cells, nutrients and oxygen but also to provide adequate mechanical support. One way to ensure the scaffold has the right properties is to design it with computational tools and then build it to the resulting optimized design specifications using additive manufacturing. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering, and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of the porosity target, 40% for strut size and 55% for throat size in the building direction, and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of this kind of computational algorithm to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering.

  1. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    PubMed

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable, and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O(2) uptake rates, CO(2) evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started at around 10 for all runs, decreased during the middle of the process, and increased to up to 20 by the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process.

  2. A new multiresponse optimization approach in combination with a D-Optimal experimental design for the determination of biogenic amines in fish by HPLC-FLD.

    PubMed

    Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A

    2016-11-16

    A new strategy for multiresponse optimization in conjunction with a D-optimal design, for simultaneously optimizing a large number of experimental factors, is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. Firstly, the extraction from the solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design allows all ten experimental variables to be optimized, reducing the experimental effort needed by a factor of 67 while guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. Then, the validated models are used to predict the response (peak area) for the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows, to a certain extent, the large and uneven effect of the pretreatment on the responses. The percentiles are then calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L(-1) for cadaverine or 497 μg L(-1) for histamine in solvent, and 0.07 mg kg(-1) and 14.81 mg kg(-1) in fish (probability of false positive equal to 0.05), respectively.
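
An illustrative sketch of what "D-optimal" means in practice: choosing a subset of candidate runs that maximizes det(X'X) for the assumed model. The greedy selection below is a simple stand-in for the exchange algorithms real DOE software uses, and the candidate set (a 2^4 factorial with a main-effects model) is made up for demonstration.

```python
import itertools
import numpy as np

# Candidate runs: full 2^4 factorial for four two-level factors
candidates = np.array(list(itertools.product((-1.0, 1.0), repeat=4)))
X_full = np.column_stack([np.ones(len(candidates)), candidates])  # intercept + main effects

def greedy_d_optimal(X, n_runs):
    """Greedily add the candidate row that most increases det(X'X)."""
    chosen = []
    for _ in range(n_runs):
        best_i, best_det = None, -1.0
        for i in range(len(X)):
            if i in chosen:
                continue
            trial = X[chosen + [i]]
            # small ridge keeps the determinant defined for rank-deficient starts
            d = np.linalg.det(trial.T @ trial + 1e-9 * np.eye(X.shape[1]))
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
    return chosen

runs = greedy_d_optimal(X_full, 8)   # 8 runs instead of the full 16
```

Maximizing det(X'X) minimizes the generalized variance of the coefficient estimates, which is how a D-optimal design can cut the number of runs drastically (here by half; by a factor of 67 in the paper) while preserving estimate quality.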

  3. Optimal experimental design for nano-particle atom-counting from high-resolution STEM images.

    PubMed

    De Backer, A; De Wael, A; Gonnissen, J; Van Aert, S

    2015-04-01

    In the present paper, the principles of detection theory are used to quantify the probability of error for atom-counting from high resolution scanning transmission electron microscopy (HR STEM) images. Binary and multiple hypothesis testing have been investigated in order to determine the limits to the precision with which the number of atoms in a projected atomic column can be estimated. The probability of error has been calculated when using STEM images, scattering cross-sections or peak intensities as the criterion to count atoms. Based on this analysis, we conclude that scattering cross-sections perform almost as well as images and better than peak intensities. Furthermore, the optimal STEM detector design can be derived for atom-counting using the expression for the probability of error. We show that for very thin objects LAADF is optimal and that for thicker objects the optimal inner detector angle increases.
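
A minimal sketch of the binary-hypothesis error probability underlying the atom-counting analysis: with equal priors and a Gaussian-distributed measurement (e.g. a scattering cross-section) under the "n atoms" and "n+1 atoms" hypotheses, the minimum probability of error has a closed form. The means and noise level below are arbitrary illustrative numbers.

```python
import math

def prob_error(mu0, mu1, sigma):
    """Minimum error probability for two equal-prior Gaussians:
    P_e = Q(|mu1 - mu0| / (2*sigma)), with Q the Gaussian tail function."""
    d = abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * math.erfc(d / math.sqrt(2.0))

# cross-sections for columns of n and n+1 atoms (arbitrary units)
p_err = prob_error(mu0=10.0, mu1=11.0, sigma=0.4)
```

Comparing detector geometries then amounts to comparing this error probability: the design that best separates the two distributions (largest |mu1 - mu0|/sigma) is the optimal one.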

  4. Actinobacteria consortium as an efficient biotechnological tool for mixed polluted soil reclamation: Experimental factorial design for bioremediation process optimization.

    PubMed

    Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra

    2017-08-19

    The objective of the present work was to establish optimal biological and physicochemical parameters in order to remove lindane and Cr(VI) simultaneously, at high and/or low pollutant concentrations, from soil using an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. A further aim was to treat real soils contaminated with Cr(VI) and/or lindane from the Northwest of Argentina, employing the optimal biological and physicochemical conditions. After determining the optimal inoculum concentration (2 g kg(-1)), an experimental design model with four factors (temperature, moisture, initial concentration of Cr(VI) and of lindane) was employed to predict the system behavior during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature differed by situation: for low initial concentrations of both pollutants, the optimal temperature was 25°C; for low initial concentrations of Cr(VI) and high initial concentrations of lindane, it was 30°C; and for high initial concentrations of Cr(VI), it was 35°C. In order to confirm the model adequacy and the validity of the optimization procedure, experiments were performed on six real contaminated soil samples. The defined actinobacteria consortium reduced the contaminant concentrations in five of the six samples, working at laboratory scale and employing the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Bioslurry phase remediation of chlorpyrifos contaminated soil: process evaluation and optimization by Taguchi design of experimental (DOE) methodology.

    PubMed

    Venkata Mohan, S; Sirisha, K; Sreenivasa Rao, R; Sarma, P N

    2007-10-01

    Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence of eight biotic and abiotic factors (substrate-loading rate, slurry phase pH, slurry phase dissolved oxygen (DO), soil to water ratio, temperature, soil microflora load, application of bioaugmentation, and humic substance concentration) on the bioremediation of soil-bound chlorpyrifos in a bioslurry phase reactor. The selected eight factors were considered at three levels (18 experiments) in the experimental design. Among the selected factors, substrate-loading rate showed a significant influence on the bioremediation process. The optimum operating conditions derived by the methodology enhanced chlorpyrifos degradation from 1479.99 to 2458.33 microg/g (overall, a 39.82% enhancement). The proposed method facilitated a systematic mathematical approach to understanding the complex bioremediation process and the optimization of near-optimum design parameters, with only a few well-defined experimental sets.
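
A hedged sketch of the Taguchi machinery used above: an L9(3^4) orthogonal array (four 3-level factors in 9 runs, smaller than the L18 array the study needed for eight factors) and a "larger is better" signal-to-noise ratio of the kind used to rank factor levels. The degradation values fed to the S/N function are the two figures quoted in the abstract, used here only to demonstrate the formula.

```python
import numpy as np

# Standard L9(3^4) orthogonal array, levels coded 0/1/2
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def sn_larger_is_better(y):
    """Taguchi S/N = -10*log10(mean(1/y^2)); higher means better response."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Orthogonality (balance) check: each level appears 3 times in every column
for col in L9.T:
    counts = np.bincount(col, minlength=3)
    assert (counts == 3).all()

sn = sn_larger_is_better([1479.99, 2458.33])
```

The balance property is what lets 9 (or 18) runs disentangle the main effects of many factors: averaging the S/N ratio over the runs at each level of a factor estimates that factor's effect with the others averaged out.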

  6. A Resampling Based Approach to Optimal Experimental Design for Computer Analysis of a Complex System

    SciTech Connect

    Rutherford, Brian

    1999-08-04

    The investigation of a complex system is often performed using computer-generated response data supplemented by system and component test results where possible. Analysts rely on an efficient use of limited experimental resources to test the physical system, evaluate the models and assure (to the extent possible) that the models accurately simulate the system under investigation. The general problem considered here is one where only a restricted number of system simulations (or physical tests) can be performed to provide the additional data necessary to accomplish the project objectives. The levels of variables used for defining input scenarios, for setting system parameters and for initializing other experimental options must be selected in an efficient way. The use of computer algorithms to support experimental design in complex problems has been a topic of recent research in the areas of statistics and engineering. This paper describes a resampling-based approach to formulating this design. An example is provided illustrating in two dimensions how the algorithm works and indicating its potential on larger problems. The results show that the proposed approach has the characteristics desirable of an algorithmic approach on these simple examples. Further experimentation is needed to evaluate its performance on larger problems.

  7. Optimizing laboratory animal stress paradigms: The H-H* experimental design.

    PubMed

    McCarty, Richard

    2017-01-01

    Major advances in behavioral neuroscience have been facilitated by the development of consistent and highly reproducible experimental paradigms that have been widely adopted. In contrast, many different experimental approaches have been employed to expose laboratory mice and rats to acute versus chronic intermittent stress. An argument is advanced in this review that more consistent approaches to the design of chronic intermittent stress experiments would provide greater reproducibility of results across laboratories and greater reliability relating to various neural, endocrine, immune, genetic, and behavioral adaptations. As an example, the H-H* experimental design incorporates control, homotypic (H), and heterotypic (H*) groups and allows for comparisons across groups, where each animal is exposed to the same stressor, but that stressor has vastly different biological and behavioral effects depending upon each animal's prior stress history. Implementation of the H-H* experimental paradigm makes possible a delineation of transcriptional changes and neural, endocrine, and immune pathways that are activated in precisely defined stressor contexts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Design and optimization of an experimental test bench for the study of impulsive fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Russo, S.; Krastev, V. K.; Jannelli, E.; Falcucci, G.

    2016-06-01

    In this work, the design and optimization of a test bench for the experimental characterization of impulsive water-entry problems are presented. Currently, the majority of experimental apparatus allow impact tests only under specific conditions. Our test bench allows testing of rigid and compliant bodies and supports experiments on floating or sinking structures, in free fall or under dynamic motion control. The experimental apparatus is characterized by the adoption of accelerometers, encoders, position sensors and, above all, fiber Bragg grating (FBG) sensors that, together with a high-speed camera, provide accurate and fast data acquisition for the dissection of structural deformations and hydrodynamic loadings under a broad set of experimental conditions.

  9. An experimental evaluation of a helicopter rotor section designed by numerical optimization

    NASA Technical Reports Server (NTRS)

    Hicks, R. M.; Mccroskey, W. J.

    1980-01-01

    The wind tunnel performance of a 10-percent thick helicopter rotor section designed by numerical optimization is presented. The model was tested at Mach numbers from 0.2 to 0.84, with Reynolds numbers ranging from 1,900,000 at Mach 0.2 to 4,000,000 at Mach numbers above 0.5. The airfoil section exhibited maximum lift coefficients greater than 1.3 at Mach numbers below 0.45 and a drag divergence Mach number of 0.82 for lift coefficients near 0. A moderate 'drag creep' is observed at low lift coefficients for Mach numbers greater than 0.6.

  10. Optimal design of disc-type magneto-rheological brake for mid-sized motorcycle: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-08-01

    In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, total mass of the MR brake, and cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rate. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, mechanical performances regarding the response time, braking torque and cruising temperature are experimentally evaluated.

  11. Separation of 20 coumarin derivatives using the capillary electrophoresis method optimized by a series of Doehlert experimental designs.

    PubMed

    Woźniakiewicz, Michał; Gładysz, Marta; Nowak, Paweł M; Kędzior, Justyna; Kościelniak, Paweł

    2017-05-15

The aim of this study was to develop the first CE-based method enabling separation of 20 structurally similar coumarin derivatives. To facilitate method optimization, a series of three consecutive Doehlert experimental designs with response surface methodology was employed, using the number of peaks and the adjusted analysis time as the selected responses. Initially, three variables were examined: buffer pH, ionic strength and temperature (Doehlert design No. 1). The optimal conditions provided only partial separation; therefore, several buffer additives were examined in the next step: organic cosolvents and cyclodextrin (Doehlert design No. 2). The optimal cyclodextrin type was also selected experimentally. The most promising results were obtained for buffers fortified with methanol, acetonitrile and heptakis(2,3,6-tri-O-methyl)-β-cyclodextrin. Since these additives may affect the acid-base equilibrium and ionization state of the analytes, a third Doehlert design (No. 3) was used to reconcile the concentrations of these additives with the optimal pH. Ultimately, complete separation of all 20 compounds was achieved using borate buffer at basic pH 9.5 in the presence of 10 mM cyclodextrin, 9% (v/v) acetonitrile and 36% (v/v) methanol. The identity of all compounds was confirmed using an in-lab built UV-VIS spectra library. The developed method succeeded in identifying coumarin derivatives in three real samples, demonstrating the considerable resolving power of CE assisted by the addition of cyclodextrins and organic cosolvents. Our optimization approach, based on three consecutive Doehlert designs, appears promising for future applications of this technique.
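For two factors, a Doehlert design places runs at the centre and the six vertices of a regular hexagon. A minimal sketch in Python; the factor ranges below (buffer pH and methanol content) are hypothetical, chosen only to illustrate how coded levels are mapped to real settings:

```python
import numpy as np

# Classic two-factor Doehlert design in coded units:
# one centre point plus the six vertices of a regular hexagon.
coded = np.array([
    [ 0.0,  0.0],
    [ 1.0,  0.0],
    [ 0.5,  np.sqrt(3) / 2],
    [-0.5,  np.sqrt(3) / 2],
    [-1.0,  0.0],
    [-0.5, -np.sqrt(3) / 2],
    [ 0.5, -np.sqrt(3) / 2],
])

def scale(col, centre, half_range):
    """Map coded levels to real factor settings."""
    return centre + col * half_range

# Hypothetical factors: buffer pH (centre 9.0 +/- 1.0) and
# methanol content in % v/v (centre 25 +/- 15).
ph = scale(coded[:, 0], 9.0, 1.0)
meoh = scale(coded[:, 1], 25.0, 15.0)

for p, m in zip(ph, meoh):
    print(f"pH = {p:5.2f}, MeOH = {m:5.1f} % v/v")
```

A characteristic Doehlert property visible here is that the two factors are studied at different numbers of levels (five for the first, three for the second), which suits factors with unequal expected curvature.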

  12. Experimental validation of a magnetorheological energy absorber design optimized for shock and impact loads

    NASA Astrophysics Data System (ADS)

    Singh, Harinder J.; Hu, Wei; Wereley, Norman M.; Glass, William

    2014-12-01

A linear stroke adaptive magnetorheological energy absorber (MREA) was designed, fabricated and tested for intense impact conditions with piston velocities up to 8 m s-1. The performance of the MREA was characterized using the dynamic range, defined as the ratio of the maximum on-state MREA force to the off-state MREA force. Design optimization techniques were employed to maximize the dynamic range at high impact velocities so that the MREA maintained good control authority. Geometrical parameters of the MREA were optimized by evaluating its performance on the basis of a Bingham-plastic analysis incorporating minor losses (BPM analysis). Computational fluid dynamics and magnetic finite element analyses were conducted to verify the passive and controllable MREA forces, respectively. Subsequently, high-speed drop testing (0-4.5 m s-1 at 0 A) was conducted for quantitative comparison with the numerical simulations. Refinements to the nonlinear BPM analysis were carried out to improve prediction of MREA performance.
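The dynamic range defined above can be sketched numerically under a simple Bingham-plastic assumption, where the off-state force is purely viscous and the field adds a velocity-independent yield force. The constants below are hypothetical, not the paper's values:

```python
def dynamic_range(v, c_visc, f_tau):
    """Dynamic range of an MREA under a Bingham-plastic assumption:
    ratio of the maximum on-state force (viscous + field-induced yield
    force) to the off-state (purely viscous) force at piston speed v."""
    f_visc = c_visc * v          # off-state viscous force, kN
    return (f_visc + f_tau) / f_visc

# Hypothetical constants: viscous coefficient 1.5 kN s/m and a
# field-induced yield force of 6 kN.
for v in (0.5, 2.0, 4.5, 8.0):
    print(f"v = {v:3.1f} m/s -> dynamic range = {dynamic_range(v, 1.5, 6.0):.2f}")
```

The ratio collapses toward 1 as velocity grows, which is why the optimization in the abstract targets the dynamic range specifically at high impact velocities.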

  13. Optimal design of high temperature metalized thin-film polymer capacitors: A combined numerical and experimental method

    NASA Astrophysics Data System (ADS)

    Wang, Zhuo; Li, Qi; Trinh, Wei; Lu, Qianli; Cho, Heejin; Wang, Qing; Chen, Lei

    2017-07-01

The objective of this paper is to design and optimize a high temperature metalized thin-film polymer capacitor by a combined computational and experimental method. A finite-element-based thermal model is developed to incorporate Joule heating and the anisotropic heat conduction arising from the anisotropic geometric structure of the capacitor. The anisotropic thermal conductivity and temperature-dependent electrical conductivity required by the thermal model are measured experimentally. The polymer, thermally crosslinked benzocyclobutene (c-BCB) in the presence of boron nitride nanosheets (BNNSs), is selected for the high temperature capacitor design based on the results for the highest internal temperature (HIT) and the time to reach thermal equilibrium. The c-BCB/BNNS-based capacitor, aimed at an operating temperature of 250 °C, is geometrically optimized with respect to its shape and volume. A 'safe line' plot is also presented to reveal the influence of the cooling strength on the capacitor geometry design.

  14. Optimal design and experimental analysis of a magnetorheological valve system for the vehicle lifter used in maintenance

    NASA Astrophysics Data System (ADS)

    Shin, Sang-Un; Lee, Tae-Hoon; Cha, Seung-Woo; Choi, Ji-Young; Choi, Seung-Bok

    2017-04-01

Accurate position control is demanded of the current hydraulic lifters used for vehicle maintenance. This work presents a new type of vehicle lifter for precision position control using a magnetorheological valve system. In the first step, the principal design parameters, such as the gap size of the oil passage, the length and depth of the coil part, and the distance of the coil part from the end of the valve, are optimized to achieve the highest position accuracy under a current input constraint. After determining the optimized design values, the field-dependent pressure drops of the optimized valve system are experimentally evaluated and compared with those obtained from the initial design. Subsequently, the position of the vehicle lifter is controlled by changing the pressure drop using a simple PID controller. It is demonstrated that the proposed vehicle lifter can be effectively applied in vehicle service centers for tasks requiring more accurate height control.

  15. A novel experimental design method to optimize hydrophilic matrix formulations with drug release profiles and mechanical properties.

    PubMed

    Choi, Du Hyung; Lim, Jun Yeul; Shin, Sangmun; Choi, Won Jun; Jeong, Seong Hoon; Lee, Sangkil

    2014-10-01

To investigate the effects of hydrophilic polymers on the matrix system, an experimental design method was developed that integrates response surface methodology and time series modeling. Moreover, the relationships among polymers in the matrix system were studied by evaluating physical properties including water uptake, mass loss, diffusion, and gelling index. A mixture simplex lattice design was proposed considering eight input control factors: polyethylene glycol 6000 (x1), polyethylene oxide (PEO) N-10 (x2), PEO 301 (x3), PEO coagulant (x4), PEO 303 (x5), hydroxypropyl methylcellulose (HPMC) 100SR (x6), HPMC 4000SR (x7), and HPMC 10^5 SR (x8). With the modeling, optimal formulations were obtained for four types of targets. The optimal formulations showed that four factors (x1, x2, x3, and x8) were significant and the other four (x4, x5, x6, and x7) were not, based on drug release profiles. Moreover, the optimization results were analyzed with estimated values, target values, absolute biases, and relative biases based on observed times for the drug release rates with four different targets. The results showed that the optimal solutions and target values had consistent patterns with small biases. On the basis of the physical properties of the optimal solutions, the type and ratio of hydrophilic polymer and the relationships between polymers significantly influenced the physical properties of the system and drug release. This experimental design method is very useful for formulating a matrix system with optimal drug release, and can clearly reveal the relationships between excipients and their effects on the system through extensive evaluation.
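A simplex-lattice mixture design of the kind described restricts runs to blends whose component proportions sum to one, with each proportion a multiple of 1/m. A small illustrative generator, shown for three components rather than the paper's eight:

```python
from itertools import combinations_with_replacement

def simplex_lattice(q, m):
    """All {q, m} simplex-lattice mixture points: blends whose q
    component proportions are multiples of 1/m and sum to exactly 1."""
    points = set()
    for combo in combinations_with_replacement(range(q), m):
        point = [0] * q
        for idx in combo:
            point[idx] += 1          # distribute m units among q components
        points.add(tuple(p / m for p in point))
    return sorted(points, reverse=True)

# A small {3, 2} lattice with 6 blends (the study used 8 components).
for blend in simplex_lattice(3, 2):
    print(blend)
```

The run count grows combinatorially, C(q+m-1, m), which is why mixture studies with many components usually rely on constrained or computer-selected subsets of the full lattice.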

  16. Using highly efficient nonlinear experimental design methods for optimization of Lactococcus lactis fermentation in chemically defined media.

    PubMed

    Zhang, Guiying; Block, David E

    2009-01-01

Optimization of fermentation media and processes is a difficult task due to the potential for high dimensionality and nonlinearity. Here we develop and evaluate variations on two novel and highly efficient methods for experimental fermentation optimization. The first approach uses a truncated genetic algorithm with a developing neural network model to choose the best experiments to run. The second uses information theory, along with Bayesian regularized neural network models, for experiment selection. To evaluate these methods experimentally, we used them to develop a new chemically defined medium for Lactococcus lactis IL1403, along with an optimal temperature and initial pH, to achieve maximum cell growth. The medium consisted of 19 defined components or groups of components. The optimization results show that the maximum cell growth from the optimal process of each novel method is generally comparable to or higher than that achieved using a traditional statistical experimental design method, but these optima are reached in about half the experiments (73-94 vs. 161, depending on the method variant). The optimal chemically defined media developed in this work are rich media that can support cell growth to densities 3.5-4 times higher than the best reported synthetic medium and 72% higher than a commonly used complex medium (M17) at optimization scale. The best chemically defined medium found using the method was evaluated and compared with other defined and complex media at flask and fermentor scales. (c) 2009 American Institute of Chemical Engineers, Biotechnol. Prog., 2009.

  17. PVA-PEG physically cross-linked hydrogel film as a wound dressing: experimental design and optimization.

    PubMed

    Ahmed, Afnan Sh; Mandal, Uttam Kumar; Taher, Muhammad; Susanti, Deny; Jaffri, Juliana Md

    2017-04-05

The development of hydrogel films as wound-healing dressings is of great interest owing to their biological tissue-like nature. Polyvinyl alcohol/polyethylene glycol (PVA/PEG) hydrogels loaded with asiaticoside, a standardized rich fraction of Centella asiatica, were successfully developed using the freeze-thaw method. Response surface methodology with a Box-Behnken experimental design was employed to optimize the hydrogels. The hydrogels were characterized and optimized by gel fraction, swelling behavior, water vapor transmission rate and mechanical strength. The formulation with 8% PVA, 5% PEG 400 and five consecutive freeze-thaw cycles was selected as the optimized formulation and was further characterized by drug release, rheology, morphology, cytotoxicity and microbial studies. The optimized formulation showed more than 90% drug release at 12 h. Rheological measurements showed that the formulation is viscoelastic and remains stable upon storage. Cell culture studies confirmed the biocompatible nature of the optimized hydrogel formulation, and in the microbial limit tests it showed no microbial growth. The optimized PVA/PEG hydrogel developed using the freeze-thaw method was swellable, elastic and safe, and can be considered a promising new wound dressing formulation.
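A Box-Behnken design like the one employed here combines two-level factorials on each pair of factors, with the remaining factors held at their centre levels, plus centre runs. A minimal generator sketch (the centre point is shown once; real designs usually replicate it):

```python
from itertools import combinations

def box_behnken(k):
    """Box-Behnken design for k three-level factors: a 2-level factorial
    on every pair of factors with the others held at the centre level,
    plus a single centre run."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.append([0] * k)
    return runs

for run in box_behnken(3):   # 13 runs for 3 factors
    print(run)
```

Because no run sets all factors to their extremes simultaneously, Box-Behnken designs avoid corner points, which is convenient when extreme combinations (e.g. maximum polymer with maximum freeze-thaw cycling) are impractical.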

  18. Optimization of Acid Protease Production by Aspergillus niger I1 on Shrimp Peptone Using Statistical Experimental Design

    PubMed Central

    Siala, Rayda; Frikha, Fakher; Mhamdi, Samiha; Nasri, Moncef; Sellami Kamoun, Alya

    2012-01-01

Medium composition and culture conditions for acid protease production by Aspergillus niger I1 were optimized by response surface methodology (RSM). A significant influence of temperature, KH2PO4, and initial pH on protease production was identified by a Plackett-Burman design (PBD). These factors were further optimized using a Box-Behnken design and RSM. Under the proposed optimized conditions, the experimental protease production (183.13 U mL−1) closely matched the yield predicted by the statistical model (172.57 U mL−1), with R2 = 0.914. Compared with the initial M1 medium, on which protease production was 43.13 U mL−1, a significant 4.25-fold improvement was achieved in the optimized medium containing (g/L): hulled grain of wheat (HGW) 5.0; KH2PO4 1.0; NaCl 0.3; MgSO4·7H2O 0.5; CaCl2·7H2O 0.4; ZnSO4 0.1; Na2HPO4 1.6; shrimp peptone (SP) 1.0. The pH was adjusted to 5 and the temperature to 30°C. More interestingly, the optimization was accomplished using two cheap, local fermentation substrates, HGW and SP, which may significantly reduce the cost of the medium constituents. PMID:22593695
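The Plackett-Burman screening step used above relies on a two-level orthogonal design in which many main effects can be estimated from very few runs. A sketch of the standard 12-run construction, built from cyclic shifts of the published generator row:

```python
import numpy as np

def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors,
    built from cyclic shifts of the standard generator row plus a
    final all-minus run."""
    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

design = plackett_burman_12()
print(design)

# Orthogonality check: every column is balanced and every pair of
# columns is orthogonal, so D^T D = 12 * I for the 11 factor columns.
print((design.T @ design == 12 * np.eye(11, dtype=int)).all())
```

Each of the 11 columns can carry one factor (unused columns estimate error), which is how a handful of runs can rank the influence of temperature, KH2PO4, pH and the other medium components before the follow-up response-surface step.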

  19. Simultaneous production of nisin and lactic acid from cheese whey: optimization of fermentation conditions through statistically based experimental designs.

    PubMed

    Liu, Chuanbin; Liu, Yan; Liao, Wei; Wen, Zhiyou; Chen, Shulin

    2004-01-01

A biorefinery process that utilizes cheese whey as substrate to simultaneously produce nisin, a natural food preservative, and lactic acid, a raw material for biopolymer production, was studied. The conditions for nisin biosynthesis and lactic acid coproduction by Lactococcus lactis subsp. lactis (ATCC 11454) in a whey-based medium were optimized using statistically based experimental designs. A Plackett-Burman design was applied to screen seven parameters for significant factors for the production of nisin and lactic acid. Nutrient supplements, including yeast extract, MgSO4, and KH2PO4, were found to be the significant factors affecting nisin and lactic acid formation. As a follow-up, a central composite design was applied to optimize these factors. Second-order polynomial models were developed to quantify the relationship between nisin and lactic acid production and the variables, and the optimal values of these variables were determined. Finally, a verification experiment was performed to confirm the optimal values predicted by the models. The experimental results agreed well with the model predictions, giving 19.3 g/L of lactic acid and 92.9 mg/L of nisin.

  20. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    PubMed

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
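The core idea of sequential, design-optimizing data collection can be illustrated with a toy example: two competing response models, with each trial probing the stimulus where their predictions disagree most and updating a running log Bayes factor. This is a deliberately crude sketch of the principle, not the authors' algorithm; the models and stimulus set are invented:

```python
import math
import random

random.seed(0)

# Two hypothetical models of response probability versus stimulus level.
stimuli = [0.0, 0.25, 0.5, 0.75, 1.0]

def model_a(s):          # linear psychometric model
    return 0.2 + 0.6 * s

def model_b(s):          # quadratic alternative
    return 0.2 + 0.6 * s ** 2

log_bf = 0.0             # log Bayes factor for A over B (equal priors)
truth = model_a          # simulate data from model A

for _ in range(300):
    # Active sampling: probe the stimulus where the two models'
    # predicted response probabilities disagree the most.
    s = max(stimuli, key=lambda x: abs(model_a(x) - model_b(x)))
    y = 1 if random.random() < truth(s) else 0
    pa = model_a(s) if y else 1.0 - model_a(s)
    pb = model_b(s) if y else 1.0 - model_b(s)
    log_bf += math.log(pa / pb)

print(f"most informative stimulus: {s}, log Bayes factor (A over B): {log_bf:.2f}")
```

A full ASAP-style protocol would recompute the most informative design point after every observation from the models' posterior predictions; here the models are fixed, so the chosen stimulus never changes, but the evidence-accumulation logic is the same.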

  1. Optimization of the azo dye Procion Red H-EXL degradation by Fenton's reagent using experimental design.

    PubMed

    Rodrigues, Carmen S D; Madeira, Luis M; Boaventura, Rui A R

    2009-05-30

Chemical oxidation by Fenton's reagent of a reactive azo dye (Procion Deep Red H-EXL gran) solution was optimized using experimental design methodology. The variables considered for optimizing the oxidative process were the temperature and the initial concentrations of hydrogen peroxide and ferrous ion, for a dye concentration of 100 mg/L at pH 3.5, the latter fixed after preliminary runs. Experiments were carried out according to a central composite design approach. The methodology employed allowed the effects and interactions of the considered variables on the process response, i.e., the total organic carbon (TOC) reduction after 120 min of reaction, to be evaluated and identified with statistical significance. A quadratic model with good adherence to the experimental data in the domain analysed was developed, and was used to plot the response surface curves and to perform process optimization. It was concluded that temperature and ferrous ion concentration are the only variables that affect TOC removal and that, because of cross-interactions, the effect of each variable depends on the value of the other, affecting the process response positively or negatively.
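The central composite design and quadratic model described above can be sketched as follows. The response function here is synthetic (a made-up "% TOC removal" surface), used only to show that least squares on the CCD points recovers the second-order coefficients:

```python
import numpy as np

# Rotatable central composite design for two coded factors
# (e.g. temperature and ferrous ion concentration): a 2^2 factorial,
# four axial points at alpha = sqrt(2), and a centre point.
alpha = np.sqrt(2)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],
              [0, 0]])

# Hypothetical noiseless quadratic response with known coefficients.
def true_surface(x1, x2):
    return 60 + 8 * x1 + 5 * x2 - 4 * x1**2 - 3 * x2**2 + 2 * x1 * x2

x1, x2 = X[:, 0], X[:, 1]
y = true_surface(x1, x2)

# Full second-order model matrix and least-squares fit.
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))   # recovers [60, 8, 5, -4, -3, 2]
```

With real data the fitted `coef` would carry noise, and the response-surface curves and stationary point would be computed from it exactly as in the abstract.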

  2. Experimental design method to the weld bead geometry optimization for hybrid laser-MAG welding in a narrow chamfer configuration

    NASA Astrophysics Data System (ADS)

    Bidi, Lyes; Le Masson, Philippe; Cicala, Eugen; Primault, Christophe

    2017-03-01

The work presented in this paper concerns the optimization of the operating parameters of a welding process using the experimental design approach. The process used is hybrid laser-MAG welding, which combines a laser beam with a MAG torch to increase the productivity and reliability of the chamfer-filling operation, performed in several passes over the entire height of the chamfer. Each pass deposits 2 mm of metal and must provide sufficient lateral penetration, of about 0.2 mm. The experimental design method was used to estimate the effects of the operating parameters and their interactions on the lateral penetration, and to provide a mathematical model relating the welding parameters to the lateral-penetration objective function. Furthermore, this study sought to identify the set of optimum parameters satisfying a constraint on weld bead quality: simultaneously obtaining a total lateral penetration greater than 0.4 mm and an H/L ratio less than 0.6. To this end, a multi-objective optimization (for both response functions) of the weld bead was carried out using two categories of two-level experimental designs: a complete experimental design (CED) with 32 tests and a fractional experimental design (FED) with 8 tests. A comparative analysis of the two types of design identified the advantages and disadvantages of each.

  3. Stepwise optimization approach for improving LC-MS/MS analysis of zwitterionic antiepileptic drugs with implementation of experimental design.

    PubMed

    Kostić, Nađa; Dotsikas, Yannis; Malenović, Anđelija; Jančić Stojanović, Biljana; Rakić, Tijana; Ivanović, Darko; Medenica, Mirjana

    2013-07-01

In this article, a step-by-step optimization procedure for improving analyte response through experimental design is described. The zwitterionic antiepileptics vigabatrin, pregabalin and gabapentin were chosen as model compounds to undergo chloroformate-mediated derivatization followed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) analysis. The planned stepwise optimization procedure improved the analyte responses, expressed as peak areas and signal-to-noise ratios, enabling lower limits of detection. The results demonstrate that optimizing parameters such as scan time, ion source geometry, sheath and auxiliary gas pressure, capillary temperature, collision pressure and mobile phase composition can markedly improve the sensitivity of LC-MS/MS methods. Optimization of the LC and MS parameters led to total increases of 53.9%, 83.3% and 95.7% in the peak areas of derivatized vigabatrin, pregabalin and gabapentin, respectively, while signal-to-noise values improved by 140.0%, 93.6% and 124.0% compared with autotune settings. After defining the final optimal conditions, a time-segmented method was validated for the determination of these drugs in plasma. The method proved to be accurate and precise, with excellent linearity over the tested concentration range (40.0 ng ml^-1 to 10.0 × 10^3 ng ml^-1).

  4. Application of statistically based experimental designs for the optimization of exo-polysaccharide production by Cordyceps militaris NG3.

    PubMed

    Xu, Chun-Ping; Kim, Sang-Woo; Hwang, Hye-Jin; Yun, Jong-Won

    2002-10-01

Statistically based experimental designs were applied to the optimization of medium composition for exo-polysaccharide production by Cordyceps militaris NG3 in shake-flask cultures. First, a Plackett-Burman design was used to screen for the main factors affecting mycelial growth and exo-polysaccharide production. Among the variables tested, sucrose and corn steep powder were found to be significant, with positive effects on mycelial yield (confidence level >80%) and exo-polysaccharide production (confidence level >90%). Subsequently, to study the mutual interactions between variables, the effects of these two main factors on exo-polysaccharide production were further investigated using a central composite design. The optimal composition for enhanced exo-polysaccharide production was found to be 1.03 g/l corn steep powder, 2.95 g/l sucrose, 0.1 g/l K2HPO4, 0.5 g/l MgSO4·5H2O and 0.1 g/l KNO3, yielding 2.604 g/l in shake-flask cultures. Under optimal culture conditions, the maximum exo-polysaccharide concentration in a 5 l stirred-tank bioreactor was 3.8 g/l.

  5. Molecular identification of potential denitrifying bacteria and use of D-optimal mixture experimental design for the optimization of denitrification process.

    PubMed

    Ben Taheur, Fadia; Fdhila, Kais; Elabed, Hamouda; Bouguerra, Amel; Kouidhi, Bochra; Bakhrouf, Amina; Chaieb, Kamel

    2016-04-01

Three bacterial strains (TE1, TD3 and FB2) were isolated from date palm (degla), pistachio and barley. The presence of nitrate reductase (narG) and nitrite reductase (nirS and nirK) genes in the selected strains was detected by PCR. Molecular identification based on 16S rDNA sequencing was applied to identify the positive strains. In addition, a D-optimal mixture experimental design was used to determine the optimal formulation of the bacteria for the denitrification process. Strains harboring denitrification genes were identified as TE1, Agrococcus sp. LN828197; TD3, Cronobacter sakazakii LN828198; and FB2, Pediococcus pentosaceus LN828199. PCR results revealed that all strains carried the nirS gene, but only C. sakazakii LN828198 and Agrococcus sp. LN828197 harbored the nirK and narG genes, respectively. Moreover, the studied bacteria were able to form biofilms on abiotic surfaces to different degrees. Process optimization showed that the greatest nitrate reduction was 100%, with 14.98% COD consumption and 5.57 mg/l nitrite accumulation. The optimized response corresponded to a mixture of 78.79% C. sakazakii LN828198, 21.21% P. pentosaceus LN828199 and 0% Agrococcus sp. LN828197 (curve values). Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Application of statistical experimental design for optimization of silver nanoparticles biosynthesis by a nanofactory Streptomyces viridochromogenes.

    PubMed

    El-Naggar, Noura El-Ahmady; Abdelwahed, Nayera A M

    2014-01-01

Central composite design was chosen to determine the combined effects of four process variables (AgNO3 concentration, incubation period, pH and inoculum size) on the extracellular biosynthesis of silver nanoparticles (AgNPs) by Streptomyces viridochromogenes. Statistical analysis of the results showed that incubation period, initial pH and inoculum size each had significant individual effects (P<0.05) on the biosynthesis of silver nanoparticles. Maximum biosynthesis was achieved at a concentration of 0.5% (v/v) of 1 mM AgNO3, an incubation period of 96 h, an initial pH of 9 and an inoculum size of 2% (v/v). After optimization, the biosynthesis of silver nanoparticles improved approximately 5-fold compared with the unoptimized conditions. The synthesis of silver nanoparticles by reduction of aqueous Ag+ ions in the culture supernatant of S. viridochromogenes was quite fast: nanoparticles formed immediately upon addition of AgNO3 solution (1 mM) to the cell-free supernatant. Initial characterization was performed by visual observation of the color change from yellow to intense brown. UV-visible spectrophotometry of the surface plasmon resonance showed a single absorption peak at 400 nm, confirming the presence of silver nanoparticles. Fourier transform infrared spectroscopy provided evidence for proteins as possible reducing and capping agents stabilizing the nanoparticles. Transmission electron microscopy revealed the extracellular formation of spherical silver nanoparticles in the size range of 2.15-7.27 nm. Compared with the cell-free supernatant, the biosynthesized AgNPs showed superior antimicrobial activity against Gram-negative and Gram-positive bacterial strains and Candida albicans.

  7. Optimal experimental design for improving the estimation of growth parameters of Lactobacillus viridescens from data under non-isothermal conditions.

    PubMed

    Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges

    2017-01-02

In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data and secondary models are then fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach reduces the experimental workload and costs and improves model identifiability, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to evaluate, for each approach, the number of experimental data points, the time required, and the confidence intervals of the model parameters. Experimental data for the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental growth data points). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental growth data points), two with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square-root secondary model were used to describe microbial growth, and the parameters b and Tmin (±95% confidence interval) were estimated from the experimental data. The parameters obtained from the TSM approach were b = 0.0290 (±0.0020) [1/(h^0.5 °C)] and Tmin = -1.33 (±1.26) [°C], with R2 = 0.986 and RMSE = 0.581, and the parameters obtained with the OED approach were b = 0.0316 (±0.0013) [1/(h^0.5 °C)] and Tmin = -0.24 (±0.55) [°C], with R2 = 0.990 and RMSE = 0.436. The parameters obtained from the OED approach
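The square-root secondary model used here relates maximum specific growth rate to temperature via sqrt(mu_max) = b(T - Tmin). A minimal evaluation sketch using the OED parameter estimates quoted in the abstract:

```python
def sqrt_model_mu(T, b, Tmin):
    """Square-root (Ratkowsky-type) secondary model:
    sqrt(mu_max) = b * (T - Tmin), hence mu_max = (b * (T - Tmin))**2,
    with no growth at or below the notional minimum temperature Tmin."""
    if T <= Tmin:
        return 0.0
    return (b * (T - Tmin)) ** 2

# Parameter estimates quoted in the abstract for the OED approach:
# b in 1/(h^0.5 C), Tmin in C, so mu_max comes out in 1/h.
b, Tmin = 0.0316, -0.24

for T in (4, 10, 20, 30):
    print(f"T = {T:2d} C -> mu_max = {sqrt_model_mu(T, b, Tmin):.4f} 1/h")
```

In the TSM approach this model is fitted to growth rates estimated at each isothermal temperature; in the OED approach its parameters are estimated directly from the non-isothermal growth curves.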

  8. Using Central Composite Experimental Design to Optimize the Degradation of Tylosin from Aqueous Solution by Photo-Fenton Reaction

    PubMed Central

    Sarrai, Abd Elaziz; Hanini, Salah; Merzouk, Nachida Kasbadji; Tassalit, Djilali; Szabó, Tibor; Hernádi, Klára; Nagy, László

    2016-01-01

    The feasibility of the application of the Photo-Fenton process in the treatment of aqueous solution contaminated by Tylosin antibiotic was evaluated. The Response Surface Methodology (RSM) based on Central Composite Design (CCD) was used to evaluate and optimize the effect of hydrogen peroxide, ferrous ion concentration and initial pH as independent variables on the total organic carbon (TOC) removal as the response function. The interaction effects and optimal parameters were obtained by using MODDE software. The significance of the independent variables and their interactions was tested by means of analysis of variance (ANOVA) with a 95% confidence level. Results show that the concentration of the ferrous ion and pH were the main parameters affecting TOC removal, while peroxide concentration had a slight effect on the reaction. The optimum operating conditions to achieve maximum TOC removal were determined. The model prediction for maximum TOC removal was compared to the experimental result at optimal operating conditions. A good agreement between the model prediction and experimental results confirms the soundness of the developed model. PMID:28773551

  9. Experimental design approach for deposition optimization of RF sputtered chalcogenide thin films devoted to environmental optical sensors.

    PubMed

    Baudet, E; Sergent, M; Němec, P; Cardinaud, C; Rinnert, E; Michel, K; Jouany, L; Bureau, B; Nazabal, V

    2017-06-14

The development of optical bio-chemical sensing technology is an important scientific and technological issue for the diagnosis and monitoring of diseases, the control of industrial processes, and the detection of air and water pollutants. Owing to their distinctive features, amorphous chalcogenide thin films represent a keystone in the manufacture of mid-infrared integrated optical devices for sensitive detection of biological or environmental variations. Since the chalcogenide thin film characteristics, i.e. stoichiometric conformity, structure, roughness and optical properties, can be affected by the growth process, the choice and control of the deposition method are crucial. An approach based on experimental design allows fast optimization of chalcogenide film deposition by radio-frequency sputtering. Argon (Ar) pressure, working power and deposition time were selected as potentially the most influential factors. The experimental design analysis confirms the strong influence of the Ar pressure on the studied responses: chemical composition, refractive index in the near-IR (1.55 µm) and mid-IR (6.3 and 7.7 µm), band-gap energy, deposition rate and surface roughness. Depending on the intended application, and therefore the desired thin film characteristics, mappings from the experimental design help to select suitable deposition parameters.

  10. An experimental design based strategy to optimize a capillary electrophoresis method for the separation of 19 polycyclic aromatic hydrocarbons.

    PubMed

    Ferey, Ludivine; Delaunay, Nathalie; Rutledge, Douglas N; Huertas, Alain; Raoul, Yann; Gareil, Pierre; Vial, Jérôme; Rivals, Isabelle

    2014-04-11

    Because of their high toxicity, international regulatory institutions recommend monitoring specific polycyclic aromatic hydrocarbons (PAHs) in environmental and food samples. A fast, selective and sensitive method is therefore required for their quantitation in such complex samples. This article deals with the optimization, based on an experimental design strategy, of a cyclodextrin (CD) modified capillary zone electrophoresis separation method for the simultaneous separation of 19 PAHs listed as priority pollutants. First, using a central composite design, the normalized peak-start and peak-end times were modelled as functions of the factors that most affect PAH electrophoretic behavior: the concentrations of the anionic sulfobutylether-β-CD and neutral methyl-β-CD, and the percentage of MeOH in the background electrolyte. Then, to circumvent computational difficulties resulting from the changes in migration order likely to occur while varying experimental conditions, an original approach based on the systematic evaluation of the time intervals between all the possible pairs of peaks was used. Finally, a desirability analysis based on the smallest time interval between two consecutive peaks and on the overall analysis time, allowed us to achieve, for the first time in CE, full resolution of all 19 PAHs in less than 18 min. Using this optimized capillary electrophoresis method, a vegetable oil was successfully analyzed, proving its suitability for real complex sample analysis.
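
The desirability criterion described above rests on the smallest time interval between consecutive peaks. A minimal sketch of that computation, using made-up migration times rather than the paper's data:

```python
def smallest_consecutive_gap(migration_times):
    """Smallest gap between consecutive peaks after sorting migration times.
    Sorting makes the computation robust to changes in migration order
    across conditions, since pairing is done on the sorted times rather
    than on fixed peak identities."""
    t = sorted(migration_times)
    return min(b - a for a, b in zip(t, t[1:]))

# Hypothetical migration times (min) for a handful of peaks
times = [12.1, 12.4, 13.0, 13.1, 15.7]
print(smallest_consecutive_gap(times))  # ~0.1, between the 13.0 and 13.1 peaks
```

Maximizing this gap (jointly with minimizing total analysis time, via a desirability function) is what drives the optimizer toward a fully resolved, fast separation.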

  11. Experimental design to optimize an Haemophilus influenzae type b conjugate vaccine made with hydrazide-derivatized tetanus toxoid.

    PubMed

    Laferriere, Craig; Ravenscroft, Neil; Wilson, Seanette; Combrink, Jill; Gordon, Lizelle; Petre, Jean

    2011-10-01

    The introduction of type b Haemophilus influenzae conjugate vaccines into routine vaccination schedules has significantly reduced the burden of this disease; however, widespread use in developing countries is constrained by vaccine costs, and there is a need for a simple and high-yielding manufacturing process. The vaccine is composed of purified capsular polysaccharide conjugated to an immunogenic carrier protein. To improve the yield and rate of the reductive amination conjugation reaction used to make this vaccine, some of the carboxyl groups of the carrier protein, tetanus toxoid, were modified to hydrazides, which are more reactive than the ε-amine of lysine. Other reaction parameters, including the ratio of the reactants, the size of the polysaccharide, the temperature and the salt concentration, were also investigated. Experimental design was used to minimize the number of experiments required to optimize all these parameters to obtain conjugate in high yield with target characteristics. It was found that increasing the reactant ratio and decreasing the size of the polysaccharide increased the polysaccharide:protein mass ratio in the product. Temperature and salt concentration did not improve this ratio. These results are consistent with a diffusion-controlled rate-limiting step in the conjugation reaction. Excessive modification of tetanus toxoid with hydrazide was correlated with reduced yield and lower free polysaccharide. This was attributed to a greater tendency for precipitation, possibly due to changes in the isoelectric point. Experimental design and multiple regression helped identify key parameters to control and thereby optimize this conjugation reaction.

  12. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    PubMed

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review of the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization, and modeling with least squares and artificial neural networks are discussed. The most recent analytical applications are presented in the context of analytical methods development, especially multiple response optimization procedures using the desirability function.
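
For illustration, a common form of the desirability function covered by such reviews is the Derringer-Suich one-sided transform; the sketch below uses hypothetical response names and bounds, not values from any cited work:

```python
import numpy as np

def desirability_max(y, low, high, s=1.0):
    """Derringer-Suich one-sided desirability for a response to be maximized:
    0 below `low`, 1 above `high`, and a power ramp in between."""
    y = np.asarray(y, dtype=float)
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** s

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities: a single score
    that is zero whenever any one response is unacceptable."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))

# Two hypothetical responses, e.g. recovery (%) and chromatographic resolution
d1 = desirability_max(85.0, low=70.0, high=95.0)   # 0.6
d2 = desirability_max(1.8, low=1.0, high=2.0)      # 0.8
print(overall_desirability([d1, d2]))              # geometric mean, ~0.69
```

The geometric mean is the conventional aggregation because it forces the optimizer to keep every response acceptable rather than trading one off to zero.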

  13. Statistical experimental design optimization of rhamsan gum production by Sphingomonas sp. CGMCC 6833.

    PubMed

    Xu, Xiao-Ying; Dong, Shu-Hao; Li, Sha; Chen, Xiao-Ye; Wu, Ding; Xu, Hong

    2015-04-01

    Rhamsan gum is a type of water-soluble exopolysaccharide produced by species of Sphingomonas bacteria. The optimal fermentation medium for rhamsan gum production by Sphingomonas sp. CGMCC 6833 was explored. Single-factor experiments indicated that glucose, soybean meal, K(2)HPO(4) and MnSO(4) compose the optimal medium, with an initial pH of 7.5. To identify ideal culture conditions for rhamsan gum production in a shake flask culture, response surface methodology was employed, from which the following optimal concentrations were derived: 5.38 g/L soybean meal, 5.71 g/L K(2)HPO(4) and 0.32 g/L MnSO(4). Under these optimized fermentation conditions, the rhamsan gum yield reached 19.58 g/L ± 1.23 g/L, 42.09% higher than that of the initial medium (13.78 g/L ± 1.38 g/L). Optimizing the fermentation medium thus results in enhanced rhamsan gum production.

  14. Optimization of low-cost medium for very high gravity ethanol fermentations by Saccharomyces cerevisiae using statistical experimental designs.

    PubMed

    Pereira, Francisco B; Guimarães, Pedro M R; Teixeira, José A; Domingues, Lucília

    2010-10-01

    Statistical experimental designs were used to develop a medium based on corn steep liquor (CSL) and other low-cost nutrient sources for high-performance very high gravity (VHG) ethanol fermentations by Saccharomyces cerevisiae. The critical nutrients were initially selected according to a Plackett-Burman design and the optimized medium composition (44.3 g/L CSL; 2.3 g/L urea; 3.8 g/L MgSO₄·7H₂O; 0.03 g/L CuSO₄·5H₂O) for maximum ethanol production by the laboratory strain CEN.PK 113-7D was obtained by response surface methodology, based on a three-level four-factor Box-Behnken design. The optimization process resulted in significantly enhanced final ethanol titre, productivity and yeast viability in batch VHG fermentations (up to 330 g/L glucose) with CEN.PK113-7D and with industrial strain PE-2, which is used for bio-ethanol production in Brazil. Strain PE-2 was able to produce 18.6±0.5% (v/v) ethanol with a corresponding productivity of 2.4±0.1g/L/h. This study provides valuable insights into cost-effective nutritional supplementation of industrial fuel ethanol VHG fermentations.
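
The three-level four-factor Box-Behnken design mentioned above places runs at the midpoints of the edges of the factor cube. A sketch of how such a design is generated in coded units (the factor names in the comment are illustrative only; the study's actual coding is not reproduced):

```python
import itertools
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design in coded units for k factors: for every pair of
    factors, a 2^2 factorial at +/-1 with all other factors held at 0,
    plus replicated centre points."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product([-1.0, 1.0], repeat=2):
            pt = [0.0] * k
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs.extend([[0.0] * k] * n_center)
    return np.array(runs)

# Four coded factors, e.g. CSL, urea, MgSO4, CuSO4 levels (hypothetical coding)
design = box_behnken(4)
print(design.shape)  # (27, 4): 24 edge-midpoint runs + 3 centre points
```

Because no run sits at a corner of the cube, the Box-Behnken design avoids extreme factor combinations while still supporting a full quadratic response surface model.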

  15. Limit of detection of 15{sub N} by gas-chromatography atomic emission detection: Optimization using an experimental design

    SciTech Connect

    Deruaz, D.; Bannier, A.; Pionchon, C.

    1995-08-01

    This paper deals with the optimal conditions for the detection of {sup 15}N, determined using a four-factor experimental design from [2-{sup 13}C, 1,3-{sup 15}N]caffeine measured with an atomic emission detector (AED) coupled to gas chromatography (GC). Owing to the capability of its photodiode array, the AED can simultaneously detect several elements using their specific emission lines within a wavelength range of 50 nm. Thus, the emissions of {sup 15}N and {sup 14}N are simultaneously detected at 420.17 nm and 421.46 nm, respectively. Four independent experimental factors were tested: (1) helium flow rate (plasma gas); (2) methane pressure (reactant gas); (3) oxygen pressure; (4) hydrogen pressure. It was shown that these four gases had a significant influence on the analytical response of {sup 15}N. The linearity of the detection was determined using {sup 15}N amounts ranging from 1.52 pg to 19 ng under the optimal conditions obtained from the experimental design. The limit of detection was studied using different methods: the limit of detection of {sup 15}N was 1.9 pg/s according to the IUPAC (International Union of Pure and Applied Chemistry) method, the method proposed by Quimby and Sullivan gave a value of 2.3 pg/s, and that of Oppenheimer gave a limit of 29 pg/s. For each determination, an internal standard, 1-isobutyl-3,7-dimethylxanthine, was used. The results clearly demonstrate that GC-AED is sensitive and selective enough to detect and measure {sup 15}N-labelled molecules after gas chromatographic separation.

  16. Experimental design optimization of reverse osmosis purification of pretreated olive mill wastewater.

    PubMed

    Ochando-Pulido, J M; Martinez-Ferez, A

    2017-06-01

    The management of the effluents generated by olive oil industries, commonly known as olive mills, represents an ever-increasing and still unresolved problem. The core of the present work was the modelling and optimization of a reverse osmosis (RO) membrane operation for the purification of a tertiary-treated olive mill wastewater stream (OMW2TT). Statistical multifactorial analysis showed that all the studied variables, including the operating pressure (PTM), crossflow velocity (vt) and operating temperature (T), remarkably influence the permeate flux yielded by the selected membrane (p-value practically equal to zero), confirming a statistically significant relationship among the variables considered at the 95% confidence level. However, according to the p-values drawn from the analysis, PTM and T exhibit a deeper influence than vt; the squared effects are also significant, especially those of PTM and T. The obtained contour plots and response surface support these results. In particular, the optimized parameters were ambient temperature (24-29.6°C), moderate operating pressure (31.5-35 bar) and turbulent crossflow (4.1-5.1 m s(-1)). In the end, the quality standards to reuse the purified effluent for irrigation purposes and discharge to sewers were stably ensured. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods, as a way of effecting a greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, including comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  18. Optimization of Magnetosome Production and Growth by the Magnetotactic Vibrio Magnetovibrio blakemorei Strain MV-1 through a Statistics-Based Experimental Design

    PubMed Central

    Silva, Karen T.; Leão, Pedro E.; Abreu, Fernanda; López, Jimmy A.; Gutarra, Melissa L.; Farina, Marcos; Bazylinski, Dennis A.; Freire, Denise M. G.

    2013-01-01

    The growth and magnetosome production of the marine magnetotactic vibrio Magnetovibrio blakemorei strain MV-1 were optimized through a statistics-based experimental factorial design. In the optimized growth medium, maximum magnetite yields of 64.3 mg/liter in batch cultures and 26 mg/liter in a bioreactor were obtained. PMID:23396329

  19. Finding hidden treasure: a 28-year case study for optimizing experimental designs

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research is a decision-based process. Many decisions are required to design, conduct, analyze, and complete any field experiment. While these decisions are critical to the success of any research program, their importance is magnified for research on perennial crops...

  20. Optimization of critical factors to enhance polyhydroxyalkanoates (PHA) synthesis by mixed culture using Taguchi design of experimental methodology.

    PubMed

    Venkata Mohan, S; Venkateswar Reddy, M

    2013-01-01

    Optimizing different factors is crucial for enhancement of mixed culture bioplastics (polyhydroxyalkanoates (PHA)) production. Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence and specific function of eight important factors (iron, glucose concentration, VFA concentration, VFA composition, nitrogen concentration, phosphorous concentration, pH, and microenvironment) on bioplastics production. Three levels of factor variation (2(1) × 3(7)) were considered with a symbolic experimental matrix [L(18): 18 experimental trials]. All the factors were assigned three levels except iron concentration (2(1)). Among all the factors, the microenvironment influenced bioplastics production most substantially (contributing 81%), followed by pH (11%) and glucose concentration (2.5%). Validation experiments performed with the obtained optimum conditions resulted in improved PHA production. Good substrate degradation (as COD) of 68% was registered during PHA production. Dehydrogenase and phosphatase enzymatic activities were monitored during process operation. Copyright © 2012 Elsevier Ltd. All rights reserved.
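
The percent contributions quoted above (81% for microenvironment, and so on) come from partitioning sums of squares across the OA factors. A minimal sketch of that computation on toy data, not the study's measurements:

```python
import numpy as np

def percent_contribution(levels, y):
    """Percent contribution of one factor in a Taguchi analysis:
    between-level sum of squares divided by the total sum of squares."""
    levels = np.asarray(levels)
    y = np.asarray(y, dtype=float)
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_factor = sum(
        (levels == lv).sum() * (y[levels == lv].mean() - grand) ** 2
        for lv in np.unique(levels)
    )
    return 100.0 * ss_factor / ss_total

# Toy data: the factor's level fully determines the response
levels = np.array([1, 1, 2, 2, 3, 3])
y = np.array([10.0, 10.0, 20.0, 20.0, 30.0, 30.0])
print(percent_contribution(levels, y))  # 100.0: the factor explains all variation
```

In a real OA analysis this is computed per factor (with an error term), and the contributions across factors sum to roughly 100%.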

  1. An experimental design approach to optimize an amperometric immunoassay on a screen printed electrode for Clostridium tetani antibody determination.

    PubMed

    Patris, Stéphanie; Vandeput, Marie; Kenfack, Gersonie Momo; Mertens, Dominique; Dejaegher, Bieke; Kauffmann, Jean-Michel

    2016-03-15

    An immunoassay for the determination of anti-tetani antibodies has been developed using a screen printed electrode (SPE) as the solid support for toxoid (antigen) immobilization. The assay was performed in guinea pig serum. The immunoreaction and the subsequent amperometric detection occurred directly on the SPE surface. The assay consisted of spiking the anti-tetani sample directly onto the toxoid-modified SPE, after which a second antibody, i.e. an HRP-labeled anti-immunoglobulin G, was deposited onto the biosensor. Subsequent amperometric detection was realized by spiking 10 µL of a hydroquinone (HQ) solution into 40 µL of buffer solution containing hydrogen peroxide. An experimental design approach was implemented for the optimization of the immunoassay. The variables of interest, such as bovine serum albumin (BSA) concentration, incubation times and labeled antibody dilution, were optimized with the aid of response surface methodology using a circumscribed central composite design (CCCD). Two factors exhibited the greatest impact on the response: the anti-tetani incubation time and the dilution factor of the labeled antibody. To maximize the response, the dilution factor should be small, while the anti-tetani antibody incubation time should be long. The BSA concentration and the HRP-anti-IgG incubation had very limited influence. Under the optimized conditions, the immunoassay had a limit of detection of 0.011 IU/mL and a limit of quantification of 0.012 IU/mL. These values were below the protective human antibody limit of 0.06 IU/mL.

  2. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  3. Optimal Experimental Design to Estimate Statistically Significant Periods of Oscillations in Time Course Data

    PubMed Central

    Mourão, Márcio; Satin, Leslie; Schnell, Santiago

    2014-01-01

    We investigated commonly used methods (Autocorrelation, Enright, and Discrete Fourier Transform) to estimate the periodicity of oscillatory data and determine which method most accurately estimated periods while being least vulnerable to the presence of noise. Both simulated and experimental data were used in the analysis performed. We determined the significance of calculated periods by applying these methods to several random permutations of the data and then calculating the probability of obtaining the period's peak in the corresponding periodograms. Our analysis suggests that the Enright method is the most accurate for estimating the period of oscillatory data. We further show that to accurately estimate the period of oscillatory data, it is necessary that at least five cycles of data are sampled, using at least four data points per cycle. These results suggest that the Enright method should be more widely applied in order to improve the analysis of oscillatory data. PMID:24699692
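
The permutation scheme described in this abstract can be sketched as follows: shuffle the series many times and count how often a shuffled periodogram peak matches or exceeds the observed one. The data below are synthetic, and the sample sizes and seed are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_power(y):
    """Height of the tallest non-DC peak in the discrete Fourier periodogram."""
    p = np.abs(np.fft.rfft(y - np.mean(y))) ** 2
    return p[1:].max()

def permutation_pvalue(y, n_perm=200):
    """Fraction of random shufflings whose periodogram peak is at least as
    tall as the observed one (add-one correction to avoid a zero p-value)."""
    obs = peak_power(y)
    count = sum(peak_power(rng.permutation(y)) >= obs for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

# Five cycles sampled at 8 points per cycle, plus mild noise --
# matching the sampling guideline stated in the abstract
t = np.arange(40)
y = np.sin(2 * np.pi * t / 8) + 0.1 * rng.normal(size=t.size)
print(permutation_pvalue(y))  # small p-value: the oscillation is significant
```

Shuffling destroys the temporal ordering but preserves the value distribution, so a peak that survives the comparison reflects genuine periodic structure rather than noise.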

  4. Optimal experimental design to estimate statistically significant periods of oscillations in time course data.

    PubMed

    Mourão, Márcio; Satin, Leslie; Schnell, Santiago

    2014-01-01

    We investigated commonly used methods (Autocorrelation, Enright, and Discrete Fourier Transform) to estimate the periodicity of oscillatory data and determine which method most accurately estimated periods while being least vulnerable to the presence of noise. Both simulated and experimental data were used in the analysis performed. We determined the significance of calculated periods by applying these methods to several random permutations of the data and then calculating the probability of obtaining the period's peak in the corresponding periodograms. Our analysis suggests that the Enright method is the most accurate for estimating the period of oscillatory data. We further show that to accurately estimate the period of oscillatory data, it is necessary that at least five cycles of data are sampled, using at least four data points per cycle. These results suggest that the Enright method should be more widely applied in order to improve the analysis of oscillatory data.

  5. Heterogeneity of the gut microbiome in mice: guidelines for optimizing experimental design

    PubMed Central

    Laukens, Debby; Brinkman, Brigitta M.; Raes, Jeroen; De Vos, Martine; Vandenabeele, Peter

    2015-01-01

    Targeted manipulation of the gut flora is increasingly being recognized as a means to improve human health. Yet, the temporal dynamics and intra- and interindividual heterogeneity of the microbiome represent experimental limitations, especially in human cross-sectional studies. Therefore, rodent models represent an invaluable tool to study the host–microbiota interface. Progress in technical and computational tools to investigate the composition and function of the microbiome has opened a new era of research and we gradually begin to understand the parameters that influence variation of host-associated microbial communities. To isolate true effects from confounding factors, it is essential to include such parameters in model intervention studies. Also, explicit journal instructions to include essential information on animal experiments are mandatory. The purpose of this review is to summarize the factors that influence microbiota composition in mice and to provide guidelines to improve the reproducibility of animal experiments. PMID:26323480

  6. Heterogeneity of the gut microbiome in mice: guidelines for optimizing experimental design.

    PubMed

    Laukens, Debby; Brinkman, Brigitta M; Raes, Jeroen; De Vos, Martine; Vandenabeele, Peter

    2016-01-01

    Targeted manipulation of the gut flora is increasingly being recognized as a means to improve human health. Yet, the temporal dynamics and intra- and interindividual heterogeneity of the microbiome represent experimental limitations, especially in human cross-sectional studies. Therefore, rodent models represent an invaluable tool to study the host-microbiota interface. Progress in technical and computational tools to investigate the composition and function of the microbiome has opened a new era of research and we gradually begin to understand the parameters that influence variation of host-associated microbial communities. To isolate true effects from confounding factors, it is essential to include such parameters in model intervention studies. Also, explicit journal instructions to include essential information on animal experiments are mandatory. The purpose of this review is to summarize the factors that influence microbiota composition in mice and to provide guidelines to improve the reproducibility of animal experiments. © FEMS 2015.

  7. Optimization of chitosan nanoparticles for colon tumors using experimental design methodology.

    PubMed

    Jain, Anekant; Jain, Sanjay K

    2016-12-01

    Purpose: Colon-specific drug delivery systems (CDDS) can improve the bioavailability of drugs through the oral route. A novel formulation for oral administration using ligand-coupled chitosan nanoparticles bearing 5-fluorouracil (5FU) encapsulated in enteric-coated pellets was investigated for CDDS. Method: The effects of polymer concentration, drug concentration, stirring time and stirring speed on the encapsulation efficiency and size of the nanoparticles were evaluated. The optimum formulation was obtained by response surface methodology. Using the experimental data, analysis of variance was carried out to derive linear empirical models; polynomial models were then developed and the parametric analysis was carried out. In order to target nanoparticles to the hyaluronic acid (HA) receptors present on colon tumors, HA-coupled nanoparticles were tested for their efficacy in vivo. The HA-coupled nanoparticles were encapsulated in pellets that were enteric coated to release the drug in the colon. Results: Drug release studies under conditions mimicking stomach-to-colon transit showed that the drug was protected from release in the physiological environment of the stomach and small intestine. The relatively high local drug concentration with prolonged exposure time offers the potential to enhance anti-tumor efficacy with low systemic toxicity for the treatment of colon cancer. Conclusions: HA-coupled nanoparticles can be considered potential candidates for targeted drug delivery and are anticipated to be promising in the treatment of colorectal cancer.

  8. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 1. Screening of optimal extraction conditions using a D-optimal experimental design.

    PubMed

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A D-optimal design was constructed to optimize allergen extraction efficiency simultaneously from roasted, non-roasted, defatted, and non-defatted almond, hazelnut, peanut, and pistachio flours using three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various conditions of ionic strength, buffer-to-protein ratio, extraction temperature, and extraction duration. Statistical analysis showed that roasting and the omission of defatting significantly lowered protein recovery for all nuts. Increasing the temperature and the buffer-to-protein ratio during extraction significantly increased protein recovery, whereas increasing the extraction time had no significant impact. The impact of the three buffers on protein recovery varied significantly among the nuts. Depending on the extraction conditions, protein recovery varied from 19% to 95% for peanut, 31% to 73% for almond, 17% to 64% for pistachio, and 27% to 88% for hazelnut. The buffer type and ionic strength were shown to modulate the protein and immunoglobulin E binding profiles of the extracts, and high protein recovery levels did not always correlate with high immunoreactivity.

  9. Optimization of Wear Behavior of Magnesium Alloy AZ91 Hybrid Composites Using Taguchi Experimental Design

    NASA Astrophysics Data System (ADS)

    Girish, B. M.; Satish, B. M.; Sarapure, Sadanand; Basawaraj

    2016-06-01

    In the present paper, the statistical investigation on wear behavior of magnesium alloy (AZ91) hybrid metal matrix composites using Taguchi technique has been reported. The composites were reinforced with SiC and graphite particles of average size 37 μm. The specimens were processed by stir casting route. Dry sliding wear of the hybrid composites were tested on a pin-on-disk tribometer under dry conditions at different normal loads (20, 40, and 60 N), sliding speeds (1.047, 1.57, and 2.09 m/s), and composition (1, 2, and 3 wt pct of each of SiC and graphite). The design of experiments approach using Taguchi technique was employed to statistically analyze the wear behavior of hybrid composites. Signal-to-noise ratio and analysis of variance were used to investigate the influence of the parameters on the wear rate.
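
For a wear rate, the Taguchi "smaller-the-better" signal-to-noise ratio is the standard choice of response transform; a minimal sketch with hypothetical replicate values, not the study's measurements:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi signal-to-noise ratio for a 'smaller-the-better' response
    such as wear rate: -10 * log10(mean of the squared observations).
    Higher S/N indicates a better (lower, more consistent) response."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical wear-rate replicates for two parameter settings
low_wear = sn_smaller_is_better([0.8, 0.9])    # higher S/N -> preferred setting
high_wear = sn_smaller_is_better([2.0, 2.2])
print(low_wear, high_wear)
```

Comparing mean S/N ratios across the levels of each factor is what identifies the parameter combination that minimizes wear in a Taguchi analysis.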

  10. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...
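
A D-optimal design maximizes the determinant of the information matrix X'X over candidate runs. The toy exhaustive search below, for a hypothetical two-factor first-order model, sketches the idea; practical D-optimal algorithms use exchange heuristics (e.g. Fedorov-type) rather than brute force:

```python
import itertools
import numpy as np

def d_criterion(X):
    """D-optimality criterion: determinant of the information matrix X'X.
    Larger is better (smaller joint confidence region for the coefficients)."""
    return float(np.linalg.det(X.T @ X))

def model_matrix(points):
    """First-order model with intercept for 2 factors: rows of [1, x1, x2]."""
    return np.array([[1.0, x1, x2] for x1, x2 in points])

# Exhaustive search over all 3-run designs drawn from a coded candidate grid
candidates = list(itertools.product([-1.0, 0.0, 1.0], repeat=2))
best = max(itertools.combinations(candidates, 3),
           key=lambda pts: d_criterion(model_matrix(pts)))
print(best, d_criterion(model_matrix(best)))  # det(X'X) ~ 16 for this grid
```

The winning runs sit at spread-out corners of the coded region, which is the geometric intuition behind D-optimality: well-separated support points minimize the variance of the fitted coefficients.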

  12. Development and Optimization of HPLC Analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in Pharmaceutical Dosage Forms Using Experimental Design.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2016-11-01

    A new simple, sensitive, rapid and accurate gradient reversed-phase high-performance liquid chromatography method with photodiode array detection (RP-HPLC-DAD) was developed and validated for the simultaneous analysis of Metronidazole (MNZ), Spiramycin (SPY), Diloxanide furoate (DIX) and Cliquinol (CLQ) using statistical experimental design. Initially, a resolution V fractional factorial design was used to screen five independent factors: the column temperature (°C), pH, phosphate buffer concentration (mM), flow rate (ml/min) and the initial fraction of mobile phase B (%). pH, flow rate and initial fraction of mobile phase B were identified as significant using analysis of variance. The optimum separation conditions determined with the aid of a central composite design were: (1) initial mobile phase composition: phosphate buffer/methanol (50/50, v/v), (2) phosphate buffer concentration (50 mM), (3) pH (4.72), (4) column temperature 30°C and (5) mobile phase flow rate (0.8 ml min(-1)). Excellent linearity was observed for all of the standard calibration curves, and the correlation coefficients were above 0.9999. Limits of detection for all of the analyzed compounds ranged between 0.02 and 0.11 μg ml(-1); limits of quantitation ranged between 0.06 and 0.33 μg ml(-1). The proposed method showed good prediction ability. The optimized method was validated according to ICH guidelines. Three commercially available tablets were analyzed, showing good recovery and RSD values.

  13. Optimization of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design

    NASA Astrophysics Data System (ADS)

    Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.

    2016-04-01

    Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Simultaneously studying the impact of different factors is a time- and cost-consuming process. The use of experimental design methods can be useful in minimizing the number of experiments and finding locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode (RDE) was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, the L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. Scanning electron microscopy and energy-dispersive X-ray spectroscopy verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the Taguchi analysis, a mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate at an electrode rotation rate of 1000 rpm was identified as the optimum condition.

  14. Optimized design for PIGMI

    SciTech Connect

    Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.

    1980-01-01

    PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 µs at a 60-Hz repetition rate, resulting in a 100-µA average beam current. The total cost of the accelerator is estimated to be approximately $10 million.

  15. Multidisciplinary design and optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process, because each contemplated design change may have far-reaching consequences throughout the system. This paper outlines techniques for computing these influences as system design derivatives useful for both judgmental and formal optimization. The techniques facilitate decomposition of the design process into smaller, more manageable tasks, and they form a methodology that can easily fit into existing engineering optimization practice and incorporate its design tools.

  16. An experimental design approach for hydrothermal synthesis of NaYF4: Yb3+, Tm3+ upconversion microcrystal: UV emission optimization

    NASA Astrophysics Data System (ADS)

    Kaviani Darani, Masoume; Bastani, Saeed; Ghahari, Mehdi; Kardar, Pooneh

    2015-11-01

    Ultraviolet (UV) emissions of hydrothermally synthesized NaYF4: Yb3+, Tm3+ upconversion crystals were optimized using response surface methodology. In each experimental design of 9 runs, two factors, (1) Tm3+ ion concentration and (2) pH value, were investigated, using 3 different ligands. Taking the UV upconversion emissions as responses, their intensities were separately maximized. XRD and SEM were used to study crystal structure and morphology, FTIR to examine the ligands, and fluorescence spectroscopy to obtain the luminescence properties. From the photoluminescence spectra, emissions centered at 347, 364, 452, 478, 648 and 803 nm were observed. The results show that increasing each DOE factor up to an optimum value increased the emission intensity, followed by a reduction. Each design yielded a suggested optimum for the UV emission.

  17. Optimal Flow Control Design

    NASA Technical Reports Server (NTRS)

    Allan, Brian; Owens, Lewis

    2010-01-01

    In support of the Blended-Wing-Body aircraft concept, a new flow control hybrid vane/jet design has been developed for use in a boundary-layer-ingesting (BLI) offset inlet in transonic flows. This inlet flow control is designed to minimize the engine fan-face distortion levels and the first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. This concept represents a potentially enabling technology for quieter and more environmentally friendly transport aircraft. An optimum vane design was found by minimizing the engine fan-face distortion, DC60, and the first five Fourier harmonic half amplitudes, while maximizing the total pressure recovery. The optimal vane design was then used in a BLI inlet wind tunnel experiment at NASA Langley's 0.3-meter transonic cryogenic tunnel. The experimental results demonstrated an 80-percent decrease in DPCPavg, the circumferential distortion level at the engine fan-face, at an inlet mass flow rate corresponding to the middle of the operational range at the cruise condition. Even though the vanes were designed at a single inlet mass flow rate, they performed very well over the entire inlet mass flow range tested in the wind tunnel experiment with the addition of a small amount of jet flow control. While the circumferential distortion was decreased, the radial distortion on the outer rings at the aerodynamic interface plane (AIP) increased. This was a result of the large boundary layer being redistributed from the bottom of the AIP in the baseline case to the outer edges of the AIP when using the vortex generator (VG) vane flow control. The hybrid approach leverages the strengths of vane and jet flow control devices, increasing inlet performance over a broader operational range with a significant reduction in mass flow requirements.

  18. Evidence evaluation: measure Z corresponds to human utility judgments better than measure L and optimal-experimental-design models.

    PubMed

    Rusconi, Patrice; Marelli, Marco; D'Addario, Marco; Russo, Selena; Cherubini, Paolo

    2014-05-01

    Evidence evaluation is a crucial process in many human activities, spanning from medical diagnosis to impression formation. The present experiments investigated which, if any, normative model best conforms to people's intuition about the value of the obtained evidence. Psychologists, epistemologists, and philosophers of science have proposed several models to account for people's intuition about the utility of the obtained evidence with respect either to a focal hypothesis or to a constellation of hypotheses. We pitted against each other the so-called optimal-experimental-design models (i.e., Bayesian diagnosticity, log₁₀ diagnosticity, information gain, Kullback-Leibler distance, probability gain, and impact) and measures L and Z to compare their ability to describe humans' intuition about the value of the obtained evidence. Participants received words-and-numbers scenarios concerning 2 hypotheses and binary features. They were asked to evaluate the utility of "yes" and "no" answers to questions about some features possessed in different proportions (i.e., the likelihoods) by 2 types of extraterrestrial creatures (corresponding to 2 mutually exclusive and exhaustive hypotheses). Participants evaluated either how helpful an answer was or how much an answer decreased/increased their beliefs with respect either to a single hypothesis or to both hypotheses. We fitted mixed-effects models and used the Akaike information criterion and the Bayesian information criterion values to compare the competing models of the value of the obtained evidence. Overall, the experiments showed that measure Z was the best-fitting model of participants' judgments of the value of obtained answers. We discuss the implications for the human hypothesis-evaluation process.
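
    The optimal-experimental-design measures named above all score a datum by how it moves beliefs across the two mutually exclusive hypotheses. A sketch of how several of them can be computed for a single "yes" answer, assuming standard (Nelson-style) definitions and made-up prior/likelihood numbers rather than the paper's stimuli:

```python
# Illustrative computation of several evidence-value measures for one
# observed datum under two mutually exclusive, exhaustive hypotheses.
# The prior and likelihoods below are hypothetical.
import math

def evidence_measures(prior_h1, p_d_h1, p_d_h2):
    prior_h2 = 1.0 - prior_h1
    p_d = prior_h1 * p_d_h1 + prior_h2 * p_d_h2      # P(datum)
    post_h1 = prior_h1 * p_d_h1 / p_d                # Bayes' rule
    post_h2 = 1.0 - post_h1
    # Binary entropy in bits.
    h = lambda p: -sum(x * math.log2(x) for x in (p, 1 - p) if x > 0)
    return {
        "bayesian_diagnosticity": max(p_d_h1 / p_d_h2, p_d_h2 / p_d_h1),
        "log10_diagnosticity": abs(math.log10(p_d_h1 / p_d_h2)),
        "information_gain": h(prior_h1) - h(post_h1),
        "probability_gain": max(post_h1, post_h2) - max(prior_h1, prior_h2),
        "impact": abs(post_h1 - prior_h1),
        "kl_distance": (post_h1 * math.log2(post_h1 / prior_h1)
                        + post_h2 * math.log2(post_h2 / prior_h2)),
    }

m = evidence_measures(prior_h1=0.5, p_d_h1=0.7, p_d_h2=0.3)
```

    With a uniform prior, information gain and Kullback-Leibler distance coincide for a single datum; the measures diverge once priors are unequal, which is what makes comparing their fit to human judgments informative.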

  19. Improvement of production of citric acid from oil palm empty fruit bunches: optimization of media by statistical experimental designs.

    PubMed

    Bari, Md Niamul; Alam, Md Zahangir; Muyibi, Suleyman A; Jamal, Parveen; Abdullah-Al-Mamun

    2009-06-01

    A sequential optimization based on statistical design and one-factor-at-a-time (OFAT) method was employed to optimize the media constituents for the improvement of citric acid production from oil palm empty fruit bunches (EFB) through solid state bioconversion using Aspergillus niger IBO-103MNB. The results obtained from the Plackett-Burman design indicated that the co-substrate (sucrose), stimulator (methanol) and minerals (Zn, Cu, Mn and Mg) were found to be the major factors for further optimization. Based on the OFAT method, the selected medium constituents and inoculum concentration were optimized by the central composite design (CCD) under the response surface methodology (RSM). The statistical analysis showed that the optimum media containing 6.4% (w/w) of sucrose, 9% (v/w) of minerals and 15.5% (v/w) of inoculum gave the maximum production of citric acid (337.94 g/kg of dry EFB). The analysis showed that sucrose (p<0.0011) and mineral solution (p<0.0061) were more significant compared to inoculum concentration (p<0.0127) for the citric acid production.
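
    The Plackett-Burman screening step used above relies on a standard construction: cyclically rotating a published generating row and appending an all-minus run. A sketch for the common 12-run design (the factor coding is generic; the columns are not this study's medium constituents):

```python
# 12-run Plackett-Burman screening design, built by cyclic rotation of the
# standard generating row (Plackett & Burman, 1946) plus a final all-minus
# run. Each of the 11 columns can carry one factor at two levels.
GENERATOR = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    rows = []
    g = list(GENERATOR)
    for _ in range(11):
        rows.append(list(g))
        g = [g[-1]] + g[:-1]          # cyclic shift right
    rows.append([-1] * 11)            # final run: all factors at low level
    return rows

design = plackett_burman_12()
```

    The resulting columns are balanced and pairwise orthogonal, which is what lets main effects of up to 11 factors be screened in only 12 runs before the OFAT and CCD refinement stages.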

  20. Optimization of digital designs

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)

    2009-01-01

    An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.

  1. Hydrodynamic Design Optimization Tool

    DTIC Science & Technology

    2011-08-01

    appreciated. The authors would also like to thank David Walden and Francis Noblesse of Code 50 for being instrumental in defining this project, Wesley...and efficiently during the early stage of the design process. The Computational Fluid Dynamics (CFD) group at George Mason University has an...specific design constraints. In order to apply a CFD-based tool to the hydrodynamic design optimization of ship hull forms, an initial hull form is

  2. Optimal experimental design for the detection of light atoms from high-resolution scanning transmission electron microscopy images

    SciTech Connect

    Gonnissen, J.; De Backer, A.; Martinez, G. T.; Van Aert, S.; Dekker, A. J. den; Rosenauer, A.; Sijbers, J.

    2014-08-11

    We report an innovative method to explore the optimal experimental settings for detecting light atoms from scanning transmission electron microscopy (STEM) images. Since light elements play a key role in many technologically important materials, such as lithium-battery devices or hydrogen storage applications, much effort has been made to optimize the STEM technique in order to detect light elements. Classical performance criteria, such as contrast or signal-to-noise ratio, are therefore often discussed with the aim of improving direct visual interpretability. However, when images are interpreted quantitatively, an alternative criterion is needed, which we derive based on statistical detection theory. Using realistic simulations of technologically important materials, we demonstrate the benefits of the proposed method and compare the results with existing approaches.

  3. Optimal control model predictions of system performance and attention allocation and their experimental validation in a display design study

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Govindaraj, T.

    1980-01-01

    The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.

  5. "Real-time" disintegration analysis and D-optimal experimental design for the optimization of diclofenac sodium fast-dissolving films.

    PubMed

    El-Malah, Yasser; Nazzal, Sami

    2013-01-01

    The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a tertiary mixture of pullulan, polyvinylpyrrolidone and hypromellose. Disintegration studies were performed in real time by probe spectroscopy to detect the onset of film disintegration. Tensile strength and elastic modulus of the films were measured by texture analysis. Disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium based on the optimized composition was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film was found to commence almost immediately, with 50% of the drug released within one minute. Tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.
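
    Behind a D-optimal design is a simple criterion: among candidate settings, choose the run set whose model matrix X maximizes det(XᵀX), which minimizes the volume of the parameter confidence region. A deliberately tiny sketch, assuming a hypothetical one-factor quadratic model (far simpler than the paper's three-component mixture) and brute-force search instead of an exchange algorithm:

```python
# Toy illustration of D-optimality: pick the 3 runs, from a candidate
# grid, that maximize det(X)^2 = det(X'X) for a quadratic model.
# The model and grid are hypothetical.
from itertools import combinations

def det3(m):
    # Determinant of a 3x3 matrix (cofactor expansion).
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

candidates = [i / 10 for i in range(11)]          # grid on [0, 1]
row = lambda x: [1.0, x, x * x]                   # quadratic model terms

best = max(combinations(candidates, 3),
           key=lambda pts: det3([row(x) for x in pts]) ** 2)
```

    Classical theory places the D-optimal 3-point design for a quadratic on [0, 1] at the two endpoints and the midpoint, and the brute-force search recovers exactly that; real packages do the same maximization over mixture constraints with exchange algorithms.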

  6. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of the related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  7. Optimization of a liquid chromatography ion mobility-mass spectrometry method for untargeted metabolomics using experimental design and multivariate data analysis.

    PubMed

    Tebani, Abdellah; Schmitz-Afonso, Isabelle; Rutledge, Douglas N; Gonzalez, Bruno J; Bekri, Soumeya; Afonso, Carlos

    2016-03-24

    High-resolution mass spectrometry coupled with pattern recognition techniques is an established tool for comprehensive metabolite profiling of biological datasets. This paves the way for new, powerful and innovative diagnostic approaches in the post-genomic era and molecular medicine. However, interpreting untargeted metabolomic data requires robust, reproducible and reliable analytical methods to translate results into biologically relevant and actionable knowledge. The analysis of biological samples was based on ultra-high performance liquid chromatography (UHPLC) coupled to ion mobility-mass spectrometry (IM-MS). A strategy for optimizing the analytical conditions of untargeted UHPLC-IM-MS methods is proposed using an experimental design approach. Optimization experiments were conducted through a screening process designed to identify the factors that have significant effects on the selected responses (total number of peaks and number of reliable peaks). For this purpose, full and fractional factorial designs were used, while partial least squares regression was used for experimental design modeling and optimization of parameter values. The total number of peaks yielded the best predictive model and was used for optimization of parameter settings. Copyright © 2016 Elsevier B.V. All rights reserved.
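
    The full and fractional factorial screening designs mentioned above are straightforward to enumerate: a 2^k full factorial is the Cartesian product of the coded levels, and a half fraction keeps only the runs satisfying a defining relation such as I = ABC. A minimal sketch with hypothetical placeholder factor names (not the paper's actual UHPLC-IM-MS parameters):

```python
# Sketch of a 2^3 full factorial and its 2^(3-1) half fraction defined by
# I = ABC (keep runs whose signs multiply to +1). Factor names are
# hypothetical placeholders.
from itertools import product

factors = ["gradient_time", "drift_gas_flow", "capillary_voltage"]  # hypothetical

full = list(product((-1, +1), repeat=len(factors)))             # 8 runs
half = [run for run in full if run[0] * run[1] * run[2] == +1]  # 4 runs
```

    In the half fraction each main effect is aliased with a two-factor interaction (C = AB under I = ABC), which is the price paid for halving the run count during screening.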

  8. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm.

    PubMed

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2017-09-13

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) using UV-Vis spectrophotometry as a detection technique was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The other optimization processes were performed using central composite design (CCD), artificial neural network (ANN) and genetic algorithm (GA) approaches. At the optimal condition the calibration curve showed linearity over a concentration range of 10⁻⁷-10⁻⁸ M with a correlation coefficient (R²) of 0.9970. The limit of detection (LOD) for FLU was 6.56×10⁻⁹ M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Greater enhancement of Bacillus subtilis spore yields in submerged cultures by optimization of medium composition through statistical experimental designs.

    PubMed

    Chen, Zhen-Min; Li, Qing; Liu, Hua-Mei; Yu, Na; Xie, Tian-Jian; Yang, Ming-Yuan; Shen, Ping; Chen, Xiang-Dong

    2010-02-01

    Bacillus subtilis spore preparations are promising probiotics and biocontrol agents, which can be used in plants, animals, and humans. The aim of this work was to optimize the nutritional conditions using a statistical approach for the production of B. subtilis (WHK-Z12) spores. Our preliminary experiments showed that corn starch, corn flour, and wheat bran were the best carbon sources. Using a Plackett-Burman design, corn steep liquor, soybean flour, and yeast extract were found to be the best nitrogen source ingredients for enhancing spore production and were studied for further optimization using central composite design. The key medium components in our optimized medium were 16.18 g/l of corn steep liquor, 17.53 g/l of soybean flour, and 8.14 g/l of yeast extract. The improved medium produced spore yields as high as 1.52 ± 0.06 × 10¹⁰ spores/ml under flask cultivation conditions, and 1.56 ± 0.07 × 10¹⁰ spores/ml could be achieved in a 30-l fermenter after 40 h of cultivation. To the best of our knowledge, these results compare favorably to the documented spore yields produced by B. subtilis strains.

  10. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI

    PubMed Central

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert

    2016-01-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs, with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at the group level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients, and can be used with multiple imaging modalities in humans and animals. PMID:26804778

  11. Removal of cobalt ions from aqueous solutions by polymer assisted ultrafiltration using experimental design approach. part 1: optimization of complexation conditions.

    PubMed

    Cojocaru, Corneliu; Zakrzewska-Trznadel, Grazyna; Jaworska, Agnieszka

    2009-09-30

    The polymer assisted ultrafiltration process combines the selectivity of the chelating agent with the filtration ability of the membrane, the two acting in synergy. Such a hybrid process (complexation-ultrafiltration) is influenced by several factors, and the application of experimental design for process optimization with a reduced number of experiments is therefore of great importance. The present work deals with the investigation and optimization of cobalt ion removal from aqueous solutions by polymer enhanced ultrafiltration using an experimental design and response surface methodological approach. Polyethyleneimine was used as the chelating agent for cobalt complexation, and the ultrafiltration experiments were carried out in dead-end operating mode using a flat-sheet membrane made from regenerated cellulose. The aim of this part of the experiments was to find optimal conditions for cobalt complexation, i.e., the influence of the initial concentration of cobalt in the feed solution, the polymer/metal ratio and the pH of the feed solution on the rejection efficiency and binding capacity of the polymer. In this respect, the central composite design was used for planning the experiments and for construction of second-order response surface models applicable for predictions. Analysis of variance was employed for statistical validation of the regression models. The optimum conditions for a maximum rejection efficiency of 96.65% were determined experimentally by the gradient method and found to be as follows: [Co²⁺]₀ = 65 mg/L, polymer/metal ratio = 5.88 and pH 6.84.
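
    A central composite design like the one used here has a fixed geometry in coded units: the 2^k factorial corners, 2k axial ("star") points at distance alpha from the centre, and replicated centre points. A minimal sketch for three factors (matching the count here: initial cobalt concentration, polymer/metal ratio, pH), with the rotatable alpha = (2^k)^(1/4); the number of centre replicates is an assumption:

```python
# Sketch of a rotatable central composite design (CCD) in coded units for
# k = 3 factors: 8 factorial corners, 6 axial points at alpha = 8**0.25
# (~1.682), and n_center replicated centre points (hypothetical count).
from itertools import product

def central_composite(k=3, n_center=4):
    alpha = (2 ** k) ** 0.25                     # rotatability criterion
    runs = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    for axis in range(k):                        # axial (star) points
        for sign in (-alpha, alpha):
            point = [0.0] * k
            point[axis] = sign
            runs.append(point)
    runs.extend([[0.0] * k for _ in range(n_center)])  # centre replicates
    return runs

ccd = central_composite()
```

    The axial points are what let a second-order (quadratic) response surface, like the one fitted in this study, be estimated; a plain two-level factorial cannot resolve curvature.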

  12. Optimization of Xylanase Production from Penicillium sp.WX-Z1 by a Two-Step Statistical Strategy: Plackett-Burman and Box-Behnken Experimental Design

    PubMed Central

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp.WX-Z1 using Plackett-Burman design and Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp.WX-Z1 and subsequent use of the response surface methodology (RSM) was further optimized for xylanase production by Box-Behnken design. The important nutrient sources in the culture medium, identified by the initial screening method of Plackett-Burman, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, the xylanase production under optimal condition reached 46.50 U/mL and an increase in xylanase activity of 1.34-fold was obtained compared with the original medium for fermentation carried out in a 30-L bioreactor. PMID:22949884

  13. Optimization of Xylanase production from Penicillium sp.WX-Z1 by a two-step statistical strategy: Plackett-Burman and Box-Behnken experimental design.

    PubMed

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp.WX-Z1 using Plackett-Burman design and Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp.WX-Z1 and subsequent use of the response surface methodology (RSM) was further optimized for xylanase production by Box-Behnken design. The important nutrient sources in the culture medium, identified by the initial screening method of Plackett-Burman, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, the xylanase production under optimal condition reached 46.50 U/mL and an increase in xylanase activity of 1.34-fold was obtained compared with the original medium for fermentation carried out in a 30-L bioreactor.

  14. Optimization of photocatalytic degradation of biodiesel using TiO2/H2O2 by experimental design.

    PubMed

    Ambrosio, Elizangela; Lucca, Diego L; Garcia, Maicon H B; de Souza, Maísa T F; de S Freitas, Thábata K F; de Souza, Renata P; Visentainer, Jesuí V; Garcia, Juliana C

    2017-03-01

    This study reports on the investigation of the photodegradation of biodiesel (B100) in contact with water using TiO2/H2O2. The TiO2 was characterized by X-ray diffraction analysis (XRD), pH point of zero charge (pHpzc) and textural analysis. The results of the experiments were fitted to a quadratic polynomial model developed using response surface methodology (RSM) to optimize the parameters. Using three factors, three levels, and the Box-Behnken design of experiments technique, 15 sets of experiments were designed considering the effective ranges of the influential parameters. The responses of those parameters were optimized using computational techniques. After 24 h of irradiation under an Hg vapor lamp, removal of 22.0% of the oils and greases (OG) and a 33.54% reduction in the total fatty acid methyl ester (FAME) concentration were observed in the aqueous phase, as determined using gas chromatography coupled with flame ionization detection (GC/FID). The estimated time for the FAMEs to undergo base-catalyzed hydrolysis is at least 3 years (1095 days); after photocatalytic treatment using TiO2/H2O2, a 33.54% reduction in FAMEs was achieved in only 1 day.

  15. Optimal design of antireflection coating and experimental verification by plasma enhanced chemical vapor deposition in small displays

    SciTech Connect

    Yang, S. M.; Hsieh, Y. C.; Jeng, C. A.

    2009-03-15

    Conventional antireflection coating by thin films of quarter-wavelength thickness is limited by material selections and these films' refractive indices. The optimal design by non-quarter-wavelength thickness is presented in this study. A multilayer thin-film model is developed by the admittance loci to show that the two-layer thin film of SiN{sub x}/SiO{sub y} at 124/87 nm and three layer of SiN{sub x}/SiN{sub y}/SiO{sub z} at 58/84/83 nm can achieve average transmittances of 94.4% and 94.9%, respectively, on polymer, glass, and silicon substrates. The optimal design is validated by plasma enhanced chemical vapor deposition of N{sub 2}O/SiH{sub 4} and NH{sub 3}/SiH{sub 4} to achieve the desired optical constants. Application of the antireflection coating to a 4 in. liquid crystal display demonstrates that the transmittance is over 94%, the mean luminance can be increased by 25%, and the total reflection angle increased from 41 deg. to 58 deg.

  16. Optimizing the coagulation process in a drinking water treatment plant -- comparison between traditional and statistical experimental design jar tests.

    PubMed

    Zainal-Abideen, M; Aris, A; Yusof, F; Abdul-Majid, Z; Selamat, A; Omar, S I

    2012-01-01

    In this study of coagulation operation, a comparison was made between the optimum jar test values for pH, coagulant and coagulant aid obtained from traditional methods (an adjusted one-factor-at-a-time (OFAT) method) and with central composite design (the standard design of response surface methodology (RSM)). Alum (coagulant) and polymer (coagulant aid) were used to treat a water source with very low pH and high aluminium concentration at Sri-Gading water treatment plant (WTP) Malaysia. The optimum conditions for these factors were chosen when the final turbidity, pH after coagulation and residual aluminium were within 0-5 NTU, 6.5-7.5 and 0-0.20 mg/l respectively. Traditional and RSM jar tests were conducted to find their respective optimum coagulation conditions. It was observed that the optimum dose for alum obtained through the traditional method was 12 mg/l, while the value for polymer was set constant at 0.020 mg/l. Through RSM optimization, the optimum dose for alum was 7 mg/l and for polymer was 0.004 mg/l. Optimum pH for the coagulation operation obtained through traditional methods and RSM was 7.6. The final turbidity, pH after coagulation and residual aluminium recorded were all within acceptable limits. The RSM method was demonstrated to be an appropriate approach for the optimization and was validated by a further test.

  17. Optimization of ultrasound assisted dispersive liquid-liquid microextraction of six antidepressants in human plasma using experimental design.

    PubMed

    Fernández, P; Taboada, V; Regenjo, M; Morales, L; Alvarez, I; Carro, A M; Lorenzo, R A

    2016-05-30

    A simple ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) method is presented for the simultaneous determination of six second-generation antidepressants in plasma by ultra-performance liquid chromatography with photodiode array detection (UPLC-PDA). The main factors that potentially affect DLLME were optimized by a screening design followed by a response surface design and desirability functions. The optimal conditions were 2.5 mL of acetonitrile as dispersant solvent, 0.2 mL of chloroform as extractant solvent, 3 min of ultrasonic stirring and extraction pH 9.8. Under optimized conditions, the UPLC-PDA method showed good separation of the antidepressants in 2.5 min and good linearity in the range of 0.02-4 μg mL⁻¹, with determination coefficients higher than 0.998. The limits of detection were in the range 4-5 ng mL⁻¹. The method precision (n=5) was evaluated, showing relative standard deviations (RSD) lower than 8.1% for all compounds. The average recoveries ranged from 92.5% for fluoxetine to 110% for mirtazapine. The applicability of DLLME/UPLC-PDA was successfully tested in twenty-nine plasma samples from antidepressant consumers. Real samples were analyzed by the proposed method and the results were successfully compared with those obtained by a liquid-liquid extraction-gas chromatography-mass spectrometry (LLE-GC-MS) method. The results confirmed the presence of venlafaxine in most cases (19 cases), followed by sertraline (3 cases) and fluoxetine (3 cases), at concentrations below toxic levels.

  18. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  19. Optimizing experimental design using the house mouse (Mus musculus L.) as a model for determining grain feeding preferences.

    PubMed

    Fuerst, E Patrick; Morris, Craig F; Dasgupta, Nairanjana; McLean, Derek J

    2013-10-01

    There is little research evaluating flavor preferences among wheat varieties. We previously demonstrated that mice exert very strong preferences when given binary mixtures of wheat varieties. We plan to use mice to identify wheat genes associated with flavor and then relate this back to human preferences. Here we explore the effects of experimental design parameters, namely the number of days (from 1 to 4) and the number of mice (from 2 to 15), in order to identify designs that provide significant statistical inferences while minimizing the requirements for labor and animals. When mice expressed a significant preference between two wheat varieties, increasing the number of days (for a given number of mice) increased the significance level (decreased P-values) of their preference, as expected, but with diminishing benefit as more days were added. However, increasing the number of mice (for a given number of days) provided a more dramatic log-linear decrease in P-values and thus greater statistical power. In conclusion, when evaluating mouse feeding preferences in binary mixtures of grain, an efficient experimental design should emphasize fewer days rather than fewer animals, thus shortening the experiment's duration and reducing the overall requirement for labor and animals.
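
The reported log-linear drop in P-values with added mice can be illustrated with an exact binomial test. The sketch below is hypothetical, not the authors' analysis: it treats each mouse as making a single binary choice, with an assumed 80% preference rate.

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_binom_p(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed one."""
    pk = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= pk + 1e-12)

# Suppose 80% of mice prefer variety A; the p-value shrinks rapidly
# as the number of mice grows, as the abstract reports.
for n in (5, 10, 15):
    k = round(0.8 * n)
    print(n, two_sided_binom_p(k, n))
```

With 5, 10 and 15 mice the p-value falls from non-significant to below 0.05, which is the power argument for adding animals rather than days.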

  20. Experimental design and husbandry.

    PubMed

    Festing, M F

    1997-01-01

    Rodent gerontology experiments should be carefully designed and correctly analyzed so as to provide the maximum amount of information for the minimum amount of work. There are five criteria for a "good" experimental design. These are applicable both to in vivo and in vitro experiments: (1) The experiment should be unbiased so that it is possible to make a true comparison between treatment groups in the knowledge that no one group has a more favorable "environment." (2) The experiment should have high precision so that if there is a true treatment effect there will be a good chance of detecting it. This is obtained by selecting uniform material such as isogenic strains, which are free of pathogenic microorganisms, and by using randomized block experimental designs. It can also be increased by increasing the number of observations. However, increasing the size of the experiment beyond a certain point will only marginally increase precision. (3) The experiment should have a wide range of applicability so it should be designed to explore the sensitivity of the observed experimental treatment effect to other variables such as the strain, sex, diet, husbandry, and age of the animals. With in vitro data, variables such as media composition and incubation times may also be important. The importance of such variables can often be evaluated efficiently using "factorial" experimental designs, without any substantial increase in the overall number of animals. (4) The experiment should be simple so that there is little chance of groups becoming muddled. Generally, formal experimental designs that are planned before the work starts should be used. (5) The experiment should provide the ability to calculate uncertainty. In other words, it should be capable of being statistically analyzed so that the level of confidence in the results can be quantified.
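
Criterion (2) above recommends randomized block designs for precision. A minimal sketch of block randomization, with hypothetical treatments and blocks, might look like:

```python
import random

def randomized_block(treatments, n_blocks, seed=None):
    """Assign every treatment once within each block (e.g. cage, litter,
    or experiment day), in an independently randomized order per block."""
    rng = random.Random(seed)
    layout = {}
    for b in range(1, n_blocks + 1):
        order = treatments[:]          # copy so the input list is untouched
        rng.shuffle(order)
        layout[f"block_{b}"] = order
    return layout

# Hypothetical example: 4 diets across 5 litters
plan = randomized_block(["control", "dietA", "dietB", "dietC"], 5, seed=1)
for block, order in plan.items():
    print(block, order)
```

Because every treatment appears once per block, block-to-block variation (litter, shelf position, day) cancels out of the treatment comparisons.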

  1. An orbital angular momentum radio communication system optimized by intensity controlled masks effectively: Theoretical design and experimental verification

    SciTech Connect

    Gao, Xinlu; Huang, Shanguo; Wei, Yongfeng; Zhai, Wensheng; Xu, Wenjing; Yin, Shan; Gu, Wanyi; Zhou, Jing

    2014-12-15

    A system for generating and receiving orbital angular momentum (OAM) radio beams, formed by two circular array antennas (CAAs) and effectively optimized by two intensity-controlled masks, is proposed and experimentally investigated. The scheme is effective in blocking unwanted OAM modes and enhancing the power of the received radio signals, which results in a capacity gain for the system and an extended transmission distance of the OAM radio beams. The operating principle of the intensity-controlled masks, which act as both collimator and filter, is feasible and simple to realize. Numerical simulations of the intensity and phase distributions at each key cross-sectional plane of the radio beams demonstrate the collimation. The experimental results match the theoretical analysis well: the receive distance of the OAM radio beam at a radio frequency (RF) of 20 GHz is extended up to 200 times the wavelength of the RF signal, 5 times the originally measured distance. The presented proof-of-concept experiment demonstrates the feasibility of the system.

  3. Use of experimental designs for the optimization of stir bar sorptive extraction coupled to GC-MS/MS and comprehensive validation for the quantification of pesticides in freshwaters.

    PubMed

    Assoumani, A; Margoum, C; Guillemain, C; Coquery, M

    2014-04-01

    Although experimental design is a powerful tool, it is rarely used in the development of analytical methods for the determination of organic contaminants in the environment. When the investigated factors are interdependent, this methodology allows efficient study not only of their effects on the response but also of the effects of their interactions. A complete and didactic chemometric study is described herein for the optimization of an analytical method involving stir bar sorptive extraction followed by thermal desorption coupled with gas chromatography and tandem mass spectrometry for the rapid quantification of several pesticides in freshwaters. We studied, under controlled conditions, the effects of the thermal desorption parameters, and of their interactions, on the desorption efficiency. The desorption time, temperature and flow, and the injector temperature, were optimized through a screening design followed by a Box-Behnken design. The two sequential designs established an optimum set of conditions for maximum response. We then present the comprehensive validation and the determination of the measurement uncertainty of the optimized method. Limits of quantification determined in different natural waters were in the range of 2.5 to 50 ng L(-1), and recoveries were between 90 and 104%, depending on the pesticide. The whole-method uncertainty, assessed at three concentration levels under intra-laboratory reproducibility conditions, was below 25% for all tested pesticides. Hence, we optimized and validated a robust analytical method to quantify the target pesticides at low concentration levels in freshwater samples, with a simple, fast and solventless desorption step.
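
A Box-Behnken design such as the one used here places runs at the midpoints of the factor-space edges, avoiding extreme corner combinations. A generic generator in coded units (an illustration, not the authors' software) is:

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Box-Behnken design in coded units for k >= 3 factors: for every
    pair of factors, a 2x2 factorial at +/-1 with all other factors held
    at 0, plus n_center center points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs += [[0] * k for _ in range(n_center)]
    return runs

# 4 desorption factors -> 4 runs per each of C(4,2)=6 pairs + 3 centers = 27
design = box_behnken(4)
print(len(design))  # 27
```

Note that no run ever sets more than two factors away from their mid-level, which is what keeps Box-Behnken designs clear of the corners of the experimental region.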

  4. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. © The Author 2014. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  5. Optimization of a supercritical fluid extraction/reaction methodology for the analysis of castor oil using experimental design.

    PubMed

    Turner, Charlotta; Whitehand, Linda C; Nguyen, Tasha; McKeon, Thomas

    2004-01-14

    The aim of this work was to optimize a supercritical fluid extraction (SFE)/enzymatic reaction process for the determination of the fatty acid composition of castor seeds. A lipase from Candida antarctica (Novozyme 435) was used to catalyze the methanolysis reaction in supercritical carbon dioxide (SC-CO(2)). A Box-Behnken statistical design was used to evaluate effects of various values of pressure (200-400 bar), temperature (40-80 degrees C), methanol concentration (1-5 vol %), and water concentration (0.02-0.18 vol %) on the yield of methylated castor oil. Response surfaces were plotted, and these together with results from some additional experiments produced optimal extraction/reaction conditions for SC-CO(2) at 300 bar and 80 degrees C, with 7 vol % methanol and 0.02 vol % water. These conditions were used for the determination of the castor oil content expressed as fatty acid methyl esters (FAMEs) in castor seeds. The results obtained were similar to those obtained using conventional methodology based on solvent extraction followed by chemical transmethylation. It was concluded that the methodology developed could be used for the determination of castor oil content as well as composition of individual FAMEs in castor seeds.

  6. Optimization of the ultrasonic assisted removal of methylene blue by gold nanoparticles loaded on activated carbon using experimental design methodology.

    PubMed

    Roosta, M; Ghaedi, M; Daneshfar, A; Sahraei, R; Asghari, A

    2014-01-01

    The present study focused on the removal of methylene blue (MB) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using techniques such as SEM, XRD and BET. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time (min) on MB removal were studied using central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models showed the suitability and applicability of the Langmuir model. Analysis of the experimental adsorption data with various kinetic models such as the pseudo-first- and second-order, Elovich and intraparticle diffusion models showed the applicability of the pseudo-second-order model. A small amount of the proposed adsorbent (0.01 g) achieved successful removal of MB (RE>95%) in a short time (1.6 min) with high adsorption capacity (104-185 mg g(-1)).
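
The Langmuir fit mentioned above is commonly done on the linearized form Ce/qe = Ce/qmax + 1/(KL·qmax). A self-contained sketch with synthetic data (the parameter values are illustrative, not the study's fitted constants):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = m*x + c; returns (m, c)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

def langmuir_params(Ce, qe):
    """Fit the linearized Langmuir isotherm Ce/qe = Ce/qmax + 1/(KL*qmax)
    and return (qmax, KL): slope = 1/qmax, intercept = 1/(KL*qmax)."""
    m, c = linear_fit(Ce, [ce / q for ce, q in zip(Ce, qe)])
    qmax = 1 / m
    KL = m / c
    return qmax, KL

# Synthetic equilibrium data generated from qmax = 180 mg/g, KL = 0.5 L/mg
qmax0, KL0 = 180.0, 0.5
Ce = [2, 5, 10, 20, 40]
qe = [qmax0 * KL0 * c / (1 + KL0 * c) for c in Ce]
print(langmuir_params(Ce, qe))  # recovers ~ (180.0, 0.5)
```

In practice one compares the R² of this fit against the Freundlich and other linearizations, as the abstract describes.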

  7. Dynamic modeling, experimental evaluation, optimal design and control of integrated fuel cell system and hybrid energy systems for building demands

    NASA Astrophysics Data System (ADS)

    Nguyen, Gia Luong Huu

    Based on the obtained experimental data, the research studied the control of airflow to regulate the temperature of reactors within the fuel processor. The dynamic model provided a platform to test the dynamic response for different control gains. With sufficient sensing and appropriate control, a rapid response that maintained the reactor temperature despite an increase in power was possible. The third part of the research studied the use of a fuel cell in conjunction with photovoltaic panels and energy storage to provide electricity for buildings. An optimization framework was developed to determine the size of each device in the hybrid energy system so as to satisfy the electrical demands of buildings at the lowest cost. The advantage of pairing the fuel cell with photovoltaics and energy storage was the ability to operate the fuel cell at baseload at night, reducing the need for large battery systems to shift the solar power produced in the day to the night. In addition, the dispatchability of the fuel cell provided an extra degree of freedom for handling unforeseen disturbances. An operation framework based on model predictive control showed that the method is suitable for optimizing the dispatch of the hybrid energy system.
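
The baseload-at-night idea can be sketched with a toy rule-based dispatcher. This is a deliberate simplification for illustration, not the model-predictive-control framework the thesis develops; all names, units and numbers below are hypothetical.

```python
def dispatch(demand, solar, fc_baseload, batt_capacity, batt0=0.0):
    """Toy dispatch for a fuel-cell + PV + battery system: the fuel cell
    runs at constant baseload, solar serves the remainder, surplus charges
    the battery, and the battery covers any deficit. Returns per-step
    unmet demand (illustrative kW/kWh with 1 h steps)."""
    soc, unmet = batt0, []
    for d, s in zip(demand, solar):
        net = d - s - fc_baseload          # residual after PV + fuel cell
        if net < 0:                        # surplus -> charge battery
            soc = min(batt_capacity, soc - net)
            unmet.append(0.0)
        else:                              # deficit -> discharge battery
            draw = min(soc, net)
            soc -= draw
            unmet.append(net - draw)
    return unmet

# Night-heavy demand is met by the fuel-cell baseload; daytime solar
# surplus charges the battery for the evening.
demand = [3, 3, 5, 6, 4, 3]
solar  = [0, 0, 6, 7, 1, 0]
print(dispatch(demand, solar, fc_baseload=3.0, batt_capacity=5.0))
```

An MPC formulation replaces these fixed rules with an optimization over a forecast horizon, which is what gives the extra degree of freedom for disturbances mentioned above.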

  8. Optimization of critical quality attributes in continuous twin-screw wet granulation via design space validated with pilot scale experimental data.

    PubMed

    Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu

    2017-06-15

    In this study, the influence of key process variables (screw speed, throughput and liquid-to-solid (L/S) ratio) on a continuous twin-screw wet granulation (TSWG) process was investigated using a central composite face-centered (CCF) experimental design. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume-average diameter, yield, relative width, flowability) and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties and tablet properties are influenced by changes in screw speed, throughput and L/S ratio. Based on the developed regression models, the TSWG process was optimized to produce granules with a volume-average diameter of 150 μm and a yield of 95%. A design space (DS) was built for a volume-average granule diameter between 90 and 200 μm and a granule yield greater than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
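
The failure-probability analysis behind such a design space can be approximated by Monte Carlo sampling of a fitted response model. The quadratic models and noise levels below are placeholders for illustration, not the study's fitted regressions:

```python
import random

def failure_probability(point, n_sim=20_000, seed=0):
    """Monte Carlo estimate of the probability that a process setting
    violates the acceptance criteria (diameter 90-200 um, yield > 75%).
    The response models and noise standard deviations are illustrative
    placeholders only."""
    rng = random.Random(seed)
    x1, x2, x3 = point            # coded screw speed, throughput, L/S ratio
    d_mean = 150 + 25 * x3 + 10 * x2 - 8 * x1 + 6 * x3 * x3   # diameter, um
    y_mean = 90 - 6 * x3 * x3 - 4 * x1 * x1 + 3 * x2          # yield, %
    fails = 0
    for _ in range(n_sim):
        d = rng.gauss(d_mean, 12.0)   # assumed prediction s.d. in um
        y = rng.gauss(y_mean, 4.0)    # assumed prediction s.d. in %
        if not (90 <= d <= 200 and y > 75):
            fails += 1
    return fails / n_sim

print(failure_probability((0.0, 0.0, 0.0)))   # near the center: low risk
print(failure_probability((1.0, 1.0, 1.0)))   # at a corner: higher risk
```

Settings whose estimated failure probability stays below a chosen threshold define the validated design space.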

  9. Optimization of photocatalytic degradation of methyl blue using silver ion doped titanium dioxide by combination of experimental design and response surface approach.

    PubMed

    Sahoo, C; Gupta, A K

    2012-05-15

    Photocatalytic degradation of methyl blue (MYB) was studied using Ag(+)-doped TiO(2) under UV irradiation in a batch reactor. Catalyst dose, initial dye concentration and pH of the reaction mixture were found to influence the degradation process most. Degradation was effective over the ranges of catalyst dose (0.5-1.5 g/L), initial dye concentration (25-100 ppm) and reaction-mixture pH (5-9). Using the three-factor, three-level Box-Behnken design of experiments technique, 15 experiments were designed over the effective ranges of the influential parameters. The results were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing the functional relationships between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values were an Ag(+)-doped TiO(2) dose of 0.99 g/L, an initial MYB concentration of 57.68 ppm and a reaction-mixture pH of 7.76. Under the optimal conditions, the predicted decolorization and mineralization rates of MYB were 95.97% and 80.33%, respectively. Regression analysis with R(2) values >0.99 showed the goodness of fit of the experimental results with the predicted values.

  10. Improved Titanium Billet Inspection Sensitivity through Optimized Phased Array Design, Part II: Experimental Validation and Comparative Study with Multizone

    SciTech Connect

    Hassan, W.; Vensel, F.; Knowles, B.

    2006-03-06

    The inspection of critical rotating components of aircraft engines has made important advances over the last decade. The development of phased array (PA) inspection capability for the billet and forging materials used in manufacturing critical engine rotating components has been a priority for Honeywell Aerospace. Demonstrating improved PA inspection system sensitivity over what is currently used at the inspection houses is a critical step in the development of this technology and its introduction to the supply base as a production inspection. As described in Part I (in these proceedings), a new phased array transducer was designed and manufactured for optimal inspection of eight-inch-diameter Ti-6Al-4V billets. After confirming that the transducer was manufactured in accordance with the design specifications, a validation study was conducted to assess the sensitivity improvement of the PA inspection (PAI) over the current capability of multizone (MZ) inspection. The results of this study confirm a significant (≈6 dB in FBH # sensitivity) improvement of PAI sensitivity over that of MZI.

  11. Multifactorial Experimental Design to Optimize the Anti-Inflammatory and Proangiogenic Potential of Mesenchymal Stem Cell Spheroids.

    PubMed

    Murphy, Kaitlin C; Whitehead, Jacklyn; Falahee, Patrick C; Zhou, Dejie; Simon, Scott I; Leach, J Kent

    2017-06-01

    Mesenchymal stem cell therapies promote wound healing by manipulating the local environment to enhance the function of host cells. Aggregation of mesenchymal stem cells (MSCs) into three-dimensional spheroids increases cell survival and augments their anti-inflammatory and proangiogenic potential, yet there is no consensus on the preferred conditions for maximizing spheroid function in this application. The objective of this study was to optimize the conditions for forming MSC spheroids that simultaneously enhance their anti-inflammatory and proangiogenic nature. We applied a design of experiments (DOE) approach to determine the interaction between three input variables (number of cells per spheroid, oxygen tension, and inflammatory stimulus) on MSC spheroids by quantifying secretion of prostaglandin E2 (PGE2) and vascular endothelial growth factor (VEGF), two potent molecules in the MSC secretome. DOE results revealed that MSC spheroids formed with 40,000 cells per spheroid in 1% oxygen with an inflammatory stimulus (Spheroid 1) would exhibit enhanced PGE2 and VEGF production versus those formed with 10,000 cells per spheroid in 21% oxygen with no inflammatory stimulus (Spheroid 2). Compared to Spheroid 2, Spheroid 1 produced fivefold more PGE2 and fourfold more VEGF, providing the opportunity to simultaneously upregulate the secretion of both factors from the same spheroid. The spheroids induced macrophage polarization, sprout formation with endothelial cells, and keratinocyte migration in a human skin equivalent model, demonstrating efficacy on three key cell types that are dysfunctional in chronic non-healing wounds. We conclude that DOE-based analysis effectively identifies optimal culture conditions to enhance the anti-inflammatory and proangiogenic potential of MSC spheroids. Stem Cells 2017;35:1493-1504. © 2017 AlphaMed Press.

  12. Experimental design approach to the optimization of ultrasonic degradation of alachlor and enhancement of treated water biodegradability.

    PubMed

    Torres, Ricardo A; Mosteo, Rosa; Pétrier, Christian; Pulgarin, Cesar

    2009-03-01

    This work presents the application of experimental design to the ultrasonic degradation of alachlor, a pesticide classified as a priority substance by the European Commission within the scope of the Water Framework Directive. The effects of electrical power (20-80 W), pH (3-10) and substrate concentration (10-50 mg L(-1)) were evaluated. At a confidence level of 90%, pH showed little effect on the initial degradation rate of alachlor, whereas electrical power, pollutant concentration and the interaction of these two parameters were significant. A reduced model taking into account only the significant variables and interactions showed good correlation with the experimental results. Additional experiments conducted in natural and deionised water indicated that alachlor degradation by ultrasound is practically unaffected by the presence of potential *OH radical scavengers: bicarbonate, sulphate, chloride and oxalic acid. In both cases, alachlor was readily eliminated (approximately 75 min). However, after 4 h of treatment only 20% of the initial TOC had been removed, showing that the alachlor by-products are recalcitrant to ultrasonic action. A biodegradability test (BOD5/COD) carried out during the course of the treatment indicated that the ultrasonic system noticeably increases the biodegradability of the initial solution.

  13. Interfacial modification to optimize stainless steel photoanode design for flexible dye sensitized solar cells: an experimental and numerical modeling approach

    NASA Astrophysics Data System (ADS)

    Salehi Taleghani, Sara; Zamani Meymian, Mohammad Reza; Ameri, Mohsen

    2016-10-01

    In the present research, we report the fabrication, experimental characterization and theoretical analysis of semi- and fully flexible dye-sensitized solar cells (DSSCs) manufactured on bare and roughened stainless steel type 304 (SS304) substrates. The morphological, optical and electrical characterizations confirm the advantage of roughened SS304 over bare SS304 and even over common transparent conducting oxides (TCOs). A significant enhancement of about 51% in power conversion efficiency is obtained for the flexible device based on the roughened SS304 substrate (5.51%) compared to bare SS304. The effect of roughening the SS304 substrates on the electrical transport characteristics is also investigated by numerical modeling, with regard to the metal-semiconductor contact and the interfacial resistance arising from the junction between the metallic substrate and the nanocrystalline semiconductor. The numerical modeling results provide a reliable theoretical backbone to combine with the experimental implications, highlighting the stronger effect of series resistance, compared to the Schottky barrier, in lowering the fill factor of the SS304-based DSSCs. The findings of the present study nominate roughened SS304 as a promising replacement for conventional DSSC substrates, and introduce a highly accurate modeling framework for designing and diagnosing treated metallic and non-metallic DSSCs.

  14. Optimized quadrature surface coil designs

    PubMed Central

    Kumar, Ananda; Bottomley, Paul A.

    2008-01-01

    Background Quadrature surface MRI/MRS detectors comprised of circular loop and figure-8 or butterfly-shaped coils offer improved signal-to-noise-ratios (SNR) compared to single surface coils, and reduced power and specific absorption rates (SAR) when used for MRI excitation. While the radius of the optimum loop coil for performing MRI at depth d in a sample is known, the optimum geometry for figure-8 and butterfly coils is not. Materials and methods The geometries of figure-8 and square butterfly detector coils that deliver the optimum SNR are determined numerically by the electromagnetic method of moments. Figure-8 and loop detectors are then combined to create SNR-optimized quadrature detectors whose theoretical and experimental SNR performance are compared with a novel quadrature detector comprised of a strip and a loop, and with two overlapped loops optimized for the same depth at 3 T. The quadrature detection efficiency and local SAR during transmission for the three quadrature configurations are analyzed and compared. Results The SNR-optimized figure-8 detector has loop radius r8 ∼ 0.6d, so r8/r0 ∼ 1.3 in an optimized quadrature detector at 3 T. The optimized butterfly coil has side length ∼ d and crossover angle of ≥ 150° at the center. Conclusions These new design rules for figure-8 and butterfly coils optimize their performance as linear and quadrature detectors. PMID:18057975

  15. Optimization of a pharmaceutical freeze-dried product and its process using an experimental design approach and innovative process analyzers.

    PubMed

    De Beer, T R M; Wiggenhorn, M; Hawe, A; Kasper, J C; Almeida, A; Quinten, T; Friess, W; Winter, G; Vervaet, C; Remon, J P

    2011-02-15

    The aim of the present study was to examine the possibilities and advantages of using recently introduced in-line spectroscopic process analyzers (Raman, NIR and plasma emission spectroscopy), within well-designed experiments, for the optimization of a pharmaceutical formulation and its freeze-drying process. The formulation under investigation was a mannitol (crystalline bulking agent)-sucrose (lyo- and cryoprotector) excipient system. The effects of two formulation variables (mannitol/sucrose ratio and amount of NaCl) and three process variables (freezing rate, annealing temperature and secondary drying temperature) upon several critical process and product responses (onset and duration of ice crystallization, onset and duration of mannitol crystallization, duration of primary drying, residual moisture content and amount of mannitol hemihydrate in the end product) were examined using a design of experiments (DOE) methodology. A two-level fractional factorial design (2(5-1) = 16 experiments + 3 center points = 19 experiments) was employed. All experiments were monitored in-line using Raman, NIR and plasma emission spectroscopy, which supply continuous process and product information during freeze-drying. Off-line X-ray powder diffraction analysis and Karl Fischer titration were performed to determine the morphology and residual moisture content of the end product, respectively. First, the results showed that, beyond the findings described in De Beer et al., Anal. Chem. 81 (2009) 7639-7649, Raman and NIR spectroscopy are able to monitor the product behavior throughout the complete annealing step during freeze-drying. The DOE approach allowed prediction of the optimum combination of process and formulation parameters leading to the desired responses. Applying a mannitol/sucrose ratio of 4, without adding NaCl and processing the formulation without an annealing step, using a freezing rate of 0.9°C/min and a secondary drying temperature of 40°C resulted in
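
The 2(5-1) layout used here is a half-fraction of a five-factor two-level design. Assuming the conventional generator E = ABCD (the abstract does not state which generator was used), the 16 factorial runs can be enumerated as:

```python
from itertools import product

def fractional_factorial_2_5_1():
    """Half-fraction 2^(5-1) design in coded units: a full factorial in
    factors A-D, with the fifth factor set by the assumed generator
    E = ABCD (defining relation I = ABCDE, resolution V)."""
    runs = []
    for a, b, c, d in product([-1, 1], repeat=4):
        runs.append((a, b, c, d, a * b * c * d))
    return runs

design = fractional_factorial_2_5_1()
print(len(design))  # 16 runs, matching the study's 16 + 3 center points
```

At resolution V, main effects and two-factor interactions are only aliased with higher-order interactions, which is why such half-fractions are popular screening-plus-estimation designs.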

  16. Optimization of process variables for the biosynthesis of silver nanoparticles by Aspergillus wentii using statistical experimental design

    NASA Astrophysics Data System (ADS)

    Biswas, Supratim; Mulaba-Bafubiandi, Antoine F.

    2016-12-01

    The present work focuses on the optimization of process parameters using central composite design towards the development of an efficient technique for the biosynthesis of silver nanoparticles. The combined effects of three process variables (days of fermentation, duration of incubation and concentration of AgNO3) on the extracellular biological synthesis of silver nanoparticles (AgNPs) by Aspergillus wentii NCIM 667 were studied. A single absorption peak at 455 nm in the UV-visible spectrum confirmed the presence of silver nanoparticles. Fourier transform infrared spectroscopic analysis indicated the presence of proteins acting as reducing agents for the formation of AgNPs. High-resolution transmission electron microscopy showed spherical AgNPs of size 15-40 nm. The biologically formed AgNPs showed higher antimicrobial activity against gram-negative than gram-positive bacterial strains. The biosynthesized nanoparticles also exhibit photocatalytic activity against an organic dye, methyl orange, upon exposure to sunlight, accomplishing almost complete (88%) degradation of the dye within 5 h.

  17. Adsorption of phenol onto activated carbon from Rhazya stricta: determination of the optimal experimental parameters using factorial design

    NASA Astrophysics Data System (ADS)

    Hegazy, A. K.; Abdel-Ghani, N. T.; El-Chaghaby, G. A.

    2014-09-01

    A novel activated carbon was prepared from Rhazya stricta leaves and successfully used as an adsorbent for phenol removal from aqueous solution. The prepared activated carbon was characterized by FTIR and SEM analysis. Three factors (temperature, pH and adsorbent dose) were screened to study their effect on the adsorption of phenol by R. stricta activated carbon, and a 2³ full factorial design was employed to optimize the adsorption process. The removal of phenol by adsorption onto R. stricta carbon reached 85% at a solution pH of 3, an adsorbent dose of 0.5 g/l and a temperature of 45 °C. Temperature and adsorbent dose had a positive effect on the phenol removal percentage when changed from low to high, and the opposite was true for the initial solution pH. The main-effects results showed that the three studied factors significantly affected phenol removal by R. stricta carbon at the 95% confidence level. The interaction effects revealed that the interaction between temperature and pH had the most significant effect on the phenol removal percentage. The present work showed that carbon prepared from a low-cost, natural material, R. stricta leaves, is a good adsorbent for the removal of phenol from aqueous solution.
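
Main and interaction effects from a 2³ design like this one are simple contrasts. The sketch below uses made-up removal percentages, chosen only to reproduce the signs reported above (positive temperature and dose effects, negative pH effect); it is not the study's data.

```python
from itertools import product

def factorial_effects(responses):
    """Estimate main and two-factor interaction effects from a 2^3 full
    factorial. `responses` maps each (x1, x2, x3) run in coded units
    (-1/+1) to the measured response; effect = contrast / (N/2)."""
    labels = ["T", "pH", "dose"]          # temperature, pH, adsorbent dose
    runs = list(product([-1, 1], repeat=3))
    effects = {}
    for i, name in enumerate(labels):
        effects[name] = sum(r[i] * responses[r] for r in runs) / 4
    for i, j in ((0, 1), (0, 2), (1, 2)):
        name = labels[i] + "*" + labels[j]
        effects[name] = sum(r[i] * r[j] * responses[r] for r in runs) / 4
    return effects

# Hypothetical phenol-removal (%) data for the 8 runs
y = {(-1, -1, -1): 52, (1, -1, -1): 70, (-1, 1, -1): 40, (1, 1, -1): 55,
     (-1, -1, 1): 66, (1, -1, 1): 85, (-1, 1, 1): 50, (1, 1, 1): 68}
print(factorial_effects(y))
```

An effect is the average change in removal when a factor moves from its low to its high level; the same contrasts, divided by their standard error, give the significance tests the abstract refers to.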

  18. Thermal and optical design analyses, optimizations, and experimental verification for a novel glare-free LED lamp for household applications.

    PubMed

    Khan, M Nisa

    2015-07-20

    Light-emitting diode (LED) technologies are undergoing very fast developments to enable household lamp products with improved energy efficiency and lighting properties at lower cost. Although many LED replacement lamps are claimed to provide similar or better lighting quality at lower electrical wattage compared with general-purpose incumbent lamps, certain lighting characteristics important to human vision are neglected in this comparison, which include glare-free illumination and omnidirectional or sufficiently broad light distribution with adequate homogeneity. In this paper, we comprehensively investigate the thermal and lighting performance and trade-offs for several commercial LED replacement lamps for the most popular Edison incandescent bulb. We present simulations and analyses for thermal and optical performance trade-offs for various LED lamps at the chip and module granularity levels. In addition, we present a novel, glare-free, and production-friendly LED lamp design optimized to produce very desirable light distribution properties as demonstrated by our simulation results, some of which are verified by experiments.

  19. The Box-Behnken experimental design for the optimization of the electrocatalytic treatment of wastewaters with high concentrations of phenol and organic matter.

    PubMed

    GilPavas, Edison; Betancourt, Alejandra; Angulo, Mónica; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Angel

    2009-01-01

    In this work, the Box-Behnken experimental design (BBD) was applied for the optimization of the parameters of the electrocatalytic degradation of wastewaters resulting from a phenolic resins industry located in the suburbs of Medellin (Colombia). The direct and the oxidant-assisted electro-oxidation experiments were carried out in a laboratory-scale batch cell reactor, with monopolar configuration, and electrodes made of graphite (anode) and titanium (cathode). A multifactorial experimental design was proposed, including the following experimental variables: initial phenol concentration, conductivity, and pH. The direct electro-oxidation process reached ca. 88% of phenol degradation, 38% of mineralization (TOC), 52% of Chemical Oxygen Demand (COD) degradation, and an increase in water biodegradability of 13%. The synergetic effect of the electro-oxidation process and the respective oxidant agent (Fenton reactant, potassium permanganate, or sodium persulfate) led to a significant increase in the rate of the degradation process. At the optimized variable values, it was possible to reach ca. 99% of phenol degradation, 80% of TOC and 88% of COD degradation. A kinetic study was accomplished, which included the identification of the intermediate compounds generated during the oxidation process.
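
A Box-Behnken design places runs at the midpoints of the edges of the factor cube rather than at its corners, which keeps every run away from extreme factor combinations. A minimal sketch of the coded runs for three factors (the run counts here are illustrative, not taken from the paper):

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Coded runs of a Box-Behnken design: for each pair of factors,
    all +/-1 combinations with the remaining factors held at 0,
    plus n_center center points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            runs.append(tuple(pt))
    runs += [(0,) * k] * n_center
    return runs

# Three factors as in the study: initial phenol concentration, conductivity, pH
design = box_behnken(3)
print(len(design))  # 3 pairs x 4 combos + 3 center = 15 runs
```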

  20. OPTIMAL NETWORK TOPOLOGY DESIGN

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
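
The search strategy described above, enumerating candidate link subsets in increasing total cost and stopping at the first one that forms an acceptable network, can be sketched as follows. This is an exhaustive Python illustration of the idea rather than the original PASCAL program: the acceptability test is reduced to simple connectivity, and the link data are hypothetical:

```python
from itertools import combinations

def connects_all(combo, stations):
    """Acceptability check: do the chosen links connect every station?
    Uses a simple union-find over the link endpoints."""
    parent = {s: s for s in stations}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, a, b in combo:
        parent[find(a)] = find(b)
    return len({find(s) for s in stations}) == 1

def cheapest_acceptable_network(links, stations):
    """Enumerate link subsets in increasing total cost; the first subset
    that connects all stations is the cost-optimal topology.

    links: list of (cost, station_a, station_b) tuples.
    """
    subsets = []
    for r in range(1, len(links) + 1):
        for combo in combinations(links, r):
            subsets.append((sum(c for c, _, _ in combo), combo))
    subsets.sort(key=lambda s: s[0])
    for cost, combo in subsets:
        if connects_all(combo, stations):
            return cost, combo
    return None

# Hypothetical feasible links, sorted by cost as the program requires
links = [(1, "A", "B"), (2, "B", "C"), (3, "C", "D"), (4, "A", "C")]
print(cheapest_acceptable_network(links, ["A", "B", "C", "D"])[0])  # 6
```

The original program generates subsets incrementally in cost order instead of materializing and sorting all of them, which matters when the component set is large.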

  1. Optimization of the separation of a group of triazine herbicides by micellar capillary electrophoresis using experimental design and artificial neural networks.

    PubMed

    Frías-García, Sergio; Sánchez, M Jesús; Rodríguez-Delgado, Miguel Angel

    2004-04-01

    The micellar electrokinetic chromatography separation of a group of triazine compounds was optimized using a combination of experimental design (ED) and artificial neural networks (ANN). Different variables affecting separation were selected and used as input to the ANN. A chromatographic exponential function (CEF) combining resolution and separation time was used as output to obtain optimal separation conditions. An optimized buffer (19.3 mM sodium borate, 15.4 mM disodium hydrogen phosphate, 28.4 mM SDS, pH 9.45, and 7.5% 1-propanol) provides the best separation with regard to resolution and separation time. In addition, an analysis of variance (ANOVA) approach to the MEKC separation, using the same variables, was developed, and the superior capability of the combined ED-ANN approach for optimizing the analytical methodology was demonstrated by comparing the results obtained from both approaches. In order to validate the proposed method, analytical parameters such as repeatability and day-to-day precision were calculated. Finally, the optimized method was applied to the determination of these compounds in spiked and nonspiked ground water samples.

  2. Optimal designs for comparing curves

    PubMed Central

    Dette, Holger; Schorning, Kirsten

    2016-01-01

    We consider the optimal design problem for a comparison of two regression curves, which is used to establish the similarity between the dose response relationships of two groups. An optimal pair of designs minimizes the width of the confidence band for the difference between the two regression functions. Optimal design theory (equivalence theorems, efficiency bounds) is developed for this non-standard design problem, and for some commonly used dose response models optimal designs are found explicitly. The results are illustrated in several examples modeling dose response relationships. It is demonstrated that the optimal pair of designs for the comparison of the regression curves is not the pair of the optimal designs for the individual models. In particular, it is shown that the use of the optimal designs proposed in this paper instead of commonly used “non-optimal” designs yields a reduction of the width of the confidence band by more than 50%. PMID:27340305

  3. Cr(VI) transport via a supported ionic liquid membrane containing CYPHOS IL101 as carrier: system analysis and optimization through experimental design strategies.

    PubMed

    Rodríguez de San Miguel, Eduardo; Vital, Xóchitl; de Gyves, Josefina

    2014-05-30

    Chromium(VI) transport through a supported liquid membrane (SLM) system containing the commercial ionic liquid CYPHOS IL101 as carrier was studied. A reducing stripping phase was used as a means to increase recovery and to simultaneously transform Cr(VI) into a less toxic residue for disposal or reuse. General functions describing the time-dependent evolution of the metal fractions in the cell compartments were defined and used in data evaluation. An experimental design strategy, using factorial and central-composite design matrices, was applied to assess the influence of the extractant, NaOH and citrate concentrations in the different phases, while a desirability function scheme allowed the synchronized optimization of depletion and recovery of the analyte. The mechanism of chromium permeation was analyzed and discussed to contribute to the understanding of the transfer process. The influence of metal concentration was evaluated as well. The presence of different interfering ions (Ca(2+), Al(3+), NO3(-), SO4(2-), and Cl(-)) at several Cr(VI):interfering ion ratios was studied through the use of a Plackett and Burman experimental design matrix. Under optimized conditions, 90% recovery was obtained from a feed solution containing 7 mg L(-1) of Cr(VI) in 0.01 mol dm(-3) HCl medium after 5 h of pertraction.

  4. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used to determine the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess in question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
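
Of the screening designs listed above, the Plackett-Burman design is the most compact: N runs screen up to N-1 two-level factors. A sketch of the classical 12-run design built from the standard published generator row (the helper function name is ours):

```python
def plackett_burman_12():
    """12-run Plackett-Burman screening design for up to 11 two-level
    factors, built from the classical cyclic generator row (Plackett &
    Burman, 1946) plus a final all-minus run."""
    gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]  # cyclic shifts
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()
print(len(design), len(design[0]))  # 12 11
# every factor column is balanced: six +1s and six -1s
print(all(sum(col) == 0 for col in zip(*design)))  # True
```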

  5. Optimization of a pressurized liquid extraction method by experimental design methodologies for the determination of fluoroquinolone residues in infant foods by liquid chromatography.

    PubMed

    Rodriguez, E; Villoslada, F Navarro; Moreno-Bondi, M C; Marazuela, M D

    2010-01-29

    In the present study, we have developed a method based on pressurized liquid extraction (PLE) and liquid chromatography with fluorescence detection (LC-FLD) for the determination of residues of fluoroquinolones (FQs) in infant food products. PLE extraction was optimized by the application of experimental design methodologies. Initially, a fractional factorial design (FFD) was used to screen the significance of four extraction parameters: solvent composition, temperature, pressure and number of cycles. The most significant factors, identified by ANOVA, were the solvent composition, temperature and pressure, which were further optimized with the aid of a face-centred design (FCD) and the desirability function. The optimized PLE operating conditions were as follows: ACN/o-phosphoric acid 50 mM pH 3.0 (80:20, v/v), 80 °C, 2000 psi and three extraction cycles of 5 min. Under these conditions, recoveries of the target FQs varied between 69% and 107% with RSDs below 9%. The whole method was validated according to the Commission Decision 2002/657/EC guidelines. The proposed method has been successfully applied to the analysis of different infant food products bought in local supermarkets and pharmacies. The results showed the presence of residues of enrofloxacin in a non-compliant baby food sample corresponding to a chicken-based formulation, which were also confirmed and quantified by LC-MS/MS analysis. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  6. Dispersive liquid-liquid microextraction of quinolones in porcine blood: Optimization of extraction procedure and CE separation using experimental design.

    PubMed

    Vera-Candioti, Luciana; Teglia, Carla M; Cámara, María S

    2016-10-01

    A dispersive liquid-liquid microextraction procedure was developed to extract nine fluoroquinolones from porcine blood, six of which were quantified using a univariate calibration method. Extraction parameters, including the type and volume of the extraction and dispersive solvents and the pH, were optimized using full factorial and central composite designs. The optimum extraction parameters were a mixture of 250 μL dichloromethane (extraction solvent) and 1250 μL ACN (dispersive solvent) in 500 μL of porcine blood adjusted to pH 6.80. After shaking and centrifugation, the upper phase was transferred to a glass tube and evaporated under a N2 stream. The residue was resuspended in 50 μL of water-ACN (70:30, v/v) and analyzed by a CE method with DAD, under optimum separation conditions. Consequently, a tenfold enrichment factor can potentially be reached with the pretreatment, taking into account the relationship between the initial sample volume and the final extract volume. Optimum separation conditions were as follows: BGE solution containing equal amounts of sodium borate (Na2B4O7) and di-sodium hydrogen phosphate (Na2HPO4) with a final concentration of 23 mmol/L, containing 0.2% of poly(diallyldimethylammonium chloride) and adjusted to pH 7.80. Separation was performed applying a negative potential of 25 kV, the cartridge was maintained at 25.0°C and the electropherograms were recorded at 275 nm for 4 min. Hydrodynamic injection was performed at the cathode by applying a pressure of 50 mbar for 10 s.

  7. Application of the statistical experimental design to optimize mine-impacted water (MIW) remediation using shrimp-shell.

    PubMed

    Núñez-Gómez, Dámaris; Alves, Alcione Aparecida de Almeida; Lapolli, Flavio Rubens; Lobo-Recio, María A

    2017-01-01

    Mine-impacted water (MIW) is one of the most serious mining problems and has a high negative impact on water resources and aquatic life. The main characteristics of MIW are a low pH (between 2 and 4) and high concentrations of SO4(2-) and metal ions (Cd, Cu, Ni, Pb, Zn, Fe, Al, Cr, Mn, Mg, etc.), many of which are toxic to ecosystems and human life. Shrimp shell was selected as a MIW treatment agent because it is a low-cost metal-sorbent biopolymer with a high chitin content and contains calcium carbonate, an acid-neutralizing agent. To determine the best metal-removal conditions, a study using statistical planning was carried out. Thus, the objective of this work was to identify the degree of influence and dependence of the shrimp-shell content for the removal of Fe, Al, Mn, Co, and Ni from MIW. In this study, a central composite rotational experimental design (CCRD, 2(2)) with a quadruplicate at the midpoint was used to evaluate the joint influence of two formulation variables: agitation and shrimp-shell content. The statistical results showed a significant influence (p < 0.05) of the agitation variable on Fe and Ni removal (linear and quadratic form, respectively) and of the shrimp-shell content variable on Mn (linear form) and Al and Co (linear and quadratic form) removal. Analysis of variance (ANOVA) for Al, Co, and Ni removal showed that the model is valid at the 95% confidence interval and that no adjustment was needed within the evaluated ranges of agitation (0-251.5 rpm) and shrimp-shell content (1.2-12.8 g L(-1)). The model required adjustment to the 90% and 75% confidence intervals for Fe and Mn removal, respectively. In terms of pollutant removal efficiency, the best experimental values of the variables considered were determined to be 188 rpm and 9.36 g L(-1) of shrimp shells. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Optimization of the combined ultrasonic assisted/adsorption method for the removal of malachite green by gold nanoparticles loaded on activated carbon: Experimental design

    NASA Astrophysics Data System (ADS)

    Roosta, M.; Ghaedi, M.; Shokri, N.; Daneshfar, A.; Sahraei, R.; Asghari, A.

    2014-01-01

    The present study applied experimental design optimization to the removal of malachite green (MG) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using different techniques such as FESEM, TEM, BET, and UV-vis measurements. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time on MG removal were studied using central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models showed the suitability and applicability of the Langmuir model. The applicability of kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models was tested for the experimental data, and the pseudo-second-order and intraparticle diffusion models control the kinetics of the adsorption process. A small amount of the proposed adsorbent (0.015 g) is applicable for the successful removal of MG (RE > 99%) in a short time (4.4 min) with high adsorption capacity (140-172 mg g-1).

  9. Optimization of the combined ultrasonic assisted/adsorption method for the removal of malachite green by gold nanoparticles loaded on activated carbon: experimental design.

    PubMed

    Roosta, M; Ghaedi, M; Shokri, N; Daneshfar, A; Sahraei, R; Asghari, A

    2014-01-24

    The present study applied experimental design optimization to the removal of malachite green (MG) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using different techniques such as FESEM, TEM, BET, and UV-vis measurements. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time on MG removal were studied using central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models showed the suitability and applicability of the Langmuir model. The applicability of kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models was tested for the experimental data, and the pseudo-second-order and intraparticle diffusion models control the kinetics of the adsorption process. A small amount of the proposed adsorbent (0.015 g) is applicable for the successful removal of MG (RE > 99%) in a short time (4.4 min) with high adsorption capacity (140-172 mg g(-1)). Copyright © 2013. Published by Elsevier B.V.
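
The desirability function (DF) approach used in this study maps each response onto a 0-1 scale and combines the scales by a geometric mean, so a single number can be maximized over the design region. A minimal sketch of the Derringer-Suarez-style larger-is-better case, with hypothetical response values and targets (not the paper's):

```python
def desirability_max(y, y_min, y_max, weight=1.0):
    """Larger-is-better desirability: 0 below y_min, 1 above y_max,
    and a power ramp in between."""
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return ((y - y_min) / (y_max - y_min)) ** weight

def overall_desirability(ds):
    """Combine individual desirabilities by their geometric mean."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative responses: removal efficiency (%) and adsorption capacity (mg/g)
d1 = desirability_max(99.0, 80.0, 100.0)    # 0.95
d2 = desirability_max(150.0, 100.0, 180.0)  # 0.625
print(round(overall_desirability([d1, d2]), 3))  # 0.771
```

Because the combination is a geometric mean, any single response with zero desirability drives the overall score to zero, which is why the DF approach favors balanced optima.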

  10. Optimal optoacoustic detector design

    NASA Technical Reports Server (NTRS)

    Rosengren, L.-G.

    1975-01-01

    Optoacoustic detectors are used to measure pressure changes occurring in enclosed gases, liquids, or solids being excited by intensity or frequency modulated electromagnetic radiation. Radiation absorption spectra, collisional relaxation rates, substance compositions, and reactions can be determined from the time behavior of these pressure changes. Very successful measurements of gaseous air pollutants have, for instance, been performed by using detectors of this type together with different lasers. The measuring instrument consisting of radiation source, modulator, optoacoustic detector, etc. is often called spectrophone. In the present paper, a thorough optoacoustic detector optimization analysis based upon a review of its theory of operation is introduced. New quantitative rules and suggestions explaining how to design detectors with maximal pressure responsivity and over-all sensitivity and minimal background signal are presented.

  11. Removal of Mefenamic acid from aqueous solutions by oxidative process: Optimization through experimental design and HPLC/UV analysis.

    PubMed

    Colombo, Renata; Ferreira, Tanare C R; Ferreira, Renato A; Lanza, Marcos R V

    2016-02-01

    Mefenamic acid (MEF) is a non-steroidal anti-inflammatory drug indicated for relief of mild to moderate pain, and for the treatment of primary dysmenorrhea. The presence of MEF in raw and sewage waters has been detected worldwide at concentrations exceeding the predicted no-effect concentration. In this study, using experimental designs, different oxidative processes (H2O2, H2O2/UV, Fenton and photo-Fenton) were simultaneously evaluated for MEF degradation efficiency. The influence and interaction effects of the most important variables in the oxidative process (concentration and addition mode of hydrogen peroxide, concentration and type of catalyst, pH, reaction period and presence/absence of light) were investigated. The parameters were determined based on the maximum efficiency, to save time and minimize the consumption of reagents. According to the results, the photo-Fenton process is the best procedure to remove the drug from water. A reaction mixture containing 1.005 mmol L(-1) of ferrioxalate and 17.5 mmol L(-1) of hydrogen peroxide added at the start of the reaction, at a pH of 6.1 and with 60 min of degradation, was the most efficient, promoting 95% MEF removal. The development and validation of a rapid and efficient qualitative and quantitative HPLC/UV methodology for detecting this pollutant in aqueous solution is also reported. The method can be applied in the quality control of water that is generated and/or treated in municipal or industrial wastewater treatment plants.

  12. [Optimization of the nutrient medium makeup for the biosynthesis of bleomycine antibiotic using a mathematical method of experimental design].

    PubMed

    Korobkova, T P; Maksimova, T S; Ol'khovatova, O L; Iurina, M S; Zenkova, V A

    1978-12-01

    The fermentation medium for bleomycin biosynthesis was optimized with the help of a mathematical method for experiment modelling. With the use of the schemes of orthogonal latin squares the optimal concentrations of the sources of nitrogen, carbon and mineral salts were determined and the negative effect of cupric sulphate on the antibiotic biosynthesis was shown. The antibiotic production on the developed medium was 3.7 times higher than that on the initial medium.
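
The Latin-square idea behind the medium optimization above is that each level of one factor is paired exactly once with each level of another, so n factors at n levels can be screened in n² runs rather than nⁿ. A sketch of a single cyclic Latin square; constructing mutually orthogonal pairs of squares, as the study's scheme requires, is a further step:

```python
def latin_square(n):
    """n x n Latin square by cyclic shifts: each symbol appears exactly
    once in every row and once in every column."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

sq = latin_square(4)
for row in sq:
    print(row)
# every row and every column is a permutation of 0..3
print(all(sorted(col) == [0, 1, 2, 3] for col in zip(*sq)))  # True
```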

  13. Structural Optimization in automotive design

    NASA Technical Reports Server (NTRS)

    Bennett, J. A.; Botkin, M. E.

    1984-01-01

    Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.

  14. Isolation, identification and characterization of a novel Rhodococcus sp. strain in biodegradation of tetrahydrofuran and its medium optimization using sequential statistics-based experimental designs.

    PubMed

    Yao, Yanlai; Lv, Zhenmei; Min, Hang; Lv, Zhenhua; Jiao, Huipeng

    2009-06-01

    Statistics-based experimental designs were applied to optimize the culture conditions for tetrahydrofuran (THF) degradation by a newly isolated Rhodococcus sp. YYL that tolerates high THF concentrations. Single-factor experiments were undertaken to determine the optimum range of each of four factors (initial pH and concentrations of K(2)HPO(4).3H(2)O, NH(4)Cl and yeast extract), and these factors were subsequently optimized using response surface methodology. The Plackett-Burman design was used to identify three trace elements (Mg(2+), Zn(2+) and Fe(2+)) that significantly increased the THF degradation rate. The optimum conditions were found to be: 1.80 g/L NH(4)Cl, 0.81 g/L K(2)HPO(4).3H(2)O, 0.06 g/L yeast extract, 0.40 g/L MgSO(4).7H(2)O, 0.006 g/L ZnSO(4).7H(2)O, 0.024 g/L FeSO(4).7H(2)O, and an initial pH of 8.26. Under these optimized conditions, the maximum THF degradation rate of Rhodococcus sp. YYL increased to 137.60 mg THF h(-1) g(-1) dry weight, nearly five times that of the previously described THF-degrading Rhodococcus strain.

  15. Experimental design and optimization of leaching process for recovery of valuable chemical elements (U, La, V, Mo, Yb and Th) from low-grade uranium ore.

    PubMed

    Zakrzewska-Koltuniewicz, Grażyna; Herdzik-Koniecko, Irena; Cojocaru, Corneliu; Chajduk, Ewelina

    2014-06-30

    The paper deals with the experimental design and optimization of the leaching process of uranium and associated metals from low-grade Polish ores. The chemical elements of interest for extraction from the ore were U, La, V, Mo, Yb and Th. Sulphuric acid was used as the leaching reagent. Based on the design of experiments, second-order regression models were constructed to approximate the leaching efficiency of the elements. Graphical illustrations using 3-D surface plots were employed to identify the main, quadratic and interaction effects of the factors. A multi-objective optimization method based on the desirability approach was applied in this study. The optimum conditions were determined as P=5 bar, T=120 °C and t=90 min. Under these optimal conditions, the overall extraction performance is 81.43% (for U), 64.24% (for La), 98.38% (for V), 43.69% (for Yb), 76.89% (for Mo) and 97.00% (for Th).
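
Second-order regression models like those fitted above are quadratic in the coded factors, so along any single factor the fitted response has a closed-form stationary point at t* = -b1/(2 b2). A minimal sketch with hypothetical coefficients, not the paper's fitted values:

```python
def quadratic_peak(b0, b1, b2):
    """Stationary point of a one-factor second-order model
    y = b0 + b1*t + b2*t**2 (a maximum when b2 < 0)."""
    t_star = -b1 / (2.0 * b2)
    return t_star, b0 + b1 * t_star + b2 * t_star ** 2

# Hypothetical coefficients for leaching efficiency vs. temperature (coded units)
t_opt, y_opt = quadratic_peak(75.0, 8.0, -4.0)
print(t_opt, y_opt)  # 1.0 79.0
```

With several factors, the same idea generalizes to solving the linear system given by the vanishing gradient, which is what desirability-based optimizers do internally over the fitted surfaces.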

  16. Thermospray flame furnace-AAS determination of copper after on-line sorbent preconcentration using a system optimized by experimental designs.

    PubMed

    Tarley, César Ricardo Teixeira; Figueiredo, Eduardo da Costa; Matos, Geraldo Domingues

    2005-11-01

    The present paper describes the on-line coupling of a flow-injection system to a new technique, thermospray flame furnace-AAS (TS-FF-AAS), for the preconcentration and determination of copper in water samples. Copper was preconcentrated onto polyurethane foam (PUF) complexed with ammonium O,O-diethyldithiophosphate (DDTP), while elution was performed using 80% (v/v) ethanol. An experimental design for optimizing the copper preconcentration system was established using a full factorial (2(4)) design without replicates for screening and a Doehlert design for optimization, studying four variables: sample pH, DDTP concentration, presence of a coil and the sampling flow rate. The results obtained from the full factorial design and based on a Pareto chart indicate that only the pH and the DDTP concentration, as well as their interaction, exert influence on the system within a 95% confidence level. The proposed method provided a 65-fold preconcentration factor, thus notably improving the detectability of TS-FF-AAS. The detection limit was 0.22 microg/dm3, and the precision, expressed as the relative standard deviation (RSD) for eight independent determinations, was 2.7 and 1.1 for copper solutions containing 5 and 30 microg/dm3, respectively. The procedure was successfully applied to copper determination in water samples.

  17. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, Darrell; Curtis, Andrew

    2011-08-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms. This paper examines the influence of the NFL theorems on linearized statistical experimental design (SED). We consider four design algorithms with three different design objective functions to examine their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent to the study of transverse isotropy in many disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. We discuss differences in the performance of each design algorithm, providing a guideline for selecting design algorithms for other problems. As a by-product we demonstrate and discuss the principle of diminishing returns in SED, namely, that the value of experimental design decreases with experiment size. Another outcome of this study is a simple rule-of-thumb for prescribing optimal experiments for ellipse fitting, which bypasses the computational expense of SED. This is used to define a template for optimizing survey designs, under simple assumptions, for Amplitude Variations with Azimuth and Offset (AVAZ) seismics in the specialized problem of fracture characterization, such as is of interest in the petroleum industry. Finally, we discuss the scope of our conclusions for the NFL theorems as they apply to nonlinear and Bayesian SED.
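
Linearized SED of the kind studied above typically scores a candidate experiment through a function of its linearized design matrix, for example the D-criterion det(AᵀA). The sketch below uses an assumed three-parameter azimuthal model, not the paper's actual parameterization, to show why evenly spread observation angles outperform clustered ones:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def d_criterion(angles):
    """D-optimality criterion det(A^T A) for a linear azimuthal model with
    rows g(theta) = [1, cos(2*theta), sin(2*theta)] (an assumed
    parameterization for ellipse fitting, chosen for illustration)."""
    rows = [[1.0, math.cos(2 * t), math.sin(2 * t)] for t in angles]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    return det3(ata)

# Evenly spread azimuths beat clustered ones under the D-criterion
spread = [k * math.pi / 6 for k in range(6)]
clustered = [k * math.pi / 60 for k in range(6)]
print(d_criterion(spread) > d_criterion(clustered))  # True
```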

  18. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  19. Ocean power technology design optimization

    DOE PAGES

    van Rij, Jennifer; Yu, Yi-Hsiang; Edwards, Kathleen; ...

    2017-07-18

    For this study, the National Renewable Energy Laboratory (NREL) and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. Finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. The design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  20. Optimization and validation of a HPLC method for simultaneous determination of aflatoxin B1, B2, G1, G2, ochratoxin A and zearalenone using an experimental design.

    PubMed

    Rahmani, Anosheh; Selamat, Jinap; Soleimany, Farhang

    2011-01-01

    A reversed-phase HPLC optimization strategy is presented for investigating the separation and retention behavior of aflatoxin B1, B2, G1, G2, ochratoxin A and zearalenone, simultaneously. A fractional factorial design (FFD) was used to screen the significance effect of seven independent variables on chromatographic responses. The independent variables used were: (X1) column oven temperature (20-40°C), (X2) flow rate (0.8-1.2 ml/min), (X3) acid concentration in aqueous phase (0-2%), (X4) organic solvent percentage at the beginning (40-50%), and (X5) at the end (50-60%) of the gradient mobile phase, as well as (X6) ratio of methanol/acetonitrile at the beginning (1-4) and (X7) at the end (0-1) of gradient mobile phase. Responses of chromatographic analysis were resolution of mycotoxin peaks and HPLC run time. A central composite design (CCD) using response surface methodology (RSM) was then carried out for optimization of the most significant factors by multiple regression models for response variables. The proposed optimal method using 40°C oven temperature, 1 ml/min flow rate, 0.1% acetic acid concentration in aqueous phase, 41% organic phase (beginning), 60% organic phase (end), 1.92 ratio of methanol to acetonitrile (beginning) and 0.2 ratio (end) for X1-X7, respectively, showed good prediction ability between the experimental data and predictive values throughout the studied parameter space. Finally, the optimized method was validated by measuring the linearity, sensitivity, accuracy and precision parameters, and has been applied successfully to the analysis of spiked cereal samples.
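The screening step above uses a two-level fractional factorial design, which studies extra factors in fewer runs by aliasing them with interaction columns. A minimal sketch for 4 factors in 8 runs (the study above used 7 factors, which would need additional generators):

```python
from itertools import product

# Half-fraction 2^(k-1) design: the last factor is assigned to the
# product (interaction) of the others, halving the number of runs.
# Illustrative sketch, not the paper's actual design matrix.

def half_fraction(k):
    """2^(k-1) runs; last column aliased with the product of the rest."""
    runs = []
    for levels in product([-1, 1], repeat=k - 1):
        d = 1
        for v in levels:
            d *= v                      # generator: last column = product
        runs.append(list(levels) + [d])
    return runs

design = half_fraction(4)   # 8 runs instead of 16
```

Each column is balanced (equal numbers of -1 and +1), which is what lets main effects be estimated independently.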

  1. Optimal experimental design for filter exchange imaging: Apparent exchange rate measurements in the healthy brain and in intracranial tumors

    PubMed Central

    Szczepankiewicz, Filip; van Westen, Danielle; Englund, Elisabet; C Sundgren, Pia; Lätt, Jimmy; Ståhlberg, Freddy; Nilsson, Markus

    2016-01-01

    Purpose Filter exchange imaging (FEXI) is sensitive to the rate of diffusional water exchange, which depends, e.g., on the cell membrane permeability. The aim was to optimize and analyze the ability of FEXI to infer differences in the apparent exchange rate (AXR) in the brain between two populations. Methods A FEXI protocol was optimized for minimal measurement variance in the AXR. The AXR variance was investigated by test‐retest acquisitions in six brain regions in 18 healthy volunteers. Preoperative FEXI data and postoperative microphotos were obtained in six meningiomas and five astrocytomas. Results Protocol optimization reduced the coefficient of variation of AXR by approximately 40%. Test‐retest AXR values were heterogeneous across normal brain regions, from 0.3 ± 0.2 s−1 in the corpus callosum to 1.8 ± 0.3 s−1 in the frontal white matter. According to analysis of statistical power, in all brain regions except one, group differences of 0.3–0.5 s−1 in the AXR can be inferred using 5 to 10 subjects per group. An AXR difference of this magnitude was observed between meningiomas (0.6 ± 0.1 s−1) and astrocytomas (1.0 ± 0.3 s−1). Conclusions With the optimized protocol, FEXI has the ability to infer relevant differences in the AXR between two populations for small group sizes. Magn Reson Med 77:1104–1114, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial‐NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial and no modifications or adaptations are made. PMID:26968557

  2. Optimal design and experimental verification of a magnetically actuated optical image stabilization system for cameras in mobile phones

    NASA Astrophysics Data System (ADS)

    Chiu, Chi-Wei; Chao, Paul C.-P.; Kao, Nicholas Y.-Y.; Young, Fu-Kuan

    2008-04-01

    A novel miniaturized optical image stabilizer (OIS) is proposed, which is installed inside the limited inner space of a mobile phone. The relation between the VCM electromagnetic force inside the OIS and the applied voltage is first established via an equivalent circuit and further validated by a finite element model. Various dimensions of the VCMs are optimized by a genetic algorithm (GA) to maximize sensitivities while also achieving high uniformity of the magnetic flux intensity.
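The genetic-algorithm step can be sketched in a few lines: evolve a population of candidate dimension vectors, keep the fittest half, and breed the rest by crossover and mutation. The objective below is a hypothetical smooth stand-in; the paper's actual objective comes from the magnetic circuit and finite element models:

```python
import random

# Toy GA in the spirit of the dimension optimization above.
# sensitivity() is an illustrative assumption with a known optimum
# at (3, 1), not the paper's magnetic model.

random.seed(0)

def sensitivity(dims):
    x, y = dims
    return -((x - 3) ** 2 + (y - 1) ** 2)

def evolve(pop_size=30, generations=60):
    pop = [(random.uniform(0, 5), random.uniform(0, 5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sensitivity, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple((ai + bi) / 2 + random.gauss(0, 0.1)
                          for ai, bi in zip(a, b))  # crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=sensitivity)

best = evolve()
```

Because the elite half survives each generation, the best candidate never gets worse; mutation provides the local refinement.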

  3. Design Optimization Toolkit: Users' Manual

    SciTech Connect

    Aguilo Valentin, Miguel Alejandro

    2014-07-01

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk package provides a range of solution methods suited for gradient- and nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed with a flexible user interface to allow easy access to its solution methods from external engineering software packages; this inherent flexibility makes DOTk minimally intrusive to those packages. As part of this flexibility, DOTk provides an easy-to-use MATLAB interface that enables users to call its solution methods directly from the MATLAB command window.

  4. D-optimal experimental approach for designing topical microemulsion of itraconazole: Characterization and evaluation of antifungal efficacy against a standardized Tinea pedis infection model in Wistar rats.

    PubMed

    Kumar, Neeraj; Shishu

    2015-01-25

    The study aims to statistically develop a microemulsion system of an antifungal agent, itraconazole, to overcome the shortcomings and adverse effects of currently used therapies. Following preformulation studies such as solubility determination, component selection and pseudoternary phase diagram construction, a 3-factor D-optimal mixture design was used to optimize a microemulsion with desirable formulation characteristics. The factors studied for sixteen experimental trials were the percent contents (w/w) of water, oil and surfactant, whereas the responses investigated were globule size, transmittance, drug skin retention and drug skin permeation in 6 h. The optimized microemulsion (OPT-ME) was incorporated in a Carbopol-based hydrogel to improve topical applicability. Physical characterization of the formulations was performed using particle size analysis, transmission electron microscopy, texture analysis and rheology behavior. Ex vivo studies carried out in Wistar rat skin showed that the optimized formulation enhanced drug skin retention and permeation in 6 h in comparison to a conventional cream and a Capmul 908P oil solution of itraconazole. The in vivo evaluation of the optimized formulation was performed using a standardized Tinea pedis model in Wistar rats, and the results of the pharmacodynamic study were obtained in terms of physical manifestations, fungal-burden score, histopathological profiles and oxidative stress. Rapid remission of Tinea pedis was observed in rats treated with the OPT-ME formulation in comparison to commercially available therapies (ketoconazole cream and oral itraconazole solution), indicating the superiority of the microemulsion hydrogel formulation over conventional approaches for treating superficial fungal infections. The formulation was stable for a period of twelve months under refrigeration and ambient temperature conditions.
All results, therefore, suggest that the OPT-ME can prove to be a promising and rapid alternative to conventional

  5. MultiSimplex and experimental design as chemometric tools to optimize a SPE-HPLC-UV method for the determination of eprosartan in human plasma samples.

    PubMed

    Ferreirós, N; Iriarte, G; Alonso, R M; Jiménez, R M

    2006-05-15

    A chemometric approach was applied for the optimization of the extraction and separation of the antihypertensive drug eprosartan from human plasma samples. The MultiSimplex program was used to optimize the HPLC-UV method due to the number of experimental and response variables to be studied. The measured responses were the corrected area, the separation of the eprosartan chromatographic peak from plasma interference peaks, and the retention time of the analyte. The use of an Atlantis dC18, 100 mm × 3.9 mm i.d. chromatographic column with 0.026% trifluoroacetic acid (TFA) in the organic phase and 0.031% TFA in the aqueous phase, an initial composition of 80% aqueous phase in the mobile phase, an acetonitrile steepness of 3% during the gradient elution mode with a flow rate of 1.25 mL/min, and a column temperature of 35 ± 0.2 °C allowed the separation of eprosartan and irbesartan (used as internal standard) from plasma endogenous compounds. In the solid phase extraction procedure, experimental design was used in order to achieve a maximum recovery percentage. First, the significant variables were chosen by way of a fractional factorial design; then, a central composite design was run to obtain the most adequate values of the significant variables. Thus, the extraction procedure for spiked human plasma samples was carried out using C8 cartridges, phosphate buffer pH 2 as conditioning agent, a drying step of 10 min, a washing step with methanol-phosphate buffer (20:80, v/v) and methanol as eluent. The developed SPE-HPLC-UV method allowed the separation and quantitation of eprosartan from human plasma samples with adequate resolution and a total analysis time of 1 h.

  6. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE PAGES

    Graf, Peter A.; Billups, Stephen

    2017-07-24

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that the 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.
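The Pareto-front view of DIRECT's "potentially optimal" cells mentioned above can be illustrated directly: over pairs of (cell size, best objective value), a cell is kept only if no other cell is simultaneously at least as large (global promise) and at least as good (local promise). A minimal sketch with made-up cells:

```python
# Illustrative sketch of the global/local Pareto front described in the
# abstract; the cell data are invented, and a full DIRECT implementation
# would also subdivide the selected cells.

def pareto_front(cells):
    """cells: list of (size, f_min). Keep the non-dominated ones."""
    front = []
    for size, f in cells:
        dominated = any(s2 >= size and f2 <= f and (s2, f2) != (size, f)
                        for s2, f2 in cells)
        if not dominated:
            front.append((size, f))
    return front

cells = [(1.0, 5.0), (0.5, 3.0), (0.25, 2.0), (0.5, 6.0), (0.125, 1.0)]
front = pareto_front(cells)
```

Here (0.5, 6.0) is dominated by (1.0, 5.0), which is both larger and has a lower objective value, so it is not subdivided.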

  7. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO RAY MIXTURE.

    EPA Science Inventory

    Risk assessors are becoming increasingly aware of the importance of assessing interactions between chemicals in a mixture. Most traditional designs for evaluating interactions are prohibitive when the number of chemicals in the mixture is large. However, evaluation of interacti...

  8. Optimal design of isotope labeling experiments.

    PubMed

    Yang, Hong; Mandy, Dominic E; Libourel, Igor G L

    2014-01-01

    Stable isotope labeling experiments (ILE) constitute a powerful methodology for estimating metabolic fluxes. An optimal label design for such an experiment is necessary to maximize the precision with which fluxes can be determined. But often, precision gained in the determination of one flux comes at the expense of the precision of other fluxes, and an appropriate label design therefore foremost depends on the question the investigator wants to address. One could liken ILE to shadows that metabolism casts on products. Optimal label design is the placement of the lamp; creating clear shadows for some parts of metabolism and obscuring others. An optimal isotope label design is influenced by: (1) the network structure; (2) the true flux values; (3) the available label measurements; and, (4) commercially available substrates. The first two aspects are dictated by nature and constrain any optimal design. The second two aspects are suitable design parameters. To create an optimal label design, an explicit optimization criterion needs to be formulated. This usually is a property of the flux covariance matrix, which can be augmented by weighting label substrate cost. An optimal design is found by using such a criterion as an objective function for an optimizer. This chapter uses a simple elementary metabolite units (EMU) representation of the TCA cycle to illustrate the process of experimental design of isotope labeled substrates.
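A common scalar criterion on the covariance matrix mentioned above is D-optimality: for a linear model y = Xβ the parameter covariance is proportional to (XᵀX)⁻¹, so a D-optimal design maximizes det(XᵀX). A minimal sketch for a two-parameter model, with invented candidate designs (not the chapter's EMU example):

```python
# D-optimality sketch: compare det(X^T X) for two candidate designs of
# the straight-line model y = b0 + b1 * x. Measurement locations are
# illustrative assumptions.

def xtx_det(xs):
    """det(X^T X) for the design matrix X = [[1, x] for x in xs]."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx          # 2x2 determinant

spread = [-1.0, -1.0, 1.0, 1.0]       # measurements at the extremes
clumped = [-0.1, 0.0, 0.0, 0.1]       # measurements near the center
```

Spreading the measurements maximizes the determinant and hence minimizes the joint parameter uncertainty, which is why optimizers drive designs toward informative substrate/measurement combinations.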

  9. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  10. Factorial design optimization of experimental variables in preconcentration of carbamates pesticides in water samples using solid phase extraction and liquid chromatography-electrospray-mass spectrometry determination.

    PubMed

    Latrous El Atrache, Latifa; Ben Sghaier, Rafika; Bejaoui Kefi, Bochra; Haldys, Violette; Dachraoui, Mohamed; Tortajada, Jeanine

    2013-12-15

    An experimental design was applied for the optimization of the extraction process of carbamate pesticides from surface water samples. Solid phase extraction (SPE) of carbamate compounds and their determination by liquid chromatography coupled to an electrospray mass spectrometry detector were considered. A two-level full factorial design 2^k was used for selecting the variables that affected the extraction procedure. Eluent and sample volumes were statistically the most significant parameters. These significant variables were optimized using a Doehlert matrix. The developed SPE method included 200 mg of C-18 sorbent, 143.5 mL of water sample and 5.5 mL of acetonitrile in the elution step. For validation of the technique, accuracy, precision, detection and quantification limits, linearity, sensitivity and selectivity were evaluated. Extraction recovery percentages of all the carbamates were above 90% with relative standard deviations (R.S.D.) in the range of 3-11%. The extraction method was selective, and the detection and quantification limits were between 0.1 and 0.5 µg L(-1), and 1 and 3 µg L(-1), respectively.
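The 2^k full factorial screening design above runs every combination of each factor at its low (-1) and high (+1) level; generating it is essentially a one-liner:

```python
from itertools import product

# Generate a two-level full factorial design 2^k: one run for every
# combination of k factors at coded levels -1 and +1.

def full_factorial(k):
    return [list(run) for run in product([-1, 1], repeat=k)]

design = full_factorial(3)   # 2^3 = 8 runs
```

For the study's screening of a handful of factors this stays small; the run count doubles per factor, which is why fractional designs are used when k grows.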

  11. Experimental design based response surface methodology optimization of ultrasonic assisted adsorption of safranin O by tin sulfide nanoparticle loaded on activated carbon.

    PubMed

    Roosta, M; Ghaedi, M; Daneshfar, A; Sahraei, R

    2014-03-25

    In this research, the adsorption rate of safranin O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions of 4 min sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L(-1) SO made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g(-1)). Good agreement between experimental and predicted data was observed. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows that the Langmuir model is a suitable model for evaluating the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption processes followed pseudo-second-order and intraparticle diffusion models well.
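The desirability-function step above combines several responses into a single score: each response is mapped to a value d in [0, 1] and the overall desirability is their geometric mean. A minimal larger-is-better sketch, with illustrative bounds (not the paper's):

```python
import math

# Derringer-style desirability sketch. The low/high bounds below are
# invented for illustration; only the response values echo the abstract.

def desirability(y, low, high):
    """0 below `low`, 1 above `high`, linear in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def overall(ds):
    return math.prod(ds) ** (1 / len(ds))   # geometric mean

d_removal = desirability(98.0, 80.0, 100.0)    # removal percentage
d_capacity = desirability(50.25, 20.0, 60.0)   # adsorption capacity
D = overall([d_removal, d_capacity])
```

Because the geometric mean is zero whenever any single d is zero, the optimizer cannot trade one response completely away for another.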

  12. Experimental design based response surface methodology optimization of ultrasonic assisted adsorption of safranin O by tin sulfide nanoparticle loaded on activated carbon

    NASA Astrophysics Data System (ADS)

    Roosta, M.; Ghaedi, M.; Daneshfar, A.; Sahraei, R.

    2014-03-01

    In this research, the adsorption rate of safranin O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions of 4 min sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L-1 SO made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g-1). Good agreement between experimental and predicted data was observed. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows that the Langmuir model is a suitable model for evaluating the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption processes followed pseudo-second-order and intraparticle diffusion models well.

  13. Experimental design for the optimization of the derivatization reaction in determining chlorophenols and chloroanisoles by headspace-solid-phase microextraction-gas chromatography/mass spectrometry.

    PubMed

    Morales, Rocío; Sarabia, Luis A; Sánchez, M Sagrario; Ortiz, M Cruz

    2013-06-28

    The paper shows some tools (their interpretation and usefulness) to optimize a derivatization reaction and to more easily interpret and visualize the effect that some experimental factors exert on several analytical responses of interest when these responses are in conflict. The entire proposed procedure has been applied in the optimization of equilibrium/extraction temperature and extraction time in the acetylation reaction of 2,4,6-trichlorophenol; 2,3,4,6-tetrachlorophenol, pentachlorophenol and 2,4,6-tribromophenol as internal standard (IS) in presence of 2,4,6-trichloroanisole, 2,3,5,6-tetrachloroanisole, pentachloroanisole and 2,4,6-trichloroanisole-d5 as IS. The procedure relies on the second order advantage of PARAFAC (parallel factor analysis), which allows the unequivocal identification and quantification, mandatory according to international regulations (in this paper the EU document SANCO/12495/2011), of the acetyl-chlorophenols and chloroanisoles that are determined by means of a HS-SPME-GC/MS automated device. The joint use of a PARAFAC decomposition and a Doehlert design provides the data to fit a response surface for each analyte. With the fitted surfaces, the overall desirability function and the Pareto-optimal front are used to describe the relation between the conditions of the derivatization reaction and the quantity extracted of each analyte. The visualization by using a parallel coordinates plot allows a deeper knowledge of the problem at hand as well as the wise selection of the conditions of the experimental factors for achieving specific goals for the responses. In the optimal experimental conditions (45°C and 25 min) the determination by means of an automated HS-SPME-GC/MS system is carried out. By using the regression line fitted between calculated and true concentrations, it has been checked that the procedure has neither proportional nor constant bias. 
The decision limits, CCα, for a false-positive probability α set to 0.05, vary between

  14. Optimization of experimental parameters based on the Taguchi robust design for the formation of zinc oxide nanocrystals by solvothermal method

    SciTech Connect

    Yiamsawas, Doungporn; Boonpavanitchakul, Kanittha; Kangwansupamonkon, Wiyong

    2011-05-15

    Research highlights: ► Taguchi robust design can be applied to study ZnO nanocrystal growth. ► Spherical-like and rod-like ZnO nanocrystals can be obtained from the solvothermal method. ► The [NaOH]/[Zn²⁺] ratio plays the most important role in the aspect ratio of the prepared ZnO. -- Abstract: Zinc oxide (ZnO) nanoparticles and nanorods were successfully synthesized by a solvothermal process. Taguchi robust design was applied to study the factors that result in stronger ZnO nanocrystal growth. The factors studied were the molar concentration ratio of sodium hydroxide to zinc acetate, the amount of polymer template, and the molecular weight of the polymer template. Transmission electron microscopy and X-ray diffraction were used to analyze the experimental results. The results show that the concentration ratio of sodium hydroxide to zinc acetate has the greatest effect on ZnO nanocrystal growth.
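Taguchi analysis ranks factors by their effect on a signal-to-noise (S/N) ratio computed from replicate responses; for a "larger-the-better" characteristic the ratio is -10·log10(mean(1/y²)). A minimal sketch with invented replicate data (not the paper's measurements):

```python
import math

# Larger-the-better Taguchi S/N ratio and a single factor's effect.
# The replicate values below are illustrative assumptions.

def sn_larger_better(ys):
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

# replicate measurements at a factor's low and high level
low_level = [2.0, 2.2]
high_level = [8.0, 8.5]

effect = sn_larger_better(high_level) - sn_larger_better(low_level)
```

In a full Taguchi study this effect is computed for each column of the orthogonal array, and the factor with the largest effect (here, the NaOH/zinc acetate ratio) dominates.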

  15. Experimental design for the formulation and optimization of novel cross-linked oilispheres developed for in vitro site-specific release of Mentha piperita oil.

    PubMed

    Sibanda, Wilbert; Pillay, Viness; Danckwerts, Michael P; Viljoen, Alvaro M; van Vuuren, Sandy; Khan, Riaz A

    2004-03-12

    A Plackett-Burman design was employed to develop and optimize a novel crosslinked calcium-aluminum-alginate-pectinate oilisphere complex as a potential system for the in vitro site-specific release of Mentha piperita oil, an essential oil used for the treatment of irritable bowel syndrome. The physicochemical and textural properties (dependent variables) of this complex were found to be highly sensitive to changes in the concentration of the polymers (0%-1.5% wt/vol), crosslinkers (0%-4% wt/vol), and crosslinking reaction times (0.5-6 hours) (independent variables). Particle size analysis indicated both unimodal and bimodal populations with the highest frequency of 2 mm oilispheres. Oil encapsulation ranged from 6 to 35 mg/100 mg oilispheres. Gravimetric changes of the crosslinked matrix indicated significant ion sequestration and loss in an exponential manner, while matrix erosion followed Higuchi's cube root law. Among the various measured responses, the total fracture energy was the most suitable optimization objective (R2 = 0.88, Durbin-Watson Index = 1.21%, Coefficient of Variation (CV) = 33.21%). The Lagrangian technique produced no significant differences (P > .05) between the experimental and predicted total fracture energy values (0.0150 vs 0.0107 J). Artificial Neural Networks, used as an alternative predictive tool for the total fracture energy, were highly accurate (final mean square error of the optimal network epoch approximately 0.02). Fused-coated optimized oilispheres produced a 4-hour lag phase followed by zero-order kinetics (n > 0.99), and analysis of the release data indicated that diffusion (Fickian constant k1 = 0.74 vs relaxation constant k2 = 0.02) was the predominant release mechanism.

  16. Determining optimal operation parameters for reducing PCDD/F emissions (I-TEQ values) from the iron ore sintering process by using the Taguchi experimental design.

    PubMed

    Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh

    2008-07-15

    This study is the first to use the Taguchi experimental design to identify the optimal operating condition for reducing polychlorinated dibenzo-p-dioxin and dibenzofuran (PCDD/F) formation during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt %), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (including sinter, hematite, and limonite), were selected for conducting experiments in a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant optimal combination (Wc = 6.5 wt %, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% by reference to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt %, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). Through ANOVA analysis, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs) (accounting for 74.7% of the total contribution of the four selected parameters). The resultant optimal combination also slightly enhanced both sinter productivity and sinter strength (30.3 t/m2/day and 72.4%, respectively) by reference to those obtained from the reference operating condition (29.9 t/m2/day and 72.2%, respectively). The above results further ensure the applicability of the obtained optimal combination for real-scale sinter production without interfering with its sinter productivity and sinter strength.
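The "percent contribution" reported above comes from the ANOVA partition of the total sum of squares: each factor's sum of squares divided by the total. A minimal sketch with made-up two-level data (the level means and run counts are invented, not the paper's):

```python
# ANOVA percent-contribution sketch for two two-level factors.
# All numbers are illustrative assumptions.

def sum_sq(level_means, n_per_level, grand_mean):
    """Between-level sum of squares for one factor."""
    return sum(n_per_level * (m - grand_mean) ** 2 for m in level_means)

grand = 10.0                               # grand mean of all runs
ss_wc = sum_sq([8.0, 12.0], 2, grand)      # "water content": big effect
ss_ps = sum_sq([9.5, 10.5], 2, grand)      # "suction pressure": small
total = ss_wc + ss_ps
contribution_wc = 100 * ss_wc / total      # percent contribution
```

A factor whose level means sit far from the grand mean captures most of the variance, which is how a single parameter can account for something like 74.7% of the total contribution.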

  17. An experimental design approach for optimizing polycyclic aromatic hydrocarbon analysis in contaminated soil by pyrolyser-gas chromatography-mass spectrometry.

    PubMed

    Buco, S; Moragues, M; Sergent, M; Doumenq, P; Mille, G

    2007-06-01

    Pyrolyser-gas chromatography-mass spectrometry was used to analyze polycyclic aromatic hydrocarbons in contaminated soil without preliminary extraction. Experimental research methodology was used to obtain optimal performance of the system. After determination of the main factors (desorption time, Curie point temperature, carrier gas flow), modeling was done using a Box-Behnken matrix. Study of the response surface led to factor values that optimize the experimental response and achieve better chromatographic results.

  18. Pathway Design, Engineering, and Optimization.

    PubMed

    Garcia-Ruiz, Eva; HamediRad, Mohammad; Zhao, Huimin

    2016-09-16

    The microbial metabolic versatility found in nature has inspired scientists to create microorganisms capable of producing value-added compounds. Many endeavors have been made to transfer and/or combine pathways, existing enzymes, or even engineered enzymes with new functions into tractable microorganisms to generate new metabolic routes for drug, biofuel, and specialty chemical production. However, the success of these pathways can be impeded by complications ranging from inherent pathway failure to cell perturbations. A wide variety of strategies have been developed to overcome these shortcomings. This chapter reviews the computational algorithms and experimental tools used to design efficient metabolic routes, and to construct and optimize biochemical pathways to produce chemicals of high interest.

  19. Optimization of ultrasound-assisted dispersive solid-phase microextraction based on nanoparticles followed by spectrophotometry for the simultaneous determination of dyes using experimental design.

    PubMed

    Asfaram, Arash; Ghaedi, Mehrorang; Goudarzi, Alireza

    2016-09-01

    A simple, low-cost and ultrasensitive method is described for the simultaneous preconcentration and determination of trace amounts of auramine-O and malachite green in aqueous media, based on accumulation on novel, lower-toxicity nanomaterials by an ultrasound-assisted dispersive solid-phase micro-extraction (UA-DSPME) procedure combined with spectrophotometry. The Mn-doped ZnS nanoparticles loaded on activated carbon were characterized by field emission scanning electron microscopy (FE-SEM), particle size distribution, X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analyses, and subsequently were used as a green and efficient material for dye accumulation. The contributions of experimental variables such as ultrasonic time, ultrasonic temperature, adsorbent mass, vortex time, ionic strength, pH and elution volume were optimized through experimental design, and the preconcentrated analytes were efficiently eluted by acetone. A preliminary Plackett-Burman design was applied to select the most significant factors and to give useful information about their main and interaction effects; the significant variables (ultrasonic time, adsorbent mass, elution volume and pH) were then optimized by central composite design combined with response surface analysis, and the optimum experimental conditions were set at pH 8.0, 1.2 mg of adsorbent, 150 μL of eluent and 3.7 min of sonication. Under optimized conditions, the average recoveries (five replicates) for the two dyes (spiked at 500.0 ng mL(-1)) ranged from 92.80 to 97.70% with acceptable RSDs of less than 4.0% over a linear range of 3.0-5000.0 ng mL(-1) for AO and MG in water samples, with regression coefficients (R(2)) of 0.9975 and 0.9977, respectively. Acceptable limits of detection of 0.91 and 0.61 ng mL(-1) for AO and MG, respectively, and high accuracy and repeatability are unique advantages of the present method to improve the figures of merit for their accurate determination at trace level in complicated
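Plackett-Burman screening designs like the one above are built by cyclically shifting a known generator row and appending a row of all -1s. A sketch of the classic 12-run design for up to 11 factors (the generator row is the standard published one, and this is a generic construction, not this paper's specific matrix):

```python
# Construct the 12-run Plackett-Burman screening design: 11 cyclic
# shifts of the standard generator row plus a closing row of -1s.

GEN12 = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman12():
    rows = [GEN12[i:] + GEN12[:i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman12()
```

Every column is balanced and any two columns are orthogonal, so 11 main effects can be screened in only 12 runs, which is why the method is used for the preliminary factor selection step.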

  20. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design

    PubMed Central

    YANG, YU; BAI, WENKUN; CHEN, YINI; LIN, YANDUAN; HU, BING

    2015-01-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximizes the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm2; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18(3^6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained. PMID:26722279

  2. Optimized solar module design

    NASA Technical Reports Server (NTRS)

    Santala, T.; Sabol, R.; Carbajal, B. G.

    1978-01-01

    The minimum cost per unit of power output from flat plate solar modules can most likely be achieved through efficient packaging of higher efficiency solar cells. This paper outlines a module optimization method which is broadly applicable, and illustrates the potential results achievable from a specific high efficiency tandem junction (TJ) cell. A mathematical model is used to assess the impact of various factors influencing the encapsulated cell and packing efficiency. The optimization of the packing efficiency is demonstrated. The effect of encapsulated cell and packing efficiency on the module add-on cost is shown in a nomograph form.

  3. Nanoparticle-Laden Contact Lens for Controlled Ocular Delivery of Prednisolone: Formulation Optimization Using Statistical Experimental Design

    PubMed Central

    ElShaer, Amr; Mustafa, Shelan; Kasar, Mohamad; Thapa, Sapana; Ghatora, Baljit; Alany, Raid G.

    2016-01-01

    The human eye is one of the most accessible organs in the body; nonetheless, its physiology and associated precorneal factors such as nasolacrimal drainage, blinking, the tear film, tear turnover and induced lacrimation significantly decrease the residence time of any foreign substance, including pharmaceutical dosage forms. Soft contact lenses are promising delivery devices that can sustain drug release and prolong residence time by acting as a geometric barrier to drug diffusion into the tear fluid. This study investigates experimental parameters such as the composition of the polymer mixtures, the stabilizer and the amount of active pharmaceutical ingredient in the preparation of a polymeric drug delivery system for the topical ocular administration of prednisolone. To achieve this goal, prednisolone-loaded poly(lactic-co-glycolic acid) (PLGA) nanoparticles were prepared by a single-emulsion solvent evaporation method. Prednisolone was quantified using a validated high performance liquid chromatography (HPLC) method. Nanoparticle size was mostly affected by the amount of co-polymer (PLGA) used, whereas drug load was mostly affected by the amount of prednisolone (API) used. A longer homogenization time along with a higher amount of API yielded the smallest nanoparticles. The nanoparticles prepared had an average particle size of 347.1 ± 11.9 nm with a polydispersity index of 0.081. The nanoparticles were then incorporated in the contact lens mixture before lens preparation. Clear and transparent contact lenses were successfully prepared. When the nanoparticle (NP)-loaded contact lenses were compared with control (NP-free) contact lenses, a decrease in hydration by 2% (31.2% ± 1.25% hydration for the 0.2 g NP-loaded lenses) and in light transmission by 8% (94.5% for NP-free lenses versus 86.23% for the 0.2 g NP-loaded lenses) was observed. The wettability of the contact lenses remained within the desired value (contact angle <90°) even upon incorporation of the NP. NP alone and

  5. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing edge flap position to achieve a design lift coefficient for a three-element airfoil.

  6. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  7. Optimizing exchanger design early

    SciTech Connect

    Lacunza, M.; Vaschetti, G.; Campana, H.

    1987-08-01

    It is not practical for process engineers and designers to make a rigorous economic evaluation of each component of a process, owing to the time and money involved. It is, however, very helpful to have a method for quick design evaluation of heat exchangers, considering their important contribution to the total fixed investment in a process plant. This article is devoted to this subject, and the authors present a method that has been proven in several design cases. Linking rigorous design procedures with a quick cost-estimation method provides a good technique for obtaining the right heat exchanger. The cost will be appropriate, sometimes not the lowest because of design restrictions, but a good approach to the optimum at an earlier process design stage. The authors show the influence of the design variables of a shell-and-tube heat exchanger on capital investment, taking into account the general limiting factors of the process, such as thermodynamics, operability and corrosion, and/or the mechanical design of the calculated unit. The latter is a special consideration for countries with no access to industrial technology or with difficulties in obtaining certain construction materials or equipment.

  8. Design optimization of transonic airfoils

    NASA Technical Reports Server (NTRS)

    Joh, C.-Y.; Grossman, B.; Haftka, R. T.

    1991-01-01

    Numerical optimization procedures were considered for the design of airfoils in transonic flow based on the transonic small disturbance (TSD) and Euler equations. A sequential approximate optimization technique was implemented with an accurate approximation of the wave drag based on Nixon's coordinate straining approach. A modification of the Euler surface boundary conditions was implemented in order to compute design sensitivities efficiently without remeshing the grid. Two effective design procedures producing converged designs in approximately 10 global iterations were developed: interchanging the roles of the objective function and the constraint, and direct lift maximization with move limits defined as fixed absolute values of the design variables.

  9. Winglet design using multidisciplinary design optimization techniques

    NASA Astrophysics Data System (ADS)

    Elham, Ali; van Tooren, Michel J. L.

    2014-10-01

    A quasi-three-dimensional aerodynamic solver is integrated with a semi-analytical structural weight estimation method inside a multidisciplinary design optimization framework to design and optimize a winglet for a passenger aircraft. The winglet is optimized for minimum drag and minimum structural weight. The Pareto front between these two objective functions is found by applying a genetic algorithm. The aircraft minimum take-off weight and the minimum direct operating cost are used to select the best winglets among those on the Pareto front.
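Selecting "the best winglets among those on the Pareto front" presupposes first extracting the non-dominated set from the candidate designs. A minimal sketch, with hypothetical (drag, weight) pairs standing in for the real objectives:

```python
def pareto_front(points):
    """Non-dominated set for a minimize-all-objectives problem:
    p is dominated if some other q is <= p in every objective
    (and differs from p somewhere)."""
    front = []
    for p in points:
        dominated = any(all(qi <= pi for qi, pi in zip(q, p)) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# hypothetical (drag count, structural weight) pairs for candidate winglets
designs = [(10.0, 5.0), (9.0, 6.0), (11.0, 4.5), (10.5, 5.5), (12.0, 6.5)]
front = pareto_front(designs)
```

Here (10.5, 5.5) and (12.0, 6.5) are dominated by (10.0, 5.0), so only the three trade-off designs survive; a second criterion (such as take-off weight) then picks among them.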

  10. True Experimental Design.

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    1991-01-01

    This poem, with stanzas in limerick form, refers humorously to the many threats to validity posed by problems in research design, including problems of sample selection, data collection, and data analysis. (SLD)

  11. Taguchi Experimental Design for Optimization of Recombinant Human Growth Hormone Production in CHO Cell Lines and Comparing its Biological Activity with Prokaryotic Growth Hormone.

    PubMed

    Aghili, Zahra Sadat; Zarkesh-Esfahani, Sayyed Hamid

    2017-09-12

    Growth hormone (GH) deficiency results in growth retardation in children and the GH deficiency syndrome in adults, and patients need to receive recombinant GH in order to rectify the GH deficiency symptoms. Mammalian cells have become the favored system for the production of recombinant proteins for clinical application, compared to prokaryotic systems, because of their capability for appropriate protein folding, assembly, post-translational modification and proper signal processing. However, production levels in mammalian cells are generally low compared to prokaryotic hosts. Taguchi established orthogonal arrays to describe a large number of experimental situations, mainly to reduce experimental errors and to enhance the efficiency and reproducibility of laboratory experiments. In the present study, rhGH was produced in CHO cells and its production was assessed using dot blotting, western blotting and ELISA. For optimization of rhGH production in CHO cells using the Taguchi method, an M16 orthogonal experimental design was used to investigate four different culture components. The biological activity of rhGH was assessed using an LHRE-TK-luciferase reporter gene system in HEK-293 cells and compared to the biological activity of prokaryotic rhGH. A maximal productivity of rhGH was reached under the conditions of 1% DMSO, 1% glycerol, 25 µM ZnSO4 and 0 mM NaBu. Our findings indicate that control of culture conditions, such as the addition of chemical components, helps to develop an efficient large-scale, industrial process for the production of rhGH in CHO cells. Results of the bioassay indicated that rhGH produced by CHO cells is able to induce GH-mediated intracellular signaling and showed higher bioactivity than prokaryotic GH at the same concentrations. © Georg Thieme Verlag KG Stuttgart · New York.
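When comparing culture conditions Taguchi-style, a "larger is better" signal-to-noise ratio is commonly used to rank settings where higher output is desired. The sketch below uses invented titer values; the formula is the standard Taguchi definition, not a detail from this paper:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi 'larger is better' signal-to-noise ratio in dB:
    SN = -10 * log10( mean(1 / y_i^2) ), computed over replicates."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

# hypothetical protein titers (replicate measurements) for two media conditions
cond_a = [12.0, 13.5, 12.8]
cond_b = [15.2, 14.9, 15.6]
best = max([("A", cond_a), ("B", cond_b)],
           key=lambda kv: sn_larger_is_better(kv[1]))[0]
```

Because the SN ratio rewards both a high mean and low variability, condition B (higher, tighter replicates) ranks first with these made-up numbers.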

  12. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.

  13. Computational design and optimization of energy materials

    NASA Astrophysics Data System (ADS)

    Chan, Maria

    The use of density functional theory (DFT) to understand and improve energy materials for diverse applications - including energy storage, thermal management, catalysis, and photovoltaics - is widespread. The further step of using high-throughput DFT calculations to design materials has led to an acceleration in materials discovery and development. Due to various limitations of DFT, including accuracy and computational cost, however, it is important to leverage effective models and, in some cases, experimental information to aid the design process. In this talk, I will discuss efforts in the design and optimization of energy materials using a combination of effective models, DFT, machine learning, and experimental information.

  14. Determination of opiates in whole blood and vitreous humor: a study of the matrix effect and an experimental design to optimize conditions for the enzymatic hydrolysis of glucuronides.

    PubMed

    Sanches, Livia Rentas; Seulin, Saskia Carolina; Leyton, Vilma; Paranhos, Beatriz Aparecida Passos Bismara; Pasqualucci, Carlos Augusto; Muñoz, Daniel Romero; Osselton, Michael David; Yonamine, Mauricio

    2012-04-01

    Undoubtedly, whole blood and vitreous humor have been biological samples of great importance in forensic toxicology. The determination of opiates and their metabolites has been essential for better interpretation of toxicological findings. This report describes the application of experimental design and response surface methodology to optimize conditions for enzymatic hydrolysis of morphine-3-glucuronide and morphine-6-glucuronide. The analytes (free morphine, 6-acetylmorphine and codeine) were extracted from the samples using solid-phase extraction on mixed-mode cartridges, followed by derivatization to their trimethylsilyl derivatives. The extracts were analysed by gas chromatography-mass spectrometry with electron ionization and full scan mode. The method was validated for both specimens (whole blood and vitreous humor). A significant matrix effect was found by applying the F-test. Different recovery values were also found (82% on average for whole blood and 100% on average for vitreous humor). The calibration curves were linear for all analytes in the concentration range of 10-1,500 ng/mL. The limits of detection ranged from 2.0 to 5.0 ng/mL. The method was applied to a case in which a victim presented with a previous history of opiate use.
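Response surface methodology of the kind applied above amounts to fitting a second-order polynomial to the measured responses and locating its optimum. A minimal sketch with fabricated hydrolysis data (the factor names, ranges and response values are illustrative assumptions, not the study's measurements):

```python
import numpy as np

# hypothetical runs: (incubation temperature, enzyme volume) -> % hydrolysis
X = np.array([(45, 20), (45, 60), (65, 20), (65, 60), (55, 40),
              (55, 40), (40, 40), (70, 40), (55, 10), (55, 70)], float)
y = np.array([71, 78, 80, 83, 92, 91, 75, 79, 70, 81], float)

def quad_features(X):
    """Second-order RSM model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# least-squares fit of the quadratic surface
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# evaluate the fitted surface on a grid and take the arg-max as the optimum
grid = np.array([(t, v) for t in range(40, 71) for v in range(10, 71)], float)
pred = quad_features(grid) @ beta
t_opt, v_opt = grid[np.argmax(pred)]
```

In practice the stationary point can also be found analytically from the fitted coefficients; the grid search above is just the simplest robust variant.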

  15. Application of modified magnetic nanomaterial for optimization of ultrasound-enhanced removal of Pb(2+) ions from aqueous solution under experimental design: investigation of kinetics and isotherm.

    PubMed

    Dil, Ebrahim Alipanahpour; Ghaedi, Mehrorang; Asfaram, Arash; Mehrabi, Fatemeh

    2017-05-01

    Magnetic γ-Fe2O3 nanoparticles modified by bis(5-bromosalicylidene)-1,3-propandiamine (M-γ-Fe2O3-NPs-BBSPN) were characterized by field emission scanning electron microscopy (FE-SEM), Fourier transform infrared spectroscopy (FT-IR) and X-ray diffraction (XRD). This modified compound was applied as a novel adsorbent for the ultrasound-assisted removal of Pb(2+) ions in combination with flame atomic absorption spectroscopy (FAAS). The influences of the effective parameters, including initial Pb(2+) ion concentration, pH, adsorbent mass and ultrasound time, were optimized by central composite design (CCD). The maximum removal percentage of Pb(2+) was obtained at 25 mg L(-1) of Pb(2+), 25 mg of adsorbent and 4 min of mixing with sonication at pH 6.0. The precision of the equation obtained by CCD was confirmed by analysis of variance and calculation of the correlation coefficient relating the predicted and experimental values of the Pb(2+) removal percentage. The kinetics and isotherm of the ultrasound-assisted removal of Pb(2+) were well described by a second-order kinetic model and the Langmuir isotherm model, with a maximum adsorption capacity of 163.57 mg g(-1). Copyright © 2016 Elsevier B.V. All rights reserved.
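A central composite design such as the CCD used above combines a two-level factorial core, axial (star) points, and replicated center points. A generic generator in coded units (the rotatable alpha and the center-point count are conventional defaults, not values from the paper):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """CCD in coded units: 2^k factorial corners, 2k axial (star) points
    at +/-alpha on each axis, plus replicated center points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable choice: alpha = (2^k)^(1/4)
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

# 4 factors (e.g. concentration, pH, adsorbent mass, sonication time):
ccd = central_composite(4)   # 16 corners + 8 axial + 4 centers = 28 runs
```

The axial points let the quadratic curvature of each factor be estimated, which a plain two-level factorial cannot do.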

  16. Optimization of Experimental Conditions of the Pulsed Current GTAW Parameters for Mechanical Properties of SDSS UNS S32760 Welds Based on the Taguchi Design Method

    NASA Astrophysics Data System (ADS)

    Yousefieh, M.; Shamanian, M.; Saatchi, A.

    2012-09-01

    The Taguchi design method with an L9 orthogonal array was implemented to optimize the pulsed current gas tungsten arc welding parameters for the hardness and toughness of super duplex stainless steel (SDSS, UNS S32760) welds. In this regard, the hardness and the toughness were considered as performance characteristics. Pulse current, background current, % on time, and pulse frequency were chosen as the main parameters, each varied at three levels. As a result of the pooled analysis of variance, the pulse current was found to be the most significant factor for both the hardness and the toughness of SDSS welds, with percentage contributions of 71.81 for hardness and 78.18 for toughness. The % on time (21.99%) and the background current (17.81%) had the next most significant effects on the hardness and the toughness, respectively. The optimum conditions within the selected parameter values for hardness were found to be the first level of pulse current (100 A), third level of background current (70 A), first level of % on time (40%), and first level of pulse frequency (1 Hz), while for toughness they were the second level of pulse current (120 A), second level of background current (60 A), second level of % on time (60%), and third level of pulse frequency (5 Hz). The Taguchi method was found to be a promising tool for obtaining the optimum conditions in such studies. Finally, in order to verify the experimental results, confirmation tests were carried out at the optimum working conditions. Under these conditions, there was good agreement between the predicted and the experimental results for both the hardness and the toughness.
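The percentage contributions quoted above come from a pooled ANOVA: each factor's sum of squares divided by the total. A minimal sketch with invented sums of squares (chosen only to echo the dominance of pulse current, not the study's actual ANOVA table):

```python
def percent_contribution(ss):
    """Pooled-ANOVA style percentage contribution: each factor's sum
    of squares over the total sum of squares, as a percentage."""
    total = sum(ss.values())
    return {k: 100.0 * v / total for k, v in ss.items()}

# hypothetical sums of squares for the four pulsed-GTAW parameters
ss_hardness = {"pulse current": 14.4, "% on time": 4.4,
               "background current": 0.9, "pulse frequency": 0.3}
pc = percent_contribution(ss_hardness)
dominant = max(pc, key=pc.get)
```

With these placeholder values, pulse current accounts for about 72% of the variation, mirroring the kind of ranking reported in the abstract.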

  17. Multidisciplinary design optimization using response surface analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1992-01-01

    Aerospace conceptual vehicle design is a complex process which involves multidisciplinary studies of configuration and technology options considering many parameters at many values. NASA Langley's Vehicle Analysis Branch (VAB) has detailed computerized analysis capabilities in most of the key disciplines required by advanced vehicle design. Given a configuration, the capability exists to quickly determine its performance and life-cycle cost. The next step in vehicle design is to determine the best settings of design parameters that optimize the performance characteristics. The typical approach to design optimization is experience-based, trial-and-error variation of many parameters one at a time, where possible combinations usually number in the thousands. However, this approach can either lead to a very long and expensive design process or to a premature termination of the design process due to budget and/or schedule pressures. Furthermore, a one-variable-at-a-time approach cannot account for the interactions that occur among parts of systems and among disciplines. As a result, the vehicle design may be far from optimal. Advanced multidisciplinary design optimization (MDO) methods are needed to direct the search in an efficient and intelligent manner in order to drastically reduce the number of candidate designs to be evaluated. The payoffs in terms of enhanced performance and reduced cost are significant. A literature review yields two such advanced MDO methods used in aerospace design optimization: Taguchi methods and response surface methods. Taguchi methods provide a systematic and efficient approach to design optimization for performance and cost. However, the response surface method (RSM) leads to a better, more accurate exploration of the parameter space and to estimated optimum conditions with a small expenditure of experimental data. These two methods are described.

  18. The use of experimental design in the optimization of risperidone biodegradable nanoparticles: in vitro and in vivo study.

    PubMed

    Alzubaidi, Ali F A; El-Helw, Abdel-Raheem M; Ahmed, Tarek A; Ahmed, Osama A A

    2017-03-01

    The aim of this study was the optimization of risperidone (model drug) biodegradable nanoparticles prepared by an emulsion-solvent evaporation technique. A Box-Behnken design was adopted to optimize the preparation process. The optimized nanoparticles were characterized for surface morphology using a scanning electron microscope. Pharmacokinetic parameters were compared with those of the marketed tablets. Results revealed that the optimized formula showed 297.37 nm, 85.12%, and 59.79% for the responses Y1, Y2, and Y3, respectively. The optimized formula showed significantly improved bioavailability compared with the marketed tablets. The successful achievement of prolonged risperidone release with improved bioavailability is expected to maximize patients' adherence to their antipsychotic drug therapy and to minimize the risk of relapse during maintenance therapy.
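A Box-Behnken design like the one adopted above places runs at the midpoints of the edges of the factor cube plus replicated center points, avoiding extreme corner combinations. A generic generator in coded units (the center-point count is an assumed convention, not a value from the paper):

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Box-Behnken design in coded units: for every pair of factors,
    a 2^2 factorial at +/-1 with all other factors held at 0,
    plus replicated center points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs += [[0] * k for _ in range(n_center)]
    return runs

bbd = box_behnken(3)   # 3 factors -> 12 edge-midpoint runs + 3 centers = 15 runs
```

Every non-center run varies exactly two factors at a time, which is why no run ever sits at an all-extremes corner of the design space.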

  19. Advanced transport design using multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Barnum, Jennifer; Bathras, Curt; Beene, Kirk; Bush, Michael; Kaupin, Glenn; Lowe, Steve; Sobieski, Ian; Tingen, Kelly; Wells, Douglas

    1991-01-01

    This paper describes the results of the first implementation of multidisciplinary design optimization (MDO) techniques by undergraduates in a design course. The objective of the work was to design a civilian transport aircraft of the Boeing 777 class. The first half of the two-semester design course consisted of the application of traditional sizing methods and techniques to form a baseline aircraft. MDO techniques were then applied to this baseline design. This paper describes the evolution of the design with special emphasis on the application of MDO techniques, and presents the results of four iterations through the design space. Minimization of take-off gross weight was the goal of the optimization process. The resultant aircraft derived from the MDO procedure weighed approximately 13,382 lbs (2.57 percent) less than the baseline aircraft.

  20. Design optimization of space structures

    NASA Astrophysics Data System (ADS)

    Felippa, Carlos

    1991-11-01

    The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design domain under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through a homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module and an optimization driver. In the second phase, the focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be specially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.

  1. Use of experimental design in the optimization of stir bar sorptive extraction for the determination of polybrominated diphenyl ethers in environmental matrices.

    PubMed

    Serôdio, P; Cabral, M Salomé; Nogueira, J M F

    2007-02-09

    Stir bar sorptive extraction with liquid desorption (LD) followed by large volume injection and capillary gas chromatography coupled to mass spectrometry (SBSE-LD-LVI-GC-MS) has been applied for the determination of ultra-traces of eleven polybrominated diphenyl ethers (PBDEs), from tetra to nona congeners (BDE-47, BDE-100, BDE-99, BDE-85, BDE-154, BDE-153, BDE-183, BDE-197, BDE-196, BDE-207 and BDE-206), in environmental matrices. Instrumental calibration under selected-ion monitoring (SIM) mode acquisition and the parameters that could affect the SBSE-LD efficiency are fully discussed. A completely randomized factorial design was established for the first time to optimize the main experimental parameters affecting the SBSE-LD efficiency, including decisive interactions, which provides a more realistic picture of the sampling process. Analysis of variance (ANOVA) was the statistical method used to analyze the data. From the data obtained, it can be emphasized that experimental parameters such as extraction time (240 min), agitation speed (1250 rpm), methanol content (40%) and desorption conditions (acetonitrile, 15 min) were the best analytical compromise for the simultaneous determination of the tetra to nona congeners in aqueous media. Remarkable recovery (65.6-116.9%) and repeatability (<12.1%) were attained, whilst the experimental data showed very good agreement with the theoretical equilibrium predicted by the octanol-water partition coefficients (K(PDMS/W) ≈ K(O/W)), with the exception of the nona congeners, for which slightly lower yields were measured. Furthermore, excellent linear dynamic ranges from 0.01 to 14.0 microg/L (r(2) > 0.9917) and low detection limits (0.3-203.4 ng/L) were also achieved for the eleven congeners studied. The proposed methodology was applied for the determination of ultra-trace levels of PBDEs in waste water, sediment and printed circuit board matrices by the standard addition approach, showing itself to be reliable

  2. Experimental Design: Review and Comment.

    DTIC Science & Technology

    1984-02-01

    and early work in the subject was done by Wald (1943), Hotelling (1944), and Elfving (1952). The major contributions to the area, however, were made by ... Kiefer (1958, 1959) and Kiefer and Wolfowitz (1959, 1960), who synthesized and greatly extended the previous work. Although the ideas of optimal ... design theory is the general equivalence theorem (Kiefer and Wolfowitz 1960), which links D- and G-optimality. The theorem is phrased in terms of
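The D-optimality mentioned in the snippet above ranks candidate designs by det(X^T X), the determinant of the information matrix: larger determinants mean a smaller confidence ellipsoid for the estimated coefficients. A toy comparison (the two candidate 4-run designs are invented purely for illustration):

```python
import numpy as np

def d_criterion(X):
    """D-optimality criterion: det(X^T X) of the design/model matrix
    (larger is better for precise coefficient estimates)."""
    return np.linalg.det(X.T @ X)

# model with intercept + 2 factors; two candidate 4-run designs in coded units
full_factorial = np.array([[1, -1, -1], [1, -1, 1],
                           [1,  1, -1], [1,  1, 1]], float)
lopsided       = np.array([[1, -1, -1], [1, -1, 1],
                           [1, -1, -1], [1,  1, 1]], float)

better = d_criterion(full_factorial) > d_criterion(lopsided)
```

For the orthogonal full factorial X^T X = 4I, so the criterion is 4^3 = 64, beating the unbalanced design (32); a D-optimal search automates exactly this comparison over many candidates.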

  3. Simple and Sensitive UPLC-MS/MS Method for High-Throughput Analysis of Ibrutinib in Rat Plasma: Optimization by Box-Behnken Experimental Design.

    PubMed

    2016-04-07

    Ibrutinib was the first Bruton's tyrosine kinase inhibitor approved by the U.S. Food and Drug Administration (FDA) for the treatment of mantle cell lymphoma, chronic lymphocytic leukemia, and Waldenström macroglobulinemia. The aim of this study was to develop a UPLC-tandem MS method for the high-throughput analysis of ibrutinib in rat plasma samples. The chromatographic conditions were optimized by implementation of the Box-Behnken experimental design. Both ibrutinib and the internal standard (vilazodone; IS) were separated within 2 min using a mobile phase of 0.1% formic acid in acetonitrile and 0.1% formic acid in 10 mM ammonium acetate in a ratio of 80+20, eluted at a flow rate of 0.250 mL/min. A simple protein precipitation method was used for the sample cleanup procedure. Detection was performed in electrospray ionization (ESI) positive mode using multiple reaction monitoring with ion transitions of m/z 441.16 > 84.02 for ibrutinib and m/z 442.17 > 155.02 for the IS. All calibration curves were linear in the concentration range of 0.35 to 400 ng/mL (r(2) ≥ 0.997) with a lower limit of quantification of only 0.35 ng/mL. All validation results were within the acceptance criteria of international regulatory guidelines. The developed assay was successfully applied in a pharmacokinetic study of a novel ibrutinib self-nanoemulsifying drug-delivery system formulation.

  4. Optimization methods for alternative energy system design

    NASA Astrophysics Data System (ADS)

    Reinhardt, Michael Henry

    An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the design of alternative energy systems are discrete (e.g., numbers of photovoltaic modules, thermal panels, or layers of glazing in windows), the literature shows that the optimization methods used historically for design rely on continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of the life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, which reduce the overall UA of a 22 ft electric shuttle bus from 488 to 202 Btu/hr-F, can eliminate the need for fossil fuel heating systems when operating in the northeastern United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints, such as being environmentally friendly, in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m2 deep-bed dryer of 0.4 m depth, a UTC array consisting of five 1.1 m2 panels and a photovoltaic array consisting of one 0.25 m2 panel produce the most dry coffee per dollar invested in the system.
In general this study
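    The contrast the abstract draws between continuous and discrete decision variables can be sketched in a few lines. The costs, UA reductions, and requirement below are invented for illustration; the point is that simply rounding the continuous optimum up can be more expensive than the true integer optimum found by enumeration:

```python
from itertools import product

# Hypothetical toy problem: choose integer numbers of glazing layers (x1)
# and insulation panels (x2) to meet a required UA reduction R at minimum cost.
cost = (120.0, 45.0)          # $ per unit of each measure (assumed values)
ua_cut = (30.0, 11.0)         # Btu/hr-F reduction per unit (assumed values)
R = 100.0                     # required total UA reduction

# Continuous relaxation: the measure with the best UA-per-dollar rate is
# used exclusively, giving a fractional number of units.
best_rate = max(range(2), key=lambda i: ua_cut[i] / cost[i])
x_cont = R / ua_cut[best_rate]

# Integer program by exhaustive enumeration (fine for small bounds).
best = None
for x1, x2 in product(range(0, 11), repeat=2):
    if ua_cut[0] * x1 + ua_cut[1] * x2 >= R:
        c = cost[0] * x1 + cost[1] * x2
        if best is None or c < best[0]:
            best = (c, x1, x2)

print("continuous optimum (units of measure %d): %.2f" % (best_rate, x_cont))
print("integer optimum: cost=%.0f with x1=%d, x2=%d" % best)
```

Here the relaxation suggests about 3.33 units of the first measure; rounding up to four costs 480, while the cheapest integer-feasible mix (three of the first plus one of the second) costs 405.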

  5. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing a global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs through insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
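    A minimal sketch of the polynomial-surrogate idea, assuming a made-up single design variable and invented sample responses; real applications fit multivariate polynomials or neural networks to CFD and test data:

```python
# Fit a quadratic response surface y = b0 + b1*x + b2*x^2 to sampled design
# data by solving the normal equations with Gaussian elimination (stdlib only).
def solve(A, rhs):
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical noisy samples of an unknown performance metric.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0, 1.85, 3.1, 4.7, 7.0]          # roughly 1 + x + x^2

X = [[1.0, x, x * x] for x in xs]         # design matrix
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)] for i in range(3)]
Xty = [sum(X[k][i] * ys[k] for k in range(len(X))) for i in range(3)]
b = solve(XtX, Xty)
print("fitted coefficients:", [round(v, 3) for v in b])
```

Once fitted, the cheap surrogate can be searched globally in place of the expensive simulation, which is the essence of the approach described above.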

  6. Telemanipulator design and optimization software

    NASA Astrophysics Data System (ADS)

    Cote, Jean; Pelletier, Michel

    1995-12-01

    For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator have to be found only once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base may be required to move during the execution of a task. At present, the initial position of the teleoperator is usually chosen empirically, which can be sufficient in the case of an easy or repetitive task. Otherwise, the amount of time wasted moving the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform, or a better-designed robot configuration, could minimize these movements and save time. This paper presents two algorithms. The first is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.
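    As a hypothetical illustration of the first algorithm's goal (not the paper's octree-based method), one can grid-search a planar base position that keeps every digitized task point within reach while minimizing total base-to-task distance; the task points and reach below are invented:

```python
import math

# Invented digitized task points and arm reach, in arbitrary planar units.
task_points = [(1.0, 2.0), (2.5, 1.0), (2.0, 3.0)]
REACH = 2.5

def score(base):
    # Total travel to all task points, infeasible if any point is out of reach.
    d = [math.dist(base, p) for p in task_points]
    return sum(d) if max(d) <= REACH else float("inf")

# Coarse grid search over candidate base positions.
grid = [(x / 2, y / 2) for x in range(0, 9) for y in range(0, 9)]
best_base = min(grid, key=score)
print("best base:", best_base, "total travel:", round(score(best_base), 3))
```

A real implementation would score candidate positions against the full digitized task representation and add orientation and kinematic constraints, but the search structure is the same.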

  7. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function, because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for their interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns: the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all.
Another outcome
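    The kind of design criterion compared in such studies can be illustrated with a toy linearized model; the two-parameter sinusoid below stands in for the ellipse-fitting problem, and the angle sets are arbitrary:

```python
import math

# For the linear model y(theta) = b1*cos(theta) + b2*sin(theta), a design is
# a set of measurement angles; its Fisher information is M = sum f f^T with
# f = (cos theta, sin theta). The D-criterion is det(M); larger is better.
def d_criterion(angles):
    m11 = sum(math.cos(t) ** 2 for t in angles)
    m12 = sum(math.cos(t) * math.sin(t) for t in angles)
    m22 = sum(math.sin(t) ** 2 for t in angles)
    return m11 * m22 - m12 ** 2

clustered = [0.0, 0.1, 0.2, 0.3]                           # bunched angles
spread = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]  # evenly spaced

print("clustered design det(M): %.4f" % d_criterion(clustered))
print("spread design det(M): %.4f" % d_criterion(spread))
```

With the same number of measurements, the spread design carries far more information about the parameters, which is exactly what a design-optimization algorithm would discover by maximizing the criterion.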

  8. Parameters optimization using experimental design for headspace solid phase micro-extraction analysis of short-chain chlorinated paraffins in waters under the European water framework directive.

    PubMed

    Gandolfi, F; Malleret, L; Sergent, M; Doumenq, P

    2015-08-07

    The water framework directives (WFD 2000/60/EC and 2013/39/EU) require European countries to monitor the quality of their aquatic environment. Among the priority hazardous substances targeted by the WFD, short-chain chlorinated paraffins C10-C13 (SCCPs) still represent an analytical challenge, because few laboratories are at present able to analyze them. Moreover, an annual average quality standard as low as 0.4 μg/L was set for SCCPs in surface water. Therefore, to test for compliance, the implementation of a sensitive and reliable method for the analysis of SCCPs in water is required. The aim of this work was to address this issue by evaluating automated solid-phase microextraction (SPME) combined on-line with gas chromatography-electron capture negative ionization mass spectrometry (GC/ECNI-MS). Fiber polymer, extraction mode, ionic strength, extraction temperature, and extraction time were the most significant thermodynamic and kinetic parameters studied. To determine suitable working ranges for the factors, the extraction conditions were first studied using a classical one-factor-at-a-time approach. Then a mixed-level factorial 3×2(3) design was performed in order to identify the most influential parameters and to estimate potential interaction effects between them. The most influential factors, i.e., extraction temperature and duration, were optimized by a second experimental design in order to maximize the chromatographic response. At the close of the study, a method involving headspace SPME (HS-SPME) coupled to GC/ECNI-MS is proposed. The optimum extraction conditions were a sample temperature of 90 °C, an extraction time of 80 min, the 100 μm PDMS fiber, and desorption at 250 °C for 2 min. A linear response from 0.2 ng/mL to 10 ng/mL with r(2) = 0.99, and limits of detection and quantification of 4 pg/mL and 120 pg/mL respectively in MilliQ water, were achieved. The method proved to be applicable to different types of waters and shows key advantages, such

  9. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors of top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g., Halberstadt, 2010; Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s); i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs that randomly assign participants to levels of the independent variable and measure the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. The goals of this paper thus include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation, because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  10. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, experimental design is nowadays not as well known and widely applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].
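    The advantage over one-variable-at-a-time experimentation comes from designs like the two-level full factorial, where every run informs every factor. A minimal sketch with an invented response function standing in for a measured yield:

```python
from itertools import product

# Hypothetical system: the "true" response is unknown to the experimenter;
# this function merely generates data for the eight coded runs.
def response(a, b, c):
    return 50 + 6 * a + 2 * b - 4 * c + 1.5 * a * c

runs = list(product((-1, 1), repeat=3))      # 2^3 full factorial, coded units
ys = [response(*r) for r in runs]

def main_effect(factor):
    # Average response at the high level minus average at the low level.
    hi = [y for r, y in zip(runs, ys) if r[factor] == 1]
    lo = [y for r, y in zip(runs, ys) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate("ABC"):
    print("main effect of %s: %.1f" % (name, main_effect(i)))
```

Eight runs estimate all three main effects at once (the A×C interaction averages out of each main effect), whereas a one-variable-at-a-time plan of the same size would use most of its runs on a single factor.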

  11. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained so as to maximize product quality. The optimization uses traditional numerical programming techniques, which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm are presented for the example problem.
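    The conjugate gradient method mentioned above can be sketched for the simplest case, an unconstrained quadratic cost; the matrix and right-hand side are arbitrary illustrative values:

```python
# Minimize 0.5*x^T A x - b^T x for symmetric positive-definite A,
# i.e. solve A x = b, using the linear conjugate gradient iteration.
def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                        # residual b - A x (x starts at zero)
    p = r[:]                        # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]        # illustrative SPD matrix
b = [1.0, 2.0]
x = cg(A, b)
print("solution:", [round(v, 4) for v in x])
```

For an n-dimensional quadratic, CG converges in at most n iterations in exact arithmetic, which is why it is a standard choice for the unconstrained subproblems described in the abstract.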

  12. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are demanding in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable when complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of search variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
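    The central idea, searching over amplitude coefficients of a truncated series rather than many local shape parameters, can be caricatured as follows. The "eigenfrequency" model here is a made-up linear surrogate, not a structural model, and the target frequencies and annealing schedule are arbitrary:

```python
import math
import random

random.seed(0)

TARGET = [1.0, 2.76, 5.40]          # desired modal frequencies (arbitrary)

def frequencies(coeffs):
    # Made-up surrogate: each mode's base frequency n^2 is shifted by a
    # weighted sum of the truncated-series amplitude coefficients.
    return [n * n * (1.0 + sum(c * math.cos(n * k) for k, c in enumerate(coeffs, 1)))
            for n in (1, 2, 3)]

def error(coeffs):
    return sum((f - t) ** 2 for f, t in zip(frequencies(coeffs), TARGET))

# Simulated annealing over the three series coefficients.
cur = [0.0, 0.0, 0.0]
cur_err = error(cur)
best, best_err = cur[:], cur_err
temp = 1.0
while temp > 1e-4:
    cand = [c + random.gauss(0, 0.05) for c in cur]
    e = error(cand)
    if e < cur_err or random.random() < math.exp((cur_err - e) / temp):
        cur, cur_err = cand, e
        if e < best_err:
            best, best_err = cand, e
    temp *= 0.995

print("initial error: %.4f  annealed error: %.4f" % (error([0.0, 0.0, 0.0]), best_err))
```

The search space has three coefficients regardless of how finely the shape itself would be discretized, which is the computational saving the paper argues for.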

  13. Research on optimization-based design

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Parkinson, A. R.; Free, J. C.

    1989-01-01

    Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.

  14. Optimization of confocal scanning laser ophthalmoscope design.

    PubMed

    LaRocca, Francesco; Dhalla, Al-Hafeez; Kelly, Michael P; Farsiu, Sina; Izatt, Joseph A

    2013-07-01

    Confocal scanning laser ophthalmoscopy (cSLO) enables high-resolution and high-contrast imaging of the retina by employing spatial filtering for scattered light rejection. However, to obtain optimized image quality, one must design the cSLO around scanner technology limitations and minimize the effects of ocular aberrations and imaging artifacts. We describe a cSLO design methodology resulting in a simple, relatively inexpensive, and compact lens-based cSLO design optimized to balance resolution and throughput for a 20-deg field of view (FOV) with minimal imaging artifacts. We tested the imaging capabilities of our cSLO design with an experimental setup from which we obtained fast and high signal-to-noise ratio (SNR) retinal images. At lower FOVs, we were able to visualize parafoveal cone photoreceptors and nerve fiber bundles even without the use of adaptive optics. Through an experiment comparing our optimized cSLO design to a commercial cSLO system, we show that our design demonstrates a significant improvement in both image quality and resolution.

  15. Design optimization of LiNi0.6Co0.2Mn0.2O2/graphite lithium-ion cells based on simulation and experimental data

    NASA Astrophysics Data System (ADS)

    Appiah, Williams Agyei; Park, Joonam; Song, Seonghyun; Byun, Seoungwoo; Ryou, Myung-Hyun; Lee, Yong Min

    2016-07-01

    LiNi0.6Co0.2Mn0.2O2 cathodes of different thicknesses and porosities are prepared and tested in order to optimize the design of lithium-ion cells. A mathematical model for simulating multiple types of particles with different contact resistances in a single electrode is adopted to study the effects of the different cathode thicknesses and porosities on lithium-ion transport using the nonlinear least squares technique. The model is used to optimize the design of LiNi0.6Co0.2Mn0.2O2/graphite lithium-ion cells by employing it to generate a number of Ragone plots. The cells are optimized for cathode porosity and thickness, while the anode porosity, anode-to-cathode capacity ratio, separator thickness and porosity, and electrolyte salt concentration are held constant. Optimization is performed for discharge times ranging from 10 h to 5 min. Using the Levenberg-Marquardt method as the fitting technique, accounting for multiple particles with different contact resistances, and employing a rate-dependent solid-phase diffusion coefficient result in good agreement between the simulated and experimentally determined discharge curves. The optimized parameters obtained from this study should serve as a guide for the battery industry, as well as for researchers, in determining the optimal cell design for different applications.

  16. An experimental study of an ultra-mobile vehicle for off-road transportation. Appendix 2. Dissertation. Kinematic optimal design of a six-legged walking machine

    NASA Astrophysics Data System (ADS)

    McGhee, R. B.; Waldron, K. J.; Song, S. M.

    1985-05-01

    Chapter 2 is a review of previous work in the following two areas: The mechanical structure of walking machines and walking gaits. In Chapter 3, the mathematical and graphical background for gait analysis is presented. The gait selection problem in different types of terrain is also discussed. Detailed studies of the major gaits used in level walking are presented. In Chapter 4, gaits for walking on gradients and methods to improve stability are studied. Also, gaits which may be used in crossing three major obstacle types are studied. In Chapter 5, the design of leg geometries based on four-bar linkages is discussed. Major techniques to optimize leg linkages for optimal walking volume are introduced. In Chapter 6, the design of a different leg geometry, based on a pantograph mechanism, is presented. A theoretical background of the motion characteristics of pantographs is given first. In Chapter 7, some other related items of the leg design are discussed. One of these is the foot-ankle system. A few conceptual passive foot-ankle systems are introduced. The second is a numerical method to find the shortest crank for a four-finitely-separated-position-synthesis problem. The shortest crank usually results in a crank rocker, which is the most desirable linkage type in many applications. Finally, in Chapter 8, the research work presented in this dissertation is evaluated and the future development of walking machines is discussed.

  17. Optimization of multiwalled carbon nanotubes reinforced hollow-fiber solid-liquid-phase microextraction for the determination of polycyclic aromatic hydrocarbons in environmental water samples using experimental design.

    PubMed

    Hamedi, Raheleh; Hadjmohammadi, Mohammad Reza

    2017-09-01

    A novel design of hollow-fiber liquid-phase microextraction containing multiwalled carbon nanotubes as a solid sorbent, which is immobilized in the pore and lumen of hollow fiber by the sol-gel technique, was developed for the pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. The proposed method utilized both solid- and liquid-phase microextraction media. Parameters that affect the extraction of polycyclic aromatic hydrocarbons were optimized in two successive steps as follows. Firstly, a methodology based on a quarter factorial design was used to choose the significant variables. Then, these significant factors were optimized utilizing central composite design. Under the optimized condition (extraction time = 25 min, amount of multiwalled carbon nanotubes = 78 mg, sample volume = 8 mL, and desorption time = 5 min), the calibration curves showed high linearity (R(2)  = 0.99) in the range of 0.01-500 ng/mL and the limits of detection were in the range of 0.007-1.47 ng/mL. The obtained extraction recoveries for 10 ng/mL of polycyclic aromatic hydrocarbons standard solution were in the range of 85-92%. Replicating the experiment under these conditions five times gave relative standard deviations lower than 6%. Finally, the method was successfully applied for pre-concentration and determination of polycyclic aromatic hydrocarbons in environmental water samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Applications of Experimental Design to the Optimization of Microextraction Sample Preparation Parameters for the Analysis of Pesticide Residues in Fruits and Vegetables.

    PubMed

    Abdulra'uf, Lukman Bola; Sirhan, Ala Yahya; Tan, Guan Huat

    2015-01-01

    Sample preparation has been identified as the most important step in analytical chemistry and has been tagged as the bottleneck of analytical methodology. The current trend is aimed at developing cost-effective, miniaturized, simplified, and environmentally friendly sample preparation techniques. The fundamentals and applications of multivariate statistical techniques for the optimization of microextraction sample preparation and chromatographic analysis of pesticide residues are described in this review. The use of Plackett-Burman, Doehlert matrix, and Box-Behnken designs is discussed. As observed in this review, a number of analytical chemists have combined chemometrics and microextraction techniques, which has helped to streamline sample preparation and improve sample throughput.
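    The Plackett-Burman designs mentioned in the review have a compact cyclic construction; the 12-run design below uses the standard generator row and can screen up to 11 factors in 12 runs:

```python
# Standard Plackett-Burman N=12 generator row (+1/-1 coded levels).
gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

# 11 cyclic shifts of the generator, plus a final all-minus run.
rows = [gen[-i:] + gen[:-i] for i in range(11)]
rows.append([-1] * 11)

# Orthogonality check: every pair of columns has zero dot product,
# so the 11 factor main effects are estimated independently.
for i in range(11):
    for j in range(i + 1, 11):
        assert sum(rows[r][i] * rows[r][j] for r in range(12)) == 0

print("12-run Plackett-Burman design, 11 orthogonal factors")
```

Because the columns are mutually orthogonal and balanced, each factor's main effect is a simple contrast of six high-level and six low-level runs, which is what makes these designs efficient screens.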

  19. Development of a novel pH sensor based upon Janus Green B immobilized on triacetyl cellulose membrane: Experimental design and optimization.

    PubMed

    Chamkouri, Narges; Niazi, Ali; Zare-Shahabadi, Vali

    2016-03-05

    A novel pH optical sensor was prepared by immobilizing an azo dye called Janus Green B on a triacetylcellulose membrane. The conditions of the dye solution used in the immobilization step, including dye concentration, pH, and immobilization time, were optimized using the Box-Behnken design. The proposed sensor showed good behavior and precision (RSD<5%) in the pH range of 2.0-10.0. Advantages of this optical sensor include on-line applicability, no leakage, long-term stability (more than 6 months), fast response time (less than 1 min), and high selectivity and sensitivity, as well as good reversibility and reproducibility.

  20. Development of a novel pH sensor based upon Janus Green B immobilized on triacetyl cellulose membrane: Experimental design and optimization

    NASA Astrophysics Data System (ADS)

    Chamkouri, Narges; Niazi, Ali; Zare-Shahabadi, Vali

    2016-03-01

    A novel pH optical sensor was prepared by immobilizing an azo dye called Janus Green B on a triacetylcellulose membrane. The conditions of the dye solution used in the immobilization step, including dye concentration, pH, and immobilization time, were optimized using the Box-Behnken design. The proposed sensor showed good behavior and precision (RSD < 5%) in the pH range of 2.0-10.0. Advantages of this optical sensor include on-line applicability, no leakage, long-term stability (more than 6 months), fast response time (less than 1 min), and high selectivity and sensitivity, as well as good reversibility and reproducibility.
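    The Box-Behnken layout used in this study has a simple structure for three factors: each pair of factors is varied over a 2×2 square while the third is held at its midpoint, plus replicated center points. A sketch in coded units (the factor names are only suggestive):

```python
from itertools import combinations, product

# Generic Box-Behnken generator in coded (-1, 0, +1) units. For this
# study the three factors would be dye concentration, pH, and
# immobilization time, but any three factors work the same way.
def box_behnken(n_factors, center_runs=3):
    runs = []
    for pair in combinations(range(n_factors), 2):
        for levels in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[pair[0]], run[pair[1]] = levels
            runs.append(run)
    runs.extend([[0] * n_factors for _ in range(center_runs)])
    return runs

design = box_behnken(3)
print("runs:", len(design))        # 12 edge runs + 3 center runs = 15
```

Fifteen runs suffice to fit a full quadratic model in three factors, while avoiding the extreme corner combinations that a full three-level factorial would require.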

  1. Computational Optimization of a Natural Laminar Flow Experimental Wing Glove

    NASA Technical Reports Server (NTRS)

    Hartshorn, Fletcher

    2012-01-01

    Computational optimization of a natural laminar flow experimental wing glove mounted on a business jet is presented and discussed. The process of designing a laminar flow wing glove starts with creating an optimized two-dimensional airfoil and then lofting it into a three-dimensional wing glove section. The airfoil design process does not consider three-dimensional flow effects such as cross flow due to wing sweep, or engine and body interference. Therefore, once an initial glove geometry is created from the airfoil, the three-dimensional wing glove has to be optimized to ensure that the desired extent of laminar flow is maintained over the entire glove. TRANAIR, a non-linear full potential solver with a coupled boundary layer code, was used as the main tool in the design and optimization process of the three-dimensional glove shape. The optimization process uses the Class-Shape-Transformation method to perturb the geometry, with geometric constraints that allow for a 2-in clearance from the main wing. The three-dimensional glove shape was optimized with the objective of having a spanwise uniform pressure distribution that matches the optimized two-dimensional pressure distribution as closely as possible. Results show that with the appropriate inputs, the optimizer is able to match the two-dimensional pressure distribution practically across the entire span of the wing glove. This gives the experiment a much higher probability of achieving a large extent of natural laminar flow in flight.

  2. Optimal design of airlift fermenters

    SciTech Connect

    Moresi, M.

    1981-11-01

    In this article a model of a draft-tube airlift fermenter (ALF), based on perfect back-mixing of the liquid and plug flow of the gas bubbles, has been developed to optimize the design and operation of fermentation units at different working capacities. With reference to a whey fermentation by yeasts, the economic optimization has led to a slim ALF with an aspect ratio of about 15. As far as power expended per unit of oxygen transfer is concerned, the responses of the model are highly influenced by kLa. However, a safer use of the model has been suggested in order to assess the feasibility of the fermentation process under study. (Refs. 39).

  3. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  4. Optimization of headspace solid-phase microextraction by means of an experimental design for the determination of methyl tert.-butyl ether in water by gas chromatography-flame ionization detection.

    PubMed

    Dron, Julien; Garcia, Rosa; Millán, Esmeralda

    2002-07-19

    A procedure for the determination of methyl tert.-butyl ether (MTBE) in water by headspace solid-phase microextraction (HS-SPME) has been developed. The analysis was carried out by gas chromatography with flame ionization detection. The extraction procedure, using a 65-microm poly(dimethylsiloxane)-divinylbenzene SPME fiber, was optimized by experimental design. A fractional factorial design was applied for screening and a central composite design for optimizing the significant variables. Extraction temperature and sodium chloride concentration were the significant variables, and 20 degrees C and 300 g/l, respectively, were chosen for the best extraction response. Under these conditions, an extraction time of 5 min was sufficient to extract MTBE. The linear calibration range for MTBE was 5-500 microg/l and the detection limit 0.45 microg/l. The relative standard deviation, for seven replicates of 250 microg/l MTBE in water, was 6.3%.
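    The screening-then-optimization sequence described above ends with a central composite design in the significant factors. A sketch of the design-point layout for two coded factors (here they would be extraction temperature and NaCl concentration), with an assumed three center replicates:

```python
from itertools import product

# Central composite design for two factors in coded units:
# factorial corners, axial points at distance alpha, and center replicates.
# alpha = 1 gives the face-centered variant; alpha = sqrt(2) the rotatable one.
def central_composite(alpha=1.0, center_runs=3):
    corners = [list(p) for p in product((-1.0, 1.0), repeat=2)]
    axial = [[alpha, 0.0], [-alpha, 0.0], [0.0, alpha], [0.0, -alpha]]
    center = [[0.0, 0.0] for _ in range(center_runs)]
    return corners + axial + center

design = central_composite()
print("total runs:", len(design))   # 4 corner + 4 axial + 3 center = 11
```

The corner points estimate main effects and the interaction, the axial points add curvature information, and the center replicates estimate pure error, which together support the quadratic response-surface model used to locate the optimum.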

  5. Experimental design for the optimization and robustness testing of a liquid chromatography tandem mass spectrometry method for the trace analysis of the potentially genotoxic 1,3-diisopropylurea.

    PubMed

    Székely, György; Henriques, Bruno; Gil, Marco; Alvarez, Carlos

    2014-09-01

    This paper discusses a design of experiments (DoE)-assisted optimization and robustness testing of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the trace analysis of the potentially genotoxic impurity 1,3-diisopropylurea (IPU) in the glucocorticosteroid mometasone furoate. Compared to conventional trial-and-error method development, DoE is a cost-effective and systematic approach to system optimization in which the effects of multiple parameters, and of parameter interactions, on a given response are considered. The LC and MS factors were studied simultaneously: flow (F), gradient (G), injection volume (Vinj), cone voltage (E(con)), and collision energy (E(col)). The optimization was carried out with respect to four responses: separation of peaks (Sep), peak area (A(p)), length of the analysis (T), and the signal-to-noise ratio (S/N). A central composite face-centered (CCF) optimization DoE was conducted, leading to the early discovery of a carry-over effect, which was further investigated in order to establish the maximum injectable sample load. A second DoE was conducted in order to obtain the optimal LC-MS/MS method. As part of the validation of the obtained method, its robustness was determined by conducting a resolution III fractional factorial DoE, wherein column temperature and quadrupole resolution were considered as additional factors. The method utilizes a common Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10-min runtime. The high sensitivity and low limit of quantification (LOQ) were achieved by (1) using MRM mode (instead of single ion monitoring) and (2) avoiding the drawbacks of derivatization (incomplete reaction and time-consuming sample preparation).
Quantitatively, the DoE method development strategy resulted in the robust trace analysis of IPU at 1.25 ng/mL absolute concentration
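The face-centred central composite (CCF) layout used above can be enumerated mechanically. A minimal sketch in coded units; the factor count and centre-point count below are illustrative, not the paper's actual settings:

```python
from itertools import product

def ccf_design(k, n_center=3):
    """Face-centred central composite design in coded units (-1, 0, +1).

    Factorial corner points, axial (face) points at +/-1 on one axis
    with all others at 0, and replicated centre points.
    """
    corners = [list(p) for p in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-1, 1):
            pt = [0] * k
            pt[i] = sign
            axial.append(pt)
    centers = [[0] * k for _ in range(n_center)]
    return corners + axial + centers

# Two illustrative LC factors in coded units, e.g. flow and gradient
design = ccf_design(2)
for run in design:
    print(run)
```

The corner points support main effects and interactions, the face points add the curvature needed for a quadratic model, and the centre replicates estimate pure error.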

  6. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first order reliability analysis and second order reliability analysis, followed by simulation technique that
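The Monte Carlo estimation of a failure probability from probabilistic input data can be sketched in a few lines. The normal strength and load distributions and their parameters below are purely illustrative stand-ins, not the Kevlar® 49 data characterized in the study:

```python
import random

random.seed(42)

def mc_failure_probability(n_samples=100_000):
    """Estimate P(load > strength) by Monte Carlo sampling.

    Strength and load distributions are hypothetical stand-ins for
    experimentally characterized data (units arbitrary).
    """
    failures = 0
    for _ in range(n_samples):
        strength = random.gauss(100.0, 10.0)
        load = random.gauss(60.0, 15.0)
        if load > strength:
            failures += 1
    return failures / n_samples

print(mc_failure_probability())
```

Each sample is one "virtual test"; the failure fraction converges to the true probability as the sample count grows, which is what makes MCS more informative than a single deterministic run.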

  7. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…
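The local D-optimality result mentioned above rests on the fact that the Fisher information of a Rasch item, p(1-p), peaks where ability equals difficulty. A quick numerical check, with an illustrative difficulty value:

```python
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item: p(1-p), p = logistic(theta - b)."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

b = 0.5  # illustrative difficulty parameter
grid = [i / 100 for i in range(-300, 301)]
best_theta = max(grid, key=lambda t: rasch_information(t, b))
print(best_theta)  # information is maximized where ability equals difficulty
```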

  9. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member sizes and layout of a truss are predicted, given the joint locations and loads.

  10. Ultra-high performance liquid chromatographic determination of levofloxacin in human plasma and prostate tissue with use of experimental design optimization procedures.

    PubMed

    Szerkus, O; Jacyna, J; Wiczling, P; Gibas, A; Sieczkowski, M; Siluk, D; Matuszewski, M; Kaliszan, R; Markuszewski, M J

    2016-09-01

    Fluoroquinolones are considered the gold standard for the prevention of bacterial infections after transrectal ultrasound guided prostate biopsy. However, recent studies reported that fluoroquinolone-resistant bacterial strains are responsible for a gradually increasing number of infections after transrectal prostate biopsy. In daily clinical practice, antibacterial efficacy is evaluated only in vitro, by measuring the reaction of bacteria with an antimicrobial agent in culture media (i.e. calculation of the minimal inhibitory concentration). Such an approach, however, has no relation to the characteristics of the treated tissue and might be highly misleading. Thus, the objective of this study was to develop, with the use of a Design of Experiments approach, a reliable, specific and sensitive ultra-high performance liquid chromatography-diode array detection method for the quantitative analysis of levofloxacin in plasma and prostate tissue samples obtained from patients undergoing prostate biopsy. Moreover, a correlation study between concentrations observed in plasma samples vs prostatic tissue samples was performed, resulting in better understanding, evaluation and optimization of the fluoroquinolone-based antimicrobial prophylaxis during transrectal ultrasound guided prostate biopsy. A Box-Behnken design was employed to optimize the chromatographic conditions of the isocratic elution program in order to obtain the desired retention time, peak symmetry and resolution of the levofloxacin and ciprofloxacin (internal standard) peaks. A fractional factorial design 2^(4-1) with four center points was used for screening of significant factors affecting levofloxacin extraction from the prostatic tissue. Due to the limited number of tissue samples, the prostatic sample preparation procedure was further optimized using a central composite design. The Design of Experiments approach was also utilized for evaluation of parameter robustness. The method was found linear over the range of 0.030-10μg/mL for human
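A 2^(4-1) fractional factorial like the screening design above halves the run count of the full 2^4 plan by confounding one factor with a high-order interaction. A sketch using the standard generator D = ABC (the paper does not state which generator was used):

```python
from itertools import product

def fractional_factorial_2_4_1():
    """2^(4-1) resolution IV design: full 2^3 in A, B, C, with D = A*B*C."""
    runs = []
    for a, b, c in product([-1, 1], repeat=3):
        runs.append((a, b, c, a * b * c))
    return runs

for run in fractional_factorial_2_4_1():
    print(run)  # 8 runs instead of 16
```

Every column is balanced and the defining relation I = ABCD holds, so main effects are aliased only with three-factor interactions.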

  11. [Optimized extraction technology of flavonoid compounds with anti-SMMC-7721 tumor activities in bark of Juglans mandshurica by orthogonal experimental design based on dose-effect fusion evaluation method].

    PubMed

    Zheng, Ying; Wang, Shuai; Meng, Xian-Sheng; Bao, Yong-Rui

    2013-10-01

    To optimize the extraction technology of total flavonoids with antineoplastic activities in Juglans mandshurica, and to explore the correlation between total flavonoids and pharmacodynamic indicators. The quantity of antineoplastic components, the extraction ratio and the cell inhibition rate were taken as the comprehensive indexes to optimize the main factors that influence the extraction of effective components by orthogonal experimental design. SPSS 17.0 software was used to analyze the Pearson correlation between effective components and pharmacodynamic indexes. The best extraction conditions for total flavonoids were as follows: a 20:1 ratio of 60% ethanol to Juglans mandshurica, extracting three times, 2 hours each, at 70 °C. Flavonoid extraction yield and cell inhibition rate were positively and linearly correlated. This study provides a new insight into the optimization of extraction technology for traditional Chinese medicine, and lays a safe and reliable experimental basis for the clinical application of Juglans mandshurica.
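Orthogonal experimental designs of this kind are typically based on a standard array such as L9(3^4), shown below together with an illustrative range analysis. The array is the textbook L9; the responses are invented, not the authors' data:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels,
# each level appearing exactly three times in every column.
L9 = [
    (1, 1, 1, 1),
    (1, 2, 2, 2),
    (1, 3, 3, 3),
    (2, 1, 2, 3),
    (2, 2, 3, 1),
    (2, 3, 1, 2),
    (3, 1, 3, 2),
    (3, 2, 1, 3),
    (3, 3, 2, 1),
]

# Range analysis: mean response per level of factor 0 (responses hypothetical)
responses = [62, 71, 75, 68, 74, 70, 73, 69, 72]
for level in (1, 2, 3):
    vals = [r for row, r in zip(L9, responses) if row[0] == level]
    print(level, sum(vals) / len(vals))
```

Comparing the per-level means factor by factor identifies the best level of each factor with only 9 of the 3^4 = 81 possible runs.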

  12. Experimental Design For Photoresist Characterization

    NASA Astrophysics Data System (ADS)

    Luckock, Larry

    1987-04-01

    In processing a semiconductor product (from discrete devices up to the most complex products produced) we find more photolithographic steps in wafer fabrication than any other kind of process step. Thus, the success of a semiconductor manufacturer hinges on the optimization of its photolithographic processes. Yet we find few companies that have taken the time to properly characterize this critical operation; they are sitting in the "passenger's seat", waiting to see what will come out, hoping that the yields will improve someday. There is no "black magic" involved in setting up a process at its optimum conditions (i.e. minimum sensitivity to all variables at the same time). This paper gives an example of a real-world situation for optimizing a photolithographic process by the use of a properly designed experiment, followed by adequate multidimensional analysis of the data. Basic SPC practices like plotting control charts will not, by themselves, improve yields; the control charts are, however, among the necessary tools used in the determination of the process capability and in the formulation of the problems to be addressed. The example we shall consider is the twofold objective of shifting the process average, while tightening the variance, of polysilicon line widths. This goal was identified from a Pareto analysis of yield-limiting mechanisms, plus inspection of the control charts. A key issue in a characterization of this type of process is the number of interactions between variables; this example rules out two-level full factorial and three-level fractional factorial designs (which cannot detect all of the interactions). We arrive at an experiment with five factors at five levels each. A full factorial design for five factors at five levels would require 3125 wafers. Instead, we will use a design that allows us to run this experiment with only 25 wafers, for a significant reduction in time, materials and manufacturing interruption in order to complete the
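A 25-run array for five factors at five levels can be built from mutually orthogonal Latin squares over GF(5). This is one standard construction, not necessarily the exact design used in the paper:

```python
def l25_design():
    """25-run orthogonal array for five factors at five levels (0..4).

    Built over GF(5): for run (i, j), the factor levels are
    i, j, (i+j) % 5, (i+2j) % 5, (i+3j) % 5. Any two columns
    contain every level combination exactly once.
    """
    runs = []
    for i in range(5):
        for j in range(5):
            runs.append((i, j, (i + j) % 5, (i + 2 * j) % 5, (i + 3 * j) % 5))
    return runs

design = l25_design()
print(len(design))  # 25 runs instead of 5**5 = 3125
```

Pairwise orthogonality follows because any two of the linear forms are linearly independent over GF(5), so main effects can be estimated free of each other.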

  13. A hydrometallurgical process for the recovery of terbium from fluorescent lamps: Experimental design, optimization of acid leaching process and process analysis.

    PubMed

    Innocenzi, Valentina; Ippolito, Nicolò Maria; De Michelis, Ida; Medici, Franco; Vegliò, Francesco

    2016-12-15

    Terbium and rare earths recovery from fluorescent powders of exhausted lamps by acid leaching with hydrochloric acid was the objective of this study. In order to investigate the factors affecting leaching, a series of experiments was performed according to a full factorial plan with four variables at two levels (2^4). The factors studied were temperature, acid concentration, pulp density and leaching time. The experimental conditions of terbium dissolution were optimized by statistical analysis. The results showed that temperature and pulp density were significant, with a positive and a negative effect, respectively. The empirical mathematical model deduced from the experimental data demonstrated that the terbium content was completely dissolved under the following conditions: 90 °C, 2 M hydrochloric acid and 5% pulp density; when the pulp density was 15%, an extraction of 83% could be obtained at 90 °C and 5 M hydrochloric acid. Finally, a flow sheet for the recovery of rare earth elements was proposed. The process was tested and simulated by commercial software for chemical processes. The mass balance of the process was calculated: from 1 ton of initial powder it was possible to obtain around 160 kg of a rare earth concentrate with a purity of 99%. The main rare earth compound in the final product was yttrium oxide (86.43%), followed by cerium oxide (4.11%), lanthanum oxide (3.18%), europium oxide (3.08%) and terbium oxide (2.20%). The estimated total recovery of the rare earth elements was around 70% for yttrium and europium and 80% for the other rare earths.
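The 2^4 plan and the sign of each main effect can be sketched as below. The yields are synthetic numbers chosen only so that temperature comes out positive and pulp density negative, echoing the findings above:

```python
from itertools import product

# Full 2^4 factorial in coded units; factor order assumed to be
# (temperature, acid concentration, pulp density, leaching time).
runs = list(product([-1, 1], repeat=4))

# Hypothetical terbium extraction yields (%), one per run
yields = [54, 56, 44, 46, 58, 60, 48, 50,
          70, 72, 60, 62, 74, 76, 64, 66]

def main_effect(factor):
    """Average response at +1 minus average response at -1."""
    hi = [y for run, y in zip(runs, yields) if run[factor] == 1]
    lo = [y for run, y in zip(runs, yields) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print([main_effect(f) for f in range(4)])
```

With these invented data, the temperature effect is large and positive while the pulp density effect is negative, mirroring the statistical analysis described in the abstract.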

  14. Optimized IR synchrotron beamline design.

    PubMed

    Moreno, Thierry

    2015-09-01

    Synchrotron infrared beamlines are powerful tools on which to perform spectroscopy on microscopic length scales, but they require working with large bending-magnet source apertures in order to provide intense photon beams to the experiments. Many infrared beamlines use a single toroidal-shaped mirror to focus the source emission, which generates, for large apertures, beams with significant geometrical aberrations resulting from the shape of the source and the beamline optics. In this paper, an optical layout optimized for synchrotron infrared beamlines, which almost entirely removes the geometrical aberrations of the source, is presented and analyzed. This layout is already operational on the IR beamline of the Brazilian synchrotron. An infrared beamline design based on a SOLEIL bending-magnet source is given as an example, which could be useful for future IR beamline improvements at this facility.

  15. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment.

  16. An Evolutionary Optimization System for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Fukunaga, A.; Stechert, A.

    1997-01-01

    Spacecraft design optimization is a domain that can benefit from the application of optimization algorithms such as genetic algorithms. In this paper, we describe DEVO, an evolutionary optimization system that addresses these issues and provides a tool that can be applied to a number of real-world spacecraft design applications. We describe two current applications of DEVO: physical design of a Mars Microprobe Soil Penetrator, and system configuration optimization for a Neptune Orbiter.
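A genetic algorithm of the kind DEVO employs can be sketched in a few lines. The sphere objective, the real-valued encoding and all parameters below are invented stand-ins for the actual spacecraft design problems:

```python
import random

random.seed(1)

def sphere(x):
    """Stand-in design objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def evolve(dim=4, pop_size=30, generations=60):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sphere)
        next_pop = pop[:2]  # elitism: keep the two best designs
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)  # truncation selection
            cut = random.randrange(1, dim)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.3:            # Gaussian mutation
                i = random.randrange(dim)
                child = child[:i] + [child[i] + random.gauss(0, 0.5)] + child[i + 1:]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=sphere)

best = evolve()
print(sphere(best))
```

Selection, crossover and mutation are the same operators regardless of what the genome encodes; in a real system the objective would be a physics-based evaluation of the candidate design.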

  17. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  18. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  20. A novel homocystine-agarose adsorbent for separation and preconcentration of nickel in table salt and baking soda using factorial design optimization of the experimental conditions.

    PubMed

    Hashemi, Payman; Rahmani, Zohreh

    2006-02-28

    Homocystine was, for the first time, chemically linked to a highly cross-linked agarose support (Novarose) to be employed as a chelating adsorbent for preconcentration and AAS determination of nickel in table salt and baking soda. Nickel is quantitatively adsorbed on a small column packed with 0.25 ml of the adsorbent, in a pH range of 5.5-6.5, and simply eluted with 5 ml of a 1 mol l(-1) hydrochloric acid solution. A factorial design was used for optimization of the effects of five different variables on the recovery of nickel. The results indicated that the factors of flow rate and column length, and the interactions between pH and sample volume, are significant. Under the optimized conditions, the column could tolerate salt concentrations up to 0.5 mol l(-1) and sample volumes beyond 500 ml. Matrix ions of Mg(2+) and Ca(2+), at a concentration of 200 mg l(-1), and potentially interfering ions of Cd(2+), Cu(2+), Zn(2+) and Mn(2+), at a concentration of 10 mg l(-1), did not have a significant effect on the analyte's signal. Preconcentration factors up to 100 and a detection limit of 0.49 μg l(-1), corresponding to an enrichment volume of 500 ml, were obtained for the determination of the analyte by flame AAS. Application of the method to the determination of natural and spiked nickel in table salt and baking soda solutions resulted in quantitative recoveries. Direct ETAAS determination of nickel in the same samples was not possible because of the high background observed.

  1. Optimality models in the age of experimental evolution and genomics

    PubMed Central

    Bull, J. J.; Wang, I.-N.

    2010-01-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure – whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation – an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution. PMID:20646132

  2. Design optimization method for Francis turbine

    NASA Astrophysics Data System (ADS)

    Kawajiri, H.; Enomoto, Y.; Kurosawa, S.

    2014-03-01

    This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out with a NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher specific speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile; the second step was multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for the development of hydro turbines.
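Particle swarm optimization, the search engine of the system above, can be sketched as follows. The quadratic objective stands in for the CFD evaluation, and the swarm parameters are illustrative defaults, not those of the paper:

```python
import random

random.seed(7)

def objective(x):
    """Stand-in for a CFD-evaluated turbine performance metric (minimized)."""
    return sum((v - 1.0) ** 2 for v in x)

def pso(dim=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=objective)[:]         # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pos[i]) < objective(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
print(objective(best))
```

In a real blade-shape optimization, each particle's position would be the vector of NURBS control-point coordinates and the objective a full flow simulation.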

  3. Sequential experimental design based generalised ANOVA

    SciTech Connect

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.
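A generic sequential experimental design loop can be sketched as greedy max-min (space-filling) point selection: each new training point is the candidate farthest from the points already chosen. This illustrates only the sequential idea and omits the distribution-adaptive criterion that distinguishes DA-SED:

```python
def sequential_design(candidates, n_points):
    """Greedy max-min sequential design over a 1-D candidate pool.

    Starts from the first candidate and repeatedly adds the candidate
    whose minimum distance to the chosen set is largest.
    """
    chosen = [candidates[0]]
    while len(chosen) < n_points:
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(abs(c - p) for p in chosen),
        )
        chosen.append(best)
    return chosen

grid = [i / 20 for i in range(21)]  # candidate pool on [0, 1]
print(sequential_design(grid, 5))
```

Each iteration would normally be followed by refitting the surrogate on the enlarged training set; a distribution-adaptive scheme would additionally weight the distance criterion by the input probability density.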

  5. Optimization of a microwave-pseudo-digestion procedure by experimental designs for the determination of trace elements in seafood products by atomic absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Bermejo-Barrera, P.; Moreda-Piñeiro, A.; Muñiz-Naveiro, O.; Gómez-Fernández, A. M. J.; Bermejo-Barrera, A.

    2000-08-01

    A Plackett-Burman 2^7×3/32 design for seven factors (nitric acid concentration, hydrochloric acid concentration, hydrogen peroxide concentration, acid solution volume, particle size, microwave power, and exposure time to microwave energy) was carried out in order to find the significant variables affecting the acid leaching of metals from mussel after a pseudo-digestion procedure by microwave energy. Nitric acid concentration, hydrochloric acid or hydrogen peroxide concentration, and exposure time to microwave energy were the most significant variables, and a 2^3+star central composite design was used for their optimization. Nitric and hydrochloric acid concentrations between 4.1 and 5.3 M, and between 2.8 and 3.8 M, respectively, were found to be optimum for many elements (Ca, Cd, Cr, Cu, Fe, Mg, Mn, Pb and Zn), with acid leaching times in the 1.2-2.2 min range. However, As was quantitatively leached with hydrochloric acid concentrations between 4.8 and 5.3 M and an exposure time of 2.0 min, while Co and Se were extracted using nitric acid (1.0 and 5.0 M, respectively) and hydrogen peroxide (5.0 M) solution and an exposure time of 2.0 min. Finally, Hg was extracted using a hydrochloric acid/hydrogen peroxide solution at 3.5:2.0 M, also with an optimum microwave radiation time of 1.75 min. Trace metals were determined using flame atomic absorption spectrometry, electrothermal atomic absorption spectrometry and cold vapor atomic absorption spectrometry. The methods were finally applied to several reference materials (DORM-1, DOLT-1 and TORT-1), achieving good accuracy.
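The 2^7×3/32 notation corresponds to 12 runs (128 · 3/32 = 12). A 12-run Plackett-Burman screening array can be built cyclically from the standard generating row; this is the textbook construction and may differ in row order from the one the authors used:

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors.

    Standard cyclic construction: the generating row is shifted
    cyclically 11 times, then a row of all -1 is appended.
    """
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [[gen[(j - i) % 11] for j in range(11)] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()
print(len(design), "runs for up to", len(design[0]), "factors")
```

Every column is balanced and any two columns are orthogonal, which is what lets 12 runs screen the main effects of seven (or up to eleven) factors.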

  6. Synthesis and characterization of magnetic metal-organic framework (MOF) as a novel sorbent, and its optimization by experimental design methodology for determination of palladium in environmental samples.

    PubMed

    Bagheri, Akbar; Taghizadeh, Mohsen; Behbahani, Mohammad; Asgharinezhad, Ali Akbar; Salarian, Mani; Dehghani, Ali; Ebrahimzadeh, Homeira; Amini, Mostafa M

    2012-09-15

    This paper describes the synthesis and application of a novel magnetic metal-organic framework (MOF) [(Fe(3)O(4)-Pyridine)/Cu(3)(BTC)(2)] for preconcentration of Pd(II) and its determination by flame atomic absorption spectrometry (FAAS). A Box-Behnken design was used to find the optimum conditions for the preconcentration procedure through response surface methodology. Three variables, including the amount of magnetic MOF, extraction time, and extraction pH, were selected as factors for the adsorption step; for the desorption step, four parameters, including the type, volume, and concentration of eluent, and the desorption time, were selected in the optimization study. The optimum values were 30 mg, 6 min, 6.9, K(2)SO(4)+NaOH, 6 mL, 9.5 (w/v %)+0.01 mol L(-1), and 15.5 min for the amount of MOF, extraction time, extraction pH, type, volume, and concentration of the eluent, and desorption time, respectively. The preconcentration factor (PF), relative standard deviation (RSD), limit of detection (LOD), and adsorption capacity of the method were found to be 208, 2.1%, 0.37 ng mL(-1), and 105.1 mg g(-1), respectively. The magnetic MOF was found to have a higher capacity than Fe(3)O(4)-Py. Finally, the magnetic MOF was successfully applied for rapid extraction of trace amounts of Pd(II) ions in fish, sediment, soil, and water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
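A Box-Behnken design like the one used for the adsorption step can be enumerated directly: it places points at the midpoints of the edges of the factor cube, never at the corners. Coded units; the centre-point count is an assumption, not taken from the paper:

```python
from itertools import combinations

def box_behnken(k, n_center=3):
    """Box-Behnken design in coded units: for every pair of factors, a
    2^2 factorial at +/-1 with all other factors at 0, plus centre points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                pt = [0] * k
                pt[i], pt[j] = a, b
                runs.append(pt)
    runs += [[0] * k for _ in range(n_center)]
    return runs

design = box_behnken(3)
print(len(design))  # 12 edge-midpoint runs + 3 centre replicates = 15
```

Avoiding corner points keeps all runs away from simultaneous extremes, which is often desirable when extreme factor combinations are physically risky or wasteful.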

  7. Optimal design of biaxial tensile cruciform specimens

    NASA Astrophysics Data System (ADS)

    Demmerle, S.; Boehler, J. P.

    1993-01-01

    F OR EXPERIMENTAL investigations concerning the mechanical behaviour under biaxial stress states of rolled sheet metals, mostly cruciform flat specimens are used. By means of empirical methods, different specimen geometries have been proposed in the literature. In order to evaluate the suitability of a specimen design, a mathematically well defined criterion is developed, based on the standard deviations of the values of the stresses in the test section. Applied to the finite element method, the criterion is employed to realize the shape optimization of biaxial cruciform specimens for isotropic elastic materials. Furthermore, the performance of the obtained optimized specimen design is investigated in the case of off-axes tests on anisotropic materials. Therefore, for the first time, an original testing device, consisting of hinged fixtures with knife edges at each arm of the specimen, is applied to the biaxial test. The obtained results indicate the decisive superiority of the optimized specimens for the proper performance on isotropic materials, as well as the paramount importance of the proposed off-axes testing technique for biaxial tests on anisotropic materials.

  8. Sound design of chimney pipes by optimization of their resonators.

    PubMed

    Rucz, Péter; Trommer, Thomas; Angster, Judit; Miklós, András; Augusztinovicz, Fülöp

    2013-01-01

    An optimization method, based on an acoustic waveguide model of chimney and resonator, was developed and tested by laboratory measurements of experimental chimney pipes. The dimensions of the chimney pipes are modified by the optimization algorithm until the specified fundamental frequency is achieved, and a predetermined harmonic partial overlaps with an eigenfrequency of the pipe. The experimental pipes were dimensioned by the optimization method for four different scenarios and were built by an organ builder. The measurements show excellent agreement between the measured sound spectra and calculated input admittances. The developed optimization method can be used for sound design of chimney pipes.

  9. Application of experimental design and derivative spectrophotometry methods in optimization and analysis of biosorption of binary mixtures of basic dyes from aqueous solutions.

    PubMed

    Asfaram, Arash; Ghaedi, Mehrorang; Ghezelbash, Gholam Reza; Pepe, Francesco

    2017-05-01

    Simultaneous biosorption of malachite green (MG) and crystal violet (CV) on the biosorbent Yarrowia lipolytica ISF7 was studied. An appropriate derivative spectrophotometry technique was used to evaluate the concentration of each dye in binary solutions, despite significant interferences in visible light absorbances. The effects of pH, temperature, growth time, and initial MG and CV concentrations in batch experiments were assessed using Design of Experiments (DOE) according to central composite second-order response surface methodology (RSM). The analysis showed that the greatest biosorption efficiency (>99% for both dyes) can be obtained at pH 7.0, T=28°C, 24 h mixing and 20 mg L(-1) initial concentration for both MG and CV dyes. The ability of the constructed quadratic equation to fit the experimental data was judged on criteria such as R(2) values, significance (p) and lack-of-fit values, which strongly confirm its adequacy and applicability for predicting the real behavior of the system under study. The proposed model showed very high correlation coefficients (R(2)=0.9997 for CV and R(2)=0.9989 for MG), supported by the closeness of predicted and experimental values. A kinetic analysis was carried out, showing that for both dyes a pseudo-second-order kinetic model adequately describes the available data. The Langmuir isotherm model better describes the dye biosorption in single and binary components, with maximum monolayer biosorption capacities of 59.4 and 62.7 mg g(-1) in single components and 46.4 and 50.0 mg g(-1) in binary components for CV and MG, respectively. The surface structure of the biosorbent and possible biosorbent-dye interactions were also evaluated by Fourier transform infrared (FT-IR) spectroscopy and scanning electron microscopy (SEM). The values of the thermodynamic parameters ΔG° and ΔH° strongly confirm that the process is spontaneous and endothermic. Copyright © 2017. Published by Elsevier Inc.
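A Langmuir fit like the one reported above can be reproduced from equilibrium data via the common linearization C/q = C/q_max + 1/(K·q_max). The data below are synthetic, generated from assumed parameters, not the paper's measurements:

```python
def fit_langmuir(conc, q):
    """Least-squares fit of the linearized Langmuir isotherm.

    Regressing C/q on C gives slope 1/q_max and intercept 1/(K*q_max).
    """
    x = conc
    y = [c / qi for c, qi in zip(conc, q)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    q_max = 1.0 / slope
    K = slope / intercept
    return q_max, K

# Synthetic equilibrium data generated from q_max = 60 mg/g, K = 0.2 L/mg
conc = [5, 10, 20, 50, 100, 200]
q = [60 * 0.2 * c / (1 + 0.2 * c) for c in conc]
print(fit_langmuir(conc, q))  # recovers (60.0, 0.2) up to rounding
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with real data the R(2) of this regression is one way to judge the isotherm's adequacy.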

  10. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into the framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
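The Bayesian experimental design idea in this abstract, scoring a candidate design by the expected information its measurements provide about the damage state, can be illustrated with a toy Monte Carlo estimate. The binary damage prior, Gaussian measurement model, and the two candidate "designs" (SNR values) below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_info_gain(snr, n_samples=20000):
    """Monte Carlo estimate of the mutual information (bits) between a
    binary damage state and a noisy scalar measurement whose class
    separation grows with the design-dependent SNR."""
    damaged = rng.random(n_samples) < 0.5          # prior P(damaged) = 0.5
    y = snr * damaged + rng.normal(0.0, 1.0, n_samples)
    p1 = np.exp(-0.5 * (y - snr) ** 2)             # likelihood if damaged
    p0 = np.exp(-0.5 * y ** 2)                     # likelihood if healthy
    post1 = p1 / (p0 + p1)                         # posterior P(damaged | y)
    post = np.where(damaged, post1, 1.0 - post1)   # posterior of true state
    # info gain = prior entropy (1 bit) + E[log2 P(true state | y)]
    return 1.0 + np.mean(np.log2(np.clip(post, 1e-12, 1.0)))

designs = {"sparse array": 0.5, "dense array": 2.0}
best = max(designs, key=lambda d: expected_info_gain(designs[d]))
```

The performance function here is the expected information gain; the "optimization algorithm" is just an argmax over two candidates, standing in for the search step the abstract describes.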

  11. Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort

    PubMed Central

    Jeschek, Markus; Gerngross, Daniel; Panke, Sven

    2016-01-01

    Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways. PMID:27029461
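RedLibs itself scores degenerate sequences; as a minimal sketch of the underlying idea, picking a small sub-library whose predicted expression strengths cover the range as uniformly as possible, the brute-force selection below uses invented strength values and a simple evenly-spaced-ladder cost, not the algorithm's actual scoring.

```python
from itertools import combinations

# Hypothetical predicted expression strengths (arbitrary units) for ten
# candidate RBS variants; illustrative values, not RedLibs output.
candidates = [0.02, 0.05, 0.11, 0.19, 0.33, 0.48, 0.61, 0.74, 0.88, 0.97]
k = 4                                 # screening budget: variants we can afford
lo, hi = min(candidates), max(candidates)

def uniformity_cost(subset):
    """Squared deviation from an ideal evenly spaced ladder spanning the
    full candidate range (smaller = more uniform coverage)."""
    n = len(subset)
    ideal = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return sum((s - t) ** 2 for s, t in zip(sorted(subset), ideal))

# Exhaustive search over all C(10, 4) = 210 sub-libraries
best_library = min(combinations(candidates, k), key=uniformity_cost)
```

For realistic library sizes the search space explodes, which is why a dedicated reduction algorithm rather than brute force is needed.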

  13. Program Aids Analysis And Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1994-01-01

    NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.

  15. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in the unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lower computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed.
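A toy version of the nested formulation discussed above, with an inner reliability analysis inside an outer one-dimensional design search; all numbers are invented.

```python
import numpy as np
from scipy.stats import norm

# Toy nested RBDO: choose a rod cross-section area A (cm^2) minimizing
# weight (proportional to A), subject to P(stress > yield) <= 1e-3,
# with a normally distributed load. Illustrative numbers throughout.
load_mean, load_std = 10.0, 1.5       # kN
yield_stress = 2.5                    # kN/cm^2

def failure_prob(area):
    """Inner reliability analysis: P(load > capacity) for a given area."""
    return norm.sf(yield_stress * area, loc=load_mean, scale=load_std)

# Outer design loop: lightest feasible design on a 1-D grid
areas = np.linspace(3.0, 8.0, 501)
feasible = areas[failure_prob(areas) <= 1e-3]
optimal_area = float(feasible.min())
```

In real problems the inner analysis is an expensive FORM/SORM or sampling computation, which is exactly why the unilevel and decoupled reformulations in the dissertation pay off.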

  16. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
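The Fisher-information-based criteria compared in this paper can be illustrated on the Verhulst-Pearl logistic model. The sketch below builds the FIM for two hypothetical sampling designs via finite-difference sensitivities and compares their D-criterion values; the parameter values and designs are invented.

```python
import numpy as np

# Verhulst-Pearl logistic solution x(t; K, r) with fixed initial condition
x0 = 1.0
def x(t, K, r):
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def fisher_information(times, K=17.5, r=0.7, sigma=1.0):
    """FIM for i.i.d. Gaussian observation errors: sum over sampling times
    of outer products of parameter sensitivities, computed here by
    central finite differences."""
    h = 1e-5
    F = np.zeros((2, 2))
    for t in times:
        g = np.array([(x(t, K + h, r) - x(t, K - h, r)) / (2 * h),
                      (x(t, K, r + h) - x(t, K, r - h)) / (2 * h)])
        F += np.outer(g, g) / sigma**2
    return F

uniform = np.linspace(0.5, 10.0, 8)                       # naive design
clustered = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 10.0])
d_uniform = np.linalg.det(fisher_information(uniform))    # D criterion
d_clustered = np.linalg.det(fisher_information(clustered))
```

A D-optimal search would maximize this determinant over all admissible time distributions; E-optimality would instead maximize the smallest eigenvalue of the same matrix.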

  17. Vehicle systems design optimization study

    SciTech Connect

    Gilmour, J. L.

    1980-04-01

    The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current production internal combustion engine vehicles. It is possible to achieve this goal and also provide passenger and cargo space comparable to a selected current production sub-compact car, either in a unique new design or by utilizing the production vehicle as a base. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criterion requires the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front-wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.

  18. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  19. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  20. WE-AB-BRB-01: Development of a Probe-Format Graphite Calorimeter for Practical Clinical Dosimetry: Numerical Design Optimization, Prototyping, and Experimental Proof-Of-Concept

    SciTech Connect

    Renaud, J; Seuntjens, J; Sarfehnia, A

    2015-06-15

    Purpose: In this work, the feasibility of performing absolute dose to water measurements using a constant temperature graphite probe calorimeter (GPC) in a clinical environment is established. Methods: A numerical design optimization study was conducted by simulating the heat transfer in the GPC resulting from irradiation using a finite element method software package. The choice of device shape, dimensions, and materials was made to minimize the heat loss in the sensitive volume of the GPC. The resulting design, which incorporates a novel aerogel-based thermal insulator and 15 temperature-sensitive resistors capable of both Joule heating and measuring temperature, was constructed in house. A software-based process controller was developed to stabilize the temperatures of the GPC's constituent graphite components to within a few tens of µK. This control system enables the GPC to operate in either the quasi-adiabatic or isothermal mode, two well-known and independent calorimetry techniques. Absorbed dose to water measurements were made using these two methods under standard conditions in a 6 MV 1000 MU/min photon beam and subsequently compared against TG-51 derived values. Results: Compared to an expected dose to water of 76.9 cGy/100 MU, the average GPC-measured doses were 76.5 ± 0.5 and 76.9 ± 0.5 cGy/100 MU for the adiabatic and isothermal modes, respectively. The Monte Carlo calculated graphite-to-water dose conversion factor was 1.013, and the adiabatic heat loss correction was 1.003. With an overall uncertainty of about 1%, the most significant contributions were the specific heat capacity (type B, 0.8%) and the repeatability (type A, 0.6%). Conclusion: While the quasi-adiabatic mode of operation had been validated in previous work, this is the first time that the GPC has been successfully used isothermally. This proof-of-concept will serve as the basis for further study into the GPC's application to small fields and MRI-linac dosimetry. This work has been
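The calorimetric dose equation underlying both modes, dose = specific heat × radiation-induced temperature rise, combined with the correction factors quoted above, can be sketched numerically. The specific heat and temperature rise below are illustrative values chosen to land near the quoted dose, not measured data.

```python
# Graphite calorimetry: absorbed dose equals specific heat capacity times
# the radiation-induced temperature rise, with correction factors applied.
# c_graphite and delta_T are illustrative; the conversion (1.013) and
# heat-loss (1.003) factors are the values quoted in the abstract.
c_graphite = 707.0            # J/(kg*K), typical for graphite near room temp
delta_T = 1.07e-3             # K (~1 mK rise per 100 MU), illustrative
k_graphite_to_water = 1.013   # Monte Carlo dose conversion (from abstract)
k_heat_loss = 1.003           # adiabatic heat-loss correction (from abstract)

dose_graphite = c_graphite * delta_T                          # Gy (J/kg)
dose_water = dose_graphite * k_graphite_to_water * k_heat_loss
```

The millikelvin-scale temperature rise is why the tens-of-µK temperature control described in the abstract is essential.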

  1. Optimal control concepts in design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.

    1987-01-01

    A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.

  2. Genetic algorithms for the construction of D-optimal designs

    SciTech Connect

    Heredia-Langner, Alejandro; Carlyle, W M.; Montgomery, D C.; Borror, Connie M.; Runger, George C.

    2003-01-01

    Computer-generated designs are useful for situations where standard factorial, fractional factorial or response surface designs cannot be easily employed. Alphabetically-optimal designs are the most widely used type of computer-generated designs, and of these, the D-optimal (or D-efficient) class of designs is extremely popular. D-optimal designs are usually constructed by algorithms that sequentially add and delete points from a potential design using a candidate set of points spread over the region of interest. We present a technique to generate D-efficient designs using genetic algorithms (GA). This approach eliminates the need to explicitly consider a candidate set of experimental points, and it can handle highly constrained regions while maintaining a level of performance comparable to more traditional design construction techniques.
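A minimal GA of the kind the abstract describes, where design points evolve directly in the continuous region so no candidate set is needed, might look like the sketch below, here for a first-order-plus-interaction model in two factors. The population sizes, operators, and model are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_matrix(pts):
    """First-order-plus-interaction model in two factors: 1, x1, x2, x1*x2."""
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2])

def d_criterion(pts):
    """D criterion: determinant of the information matrix X'X."""
    X = model_matrix(pts)
    return np.linalg.det(X.T @ X)

# Tiny GA over 6-run designs in [-1, 1]^2; each chromosome is a design.
pop = [rng.uniform(-1, 1, (6, 2)) for _ in range(40)]
for _ in range(200):
    pop.sort(key=d_criterion, reverse=True)
    parents = pop[:10]                           # elitist selection
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random((6, 1)) < 0.5          # uniform crossover on runs
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.05, (6, 2))     # Gaussian mutation
        children.append(np.clip(child, -1, 1))
    pop = parents + children

best = max(pop, key=d_criterion)
```

For this model the GA drives the runs toward the corners of the region, which is where the classical D-optimal support points lie.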

  3. Topology Optimization for Architected Materials Design

    NASA Astrophysics Data System (ADS)

    Osanov, Mikhail; Guest, James K.

    2016-07-01

    Advanced manufacturing processes provide a tremendous opportunity to fabricate materials with precisely defined architectures. To fully leverage these capabilities, however, materials architectures must be optimally designed according to the target application, base material used, and specifics of the fabrication process. Computational topology optimization offers a systematic, mathematically driven framework for navigating this new design challenge. The design problem is posed and solved formally as an optimization problem with unit cell and upscaling mechanics embedded within this formulation. This article briefly reviews the key requirements to apply topology optimization to materials architecture design and discusses several fundamental findings related to optimization of elastic, thermal, and fluidic properties in periodic materials. Emerging areas related to topology optimization for manufacturability and manufacturing variations, nonlinear mechanics, and multiscale design are also discussed.

  4. Design optimization of a portable, micro-hydrokinetic turbine

    NASA Astrophysics Data System (ADS)

    Schleicher, W. Chris

    Marine and hydrokinetic (MHK) technology is a growing field that encompasses many different types of turbomachinery that operate on the kinetic energy of water. Micro hydrokinetics are a subset of MHK technology comprised of units designed to produce less than 100 kW of power. A propeller-type hydrokinetic turbine is investigated as a solution for a portable micro-hydrokinetic turbine with the needs of the United States Marine Corps in mind, as well as future commercial applications. This dissertation investigates using a response surface optimization methodology to create optimal turbine blade designs under many operating conditions. The field of hydrokinetics is introduced. The finite volume method is used to solve the Reynolds-Averaged Navier-Stokes equations with the k-ω Shear Stress Transport model for different propeller-type hydrokinetic turbines. The adaptive response surface optimization methodology is introduced as related to hydrokinetic turbines, and is benchmarked with complex algebraic functions. The optimization method is further studied to characterize the effect of the size of the experimental design on its ability to find optimum conditions. It was found that a large deviation between experimental design points was preferable. Different propeller hydrokinetic turbines were designed and compared with other forms of turbomachinery. It was found that the rapid simulations usually underpredict performance compared to the refined simulations, while for some other designs they drastically overpredict it. The optimization method was used to optimize a modular pump-turbine, verifying that the optimization works for other hydro turbine designs.
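One non-adaptive pass of response-surface optimization, the core of the methodology described above, can be sketched as: sample the design space, fit a full quadratic surface by least squares, and solve for its stationary point. The response function and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def efficiency(x):
    """Hypothetical turbine-efficiency response over two blade parameters,
    standing in for an expensive CFD evaluation."""
    return 0.9 - 2.0 * (x[..., 0] - 0.3) ** 2 - 1.5 * (x[..., 1] + 0.2) ** 2

# Sample an experimental design and evaluate the (noisy) response
X = rng.uniform(-1, 1, (30, 2))
y = efficiency(X) + rng.normal(0, 1e-3, 30)

# Fit a full quadratic surface: b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt = np.linalg.solve(H, -np.array([b1, b2]))
```

The adaptive variant studied in the dissertation would now resample around `x_opt` and refit, shrinking the design region each pass.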

  5. Designed Proteins as Optimized Oxygen Carriers for Artificial Blood

    DTIC Science & Technology

    2014-02-01

    Designed Proteins as Optimized Oxygen Carriers for Artificial Blood. Principal Investigator: Ronald L; Award Number: W81XWH-11-2-0083. The report concerns a process in which the bis-histidine-ligated ferrous heme iron donates an electron, forming superoxide, and experimental testing of this hypothesis.

  6. Determination of the optimal amount of water in liquid-fill masses for hard gelatin capsules by means of texture analysis and experimental design.

    PubMed

    Kuentz, Martin; Röthlisberger, Dieter

    2002-04-02

    The aim of this study is to use texture analysis as a non-destructive test for hard gelatin capsules filled with liquid formulations to investigate mechanical changes upon storage. A suitable amount of water in the formulations is determined to obtain the best possible compatibility with the gelatin shell. This quantity of water to be added to a formulation is called the balanced amount of water (BAW). Texture profiling was conducted on capsules filled with hydrophilic polymer mixtures and with formulations based on amphiphilic masses with high HLB value. The first model mixture consisted of polyethylene glycol 400 and polyvinylpyrrolidone K17 with water, and the second type consisted of caprylocaproyl macrogol glycerides (Labrasol) with colloidal silica (Aerosil 200) and water. The liquid-fill capsules were investigated by measuring changes in mass and stiffness after storage under confined conditions in aluminium foils. Capsule stiffness was also investigated as a parameter in a response surface analysis to identify the BAW. Polyvinylpyrrolidone did not show a great influence on the BAW, in the range of 10-12% (w/w) for the first model mixture. Capsules with the less hydrophilic Labrasol formulations, however, kept their initial stiffness after storage best with only half of that amount, i.e. 5-6% (w/w) of water in the compositions. From this study it can be concluded that texture profiling in the framework of an experimental design helps to find hydrophilic or amphiphilic formulations that are compatible with gelatin capsules. Short-term stability tests are meaningful if capsule embrittlement or softening is due to water equilibration or another migration process that takes place rapidly. Long-term stability tests will always be needed for a final statement of compatibility between a formulation and hard gelatin capsules.

  7. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice the G-optimal design cannot really be found with currently available computing equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region; in doing so, the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
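The empirical G criterion discussed above can be sketched for a one-dimensional toy problem: compute the simple-kriging variance over a dense grid for two candidate designs and compare the maxima. The squared-exponential kernel, its length scale, and both designs are invented.

```python
import numpy as np

def kernel(a, b, ell=0.3):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def max_kriging_variance(design, grid):
    """Largest simple-kriging variance over the grid (the G criterion)."""
    K = kernel(design, design) + 1e-10 * np.eye(len(design))  # jitter
    k = kernel(design, grid)
    var = 1.0 - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))
    return var.max()

grid = np.linspace(0.0, 1.0, 201)
space_filling = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
clustered = np.array([0.40, 0.45, 0.50, 0.55, 0.60])
g_sf = max_kriging_variance(space_filling, grid)
g_cl = max_kriging_variance(clustered, grid)
```

The clustered design leaves the edges of the region poorly predicted, so its maximum kriging variance is far larger; the abstract's point is that evaluating this maximum becomes the bottleneck in higher dimensions.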

  8. R/qtlDesign: inbred line cross experimental design

    PubMed Central

    Sen, Śaunak; Satagopan, Jaya M.; Broman, Karl W.; Churchill, Gary A.

    2008-01-01

    An investigator planning a QTL (quantitative trait locus) experiment has to choose which strains to cross, the type of cross, genotyping strategies, and the number of progeny to raise and phenotype. To help make such choices, we have developed an interactive program for power and sample size calculations for QTL experiments, R/qtlDesign. Our software includes support for selective genotyping strategies, variable marker spacing, and tools to optimize information content subject to cost constraints for backcross, intercross, and recombinant inbred lines from two parental strains. We review the impact of experimental design choices on the variance attributable to a segregating locus, the residual error variance, and the effective sample size. We give examples of software usage in real-life settings. The software is available at http://www.biostat.ucsf.edu/sen/software.html. PMID:17347894
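A back-of-the-envelope version of the power calculation such software automates, using a normal approximation for a backcross rather than R/qtlDesign's actual machinery; the effect size, residual s.d., and significance level below are invented.

```python
from math import sqrt
from statistics import NormalDist

# Power of a single-marker test in a backcross, normal approximation:
# the QTL genotype splits progeny 1:1, so the signal is an allelic effect
# a (mean difference between genotype groups) against residual s.d. sigma.
def qtl_power(n, a=0.5, sigma=1.0, alpha=1e-3):
    ncp = (a / sigma) * sqrt(n) / 2.0          # noncentrality, groups of n/2
    z = NormalDist().inv_cdf(1 - alpha / 2)    # two-sided critical value
    return 1 - NormalDist().cdf(z - ncp) + NormalDist().cdf(-z - ncp)

# Smallest n giving >= 80% power at a genome-wide-ish significance level
n_needed = next(n for n in range(10, 2001) if qtl_power(n) >= 0.8)
```

Real QTL designs also account for marker spacing, selective genotyping, and cost, which is exactly the bookkeeping the abstract's software handles.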

  9. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The optimization formulation is described in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  10. Optimal Designs of Staggered Dean Vortex Micromixers

    PubMed Central

    Chen, Jyh Jian; Chen, Chun Huei; Shie, Shian Ruei

    2011-01-01

    A novel parallel laminar micromixer with a two-dimensional staggered Dean vortex micromixer is optimized and fabricated in our study. Dean vortices induced by centrifugal forces in curved rectangular channels cause fluids to produce secondary flows. The split-and-recombination (SAR) structures of the flow channels and the impinging effects result in the reduction of the diffusion distance of two fluids. Three different designs of a curved channel micromixer are introduced to evaluate the mixing performance of the designed micromixer. Mixing performances are demonstrated by means of a pH indicator using an optical microscope and fluorescent particles via a confocal microscope at different flow rates corresponding to Reynolds numbers (Re) ranging from 0.5 to 50. The comparison between the experimental data and numerical results shows very reasonable agreement. At a Re of 50, the mixing length at the sixth segment, corresponding to a downstream distance of 21.0 mm, can be achieved in a distance 4 times shorter than when Re equals 1. An optimization of this micromixer is performed with two geometric parameters: the angle between the lines from the center to two intersections of two consecutive curved channels, θ, and the angle between two lines of the centers of three consecutive curved channels, ϕ. The maximal mixing index is found to be related to the maximal value of the sum of θ and ϕ, which is equal to 139.82°. PMID:21747691

  11. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  12. An Alternative Optimization Model and Robust Experimental Design for the Assignment Scheduling Capability for Unmanned Aerial Vehicles (ASC-U) Simulation

    DTIC Science & Technology

    2007-06-01

    Front-matter fragments only: sensor technologies listed include Laser Designator (LD), Laser Rangefinder (LR), Light Detection and Ranging (LIDAR), Meteorological Sensors, and Signals sensors; table-of-contents entries include Launch and Recovery Site (LRS), Ground Control Station (GCS), Constraints, and Objective Function.

  13. Web-based tools for finding optimal designs in biomedical studies

    PubMed Central

    Wong, Weng Kee

    2013-01-01

    Experimental costs are rising and applications of optimal design ideas are increasingly applied in many disciplines. However, the theory for constructing optimal designs can be esoteric and its implementation can be difficult. To help practitioners have easier access to optimal designs and better appreciate design issues, we present a web site at http://optimal-design.biostat.ucla.edu/optimal/ capable of generating different types of tailor-made optimal designs for popular models in the biological sciences. This site also evaluates various efficiencies of a user-specified design and so enables practitioners to appreciate robustness properties of the design before implementation. PMID:23806678

  14. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test, and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models, optimizing their designs for minimum weight subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.

  15. Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique

    NASA Astrophysics Data System (ADS)

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2014-01-01

    The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.

  16. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use.
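The spiral dynamics the abstract refers to can be sketched in two dimensions: every search point is rotated about the current best and contracted toward it, and the best is updated whenever a point improves on it. The objective, rotation angle, and contraction rate below are invented, and a simple sphere function stands in for the filter-design objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(p):
    """Simple convex test objective standing in for the filter cost."""
    return float(p @ p)

# 2-D spiral optimization: rotate points about the current best by theta
# and contract them toward it by factor r each iteration.
theta, r = np.pi / 4, 0.95
R = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

points = rng.uniform(-5, 5, (20, 2))
best = min(points, key=sphere)
for _ in range(150):
    points = (R @ (points - best).T).T + best    # spiral step about best
    candidate = min(points, key=sphere)
    if sphere(candidate) < sphere(best):
        best = candidate
```

Because the spiral center follows the best point found so far, the swarm sweeps the region before collapsing, which is the source of the local-optima immunity the abstract claims.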

  17. Design Optimization of Marine Reduction Gears.

    DTIC Science & Technology

    1983-09-01

    American Gear Manufacturers Association (AGMA). For instance, the AGMA recommends tooth proportions for fine-pitch involute spur and helical gears. Keywords: Computer Aided Design; Design Optimization; Automated Design Synthesis; Marine Gear Design; Marine Gears; Helical Gears; Computer Aided. Table-of-contents entries include Materials, Gear Size, and Tooth Size.

  18. Optimization, an Important Stage of Engineering Design

    ERIC Educational Resources Information Center

    Kelley, Todd R.

    2010-01-01

    A number of leaders in technology education have indicated that a major difference between the technological design process and the engineering design process is analysis and optimization. The analysis stage of the engineering design process is when mathematical models and scientific principles are employed to help the designer predict design…

  20. Computational design optimization for microfluidic magnetophoresis

    PubMed Central

    Plouffe, Brian D.; Lewis, Laura H.; Murthy, Shashi K.

    2011-01-01

    Current macro- and microfluidic approaches for the isolation of mammalian cells are limited in both efficiency and purity. In order to design a robust platform for the enumeration of a target cell population, high collection efficiencies are required. Additionally, the ability to isolate pure populations with minimal biological perturbation and efficient off-chip recovery will enable subcellular analyses of these cells for applications in personalized medicine. Here, a rational design approach for a simple and efficient device that isolates target cell populations via magnetic tagging is presented. In this work, two magnetophoretic microfluidic device designs are described, with optimized dimensions and operating conditions determined from a force balance equation that considers two dominant and opposing driving forces exerted on a magnetic-particle-tagged cell, namely, magnetic and viscous drag. Quantitative design criteria for an electromagnetic field displacement-based approach are presented, wherein target cells labeled with commercial magnetic microparticles flowing in a central sample stream are shifted laterally into a collection stream. Furthermore, the final device design is constrained to fit on a standard rectangular glass coverslip (60 (L) × 24 (W) × 0.15 (H) mm³) to accommodate small sample volume and point-of-care design considerations. The anticipated performance of the device is examined via a parametric analysis of several key variables within the model. It is observed that minimal currents (<500 mA) are required to generate magnetic fields sufficient to separate cells from sample streams flowing at rates as high as 7 ml/h, comparable to the performance of state-of-the-art magnet-activated cell sorting systems currently used in clinical settings. Experimental validation of the presented model illustrates that a device designed according to the derived rational optimization can effectively isolate (∼100%) a magnetic-particle-tagged cell
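    The force balance the abstract describes, magnetic driving force against Stokes viscous drag, can be illustrated with order-of-magnitude numbers. All parameter values below are assumptions for illustration only, not taken from the paper.

```python
import math

# Illustrative force balance on a magnetically tagged cell.
# Every numeric value here is an assumed, order-of-magnitude input.
mu0 = 4e-7 * math.pi      # vacuum permeability, T*m/A
eta = 1.0e-3              # viscosity of water, Pa*s
r_cell = 5.0e-6           # hydrodynamic radius of the tagged cell, m (assumed)
r_bead = 1.0e-6           # radius of the magnetic microparticle, m (assumed)
chi = 0.17                # effective volume susceptibility of the bead (assumed)
B = 0.05                  # field magnitude near the conductor, T (assumed)
gradB = 50.0              # field gradient, T/m (assumed)

V_bead = (4.0 / 3.0) * math.pi * r_bead ** 3
F_mag = V_bead * chi * B * gradB / mu0           # magnetic driving force, N
v_lat = F_mag / (6 * math.pi * eta * r_cell)     # Stokes terminal lateral velocity, m/s
```

    With these assumed inputs the magnetic force comes out in the piconewton range and the lateral velocity in the tens of micrometers per second, which is the scale on which the collection-stream length and the sample flow rate must be traded off.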

  1. Experimental Eavesdropping Based on Optimal Quantum Cloning

    NASA Astrophysics Data System (ADS)

    Bartkiewicz, Karol; Lemr, Karel; Černoch, Antonín; Soubusta, Jan; Miranowicz, Adam

    2013-04-01

    The security of quantum cryptography is guaranteed by the no-cloning theorem, which implies that an eavesdropper copying transmitted qubits in unknown states causes their disturbance. Nevertheless, in real cryptographic systems some level of disturbance has to be allowed to cover, e.g., transmission losses. An eavesdropper can attack such systems by replacing a noisy channel by a better one and by performing approximate cloning of transmitted qubits, which disturbs them, but only below the noise level assumed by legitimate users. We experimentally demonstrate such symmetric individual eavesdropping on the quantum key distribution protocols of Bennett and Brassard (BB84) and the trine-state spherical code of Renes (R04) with two-level probes prepared using a recently developed photonic multifunctional quantum cloner [Lemr et al., Phys. Rev. A 85, 050307(R) (2012)]. We demonstrated that our optimal cloning device with a high success rate makes the eavesdropping possible by hiding it in usual transmission losses. We believe that this experiment can stimulate the quest for other operational applications of quantum cloning.

  2. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Kim, Yongnam; Hall, Courtney E.; Su, Dan

    2017-01-01

    Randomized controlled trials (RCTs) and quasi-experimental designs like regression discontinuity (RD) designs, instrumental variable (IV) designs, and matching and propensity score (PS) designs are frequently used for inferring causal effects. It is well known that the features of these designs facilitate the identification of a causal estimand…

  3. Singularities in Optimal Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1992-01-01

    Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.

  5. Optimal design of structures with buckling constraints.

    NASA Technical Reports Server (NTRS)

    Kiusalaas, J.

    1973-01-01

    The paper presents an iterative, finite element method for minimum weight design of structures with respect to buckling constraints. The redesign equation is derived from the optimality criterion, as opposed to a numerical search procedure, and can handle problems that are characterized by the existence of two fundamental buckling modes at the optimal design. Application of the method is illustrated by beam and orthogonal frame design problems.

  6. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture that tightly couples optimization and analysis is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  7. Experimental broadband absorption enhancement in silicon nanohole structures with optimized complex unit cells.

    PubMed

    Lin, Chenxi; Martínez, Luis Javier; Povinelli, Michelle L

    2013-09-09

    We design silicon membranes with nanohole structures with optimized complex unit cells that maximize broadband absorption. We fabricate the optimized design and measure the optical absorption. We demonstrate experimental broadband absorption about 3.5 times higher than that of an equally thick thin film.

  8. D-optimal experimental design coupled with parallel factor analysis 2 decomposition a useful tool in the determination of triazines in oranges by programmed temperature vaporization-gas chromatography-mass spectrometry when using dispersive-solid phase extraction.

    PubMed

    Herrero, A; Ortiz, M C; Sarabia, L A

    2013-05-03

    The determination of triazines in oranges using a GC-MS system coupled to a programmed temperature vaporizer (PTV) inlet in the context of legislation is performed. Both the pretreatment (using a Quick Easy Cheap Effective Rugged and Safe (QuEChERS) procedure) and the injection step are optimized using D-optimal experimental designs to reduce the experimental effort. The relatively dirty extracts obtained and the elution-time shifts make it necessary to use a PARAFAC2 decomposition to solve these two usual problems in chromatographic determinations. The "second-order advantage" of the PARAFAC2 decomposition allows unequivocal identification according to document SANCO/12495/2011 (taking into account the tolerances for relative retention time and the relative abundance of the diagnostic ions), avoiding false negatives even in the presence of unknown co-eluents. The detection limits (CCα) found, from 0.51 to 1.05 μg kg(-1), are far below the maximum residue levels (MRLs) established by the European Union for simazine, atrazine, terbuthylazine, ametryn, simetryn, prometryn and terbutryn in oranges. No MRL violations were found in the commercial oranges analyzed.
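    For background, a D-optimal design maximizes the determinant of the information matrix X^T X over a set of candidate runs, which is what lets the authors cut the number of experiments. The toy example below (a generic first-order model in two coded factors, unrelated to the paper's actual factors) finds the D-optimal 4-run design by exhaustive search.

```python
from itertools import combinations

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def info_det(runs):
    """det(X^T X) for the model y = b0 + b1*x1 + b2*x2; rows are (1, x1, x2)."""
    xtx = [[sum(r[i] * r[j] for r in runs) for j in range(3)] for i in range(3)]
    return det3(xtx)

# candidate runs on a 3x3 grid of coded levels
candidates = [(1, x1, x2) for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)]
best = max(combinations(candidates, 4), key=info_det)
```

    For this model the search recovers the full 2² factorial (the four corner points), whose orthogonal columns give the largest possible det(X^T X); real D-optimal software uses exchange algorithms instead of brute force, but the criterion is the same.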

  9. Optimizing experimental conditions for stimulated emission depletion microscopy in biophotonics

    NASA Astrophysics Data System (ADS)

    Beeson, Karl; Potasek, Mary J.; Parilov, Evgueni

    2015-03-01

    Using a novel numerical method we show how to optimize the resolution enhancement of stimulated emission depletion (STED) by simulating the entire process including the absorption, overlapping multiple beams and stimulated emission. We provide calculations showing that for fixed donut pulse energy, a longer donut pulse length can result in greater resolution enhancement than a shorter donut pulse length. These results show how it is possible to use our simulations to design the best experimental conditions for STED resolution enhancement and illustrate the importance of having a software program that includes both multiple beams and stimulated emission.

  10. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.

  11. Optimization design of electromagnetic shielding composites

    NASA Astrophysics Data System (ADS)

    Qu, Zhaoming; Wang, Qingguo; Qin, Siliang; Hu, Xiaofeng

    2013-03-01

    A physical model of the effective electromagnetic parameters of composites, together with prediction formulas for their shielding effectiveness and reflectivity, was derived based on micromechanics, the variational principle and electromagnetic wave transmission theory. The multi-objective optimization design of multilayer composites was carried out using a genetic algorithm. The optimized results indicate that the material proportions giving the greatest absorption can be obtained while a minimum shielding effectiveness is still satisfied in a given frequency band. The validity of the optimization design model was verified, and the scheme has theoretical value and practical significance for the design of high-efficiency shielding composites.

  12. Topology optimization design of space rectangular mirror

    NASA Astrophysics Data System (ADS)

    Qu, Yanjun; Wang, Wei; Liu, Bei; Li, Xupeng

    2016-10-01

    A conceptual lightweight rectangular mirror is designed based on the theory of topology optimization, and its specific structural dimensions are determined through sensitivity analysis and size optimization in this paper. Under gravity loading along the optical axis, finite element analysis shows that the topology-optimized reflectors supported at six peripheral points outperform mirrors designed by the traditional method in lightweight ratio, structural stiffness and reflective surface accuracy. This suggests that the lightweighting method in this paper is effective and has potential value for the design of rectangular reflectors.

  13. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  14. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand-held calculators can be used to perform the same design iteration. (SK)

  15. Experimental design for functional MRI of scene memory encoding.

    PubMed

    Narayan, Veena M; Kimberg, Daniel Y; Tang, Kathy Z; Detre, John A

    2005-03-01

    The use of functional imaging to identify encoding-related areas in the medial temporal lobe has previously been explored for presurgical evaluation in patients with temporal lobe epilepsy. Optimizing sensitivity in such paradigms is critical for the reliable detection of regions most closely engaged in memory encoding. A variety of experimental designs have been used to detect encoding-related activity, including blocked, sparse event-related, and rapid event-related designs. Although blocked designs are generally more sensitive than event-related designs, design and analysis advantages could potentially overcome this difference. In the present study, we directly contrast different experimental designs in terms of the intensity, extent, and lateralization of activation detected in healthy subjects. Our results suggest that although improved design augments the sensitivity of event-related designs, these benefits are not sufficient to overcome the sensitivity advantages of traditional blocked designs.

  16. Computer optimization of photoreceiver designs

    SciTech Connect

    Wistey, M.

    1994-04-01

    We customized an existing simulator which uses the beam propagation method to analyze optoelectronic devices, in order to test the wavelength filtering characteristics of a number of photoreceiver designs. We tested the simulator against traditional analytical techniques and verified its accuracy for small step sizes. Finally, we applied a simulated annealing algorithm to the BPM simulator in order to produce photoreceiver designs with improved wavelength filtering characteristics.

  17. Application of fractional factorial experimental and Box-Behnken designs for optimization of single-drop microextraction of 2,4,6-trichloroanisole and 2,4,6-tribromoanisole from wine samples.

    PubMed

    Martendal, Edmar; Budziak, Dilma; Carasek, Eduardo

    2007-05-04

    In this paper a new method for the determination of 2,4,6-trichloroanisole (TCA) and 2,4,6-tribromoanisole (TBA) in wine samples is presented. Headspace single-drop microextraction (HS-SDME) was used for the extraction and preconcentration of the analytes, followed by analysis by gas chromatography and electron-capture detection (GC-ECD). The variables affecting extraction efficiency were optimized using fractional factorial experimental and Box-Behnken designs. The external calibration procedure was successfully carried out using a synthetic wine solution and diluted red wine samples. The method was also applied to white wine samples. Excellent detection limits of 8.1 and 6.1 ng L(-1) were achieved for TCA and TBA, respectively. Good precision and accuracy were obtained.
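    For reference, the Box-Behnken design used in this optimization places runs at the midpoints of the edges of the coded factor cube, plus center replicates, and its construction is entirely mechanical. The sketch below generates the design in coded units; the number of center points is an assumption, since conventions vary.

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Box-Behnken design for k factors in coded (-1, 0, +1) units (sketch)."""
    runs = []
    # for each pair of factors, run the 2x2 factorial with the rest at 0
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    # replicate the center point to estimate pure error
    runs += [[0] * k for _ in range(n_center)]
    return runs

design = box_behnken(3)  # 12 edge-midpoint runs + 3 center runs = 15 runs
```

    Fifteen runs suffice to fit a full quadratic model in three factors, which is why Box-Behnken designs pair naturally with a first-pass fractional factorial screen as in this method.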

  18. Interaction prediction optimization in multidisciplinary design optimization problems.

    PubMed

    Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei

    2014-01-01

    The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, CO converges poorly when the coupling between disciplines is high-dimensional. Furthermore, the discipline objectives cannot be considered in each discipline optimization problem. In this paper, one large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used to control subsystems and coordinate the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, there are a system level and a subsystem level in IPO. The interaction design variables (including shared design variables and linking design variables) are operated at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of design variables are transported between the system level and the subsystem level. The compatibility constraints are replaced with enhanced compatibility constraints to reduce the dimension of design variables in the compatibility constraints. Two examples are presented to show the potential application of IPO for MDO.

  19. Application of machine/statistical learning, artificial intelligence and statistical experimental design for the modeling and optimization of methylene blue and Cd(ii) removal from a binary aqueous solution by natural walnut carbon.

    PubMed

    Mazaheri, H; Ghaedi, M; Ahmadi Azqhandi, M H; Asfaram, A

    2017-05-10

    Analytical chemists apply statistical methods for both the validation and prediction of proposed models. Methods are required that are adequate for finding the typical features of a dataset, such as nonlinearities and interactions. Boosted regression trees (BRTs), as an ensemble technique, are fundamentally different from conventional techniques, which aim to fit a single parsimonious model. In this work, BRT, artificial neural network (ANN) and response surface methodology (RSM) models have been used for the optimization and/or modeling of the stirring time (min), pH, adsorbent mass (mg) and concentrations of MB and Cd(2+) ions (mg L(-1)) in order to develop respective predictive equations for simulation of the efficiency of MB and Cd(2+) adsorption based on the experimental data set. Activated carbon, as an adsorbent, was synthesized from walnut wood waste which is abundant, non-toxic, cheap and locally available. This adsorbent was characterized using different techniques such as FT-IR, BET, SEM, point of zero charge (pHpzc) and also the determination of oxygen containing functional groups. The influence of various parameters (i.e. pH, stirring time, adsorbent mass and concentrations of MB and Cd(2+) ions) on the percentage removal was calculated by investigation of the sensitivity function, variable importance rankings (BRT) and analysis of variance (RSM). Furthermore, a central composite design (CCD) combined with a desirability function approach (DFA) as a global optimization technique was used for the simultaneous optimization of the effective parameters. The applicability of the BRT, ANN and RSM models for the description of experimental data was examined using four statistical criteria (absolute average deviation (AAD), mean absolute error (MAE), root mean square error (RMSE) and coefficient of determination (R(2))). All three models demonstrated good predictions in this study. The BRT model was more precise compared to the other models and this showed

  20. Recurring sequence-structure motifs in (βα)8-barrel proteins and experimental optimization of a chimeric protein designed based on such motifs.

    PubMed

    Wang, Jichao; Zhang, Tongchuan; Liu, Ruicun; Song, Meilin; Wang, Juncheng; Hong, Jiong; Chen, Quan; Liu, Haiyan

    2017-02-01

    An interesting way of generating novel artificial proteins is to combine sequence motifs from natural proteins, mimicking the evolutionary path suggested by natural proteins comprising recurring motifs. We analyzed the βα and αβ modules of TIM barrel proteins by structure alignment-based sequence clustering. A number of preferred motifs were identified. A chimeric TIM was designed by using recurring elements as mutually compatible interfaces. The foldability of the designed TIM protein was then significantly improved by six rounds of directed evolution. The melting temperature has been improved by more than 20°C. A variety of characteristics suggested that the resulting protein is well-folded. Our analysis provided a library of peptide motifs that is potentially useful for different protein engineering studies. The protein engineering strategy of using recurring motifs as interfaces to connect partial natural proteins may be applied to other protein folds. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    PubMed

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  3. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine and compared to earlier methods. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
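    The classic DE/rand/1/bin scheme underlying this kind of procedure is short enough to sketch in full. This is a generic textbook sketch on a toy objective, not the paper's airfoil setup; the population size and the control parameters F and CR below are illustrative choices.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9, gens=150, seed=2):
    """Minimize f over box bounds with the classic DE/rand/1/bin strategy."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, all different from the target vector i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(d)  # force at least one mutated coordinate
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(d)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            f_trial = f(trial)
            if f_trial <= fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, f_trial
    i_best = min(range(pop_size), key=fit.__getitem__)
    return pop[i_best], fit[i_best]

best, fbest = differential_evolution(lambda x: sum(v * v for v in x),
                                     [(-5.0, 5.0)] * 3)
```

    Each trial vector costs one objective evaluation, which in the paper's setting is a full Navier-Stokes run; that is why surrogate models such as neural networks pay off so directly with this algorithm.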

  5. Optimal design of artificial reefs for sturgeon

    NASA Astrophysics Data System (ADS)

    Yarbrough, Cody; Cotel, Aline; Kleinheksel, Abby

    2015-11-01

    The Detroit River, part of a busy corridor between Lakes Huron and Erie, was extensively modified to create deep shipping channels, resulting in a loss of spawning habitat for lake sturgeon and other native fish (Caswell et al. 2004, Bennion and Manny 2011). Under the U.S.- Canada Great Lakes Water Quality Agreement, there are remediation plans to construct fish spawning reefs to help with historic habitat losses and degraded fish populations, specifically sturgeon. To determine optimal reef design, experimental work has been undertaken. Different sizes and shapes of reefs are tested for a given set of physical conditions, such as flow depth and flow velocity, matching the relevant dimensionless parameters dominating the flow physics. The physical conditions are matched with the natural conditions encountered in the Detroit River. Using Particle Image Velocimetry, Acoustic Doppler Velocimetry and dye studies, flow structures, vorticity and velocity gradients at selected locations have been identified and quantified to allow comparison with field observations and numerical model results. Preliminary results are helping identify the design features to be implemented in the next phase of reef construction. Sponsored by NOAA.

  6. Geometric methods for optimal sensor design.

    PubMed

    Belabbas, M-A

    2016-01-01

    The Kalman-Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design.

  8. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided; hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
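
    As a minimal sketch of the GA mechanics described above (fitness-based selection, crossover, and mutation over a mixed discrete/continuous design), the toy problem below optimizes a hypothetical two-variable vehicle design with an integer engine count. The objective function, parameter ranges, and GA settings are invented for illustration only.

```python
import random
random.seed(0)

def fitness(design):
    # Hypothetical objective: penalize deviation from a notional optimum of
    # 4 engines and a 3.2 m diameter (illustration only, not a real model).
    n_engines, diameter = design
    return -((n_engines - 4) ** 2 + (diameter - 3.2) ** 2)

def random_design():
    return (random.randint(1, 8), random.uniform(1.0, 6.0))

def crossover(a, b):
    # Take the discrete gene from one parent, the continuous from the other.
    return (a[0], b[1])

def mutate(design, rate=0.2):
    n, d = design
    if random.random() < rate:
        n = random.randint(1, 8)                 # discrete gene: resample
    if random.random() < rate:
        d = min(6.0, max(1.0, d + random.gauss(0.0, 0.3)))  # continuous gene
    return (n, d)

def genetic_algorithm(pop_size=40, generations=60):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Note that the discrete engine-count gene is handled by resampling rather than by any gradient, which is exactly the property that makes GAs attractive for the launch-vehicle problems the abstract describes.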

  9. Optimized heavy ion beam probing for International Thermonuclear Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Melnikov, A. V.; Eliseev, L. G.

    1999-01-01

    An international workgroup developed the conceptual design of a heavy ion beam probe (HIBP) diagnostic for the International Thermonuclear Experimental Reactor (ITER), intended for measurements of the plasma potential profile in the gradient region. We have now optimized it through accurate analysis of the probing trajectories and variation of the positions of the injection and detection points. The optimization allows the energy of the Tl+ beam to be reduced from 5.6 to 3.4 MeV for the standard ITER regime. By varying the beam energy, the detector line, starting at the plasma edge and extending toward the center, can capture the outer part of the horizontal radial potential profile. The observed radial interval is slightly increased, to 0.76<ρ<1 with respect to the initial version's 0.8<ρ<1, which allows the density-gradient region to be covered more reliably. The nearly twofold reduction of the beam energy is a critical point: it significantly decreases the size of the accelerator and energy analyzer, the cost of the equipment, and the impact of the diagnostics on the machine. Therefore the optimized HIBP design can be realized in ITER.

  10. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely the Occupational Safety and Health Administration (OSHA), be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses topics such as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and the roles/regulations of other agencies…

  12. An expert system for optimal gear design

    SciTech Connect

    Lin, K.C.

    1988-01-01

    By properly developing the mathematical model, numerical optimization can be used to seek the best solution for a given set of geometric constraints. The process of determining the non-geometric design variables is automated by using symbolic computation. This gear-design system is built according to the AGMA standards and a survey of gear-design experts. The recommendations of gear designers and the information provided by the AGMA standards are integrated into knowledge bases and databases. By providing fast information retrieval and design guidelines, this expert system greatly streamlines the spur gear design process. The concept of separating the design space into geometric and non-geometric variables can also be applied to the design process for general mechanical elements. The expert-system technique is used to simulate a human designer in optimizing the determination of the non-geometric parameters, while numerical optimization is used to identify the best geometric solution. The combination of the expert-system technique with numerical optimization essentially eliminates the deficiencies of both methods and thus provides a better way of modeling the engineering design process.

  13. Data-driven design optimization for composite material characterization

    Treesearch

    John G. Michopoulos; John C. Hermanson; Athanasios Iliopoulos; Samuel G. Lambrakos; Tomonari Furukawa

    2011-06-01

    The main goal of the present paper is to demonstrate the value of design optimization beyond its use for structural shape determination in the realm of the constitutive characterization of anisotropic material systems such as polymer matrix composites with or without damage. The approaches discussed are based on the availability of massive experimental data...

  14. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  16. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  17. Light Experimental Supercruiser Conceptual Design

    DTIC Science & Technology

    1976-07-01

    Contract F33615-75-C-3150. Wind tunnel tests were conducted under a Boeing independent research program to verify performance results. The Light...Two-Dimensional Airframe Integrated Exhaust System. The air vehicle design is based on NASA SCAT 15 arrow wing research and is powered by a modified...

  18. Optimal design of reverse osmosis module networks

    SciTech Connect

    Maskan, F.; Wiley, D.E.; Johnston, L.P.M.; Clements, D.J.

    2000-05-01

    The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost of the process units, and the operating costs associated with energy consumption and maintenance. Several dual-stage reverse osmosis systems were optimized and compared. It was found that the optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
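
    The profit-maximization formulation above can be sketched with a deliberately simplified single-module model: a bounded nonlinear program whose objective is permeate revenue minus capital and pumping costs. Every coefficient below (price, permeability, osmotic pressure, cost factors) is a made-up placeholder, not a value from the paper.

```python
from scipy.optimize import minimize

PRICE = 0.5          # $ per m3 of permeate (hypothetical)
ENERGY_COST = 0.002  # $ per (bar * m3) pumped (hypothetical)
OSMOTIC = 28.0       # feed osmotic pressure, bar (hypothetical)

def permeate_flow(x):
    area, pressure = x
    permeability = 0.08                 # m3 / (m2 * bar), notional
    return area * permeability * max(pressure - OSMOTIC, 0.0)

def neg_profit(x):
    # Annual profit = permeate revenue - annualized capital - pumping energy;
    # minimize the negative to maximize profit.
    area, pressure = x
    revenue = PRICE * permeate_flow(x)
    capital = 0.3 * area
    energy = ENERGY_COST * pressure * permeate_flow(x)
    return -(revenue - capital - energy)

res = minimize(neg_profit, x0=[10.0, 40.0],
               bounds=[(1.0, 100.0), (30.0, 80.0)],
               method="L-BFGS-B")
area_opt, pressure_opt = res.x
```

With these placeholder coefficients the optimizer drives both membrane area and operating pressure to their upper bounds, echoing the abstract's finding that the optimal network designs are the ones that produce the most permeate.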

  19. Vehicle systems design optimization study

    NASA Technical Reports Server (NTRS)

    Gilmour, J. L.

    1980-01-01

    The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to those of current internal combustion engine vehicles. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criterion requires the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.

  20. Design optimization for active twist rotor blades

    NASA Astrophysics Data System (ADS)

    Mok, Ji Won

    This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation of the mesh generator, and the initial point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed a significant impact on vibration reduction, the proposed optimization process showed that the design could be improved significantly. 
The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to

  1. Torsional ultrasonic transducer computational design optimization.

    PubMed

    Melchor, J; Rus, G

    2014-09-01

    A torsional piezoelectric ultrasonic sensor design is proposed in this paper and computationally tested and optimized to measure shear stiffness properties of soft tissue. These are correlated with a number of pathologies, such as tumors and hepatic lesions. The reason is that, whereas compressibility is predominantly governed by the fluid phase of the tissue, the shear stiffness is dependent on the stroma micro-architecture, which is directly affected by those pathologies. However, diagnostic tools to quantify them are currently not well developed. The first contribution is a new typology of design adapted to quasifluids. A second contribution is the procedure for design optimization, for which an analytical estimate of the Robust Probability Of Detection, called RPOD, is presented for use as the optimality criterion. The RPOD is formulated probabilistically to maximize the probability of detecting the least possible pathology while minimizing the effect of noise. The resulting optimal transducer has a resonance frequency of 28 kHz.

  2. Optimization of Hydrothermal and Diluted Acid Pretreatments of Tunisian Luffa cylindrica (L.) Fibers for 2G Bioethanol Production through the Cubic Central Composite Experimental Design CCD: Response Surface Methodology

    PubMed Central

    Zaafouri, Kaouther; Ziadi, Manel; Ben Hassen-Trabelsi, Aida; Mekni, Sabrine; Aïssi, Balkiss; Alaya, Marwen; Bergaoui, Latifa; Hamdi, Moktar

    2017-01-01

    This paper addresses the recovery of Luffa cylindrica (LC) lignocellulosic biomass in order to produce 2G bioethanol. LC fibers are composed of three principal fractions, namely α-cellulose (45.80 ± 1.3%), hemicelluloses (20.76 ± 0.3%), and lignins (13.15 ± 0.6%). The duration and temperature of the hydrothermal and dilute-acid pretreatments of LC fibers were optimized through the cubic central composite experimental design (CCD). The pretreatment optimization was monitored via the determination of reducing sugars. Then, the feasibility of the 2G bioethanol process was tested by means of three successive steps: hydrothermal pretreatment of the LC fibers performed at 96°C for 54 minutes, enzymatic saccharification carried out with a commercial enzyme (AP2), and alcoholic fermentation with Saccharomyces cerevisiae. The hydrothermal pretreatment liberated 33.55 g/kg of reducing sugars, and enzymatic hydrolysis yielded 59.4 g/kg. The conversion yield of reducing sugar to ethanol was 88.66%. After the distillation step, the concentration of ethanol was 1.58% with a volumetric yield of about 70%. PMID:28243606
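
    A central composite design like the one used here for the two pretreatment factors (temperature and duration) consists of factorial corner points, axial "star" points, and replicated centre runs. The sketch below generates the coded design points; the number of centre runs is an assumption for illustration, not taken from the paper.

```python
import itertools
import numpy as np

def central_composite_design(k=2, alpha=None, n_center=3):
    # CCD in coded units: 2^k factorial corner points, 2k axial (star)
    # points at distance alpha, plus replicated centre runs.
    if alpha is None:
        alpha = (2 ** k) ** 0.25        # rotatability criterion
    corners = list(itertools.product([-1.0, 1.0], repeat=k))
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(tuple(pt))
    center = [tuple([0.0] * k)] * n_center
    return np.array(corners + axial + center)

# Two coded factors, e.g. pretreatment temperature and duration.
design = central_composite_design(k=2)
```

The 11 coded runs would then be mapped to real factor levels, the reducing-sugar response measured at each, and a quadratic response surface fitted to locate the optimum, as in the abstract.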

  4. Optimal Design of Tests with Dichotomous and Polytomous Items.

    ERIC Educational Resources Information Center

    Berger, Martijn P. F.

    1998-01-01

    Reviews some results on optimal design of tests with items of dichotomous and polytomous response formats and offers rules and guidelines for optimal test assembly. Discusses the problem of optimal test design for two optimality criteria. (Author/SLD)

  5. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design

  6. Aircraft design optimization with multidisciplinary performance criteria

    NASA Technical Reports Server (NTRS)

    Morris, Stephen; Kroo, Ilan

    1989-01-01

    The method described here for aircraft design optimization with dynamic response considerations provides an inexpensive means of integrating dynamics into aircraft preliminary design. By defining a dynamic performance index that can be added to a conventional objective function, a designer can investigate the trade-off between performance and handling (as measured by the vehicle's unforced response). The procedure is formulated to permit the use of control system gains as design variables, but does not require full-state feedback. The examples discussed here show how such an approach can lead to significant improvements in the design as compared with the more common sequential design of system and control law.

  7. Inducing optimally directed non-routine designs

    NASA Technical Reports Server (NTRS)

    Cagan, Jonathan; Agogino, Alice M.

    1990-01-01

    Non-routine designs are characterized by the creation of new variables and thus an expansion of the design space. They differ from routine designs in that the latter are restricted to a fixed set of variables and thus a pre-defined design space. Previous papers described the 1stPRINCE non-routine design methodology, which expands the design space in such a way that optimal trends dictate the direction and form of the expansion. Here, how inductive techniques can determine these optimal trends is examined. The induction techniques in 1stPRINCE utilize constraint information from monotonicity analysis to determine how to mutate the design space. The process observes the constraint information of mutated designs and induces trends from those constraints. Application of 1stPRINCE to a beam under flexural load leads to an optimally directed tapered beam. An interesting application of 1stPRINCE to determine the optimally directed shape of a spinning square block, such that resistance to spinning is minimized, leads to the discovery of a circular wheel.

  8. A design optimization methodology for Li+ batteries

    NASA Astrophysics Data System (ADS)

    Golmon, Stephanie; Maute, Kurt; Dunn, Martin L.

    2014-05-01

    Design optimization of functionally graded battery electrodes is shown to improve the usable energy capacity of Li batteries predicted by computational simulations, by numerically optimizing the electrode porosities and particle radii. A multi-scale battery model which accounts for nonlinear transient transport processes, electrochemical reactions, and mechanical deformations is used to predict the usable energy storage capacity of the battery over a range of discharge rates. A multi-objective formulation of the design problem is introduced to maximize the usable capacity over a range of discharge rates while limiting the mechanical stresses. The optimization problem is solved via gradient-based optimization. A LiMn2O4 cathode is simulated with a PEO-LiCF3SO3 electrolyte and both a Li foil (half cell) and a LiC6 anode. Studies were performed on both half and full cell configurations, resulting in distinctly different optimal electrode designs. The numerical results show that the highest-rate discharge drives the simulations and the optimal designs are dominated by Li+ transport rates. The results also suggest that spatially varying electrode porosities and active particle sizes provide an efficient approach to improving the power-to-energy density of Li+ batteries. For the half cell configuration, the optimal design improves the discharge capacity by 29%, while for the full cell the discharge capacity was improved 61% relative to an initial design with a uniform electrode structure. Most of the improvement in capacity was due to the spatially varying porosity, with up to 5% of the gains attributed to the particle radii design variables.

  9. Robust measurement selection for biochemical pathway experimental design.

    PubMed

    Brown, Martin; He, Fei; Yeung, Lam Fat

    2008-01-01

    Given the general lack of quantitative measurement data for pathway modelling and the parameter identification process, time-series experimental design is particularly important in current systems biology research. This paper mainly investigates the state measurement/observer selection problem when parametric uncertainties are considered. Based on an extension of optimal design criteria, two robust experimental design strategies are investigated: one is a regularisation-based design method, and the other is a Taguchi-based design approach. By applying them to a simplified IκBα-NF-κB signalling pathway system, the two design approaches are comparatively studied. When large parametric uncertainty is present, and assuming that the different parametric uncertainties are identical in scale, the two methods tend to provide a similar uniform design result.

  10. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. 
This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
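
    The inquiry-engine step described above, choosing the experiment whose distribution of expected results has maximum entropy, can be sketched for a hypothetical one-parameter model. All numbers below (the linear model, grids, and noise level) are invented for illustration; the thesis's nested entropy sampling is replaced here by an exhaustive scan over a small candidate set.

```python
import numpy as np

# Hypothetical measurement model y = a*x + noise, with a discrete
# posterior over the unknown slope a (all values are illustrative).
a_grid = np.linspace(-2.0, 2.0, 81)
posterior = np.ones_like(a_grid) / a_grid.size   # uniform prior, no data yet
noise_sd = 0.2
y_grid = np.linspace(-8.0, 8.0, 321)

def predictive_entropy(x):
    # Entropy of the posterior-predictive distribution of the outcome y
    # at candidate measurement location x: p(y|x) = sum_a p(y|x,a) p(a).
    like = np.exp(-0.5 * ((y_grid[:, None] - a_grid[None, :] * x) / noise_sd) ** 2)
    p_y = like @ posterior
    p_y /= p_y.sum()                     # normalize over the y grid
    nz = p_y > 0
    return -np.sum(p_y[nz] * np.log(p_y[nz]))

# The inquiry engine picks the experiment with maximum expected-outcome
# entropy; for this model that is the largest admissible |x|.
candidates = np.linspace(0.0, 3.0, 31)
best_x = max(candidates, key=predictive_entropy)
```

After the chosen measurement is made, the inference engine would update `posterior` by Bayes' rule and the cycle would repeat, which is the iterative inference-inquiry loop the thesis automates.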

  11. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.
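
    Of the methods listed, the global criterion method is the simplest to sketch: minimize the distance to the ideal point in normalized objective space. The two toy objectives below are invented stand-ins for conflicting structural goals (e.g. weight versus deflection), and the grid search stands in for a proper optimizer.

```python
import numpy as np

# Two competing objectives of a single design variable x (toy example).
def f1(x):
    return x ** 2            # e.g. structural weight
def f2(x):
    return (x - 2.0) ** 2    # e.g. tip deflection

xs = np.linspace(0.0, 2.0, 2001)
F = np.stack([f1(xs), f2(xs)])
ideal = F.min(axis=1, keepdims=True)   # best attainable value per objective
nadir = F.max(axis=1, keepdims=True)   # worst value over the range

# Global criterion: minimize the squared distance to the ideal point
# in range-normalized objective space.
criterion = (((F - ideal) / (nadir - ideal)) ** 2).sum(axis=0)
x_compromise = xs[criterion.argmin()]
```

For this symmetric pair of objectives the compromise design lands midway between the two individual optima, illustrating the balancing of conflicting objectives the abstract discusses.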

  12. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective and the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The modeling and optimization method performs well in improving the duct's aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
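
    The pipeline described above, sample the domain, evaluate the expensive simulation, fit a response surface, then optimize the cheap surrogate, can be sketched in a few lines. The "CFD" evaluation below is a hypothetical smooth function, uniform random sampling stands in for a uniform design table, and a dense grid search stands in for the genetic-algorithm step.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_eval(x1, x2):
    # Stand-in for the CFD evaluation (hypothetical smooth response).
    return (x1 - 0.3) ** 2 + 2.0 * (x2 - 0.7) ** 2 + 0.1 * x1 * x2

# 1) Space-filling sample of the design domain.
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = np.array([expensive_eval(a, b) for a, b in X])

# 2) Fit a full quadratic response surface by least squares.
def basis(x1, x2):
    return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

G = np.array([basis(a, b) for a, b in X])
coef, *_ = np.linalg.lstsq(G, y, rcond=None)

def surrogate(x1, x2):
    return np.dot(basis(x1, x2), coef)

# 3) Optimize the cheap surrogate (grid search here; the paper uses a
#    genetic algorithm for this step).
grid = np.linspace(0.0, 1.0, 101)
cands = [(surrogate(a, b), a, b) for a in grid for b in grid]
_, x1_opt, x2_opt = min(cands)
```

Because every surrogate evaluation is nearly free, the search over the fitted model costs a tiny fraction of even one additional simulation run, which is the economy the integrated method exploits.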

  13. Design optimization of rod shaped IPMC actuator

    NASA Astrophysics Data System (ADS)

    Ruiz, S. A.; Mead, B.; Yun, H.; Yim, W.; Kim, K. J.

    2013-04-01

    Ionic polymer-metal composites (IPMCs) are some of the most well-known electro-active polymers, owing to the large deformation they provide given a relatively low voltage source. IPMCs have been acknowledged as a potential candidate for biomedical applications such as cardiac catheters and surgical probes; however, there is still no mass manufacturing of IPMCs. This study intends to provide a theoretical framework which could be used to design practical-purpose IPMCs depending on the end user's interest. By explicitly coupling electrostatics, transport phenomena, and solid mechanics, design optimization is conducted in simulation in order to provide conceptual motivation for future designs. A multi-physics analysis of three-dimensional cylinder and tube type IPMCs provides physically accurate results for time-dependent end-effector displacement given a voltage source. Simulations are conducted with the finite element method and are validated against empirical evidence. An in-depth understanding of the physical coupling provides optimal design parameters that cannot be obtained from a standard electro-mechanical coupling. These parameters are altered in order to determine optimal designs for end-effector displacement, maximum force, and improved mobility with limited voltage magnitude. Design alterations are conducted on the electrode patterns in order to provide greater mobility, on the electrode size for efficient bending, and on the Nafion diameter for improved force. The results of this study will provide optimal design parameters of the IPMC for different applications.

  14. Optimization-based controller design for rotorcraft

    NASA Technical Reports Server (NTRS)

    Tsing, N.-K.; Fan, M. K. H.; Barlow, J.; Tits, A. L.; Tischler, M. B.

    1993-01-01

    An optimization-based methodology for linear control system design is outlined by considering the design of a controller for a UH-60 rotorcraft in hover. A wide range of design specifications is taken into account: internal stability, decoupling between longitudinal and lateral motions, handling qualities, and rejection of wind gusts. These specifications are investigated while taking into account physical limitations in the swashplate displacements and rates of displacement. The methodology relies crucially on user-machine interaction for tradeoff exploration.

  15. Spacecraft design optimization using Taguchi analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1991-01-01

    The quality engineering methods of Dr. Genichi Taguchi, employing design of experiments, are important statistical tools for designing high quality systems at reduced cost. The Taguchi method was utilized to study several simultaneous parameter-level variations of a lunar aerobrake structure to arrive at the lightest weight configuration. Finite element analysis was used to analyze the unique experimental aerobrake configurations selected by the Taguchi method. Important design parameters affecting weight and global buckling were identified, and the lowest weight design configuration was selected.
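    The Taguchi workflow above can be sketched with a standard L9 orthogonal array: nine runs screen four three-level parameters, and a main-effects analysis picks the lightest level of each factor. The additive `weight` function below is a hypothetical stand-in for the finite element analyses.

```python
import numpy as np

# Standard L9 (3^4) orthogonal array: 9 runs, 4 factors at levels 0, 1, 2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical additive stand-in for a finite element weight evaluation:
# effects[f, lvl] is factor f's weight contribution at level lvl.
def weight(run):
    effects = np.array([[3, 2, 1], [5, 4, 6], [2, 2, 3], [1, 3, 2]], float)
    return sum(effects[f, lvl] for f, lvl in enumerate(run))

y = np.array([weight(run) for run in L9])

# Main-effects analysis: mean response at each level of each factor,
# then keep the level with the lowest mean weight.
best_levels = []
for f in range(4):
    means = [y[L9[:, f] == lvl].mean() for lvl in range(3)]
    best_levels.append(int(np.argmin(means)))
print(best_levels)   # -> [2, 1, 0, 0] for the hypothetical effects above
```

    Because the array is balanced, each factor's level means isolate that factor's effect even though all four factors vary simultaneously across the nine runs.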

  16. Removal of cobalt ions from aqueous solutions by polymer assisted ultrafiltration using experimental design approach: part 2: Optimization of hydrodynamic conditions for a crossflow ultrafiltration module with rotating part.

    PubMed

    Cojocaru, Corneliu; Zakrzewska-Trznadel, Grazyna; Miskiewicz, Agnieszka

    2009-09-30

    Application of shear-enhanced crossflow ultrafiltration for separation of cobalt ions from synthetic wastewaters by prior complexation with polyethyleneimine has been investigated via an experimental design approach. The hydrodynamic conditions in the module with a tubular metallic membrane were planned according to a full factorial design in order to determine the main and interaction effects of process factors on permeate flux and cumulative flux decline. The turbulent flow induced by rotation of the inner cylinder in the module was found to increase permeate flux, normalized flux, and membrane permeability, and to reduce permeate flux decline. In addition, the rotation produced a self-cleaning effect by reducing the estimated polymer layer thickness on the membrane surface. The optimal hydrodynamic conditions in the module were determined by response surface methodology and an overlaid contour plot, and are as follows: ΔP = 70 kPa, Q(R) = 108 L/h and W = 2800 rpm. Under these conditions the maximum permeate flux and the minimum flux decline were observed.

  17. Optimization of a novel method for determination of benzene, toluene, ethylbenzene, and xylenes in hair and waste water samples by carbon nanotubes reinforced sol-gel based hollow fiber solid phase microextraction and gas chromatography using factorial experimental design.

    PubMed

    Es'haghi, Zarrin; Ebrahimi, Mahmoud; Hosseini, Mohammad-Saeid

    2011-05-27

    A novel design of solid phase microextraction fiber containing carbon nanotube reinforced sol-gel, protected by a polypropylene hollow fiber (HF-SPME), was developed for pre-concentration and determination of BTEX in environmental waste water and human hair samples. The method was validated, and satisfactory results with high pre-concentration factors were obtained. In the present study an orthogonal array experimental design (OAD) procedure with an OA16 (4^4) matrix was applied to study four factors influencing the HF-SPME method efficiency: stirring speed, volume of the adsorption organic solvent, and extraction and desorption times of the sample solution; the effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance (ANOVA) was employed to estimate the main significant factors and their percentage contributions to extraction. Calibration curves were plotted using ten spiking levels of BTEX in the concentration range of 0.02-30,000 ng/mL with correlation coefficients (r) of 0.989-0.9991 for the analytes. Under the optimized extraction conditions, the method showed good linearity (0.3-20,000 ng/L), repeatability, low limits of detection (0.49-0.7 ng/L) and excellent pre-concentration factors (185-1872). The estimated optimal conditions were then applied to the analysis of BTEX compounds in the real samples.

  18. Regression analysis as a design optimization tool

    NASA Technical Reports Server (NTRS)

    Perley, R.

    1984-01-01

    The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real asset for investigating the many alternatives that will be suggested as learning progresses.

  19. Lens design: optimization with Global Explorer

    NASA Astrophysics Data System (ADS)

    Isshiki, Masaki

    2013-02-01

    The damped least squares (DLS) optimization method was largely complete by the late 1960s and has dominated local optimization technology ever since. Various efforts were subsequently made toward global optimization; such methods appeared after 1990, and the Global Explorer (GE), invented by the author, was one of them, finding plural solutions, each of which is a local minimum of the merit function. The robustness of the designed lens is as important a factor as its performance; both requirements are balanced in the optimization process of GE2 (the second version of GE). An idea is also proposed to modify GE2 for aspherical lens systems. A design example is shown.
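    DLS as used in lens design is essentially a Levenberg-Marquardt iteration on the residual vector of the merit function. A minimal sketch, with a toy two-component residual standing in for real aberration terms:

```python
import numpy as np

# Toy residual vector standing in for the aberration terms of a merit function.
def residuals(x):
    return np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])

def jacobian(x):
    return np.array([[1.0, 0.0], [-20.0 * x[0], 10.0]])

# Damped least squares: solve (J^T J + lam*I) dx = -J^T r at each step,
# adapting the damping factor lam according to step success.
x = np.array([-1.0, 1.0])
lam = 1e-2
for _ in range(200):
    r, J = residuals(x), jacobian(x)
    dx = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residuals(x + dx) ** 2) < np.sum(r ** 2):
        x, lam = x + dx, lam * 0.5      # accept step, reduce damping
    else:
        lam *= 2.0                      # reject step, increase damping
print(x)   # converges to the minimum at (1, 1)
```

    With large damping the step degenerates to small gradient descent (robust far from the optimum); with small damping it approaches the Gauss-Newton step (fast near the optimum), which is the trade-off that made DLS so effective for lens merit functions.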

  20. Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.

    1998-01-01

    This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
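    A D-optimal design chooses runs that maximize the determinant of the information matrix X'X for the assumed response surface model. A greedy one-dimensional sketch (a full Fedorov exchange algorithm would also swap points out; the quadratic model and grid here are purely illustrative):

```python
import numpy as np

# Model row for a quadratic response surface in one variable x.
def model_row(x):
    return np.array([1.0, x, x ** 2])

# Greedy construction: from a candidate grid, add the point that most
# increases det(X'X); a tiny ridge keeps early determinants nonsingular.
candidates = np.linspace(-1.0, 1.0, 21)
design = []
for _ in range(6):   # choose 6 runs
    best = max(candidates, key=lambda c: np.linalg.det(
        sum(np.outer(model_row(x), model_row(x)) for x in design + [c])
        + 1e-9 * np.eye(3)))
    design.append(float(best))
print([round(v, 6) for v in sorted(design)])
# mass concentrates at -1, 0, +1 (the classic result for a quadratic model)
```

    The first three picks land on the endpoints and the midpoint, matching the known D-optimal support for a quadratic model on [-1, 1]; later picks replicate those support points.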

  1. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  2. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

    As storage data density in hard-disk drives (HDDs) increases at constant or shrinking drive sizes, precise positioning of HDD heads becomes more critical to ensure that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed, in which the VCM coarsely moves the HDD head while piezoelectric actuators provide fine and fast positioning. The aim of this work is thus to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads by finding the optimal placement of base plate and piezoelectric material for high-precision positioning. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness, and avoidance of resonance phenomena. These requirements are met by formulations that maximize displacements, minimize structural compliance, and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results that confirm the feasibility of this approach.

  3. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including aerodynamic tools for supersonic aircraft configurations, a systematic way to manage model uncertainty, and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil, as well as a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.
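    One common multifidelity pattern, shown here purely as an illustration and not as the report's actual model-management scheme, is an additive correction: calibrate a cheap low-fidelity model against a few high-fidelity evaluations, then optimize the corrected model. All functions below are hypothetical.

```python
import numpy as np

def hi(x):                 # expensive high-fidelity model (hypothetical)
    return (x - 0.6) ** 2

def lo(x):                 # cheap low-fidelity model with a linear bias
    return (x - 0.6) ** 2 + 0.5 * x - 0.2

# Fit an additive correction delta(x) ~ hi(x) - lo(x) from just two
# high-fidelity samples; since the bias is linear here, the fit is exact.
xs = np.array([0.0, 1.0])
coef = np.polyfit(xs, [hi(x) - lo(x) for x in xs], deg=1)
corrected = lambda x: lo(x) + np.polyval(coef, x)

# Optimize the corrected (still cheap) model instead of hi directly.
grid = np.linspace(0.0, 1.0, 1001)
x_star = grid[np.argmin(corrected(grid))]
print(round(float(x_star), 3))   # -> 0.6; argmin of lo alone would be 0.35
```

    The corrected model recovers the high-fidelity optimum at the cost of two expensive evaluations, whereas optimizing the uncorrected low-fidelity model would land on the wrong design.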

  4. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    In this work I present a new type of controller: an adaptive tracking controller that employs dynamic optimization to compute the current control action for temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, comprising the mass and heat balance equations, and then add cooling-system dynamics to eliminate input multiplicity. The initial design value is obtained from local stability of steady states, where the approach temperature for cooling action is specified as both a steady state and a design specification. We then make a correction in the dynamics, manipulating the material balance to use feed concentration as a system parameter; this adaptive control measure avoids actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation, which forms the adaptive control measure.

  5. Multidisciplinary Optimization Methods for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Weston, R. P.; Zang, T. A.

    1997-01-01

    An overview of multidisciplinary optimization (MDO) methodology and two applications of this methodology to the preliminary design phase are presented. These applications are being undertaken to improve, develop, validate and demonstrate MDO methods. Each is presented to illustrate different aspects of this methodology. The first application is an MDO preliminary design problem for defining the geometry and structure of an aerospike nozzle of a linear aerospike rocket engine. The second application demonstrates the use of the Framework for Interdisciplinary Design Optimization (FIDO), which is a computational environment system, by solving a preliminary design problem for a High-Speed Civil Transport (HSCT). The two sample problems illustrate the advantages to performing preliminary design with an MDO process.

  6. Application of an optimization method to high performance propeller designs

    NASA Technical Reports Server (NTRS)

    Li, K. C.; Stefko, G. L.

    1984-01-01

    The application of an optimization method to determine the propeller blade twist distribution which maximizes propeller efficiency is presented. The optimization employs a previously developed method which has been improved to include the effects of blade drag, camber and thickness. Before the optimization portion of the computer code is used, comparisons of calculated propeller efficiencies and power coefficients are made with experimental data for one NACA propeller at Mach numbers in the range of 0.24 to 0.50 and another NACA propeller at a Mach number of 0.71 to validate the propeller aerodynamic analysis portion of the computer code. Then comparisons of calculated propeller efficiencies for the optimized and the original propellers show the benefits of the optimization method in improving propeller performance. This method can be applied to the aerodynamic design of propellers having straight, swept, or nonplanar propeller blades.

  7. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
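    The signal-to-noise ratios mentioned above have standard forms depending on whether the response should be small, large, or on target. A short sketch (values in dB, computed over replicated measurements of one design configuration; the sample data are invented):

```python
import numpy as np

# Taguchi signal-to-noise ratios (dB) for replicated measurements y.
def sn_smaller_the_better(y):
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_nominal_the_best(y):
    y = np.asarray(y, float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# A design with a higher S/N is more robust to noise-factor variation:
print(sn_smaller_the_better([0.1, 0.12, 0.09]))   # ~19.65 dB
```

    In the Robust Design workflow, the S/N ratio is computed for each orthogonal-array run and the controllable parameter levels that maximize it are selected.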

  8. The optimal design of standard gearsets

    NASA Technical Reports Server (NTRS)

    Savage, M.; Coy, J. J.; Townsend, D. P.

    1983-01-01

    A design procedure for sizing standard involute spur gearsets is presented. The procedure is applied to find the optimal design for two examples - an external gear mesh with a ratio of 5:1 and an internal gear mesh with a ratio of 5:1. In the procedure, the gear mesh is designed to minimize the center distance for a given gear ratio, pressure angle, pinion torque, and allowable tooth strengths. From the methodology presented, a design space may be formulated for either external gear contact or for internal contact. The design space includes kinematics considerations of involute interference, tip fouling, and contact ratio. Also included are design constraints based on bending fatigue in the pinion fillet and Hertzian contact pressure in the full load region and at the gear tip where scoring is possible. This design space is two dimensional, giving the gear mesh center distance as a function of diametral pitch and the number of pinion teeth. The constraint equations were identified for kinematic interference, fillet bending fatigue, pitting fatigue, and scoring pressure, which define the optimal design space for a given gear design. The locus of equal size optimum designs was identified as the straight line through the origin which has the least slope in the design region.
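    The baseline relationship behind this two-dimensional design space, for standard (unshifted) involute spur gears, can be stated directly; the constraint checks for interference, bending fatigue, pitting, and scoring are omitted here, and the example numbers are illustrative rather than taken from the paper.

```python
# Center distance of a standard spur gear mesh as a function of pinion
# tooth count Np and diametral pitch Pd (teeth per inch of pitch diameter):
#     C = (Np + Ng) / (2 * Pd),  with gear tooth count Ng = ratio * Np.
def center_distance(Np, Pd, ratio):
    Ng = ratio * Np
    return (Np + Ng) / (2.0 * Pd)   # inches

# For the 5:1 external mesh example: coarser pitch or fewer pinion teeth
# shrink the mesh until a kinematic or strength constraint is violated.
print(center_distance(Np=18, Pd=8, ratio=5))   # -> 6.75
```

    Minimizing C over (Np, Pd) subject to the constraint equations is exactly the two-dimensional design-space search the abstract describes.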

  9. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  10. Optimal Experiment Design for Thermal Characterization of Functionally Graded Materials

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    The purpose of the project was to investigate methods to accurately verify that designed materials meet thermal specifications. The project involved heat transfer calculations and optimization studies, and no laboratory experiments were performed. One part of the research involved study of materials in which conduction heat transfer predominates. Results include techniques to choose among several experimental designs, and protocols for determining the optimum experimental conditions for determination of thermal properties. Metal foam materials were also studied in which both conduction and radiation heat transfer are present. Results of this work include procedures to optimize the design of experiments to accurately measure both conductive and radiative thermal properties. Detailed results in the form of three journal papers have been appended to this report.

  11. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
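    A minimal sketch of the merit-function idea, with a nearest-neighbour surrogate and parameters chosen purely for illustration rather than the authors' actual formulation: each new evaluation point is chosen to look good under the current approximation while also lying far from existing samples, so one expensive evaluation improves both the search and the approximation.

```python
import numpy as np

def f(x):                              # hypothetical expensive objective
    return (x - 0.7) ** 2

X = [0.0, 0.5, 1.0]                    # points evaluated so far
y = [f(x) for x in X]

def surrogate(x):                      # nearest-neighbour predictor (a stand-in)
    return y[min(range(len(X)), key=lambda i: abs(x - X[i]))]

# Merit function: predicted objective minus a bonus for distance to the
# nearest sample, so minimizing it balances exploitation and exploration.
def merit(x, rho=0.3):
    return surrogate(x) - rho * min(abs(x - xi) for xi in X)

grid = np.linspace(0.0, 1.0, 101)
for _ in range(10):                    # sequential sampling loop
    x_new = float(min(grid, key=merit))
    X.append(x_new)
    y.append(f(x_new))

print(min(X, key=f))                   # best sample lands near x = 0.7
```

    Setting rho to zero recovers pure surrogate minimization (which can stall on a poor approximation), while a large rho degenerates to space-filling sampling; the merit function interpolates between the two.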

  12. Application of Optimal Designs to Item Calibration

    PubMed Central

    Lu, Hung-Yi

    2014-01-01

    In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the attrition speed of the items is extremely fast, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient. PMID:25188318
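    For the two-parameter logistic model used here, a locally D-optimal calibration design can be found by maximizing the determinant of the Fisher information for the item parameters (a, b) over examinee abilities. A brute-force sketch at assumed nominal parameter values (the grid and two-point restriction are illustrative):

```python
import numpy as np
from itertools import combinations

# 2PL response model: P(correct | theta) with discrimination a, difficulty b.
def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Fisher information matrix for (a, b) contributed by one examinee at theta.
def info(theta, a, b):
    p = p2pl(theta, a, b)
    g = np.array([theta - b, -a])      # gradient of the logit wrt (a, b)
    return p * (1.0 - p) * np.outer(g, g)

# Locally D-optimal two-point design: search ability pairs for the one
# maximizing det of the summed information, at nominal a = 1.5, b = 0.
a, b = 1.5, 0.0
grid = np.linspace(-3.0, 3.0, 61)
best = max(combinations(grid, 2),
           key=lambda pair: np.linalg.det(sum(info(t, a, b) for t in pair)))
print(best)   # -> approximately (-1.0, 1.0): symmetric about b
```

    The optimal abilities straddle the item difficulty at moderately extreme response probabilities, which is why adaptive calibration routes examinees of suitable ability to uncalibrated items.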

  13. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  14. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism.

  15. Fatigue reliability based optimal design of planar compliant micropositioning stages

    NASA Astrophysics Data System (ADS)

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.

  16. Fatigue reliability based optimal design of planar compliant micropositioning stages.

    PubMed

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.

  17. Engineering design optimization using services and workflows.

    PubMed

    Crick, Tom; Dunning, Peter; Kim, Hyunsun; Padget, Julian

    2009-07-13

    Multi-disciplinary design optimization (MDO) is the process whereby the often conflicting requirements that the different disciplines bring to the engineering design process converge upon a description that represents an acceptable compromise in the design space. We present a simple demonstrator of a flexible workflow framework for engineering design optimization using an e-Science tool. This paper provides a concise introduction to MDO, complemented by a summary of the related tools and techniques developed under the umbrella of the UK e-Science programme that we have explored in support of the engineering process. The main contributions of this paper are: (i) a description of the optimization workflow that has been developed in the Taverna workbench, (ii) a demonstrator of a structural optimization process with a range of tool options using common benchmark problems, (iii) some reflections on the experience of software engineering meeting mechanical engineering, and (iv) an indicative discussion on the feasibility of a 'plug-and-play' engineering environment for analysis and design.

  18. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  19. Branch target buffer design and optimization

    NASA Technical Reports Server (NTRS)

    Perleberg, Chris H.; Smith, Alan J.

    1993-01-01

    Consideration is given to two major issues in the design of branch target buffers (BTBs), with the goal of achieving maximum performance for a given number of bits allocated to the BTB design. The first issue is BTB management; the second is what information to keep in the BTB. A number of solutions to these problems are reviewed, and various optimizations in the design of BTBs are discussed. Design target miss ratios for BTBs are developed, making it possible to estimate the performance of BTBs for real workloads.
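    To make the BTB management trade-offs concrete, a toy fully associative BTB with LRU replacement can be simulated against a branch trace; the trace, entry count, and addresses below are invented for illustration.

```python
from collections import OrderedDict

# A minimal fully associative branch target buffer with LRU replacement.
class BTB:
    def __init__(self, entries):
        self.entries = entries
        self.table = OrderedDict()            # branch PC -> predicted target
        self.hits = self.lookups = 0

    def lookup(self, pc, actual_target):
        self.lookups += 1
        if pc in self.table:
            self.hits += 1
            self.table.move_to_end(pc)        # refresh LRU position
        elif len(self.table) >= self.entries:
            self.table.popitem(last=False)    # evict least recently used
        self.table[pc] = actual_target        # allocate/update on every branch

    @property
    def miss_ratio(self):
        return 1.0 - self.hits / self.lookups

# Synthetic trace: a hot loop of 4 branches plus one cold branch at the end.
btb = BTB(entries=4)
trace = ([(0x100, 0x200), (0x110, 0x210), (0x120, 0x220), (0x130, 0x230)] * 50
         + [(0x900, 0x910)])
for pc, target in trace:
    btb.lookup(pc, target)
print(round(btb.miss_ratio, 3))   # -> 0.025: only first touches of a PC miss
```

    Varying `entries` against traces with different working sets is exactly how design-target miss ratios of the kind the paper develops are estimated.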


  1. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  2. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe in detail REPA, a new constrained optimization method based on repairing, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
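
The evolutionary approach described in this record can be illustrated with a minimal real-coded genetic algorithm. This is a generic sketch, not GATool's actual implementation; the operators (elitist survival, blend crossover, Gaussian mutation) and all parameter values below are illustrative assumptions:

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=120, mutation=0.1, seed=0):
    """Minimal real-coded genetic algorithm: elitist survival, blend
    crossover, Gaussian mutation. Illustrative only -- not GATool."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=f)[: pop_size // 2]   # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            child = [min(max(x + rng.gauss(0, mutation), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]      # mutate + clip
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

# Usage: minimize the sphere function on [-5, 5]^2
best = genetic_minimize(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
```

Elitism guarantees the best individual is never lost, which is what drives the steady convergence EAs are valued for despite their modest per-generation cost.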

  3. Optimal Shape Design of a Plane Diffuser in Turbulent Flow

    NASA Astrophysics Data System (ADS)

    Lim, Seokhyun; Choi, Haecheon

    2000-11-01

    Stratford (1959) experimentally designed an optimal shape of a plane diffuser for maximum pressure recovery by maintaining zero skin friction throughout the region of pressure rise. In the present study, we apply an algorithm of optimal shape design developed by Pironneau (1973, 1974) and Cabuk & Modi (1992) to a diffuser in turbulent flow, and show that maintaining zero skin friction in the pressure-rise region is an optimal condition for maximum pressure recovery at the diffuser exit. For the turbulence model, we use the k-ɛ-v^2-f model of Durbin (1995), which is known to accurately predict flows with separation. Our results with this model agree well with the previous experimental and LES results for a diffuser shape tested by Obi et al. (1993). From this initial shape, an optimal diffuser shape for maximum pressure recovery is obtained through an iterative procedure. The optimal diffuser has indeed zero skin friction throughout the pressure-rise region, and thus there is no separation in the flow. For the optimal diffuser shape obtained, an LES is being conducted to investigate the turbulence characteristics near the zero-skin-friction wall. A preliminary result of the LES will also be presented.

  4. Integrated structural-aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Kao, P. J.; Grossman, B.; Polen, D.; Sobieszczanski-Sobieski, J.

    1988-01-01

    This paper focuses on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration, with emphasis on the major difficulty associated with multidisciplinary design optimization processes, their enormous computational costs. Methods are presented for reducing this computational burden through the development of efficient methods for cross-sensitivity calculations and the implementation of approximate optimization procedures. Utilizing a modular sensitivity analysis approach, it is shown that the sensitivities can be computed without the expensive calculation of the derivatives of the aerodynamic influence coefficient matrix, and the derivatives of the structural flexibility matrix. The same process is used to efficiently evaluate the sensitivities of the wing divergence constraint, which should be particularly useful, not only in problems of complete integrated aircraft design, but also in aeroelastic tailoring applications.

  5. Application of optimal design methodologies in clinical pharmacology experiments.

    PubMed

    Ogungbenro, Kayode; Dokoumetzidis, Aristides; Aarons, Leon

    2009-01-01

    Pharmacokinetic and pharmacodynamic data are often analysed by mixed-effects modelling techniques (also known as population analysis), which have become a standard tool in the pharmaceutical industry for drug development. The last 10 years have witnessed considerable interest in the application of experimental design theories to population pharmacokinetic and pharmacodynamic experiments. Design of population pharmacokinetic experiments involves selection and a careful balance of a number of design factors. Optimal design theory uses prior information about the model and parameter estimates to optimize a function of the Fisher information matrix to obtain the best combination of the design factors. This paper provides a review of the different approaches that have been described in the literature for optimal design of population pharmacokinetic and pharmacodynamic experiments. It describes the options that are available and highlights some of the issues that could be of concern in practical application. It also discusses areas of application of optimal design theories in clinical pharmacology experiments. It is expected that as awareness of the benefits of this approach increases, more people will embrace it, ultimately leading to more efficient population pharmacokinetic and pharmacodynamic experiments and helping to reduce both cost and time during drug development.
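
The Fisher-information criterion mentioned above can be made concrete in a far simpler setting than a population PK/PD model. Assuming a quadratic regression y = b0 + b1*x + b2*x^2 with independent unit-variance errors (an illustrative toy model, not one from the paper), a D-optimal three-point design maximizes the determinant of the information matrix:

```python
import itertools

def info_matrix(points):
    """Fisher information matrix for y = b0 + b1*x + b2*x^2 with
    unit-variance errors: M = (1/n) * sum f(x) f(x)^T, f(x) = (1, x, x^2)."""
    n = len(points)
    M = [[0.0] * 3 for _ in range(3)]
    for x in points:
        f = (1.0, x, x * x)
        for i in range(3):
            for j in range(3):
                M[i][j] += f[i] * f[j] / n
    return M

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Exhaustive search over three-point designs on a grid in [-1, 1]:
grid = [i / 10 - 1 for i in range(21)]
best = max(itertools.combinations(grid, 3), key=lambda d: det3(info_matrix(d)))
# D-optimality recovers the classical support points {-1, 0, 1}.
```

The same logic, with the FIM of a nonlinear mixed-effects model replacing this 3x3 matrix, underlies the population-design software the review surveys.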

  6. Stented artery biomechanics and device design optimization.

    PubMed

    Timmins, Lucas H; Moreno, Michael R; Meyer, Clark A; Criscione, John C; Rachev, Alexander; Moore, James E

    2007-05-01

    The deployment of a vascular stent aims to increase lumen diameter for the restoration of blood flow, but the accompanied alterations in the mechanical environment possibly affect the long-term patency of these devices. The primary aim of this investigation was to develop an algorithm to optimize stent design, allowing for consideration of competing solid mechanical concerns (wall stress, lumen gain, and cyclic deflection). Finite element modeling (FEM) was used to estimate artery wall stress and systolic/diastolic geometries, from which single parameter outputs were derived expressing stress, lumen gain, and cyclic artery wall deflection. An optimization scheme was developed using Lagrangian interpolation elements that sought to minimize the sum of these outputs, with weighting coefficients. Varying the weighting coefficients results in stent designs that prioritize one output over another. The accuracy of the algorithm was confirmed by evaluating the resulting outputs of the optimized geometries using FEM. The capacity of the optimization algorithm to identify optimal geometries and their resulting mechanical measures was retained over a wide range of weighting coefficients. The variety of stent designs identified provides general guidelines that have potential clinical use (i.e., lesion-specific stenting).

  7. Multidisciplinary Concurrent Design Optimization via the Internet

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand

    2001-01-01

    A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.

  8. PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies.

    PubMed

    Aoki, Yasunori; Sundqvist, Monika; Hooker, Andrew C; Gennemark, Peter

    2016-04-01

    Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of drug discovery in vivo studies. Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goals of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher information matrix. The software was implemented in C++ using the Qt framework to assure a responsive user-software interaction through a rich graphical user interface while achieving the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. The software design goals were simplicity, speed and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was, on average over 14 test problems, 30 times faster in PopED lite than in an already existing optimal design software tool. PopED lite is now used in real drug discovery projects and a few of these case studies are presented in this paper. PopED lite is designed to be simple, fast and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss the proposed design, test another design, etc.). Intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.
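
A toy version of the deterministic-versus-reliability-based comparison described above can be sketched as follows. The limit state (a bar sized against yielding), the distributions, and the reliability target are all hypothetical, chosen only to show how a Monte Carlo failure-probability constraint enters a sizing problem:

```python
import random

def failure_probability(area, n=20000, seed=1):
    """Monte Carlo estimate of P(stress > yield) for a bar with random
    load and yield strength (illustrative distributions, not from the paper)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        load = rng.gauss(100e3, 10e3)        # axial load, N
        yield_str = rng.gauss(250e6, 20e6)   # yield strength, Pa
        if load / area > yield_str:          # limit state: stress exceeds yield
            fails += 1
    return fails / n

# Smallest cross-section (scanned on a grid, in m^2) meeting a 1e-3
# failure-probability target -- the reliability-based design:
areas = [i * 1e-5 for i in range(40, 100)]
area_rbo = next(a for a in areas if failure_probability(a) <= 1e-3)
```

A deterministic design would instead apply a fixed safety factor to the mean load; the paper's point is that the explicit probability constraint can yield a superior trade-off.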

  10. Optimal radar waveform design for moving target

    NASA Astrophysics Data System (ADS)

    Zhu, Binqi; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao

    2016-07-01

    An optimal radar waveform-design method is proposed to detect moving targets in the presence of clutter and noise. The clutter is split into moving and static parts. Radar moving-target/clutter models are introduced and combined with the Neyman-Pearson criterion to design optimal waveforms. Results show that the optimal waveform for a moving target differs from that for a static target. A combination of single-frequency signals can produce maximum detectability under different noise power spectral density conditions. Simulations show that the algorithm greatly improves the signal-to-clutter-plus-noise ratio of the radar system. Therefore, this algorithm may be preferable for moving target detection when prior information on clutter and noise is available.
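
The observation that combinations of single-frequency signals maximize detectability can be sketched with a crude frequency-domain energy-allocation example. The spectra below are hypothetical, and the linear SCNR model is a simplification of the paper's Neyman-Pearson formulation:

```python
# Illustrative matched-illumination energy allocation (not the paper's exact
# algorithm): concentrate transmit energy in the frequency bins where the
# target response is strong relative to the clutter-plus-noise PSD.
target_psd = [0.2, 1.0, 0.8, 0.1, 0.5]      # |H(f_k)|^2, hypothetical
clutter_noise = [1.0, 0.5, 2.0, 0.4, 0.6]   # P_c(f_k) + P_n(f_k), hypothetical
E = 1.0                                     # total transmit energy budget

ratio = [t / cn for t, cn in zip(target_psd, clutter_noise)]
best_bin = max(range(len(ratio)), key=ratio.__getitem__)

# With a linear receiver, output SCNR here is sum_k E_k * ratio_k, so under
# only an energy constraint the optimum is a single tone at best_bin:
energy = [E if k == best_bin else 0.0 for k in range(len(ratio))]
scnr = sum(e * r for e, r in zip(energy, ratio))
```

With additional constraints (bandwidth, peak power), the optimal spectrum spreads over several such bins, which is the "combination of single-frequency signals" behavior the abstract reports.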

  11. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
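
The life-cycle energy-cost criterion described above can be sketched as a levelized-cost computation: discounted lifetime costs divided by discounted lifetime energy, with module degradation included. The formulation is a common simplification and all numbers below are hypothetical, not taken from the JPL study:

```python
def levelized_energy_cost(capital, o_and_m, annual_kwh, degradation, discount, years):
    """Life-cycle cost per kWh: discounted costs over discounted energy.
    Illustrative formulation; all parameter values below are hypothetical."""
    cost = float(capital)
    energy = 0.0
    for t in range(1, years + 1):
        d = (1 + discount) ** t
        cost += o_and_m / d                              # yearly O&M, discounted
        energy += annual_kwh * (1 - degradation) ** t / d  # degraded output, discounted
    return cost / energy

# Example: compare 20- vs 30-year plant lifetimes for a hypothetical module
lcc20 = levelized_energy_cost(10000, 100, 8000, 0.01, 0.05, 20)
lcc30 = levelized_energy_cost(10000, 100, 8000, 0.01, 0.05, 30)
```

Comparing the levelized cost against the marginal cost of one more year of operation is how the paper determines the logical plant lifetime; here the longer horizon amortizes capital further, so lcc30 < lcc20.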

  12. Microelectronics package design using experimentally-validated modeling and simulation.

    SciTech Connect

    Johnson, Jay Dean; Young, Nathan Paul; Ewsuk, Kevin Gregory

    2010-11-01

    Packaging high power radio frequency integrated circuits (RFICs) in low temperature cofired ceramic (LTCC) presents many challenges. Within the constraints of LTCC fabrication, the design must provide the usual electrical isolation and interconnections required to package the IC, with additional consideration given to RF isolation and thermal management. While iterative design and prototyping is an option for developing RFIC packaging, it would be expensive and most likely unsuccessful due to the complexity of the problem. To facilitate and optimize package design, thermal and mechanical simulations were used to understand and control the critical parameters in LTCC package design. The models were validated through comparisons to experimental results. This paper summarizes an experimentally-validated modeling approach to RFIC package design, and presents some results and key findings.

  13. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.

  14. MDO can help resolve the designer's dilemma. [multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.

    1991-01-01

    Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from computer software and hardware. Expected benefits of this methodology are a rational, mathematically consistent approach to hypersonic aircraft designs, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.

  15. MDO can help resolve the designer's dilemma. [Multidisciplinary design optimization

    SciTech Connect

    Sobieszczanski-Sobieski, Jaroslaw; Tulinius, J.R. (Rockwell International Corp., El Segundo, CA)

    1991-09-01

    Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from computer software and hardware. Expected benefits of this methodology are a rational, mathematically consistent approach to hypersonic aircraft designs, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.

  17. Design Optimization of Structural Health Monitoring Systems

    SciTech Connect

    Flynn, Eric B.

    2014-03-06

    Sensor networks drive decisions. Approach: design networks to minimize the expected total cost (in a statistical sense, i.e., Bayes risk) associated with making wrong decisions and with installing, maintaining, and running the sensor network itself. Search for optimal solutions using a Monte-Carlo-sampling-adapted genetic algorithm. Applications include structural health monitoring and surveillance.
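
The Bayes-risk objective above can be sketched for a one-dimensional design variable (number of sensors). Everything here is hypothetical: the detection and false-alarm probabilities, the cost figures, and the any-sensor-fires decision rule; a simple grid search stands in for the paper's genetic algorithm:

```python
import random

def bayes_risk(n_sensors, p_damage=0.05, c_miss=1e6, c_false=1e4,
               c_sensor=2e3, n_mc=20000, seed=0):
    """Monte Carlo estimate of total expected cost for a hypothetical
    monitoring system: decision costs plus hardware cost. Each sensor
    independently detects damage with probability 0.6 and false-alarms
    with probability 0.01; the system declares damage if any sensor fires."""
    rng = random.Random(seed)
    cost = 0.0
    for _ in range(n_mc):
        damaged = rng.random() < p_damage
        p_fire = 0.6 if damaged else 0.01
        fired = any(rng.random() < p_fire for _ in range(n_sensors))
        if damaged and not fired:
            cost += c_miss     # missed damage: expensive
        elif fired and not damaged:
            cost += c_false    # false alarm: inspection cost
    return cost / n_mc + c_sensor * n_sensors

# Grid search over network size stands in for the genetic algorithm:
best_n = min(range(1, 15), key=bayes_risk)
```

Adding sensors drives down the miss cost geometrically but raises hardware and false-alarm costs linearly, so the risk curve has an interior minimum, which is exactly the trade-off the design optimization exploits.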

  18. Design of Optimally Robust Control Systems.

    DTIC Science & Technology

    1980-01-01

    approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double

  19. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The main goal of research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated based on statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology on the airplanes in those weight databases. If any new structural technology is to be pursued or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since response to changes in geometry is essential in conceptual design of airplanes, as well as the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.

  20. Experimental Testing of Dynamically Optimized Photoelectron Beams

    SciTech Connect

    Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; Di Pirro, G.; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Vicario, C.; Serafini, L.; Jones, S.

    2006-11-27

    We discuss the design of and initial results from an experiment in space-charge-dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC photoinjector. The scheme under study employs the tendency of intense electron beams to rearrange to produce a uniform density, giving a nearly ideal beam from the viewpoint of space-charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that this new regime of operating the photoinjector may be the preferred method of obtaining the highest brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence for nearly uniformly filled ellipsoidal charge distributions has been recorded.

  1. Experimental Testing of Dynamically Optimized Photoelectron Beams

    NASA Astrophysics Data System (ADS)

    Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; Di Pirro, G.; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Serafini, L.; Vicario, C.; Jones, S.

    2006-11-01

    We discuss the design of and initial results from an experiment in space-charge-dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC photoinjector. The scheme under study employs the tendency of intense electron beams to rearrange to produce a uniform density, giving a nearly ideal beam from the viewpoint of space-charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that this new regime of operating the photoinjector may be the preferred method of obtaining the highest brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence for nearly uniformly filled ellipsoidal charge distributions has been recorded.

  2. Using experimental design to define boundary manikins.

    PubMed

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition of that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. In literature sources there are differences in how many and in what way these manikins should be defined. A field similar to the boundary case method is experimental design, in which the relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibilities of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adapted to be used together with a confidence region and its axes. The results from the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups of manikins depends heavily on the number of key measurements but also on the type of chosen experimental design.
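
The confidence region and its axes mentioned above can be sketched for two key measurements. Boundary manikins are placed at the ends of the confidence-ellipse axes, obtained from the eigen-decomposition of the covariance matrix; the anthropometric summary statistics and the coverage scale factor below are hypothetical:

```python
import math

# Hypothetical anthropometric summary for two key measurements
# (stature in cm, weight in kg): means, standard deviations, correlation.
mean = (175.0, 78.0)
sd = (7.0, 11.0)
rho = 0.5
k = 2.45  # scales the ellipse toward ~95% coverage in 2 dimensions

# Covariance matrix and its eigen-decomposition (closed form for 2x2):
cov = [[sd[0] ** 2, rho * sd[0] * sd[1]],
       [rho * sd[0] * sd[1], sd[1] ** 2]]
tr = cov[0][0] + cov[1][1]
det = cov[0][0] * cov[1][1] - cov[0][1] ** 2
eig = [(tr + s * math.sqrt(tr * tr - 4 * det)) / 2 for s in (1, -1)]

# Boundary manikins sit at the ends of the confidence-ellipse axes:
manikins = []
for lam in eig:
    vx, vy = cov[0][1], lam - cov[0][0]    # eigenvector of [[a,b],[b,c]] for lam
    norm = math.hypot(vx, vy)
    ux, uy = vx / norm, vy / norm
    r = k * math.sqrt(lam)                 # semi-axis length
    manikins.append((mean[0] + r * ux, mean[1] + r * uy))
    manikins.append((mean[0] - r * ux, mean[1] - r * uy))
```

With more key measurements the ellipse becomes a hyper-ellipsoid, and the number of axis-end manikins grows with the dimension, which is the size effect the paper reports.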

  3. Complex optimization for big computational and experimental neutron datasets

    NASA Astrophysics Data System (ADS)

    Bao, Feng; Archibald, Richard; Niedziela, Jennifer; Bansal, Dipanshu; Delaire, Olivier

    2016-12-01

    We present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. We use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
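
The "mathematical regularization" step in the framework above can be illustrated with the simplest case: Tikhonov-regularized least squares, solving (AᵀA + αI)x = Aᵀb. The two-parameter model, data, and regularization weight below are hypothetical stand-ins for a model-refinement fit:

```python
# Tikhonov-regularized least squares, a minimal stand-in for the paper's
# regularization step. The 2-parameter model and data here are hypothetical.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
b = [1.1, 2.9, 5.2, 6.8]    # noisy observations of y = 1 + 2*t
alpha = 0.1                 # regularization weight

# Form the regularized normal equations for the 2x2 system
AtA = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
Atb = [sum(r[i] * y for r, y in zip(A, b)) for i in range(2)]
AtA[0][0] += alpha
AtA[1][1] += alpha

# Solve the 2x2 system by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - AtA[0][1] * Atb[0]) / det]
# x recovers approximately (1, 2), shrunk slightly toward zero by alpha
```

In the paper's setting the same idea stabilizes a much larger inverse problem, with the confidence regions guiding the choice of regularization strength.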

  4. Complex optimization for big computational and experimental neutron datasets

    SciTech Connect

    Bao, Feng; Archibald, Richard; Niedziela, Jennifer; Bansal, Dipanshu; Delaire, Olivier

    2016-11-07

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  5. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than traditional methods.
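
The first-order Taylor "predicted flow" idea in the one-dimensional search can be sketched with a cheap stand-in objective. The quadratic "analysis" below substitutes for a CFD solve, and finite differences substitute for the paper's quasi-analytical gradients:

```python
# First-order Taylor approximation used during the 1-D search: replace
# repeated expensive flow solves with f(x0) + grad . dx. The "analysis"
# below is a cheap hypothetical stand-in for a CFD solve.
def analysis(x):
    return (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2

def gradient(x, h=1e-6):
    # quasi-analytical gradients in the paper; finite differences here
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        g.append((analysis(xp) - analysis(x)) / h)
    return g

x0 = [1.0, 1.0]
f0, g0 = analysis(x0), gradient(x0)
d = [-gi for gi in g0]  # steepest-descent search direction

def f_approx(step):
    # predicted objective along the search direction, no new analysis needed
    return f0 + step * sum(gi * di for gi, di in zip(g0, d))

# The approximation tracks the true objective for small steps:
err = abs(f_approx(0.01) - analysis([x0[0] + 0.01 * d[0], x0[1] + 0.01 * d[1]]))
```

Each evaluation of f_approx costs a dot product instead of a flow solve, which is where the computational savings in the line search come from.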

  6. Optimal designs in regression with correlated errors

    PubMed Central

    Dette, Holger; Pepelyshev, Andrey; Zhigljavsky, Anatoly

    2016-01-01

    This paper discusses the problem of determining optimal designs for regression models, when the observations are dependent and taken on an interval. A complete solution of this challenging optimal design problem is given for a broad class of regression models and covariance kernels. We propose a class of estimators which are only slightly more complicated than the ordinary least-squares estimators. We then demonstrate that we can design the experiments, such that asymptotically the new estimators achieve the same precision as the best linear unbiased estimator computed for the whole trajectory of the process. As a by-product we derive explicit expressions for the BLUE in the continuous time model and analytic expressions for the optimal designs in a wide class of regression models. We also demonstrate that for a finite number of observations the precision of the proposed procedure, which includes the estimator and design, is very close to the best achievable. The results are illustrated on a few numerical examples. PMID:27340304
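
The BLUE for correlated observations mentioned above has a compact closed form in the simplest case, estimating a constant mean under an exponential covariance kernel: θ̂ = (1ᵀK⁻¹y)/(1ᵀK⁻¹1). This is a minimal generalized-least-squares sketch; the design points and data are hypothetical:

```python
import math

def solve(M, v):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# BLUE of the mean under an exponential covariance kernel:
t = [0.0, 0.5, 1.0]                                   # hypothetical design points
y = [1.2, 1.0, 1.4]                                   # hypothetical observations
K = [[math.exp(-abs(a - b)) for b in t] for a in t]   # cov(e_i, e_j) = exp(-|t_i - t_j|)
w = solve(K, [1.0, 1.0, 1.0])                         # K^{-1} 1
blue = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)  # (1'K^{-1}y)/(1'K^{-1}1)
```

Note that the correlation downweights clustered points automatically; the paper's contribution is choosing the design points themselves so that simple estimators approach this BLUE's precision.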

  7. Optimizing the TRD design for ACCESS

    SciTech Connect

    Cherry, M. L.; Guzik, T. G.; Isbert, J.; Wefel, J. P.

    1999-01-22

    The present ACCESS design combines an ionization calorimeter with a transition radiation detector (TRD) to measure the cosmic ray composition and energy spectrum from H to Fe at energies above 1 TeV/nucleon to the 'knee' in the all particle spectrum. We are in the process of optimizing the TRD design to extend the range of the technique to as high an energy as possible given the constraints of the International Space Station mission and the need to coexist with the calorimeter. The current status of the design effort and preliminary results will be presented.

  8. Experimental Testing of Dynamically Optimized Photoelectron Beams

    NASA Astrophysics Data System (ADS)

    Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; Pirro, G. Di; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Serafini, L.; Vicario, C.

    2007-09-01

    We discuss the design of and initial results from an experiment in space-charge-dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC (located at INFN-LNF, Frascati) photoinjector. The scheme under study employs the natural tendency of intense electron beams to configure themselves to produce a uniform density, giving a nearly ideal beam from the viewpoint of space-charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that the existing infrastructure at SPARC is nearly ideal for the proposed tests, and that this new regime of operating the photoinjector may be the preferred method of obtaining the highest brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence for nearly uniformly filled ellipsoidal charge distributions has been recorded.

  10. Multidisciplinary Design Optimization on Conceptual Design of Aero-engine

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-bo; Wang, Zhan-xue; Zhou, Li; Liu, Zeng-wen

    2016-06-01

    In order to obtain better integrated performance of an aero-engine during the conceptual design stage, multiple disciplines such as aerodynamics, structure, weight, and aircraft mission must be considered. Unfortunately, the couplings between these disciplines make the problem difficult to model or solve by conventional methods. MDO (Multidisciplinary Design Optimization) methodology, which deals well with discipline couplings, is adopted to solve this coupled problem. Approximation methods, optimization methods, coordination methods, and modeling methods for the MDO framework are analyzed in depth. To obtain a more efficient MDO framework, an improved CSSO (Concurrent Subspace Optimization) strategy based on DOE (Design Of Experiment) and RSM (Response Surface Model) methods is proposed in this paper, and an improved DE (Differential Evolution) algorithm is recommended to solve the system-level and discipline-level optimization problems in the MDO framework. The improved CSSO strategy and DE algorithm are evaluated on a numerical test problem; the results show that the proposed improvements significantly increase efficiency. The coupled problem of VCE (Variable Cycle Engine) conceptual design is then solved with the improved CSSO strategy. The design parameters given by the improved CSSO strategy are better than the original ones, and the integrated performance of the VCE is significantly improved.
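
    The abstract mentions an improved differential evolution algorithm without giving details. As a rough illustration of the baseline technique only (not the paper's improved variant), the sketch below implements classic DE/rand/1/bin; the function names and parameter values are illustrative assumptions.

```python
import random

def de_step(population, scores, objective, f=0.8, cr=0.9):
    """One generation of classic DE/rand/1/bin over a list of real-valued vectors."""
    dim = len(population[0])
    new_pop, new_scores = [], []
    for i, target in enumerate(population):
        # Mutation: difference of two random members added to a third (all distinct from target).
        a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
        mutant = [a[k] + f * (b[k] - c[k]) for k in range(dim)]
        # Binomial crossover with one guaranteed gene taken from the mutant.
        j_rand = random.randrange(dim)
        trial = [mutant[k] if (random.random() < cr or k == j_rand) else target[k]
                 for k in range(dim)]
        # Greedy selection: keep the trial only if it is no worse than the target.
        s = objective(trial)
        if s <= scores[i]:
            new_pop.append(trial)
            new_scores.append(s)
        else:
            new_pop.append(target)
            new_scores.append(scores[i])
    return new_pop, new_scores

def optimize(objective, dim=2, pop_size=20, generations=100, bounds=(-5.0, 5.0)):
    """Run DE from a uniform random initial population; returns (best_vector, best_score)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [objective(p) for p in pop]
    for _ in range(generations):
        pop, scores = de_step(pop, scores, objective)
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]
```

    On a simple quadratic test function this converges rapidly; the paper's MDO setting would replace `objective` with the system- or discipline-level analysis.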

  11. Aircraft family design using enhanced collaborative optimization

    NASA Astrophysics Data System (ADS)

    Roth, Brian Douglas

    Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component

  12. Optimal AFCS: particularities of real design

    NASA Astrophysics Data System (ADS)

    Platonov, A.; Zaitsev, Ie.; Chaciński, H.

    2015-09-01

    The paper discusses particularities of optimal adaptive communication system (AFCS) design conditioned by the particularities of their architecture and way of functioning, as well as by the approach to their design. The main particularity is that AFCS employ an analog method of transmission (in the paper, amplitude modulation) and are intended for short-range transmission of signals from analog sources. Another is that AFCS design is carried out on the basis of strict results of concurrent analytical optimisation of the transmitting and receiving parts of the system. General problems arising in the transition from theoretical results to real engineering design are discussed, along with approaches to their solution. Some concrete tasks of AFCS design are also considered.

  13. Optimal design of waveguiding periodic structures

    NASA Astrophysics Data System (ADS)

    Diana, Roberto; Giorgio, Agostino; Perri, Anna Gina; Armenise, Mario Nicola

    2003-04-01

    The design of some 1D waveguiding photonic bandgap (PBG) devices and fiber Bragg gratings (FBG) for microstrain-based sensing applications has been carried out by a model based on the Bloch-Floquet theorem. A low loss, very narrow passband GaAs PBG filter for the operating wavelength λ = 1.55 μm, having an air bridge configuration, was designed and simulated. Moreover, a resonant Si-on-glass PBG device has been designed to obtain the resonance condition at λ = 1.55 μm. Finally, a FBG-based microstrain sensor design has been carried out, having an array of 32 FBGs. A complete analysis of the propagation characteristics, electromagnetic field harmonics and total field distribution, transmission and reflection coefficients, guided and radiated power, and total losses enabled the optimization of the design in a very short CPU time.

  14. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  15. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  16. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.

  17. Automation enhancements in multidisciplinary design optimization

    NASA Astrophysics Data System (ADS)

    Wujek, Brett Alan

    The process of designing complex systems has necessarily evolved into one which includes the contributions and interactions of multiple disciplines. To date, the Multidisciplinary Design Optimization (MDO) process has been addressed mainly from the standpoint of algorithm development, with the primary concerns being effective and efficient coordination of disciplinary activities, modification of conventional optimization methods, and the utility of approximation techniques toward this goal. The focus of this dissertation is on improving the efficiency of MDO algorithms through the automation of common procedures and the development of improved methods to carry out these procedures. In this research, automation enhancements are made to the MDO process in three different areas: execution, sensitivity analysis and utility, and design variable move-limit management. A framework is developed along with a graphical user interface called NDOPT to automate the setup and execution of MDO algorithms in a research environment. The technology of automatic differentiation (AD) is utilized within various modules of MDO algorithms for fast and accurate sensitivity calculation, allowing for the frequent use of updated sensitivity information. With the use of AD, efficiency improvements are observed in the convergence of system analyses and in certain optimization procedures, since gradient-based methods, traditionally considered cost-prohibitive, can be employed at a more reasonable expense. Finally, a method is developed to automatically monitor and adjust design variable move-limits for the approximate optimization process commonly used in MDO algorithms. With its basis in the well-established and provably convergent trust region approach, the Trust region Ratio Approximation Method (TRAM) developed in this research accounts for approximation accuracy and the sensitivity of the model error to the design space in providing a flexible move-limit adjustment factor. Favorable results

  18. Optimal design of a space power system

    NASA Technical Reports Server (NTRS)

    Chun, Young W.; Braun, James F.

    1990-01-01

    The aerospace industry, like many other industries, regularly applies optimization techniques to develop designs which reduce cost, maximize performance, and minimize weight. The desire to minimize weight is of particular importance in space-related products since the costs of launch are directly related to payload weight, and launch vehicle capabilities often limit the allowable weight of a component or system. With these concerns in mind, this paper presents the optimization of a space-based power generation system for minimum mass. The goal of this work is to demonstrate the use of optimization techniques on a realistic and practical engineering system. The power system described uses thermoelectric devices to convert heat into electricity. The heat source for the system is a nuclear reactor. Waste heat is rejected from the system to space by a radiator.

  19. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features that uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon on two competing aircraft performance objectives. The second area has been titled morphing as an independent variable and formulates the sizing of a morphing aircraft as an optimization problem in which the amount of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.

  1. Generalized mathematical models in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, Panos Y.; Rao, J. R. Jagannatha

    1989-01-01

    The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.

  2. Sequential ensemble-based optimal design for parameter estimation

    SciTech Connect

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, combining the EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated: overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
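
    The relative entropy (RE) metric mentioned above can be approximated from the first- and second-order ensemble statistics the abstract refers to. The sketch below, with hypothetical helper names, fits a Gaussian to each 1-D ensemble and scores a candidate measurement by KL(posterior || prior); it illustrates the general idea, not the paper's implementation.

```python
import math

def ensemble_moments(samples):
    """Sample mean and (n-1)-normalized standard deviation of a 1-D ensemble."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var)

def gaussian_kl(mean_p, std_p, mean_q, std_q):
    """Closed-form KL(p || q) for two 1-D Gaussians."""
    return (math.log(std_q / std_p)
            + (std_p ** 2 + (mean_p - mean_q) ** 2) / (2.0 * std_q ** 2)
            - 0.5)

def relative_entropy_gain(prior_ensemble, posterior_ensemble):
    """Data worth of a candidate measurement: KL(posterior || prior) under a
    Gaussian fit to each ensemble's first two moments."""
    mp, sp = ensemble_moments(posterior_ensemble)
    mq, sq = ensemble_moments(prior_ensemble)
    return gaussian_kl(mp, sp, mq, sq)
```

    A sampling design would then pick, at each step, the candidate observation with the largest expected gain; a sharply narrowed posterior yields a large positive score, while an uninformative observation yields a score near zero.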

  3. Discrete optimization of isolator locations for vibration isolation systems: An analytical and experimental investigation

    SciTech Connect

    Ponslet, E.R.; Eldred, M.S.

    1996-05-17

    An analytical and experimental study is conducted to investigate the effect of isolator locations on the effectiveness of vibration isolation systems. The study uses isolators with fixed properties and evaluates potential improvements to the isolation system that can be achieved by optimizing isolator locations. Because the available locations for the isolators are discrete in this application, a Genetic Algorithm (GA) is used as the optimization method. The system is modeled in MATLAB(TM) and coupled with the GA available in the DAKOTA optimization toolkit under development at Sandia National Laboratories. Design constraints dictated by hardware and experimental limitations are implemented through penalty function techniques. A series of GA runs reveal difficulties in the search on this heavily constrained, multimodal, discrete problem. However, the GA runs provide a variety of optimized designs with predicted performance from 30 to 70 times better than a baseline configuration. An alternate approach is also tested on this problem: it uses continuous optimization, followed by rounding of the solution to neighboring discrete configurations. Results show that this approach leads to either infeasible or poor designs. Finally, a number of optimized designs obtained from the GA searches are tested in the laboratory and compared to the baseline design. These experimental results show a 7 to 46 times improvement in vibration isolation from the baseline configuration.
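
    The penalty function technique mentioned above folds constraint violations into the objective the GA minimizes. A minimal sketch, assuming a standard exterior quadratic penalty (the helper name and weight are illustrative, not Sandia's implementation):

```python
def penalized_cost(raw_cost, violations, weight=1e3):
    """Exterior quadratic penalty for inequality constraints g_i(x) <= 0.

    Infeasible amounts (g_i > 0) are squared, summed and added to the raw
    cost, so a GA minimizing this value is steered back toward feasibility
    without needing gradient information.
    """
    penalty = sum(max(0.0, g) ** 2 for g in violations)
    return raw_cost + weight * penalty
```

    A feasible design (all g_i ≤ 0) keeps its raw cost unchanged, while an infeasible one is penalized in proportion to the squared violation, which lets mildly infeasible candidates survive early generations.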

  4. Optimizing Trial Designs for Targeted Therapies

    PubMed Central

    Beckman, Robert A.; Burman, Carl-Fredrik; König, Franz; Stallard, Nigel; Posch, Martin

    2016-01-01

    An important objective in the development of targeted therapies is to identify the populations where the treatment under consideration has a positive benefit-risk balance. We consider pivotal clinical trials, where the efficacy of a treatment is tested in an overall population and/or in a pre-specified subpopulation. Based on a decision theoretic framework, we derive optimized trial designs by maximizing utility functions. Features to be optimized include the sample size and the population in which the trial is performed (the full population or the targeted subgroup only) as well as the underlying multiple test procedure. The approach accounts for prior knowledge of the efficacy of the drug in the considered populations using a two-dimensional prior distribution. The considered utility functions account for the costs of the clinical trial as well as the expected benefit when demonstrating efficacy in the different subpopulations. We model utility functions from a sponsor's as well as from a public health perspective, reflecting actual civil interests. Examples of optimized trial designs obtained by numerical optimization are presented for both perspectives. PMID:27684573

  5. Multiobjective optimization in integrated photonics design.

    PubMed

    Gagnon, Denis; Dumont, Joey; Dubé, Louis J

    2013-07-01

    We propose the use of the parallel tabu search algorithm (PTS) to solve combinatorial inverse design problems in integrated photonics. To assess the potential of this algorithm, we consider the problem of beam shaping using a two-dimensional arrangement of dielectric scatterers. The performance of PTS is compared to one of the most widely used optimization algorithms in photonics design, the genetic algorithm (GA). We find that PTS can produce comparable or better solutions than the GA, while requiring less computation time and fewer adjustable parameters. For the coherent beam shaping problem as a case study, we demonstrate how PTS can tackle multiobjective optimization problems and represent a robust and efficient alternative to GA.

  6. Initial data sampling in design optimization

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Terry H.

    2011-06-01

    Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
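
    A random Latin hypercube of the kind described above is straightforward to generate: assign each sample one stratum per dimension via an independent shuffle, then jitter within strata. This sketch (illustrative, not the authors' code) samples the unit hypercube; note it omits the correlation-reducing, space-filling refinements the abstract alludes to.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Random Latin hypercube on [0, 1)^d: exactly one sample per stratum per dimension."""
    rng = rng or random.Random()
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        perm = list(range(n_samples))  # one stratum index per sample
        rng.shuffle(perm)
        for i in range(n_samples):
            # Jitter the point uniformly within its assigned stratum.
            samples[i][d] = (perm[i] + rng.random()) / n_samples
    return samples
```

    For two dimensions this reduces to the Latin square described above: projecting the samples onto either axis gives exactly one point per row and per column of the grid.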

  7. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  8. Optimized design for an electrothermal microactuator

    NASA Astrophysics Data System (ADS)

    Cǎlimǎnescu, Ioan; Stan, Liviu-Constantin; Popa, Viorica

    2015-02-01

    In micromechanical structures, electrothermal actuators are known to be capable of providing larger force and reasonable tip deflection compared to electrostatic ones. Many studies have been devoted to the analysis of flexure actuators. One of the most popular electrothermal actuators is the `U-shaped' actuator. The device is composed of two suspended beams with variable cross sections joined at the free end, which constrains the tip to move in an arcing motion while current is passed through the actuator. The goal of this research is to determine via FEA the best-fitted geometry of the microactuator (optimization input parameters) in order to drive some of the output parameters, such as thermal strain or total deformation, to their maximum values. The software used to generate the CAD geometry was SolidWorks 2010, and all FEA analysis was conducted with Ansys 13. The optimized model has smaller values of the geometric input parameters, i.e., a more compact geometry; the maximum temperature reached a smaller value for the optimized model; the calculated heat flux is 13% larger for the optimized model, and similarly for Joule heating (26%), total deformation (1.2%) and thermal strain (8%). Simply by optimizing the design, the microactuator became both more compact and more efficient.

  9. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests are performed after optimization. The experimental result shows that the mass of each optimized spring is reduced by 16.2%, while the reliability increases to 99.9% from 94.5%. The experimental result verifies the correctness and feasibility of this reliability optimization designing method.
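
    The time-varying crossover and mutation rates described above can be sketched as a simple schedule. The ranges and the diversity-based boost below are illustrative assumptions, not the paper's actual update rule; the point is only the mechanism that helps an AGA avoid premature convergence.

```python
def adaptive_rates(generation, max_generations,
                   pc_range=(0.9, 0.6), pm_range=(0.01, 0.1),
                   diversity=1.0):
    """Crossover/mutation rates that change over the run (illustrative schedule).

    Crossover probability decays and mutation probability grows with time;
    low population diversity (a value in 0..1) boosts mutation further so
    the GA can escape premature convergence.
    """
    t = generation / max_generations
    pc = pc_range[0] + (pc_range[1] - pc_range[0]) * t
    pm = pm_range[0] + (pm_range[1] - pm_range[0]) * t
    pm *= 1.0 + (1.0 - min(diversity, 1.0))  # up to 2x when diversity collapses
    return pc, min(pm, 0.5)
```

    Early generations thus favor recombination of good building blocks, while late or stagnating populations receive more mutation pressure.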

  10. Computational Methods for Design, Control and Optimization

    DTIC Science & Technology

    2007-10-01

    "scenario" that applies to channel flows (Poiseuille flow, Couette flow) and pipe flows. Over the past 75 years many complex "transition theories" have... Simulation of Turbulent Flows, Springer Verlag, 2005. Additional Publications Supported by this Grant: 1. J. Borggaard and T. Iliescu, Approximate Deconvolution... rigorous analysis of design algorithms that combine numerical simulation codes, approximate sensitivity calculations and optimization codes. The fundamental

  11. Optimal Design of Compact Spur Gear Reductions

    DTIC Science & Technology

    1992-09-01

    (stress, psi; pressure angle, deg; Vf, unit gradient in the feasible direction) Lundberg and Palmgren (1952) developed a theory for the life and capacity of ball and roller bearings. This life model is... bearings (Lundberg and Palmgren, 1952). Lundberg and Palmgren determined that the scatter in the life of a bearing can be modeled with a two-parameter... optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in

  12. Optimization of the National Ignition Facility primary shield design

    SciTech Connect

    Annese, C.E.; Watkins, E.F.; Greenspan, E.; Miller, W.F.; Latkowski, J.; Lee, J.D.; Soran, P.; Tobin, M.L.

    1993-10-01

    Minimum-cost design concepts for the primary shield of the National Ignition Facility (NIF), a laser fusion experimental facility, are searched for with the help of the optimization code SWAN. The computational method developed for this search involves incorporating the time dependence of the delayed photon field within effective delayed photon production cross sections. This method enables one to address the time-dependent problem using relatively simple, time-independent transport calculations, thus significantly simplifying the design process. A novel approach was used for the identification of the optimal combination of constituents that will minimize the shield cost; it involves the generation, with SWAN, of effectiveness functions for replacing materials on an equal-cost basis. The minimum-cost shield design concept was found to consist of a mixture of polyethylene and low-cost, low-activation materials such as SiC, with boron added near the shield boundaries.

  13. Optimal serial dilutions designs for drug discovery experiments.

    PubMed

    Donev, Alexander N; Tobias, Randall D

    2011-05-01

    Dose-response studies are an essential part of the drug discovery process. They are typically carried out on a large number of chemical compounds using serial dilution experimental designs. This paper proposes a method of selecting the key parameters of these designs (maximum dose, dilution factor, number of concentrations and number of replicated observations for each concentration) depending on the stage of the drug discovery process where the study takes place. This is achieved by employing and extending results from optimal design theory. Population D- and D(S)-optimality are defined and used to evaluate the precision of estimating the potency of the tested compounds. The proposed methodology is easy to use and creates opportunities to reduce the cost of the experiments without compromising the quality of the data obtained in them.
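
    The four key parameters listed above (maximum dose, dilution factor, number of concentrations, replicates per concentration) fully determine the concentration series of a serial dilution design. A minimal helper with a hypothetical name and interface might look like:

```python
def serial_dilution_design(max_dose, dilution_factor, n_concentrations, n_replicates):
    """Concentration series for a serial dilution design.

    Returns (concentration, replicates) pairs: max_dose, max_dose/f,
    max_dose/f**2, ... for n_concentrations steps, each measured
    n_replicates times.
    """
    if dilution_factor <= 1:
        raise ValueError("dilution factor must exceed 1")
    return [(max_dose / dilution_factor ** i, n_replicates)
            for i in range(n_concentrations)]
```

    The optimal design question the paper addresses is which values of these four parameters make the resulting concentration grid most informative (in the population D- or D(S)-optimality sense) for estimating compound potency at a given stage of discovery.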

  14. Optimal design of a tidal turbine

    NASA Astrophysics Data System (ADS)

    Kueny, J. L.; Lalande, T.; Herou, J. J.; Terme, L.

    2012-11-01

    An optimal design procedure has been applied to improve the design of an open-center tidal turbine. Specific software developed in C++ makes it possible to generate geometry adapted to the constraints imposed on this machine. Automatic scripts based on the AUTOGRID, IGG, FINE/TURBO and CFView software of the NUMECA CFD suite are used to evaluate all the candidate geometries. This package is coupled with the optimization software EASY, which is based on an evolutionary strategy complemented by an artificial neural network. A new technique is proposed to guarantee the robustness of the mesh over the whole range of the design parameters. An important improvement of the initial geometry has been obtained. To limit the total CPU time necessary for this optimization process, the geometry of the tidal turbine has been considered axisymmetric, with a uniform upstream velocity. A more complete model (12 M nodes) has been built in order to analyze the effects related to the sea bed boundary layer, the proximity of the sea surface, the presence of a large triangular basement supporting the turbine and a possible incidence of the upstream velocity.

  15. General purpose optimization software for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1990-01-01

    The author has developed several general purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.

  16. Optimal Design of Non-equilibrium Experiments for Genetic Network Interrogation.

    PubMed

    Adoteye, Kaska; Banks, H T; Flores, Kevin B

    2015-02-01

    Many experimental systems in biology, especially synthetic gene networks, are amenable to perturbations that are controlled by the experimenter. We developed an optimal design algorithm that calculates optimal observation times in conjunction with optimal experimental perturbations in order to maximize the amount of information gained from longitudinal data derived from such experiments. We applied the algorithm to a validated model of a synthetic Brome Mosaic Virus (BMV) gene network and found that optimizing experimental perturbations may substantially decrease uncertainty in estimating BMV model parameters.
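The idea of choosing observation times to maximize information can be sketched with a far simpler system than the BMV network. The example below is a hedged illustration, not the authors' model or algorithm: for an assumed one-parameter decay process y(t) = y0·e^(−kt), it picks the pair of sampling times on a grid that maximizes the determinant of the Fisher information matrix.

```python
import numpy as np
from itertools import combinations

def fim_det(times, k=1.0, y0=1.0):
    # Fisher information determinant for y(t) = y0*exp(-k*t),
    # parameters (y0, k), assuming iid Gaussian observation noise
    t = np.asarray(times, float)
    s_y0 = np.exp(-k * t)               # sensitivity dy/dy0
    s_k = -y0 * t * np.exp(-k * t)      # sensitivity dy/dk
    J = np.column_stack([s_y0, s_k])
    return np.linalg.det(J.T @ J)

grid = np.linspace(0.1, 5.0, 50)        # candidate observation times
best = max(combinations(grid, 2), key=fim_det)
print(best)   # one sample as early as possible, the second about 1/k later
```

For this toy model the D-optimal pair is analytically (t_min, t_min + 1/k), which the grid search recovers; the paper's algorithm does the analogous search jointly over observation times and experimental perturbations.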

  17. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
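As a concrete classroom-style illustration of the method described above, the sketch below builds the standard L4 orthogonal array (three two-level factors in four runs), computes a larger-the-better Taguchi signal-to-noise ratio, and estimates main effects. The response values are invented for the example, and, as the abstract warns, this analysis cannot separate factor interactions from main effects.

```python
import numpy as np

# L4 orthogonal array: 4 runs, 3 two-level factors (levels coded 0/1)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# hypothetical measured responses for the 4 runs (single replicate)
y = np.array([20.0, 24.0, 30.0, 34.0])

# larger-the-better signal-to-noise ratio: -10*log10(mean(1/y^2))
sn = -10 * np.log10(1.0 / y ** 2)

# main effect of each factor: mean S/N at level 1 minus mean S/N at level 0
effects = [sn[L4[:, f] == 1].mean() - sn[L4[:, f] == 0].mean()
           for f in range(3)]
print(effects)
```

Ranking the factors by these effects is the usual Taguchi screening step; with replicated runs the S/N ratio also rewards reproducibility, not just a high mean response.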

  19. Design and optimization of a HTS insert for solenoid magnets

    NASA Astrophysics Data System (ADS)

    Tomassetti, Giordano; de Marzi, Gianluca; Muzzi, Luigi; Celentano, Giuseppe; della Corte, Antonio

    2016-12-01

    With the availability of High-Temperature Superconducting (HTS) prototype cables, based on high-performance REBCO Coated Conductor (CC) tapes, new designs can now be made for large bore high-field inserts in superconducting solenoids, thus extending the magnet operating point to higher magnetic fields. In this work, as an alternative approach to the standard trial-and-error design process, an optimization procedure for a HTS grading section design is proposed, including parametric electro-magnetic and structural analyses, using the ANSYS software coupled with a numerically-efficient optimization algorithm. This HTS grading section is designed to be inserted into a 12 T large bore Low-Temperature Superconducting (LTS) solenoid (diameter about 1 m) to increase the field up to a maximum value of at least 17 T. The optimization variables are the number of turns and layers and the circle-in-square jacket inner diameter, chosen to minimize the total conductor length needed to achieve a peak field of at least 17 T while guaranteeing structural integrity and respecting manufacturing constraints. The optimization yielded an optimal total conductor length of 360 m, achieving 17.2 T with an operating current of 22.4 kA and a coil comprising 18 × 12 turns, about 20% shorter than the best initial candidate architectural design. The optimal HTS insert has a bore compatible with manufacturing constraints (inner bore radius larger than 30 cm). A scaled HTS insert for validation purposes, with a reduced conductor length, to be tested in an advanced experimental facility currently under construction, is also mentioned.

  20. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it.
The overall solution

  1. Topology optimization design of a space mirror

    NASA Astrophysics Data System (ADS)

    Liu, Jiazhen; Jiang, Bo

    2015-11-01

    As key components of the optical system of a space optical remote sensor, space mirrors have a surface accuracy that directly affects the imaging quality of the remote sensor. Large-diameter mirrors will be an important trend in the future development of space optical technology. However, a sharp increase in mirror diameter increases both the deformation of the mirror and the thermal deformation caused by temperature variations. A reasonable lightweight structure is therefore required to ensure that the optical performance of the system meets the requirements. As a new lightweighting approach, topology optimization is an important direction of current space optical remote sensing research. The lightweight design of a rectangular mirror was studied using the variable density method of topology optimization. The surface figure accuracy of the mirror assembly was obtained under different conditions: the PV value was less than λ/10 and the RMS value less than λ/50 (λ = 632.8 nm). The results show that the mirror assembly achieves sufficiently high static rigidity, dynamic stiffness, and thermal stability, with sufficient resistance to external environmental interference. Key words: topology optimization, space mirror, lightweight, space optical remote sensor

  2. Design search and optimization in aerospace engineering.

    PubMed

    Keane, A J; Scanlan, J P

    2007-10-15

    In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams.
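The "stochastic process response surface" plus "probability of improvement" machinery referred to above can be sketched in a few lines. The code below is a toy stand-in, not the authors' implementation: an ordinary-kriging/Gaussian-process surrogate with a fixed RBF kernel over one design variable, with a cheap analytic function playing the role of the expensive CFD objective.

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, length=0.3):
    # squared-exponential (RBF) covariance between two 1-D point sets
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

# toy "expensive" objective standing in for a CFD drag evaluation
f = lambda x: np.sin(6 * x) + x

X = np.array([0.05, 0.35, 0.65, 0.95])   # design points already evaluated
y = f(X)

def predict(xs, noise=1e-8):
    # kriging / GP posterior mean and standard deviation at xs
    K = rbf(X, X) + noise * np.eye(len(X))
    k = rbf(xs, X)
    Kinv = np.linalg.inv(K)
    mu = k @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', k, Kinv, k)
    return mu, np.sqrt(np.clip(var, 0, None))

xs = np.linspace(0, 1, 201)
mu, sd = predict(xs)
best = y.min()                              # current best (minimizing)
pi = norm.cdf((best - mu) / (sd + 1e-12))   # probability of improvement
x_next = xs[np.argmax(pi)]
print(x_next)
```

The next expensive evaluation goes where the surrogate is most likely to beat the current best, which is how the surrogate mitigates CFD run times; the multi-objective probability-of-improvement formulation in the paper generalizes this acquisition to competing goals.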

  3. Optimally designing games for behavioural research.

    PubMed

    Rafferty, Anna N; Zaharia, Matei; Griffiths, Thomas L

    2014-07-08

    Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision.

  4. Optimal interferometer designs for optical coherence tomography.

    PubMed

    Rollins, A M; Izatt, J A

    1999-11-01

    We introduce a family of power-conserving fiber-optic interferometer designs for low-coherence reflectometry that use optical circulators, unbalanced couplers, and (or) balanced heterodyne detection. Simple design equations for optimization of the signal-to-noise ratio of the interferometers are expressed in terms of relevant signal and noise sources and measurable system parameters. We use the equations to evaluate the expected performance of the new configurations compared with that of the standard Michelson interferometer that is commonly used in optical coherence tomography (OCT) systems. The analysis indicates that improved sensitivity is expected for all the new interferometer designs, compared with the sensitivity of the standard OCT interferometer, under high-speed imaging conditions.

  6. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface it with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  8. A Framework to Design and Optimize Chemical Flooding Processes

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface it with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  9. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2004-11-01

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface it with UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  10. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  11. Multidisciplinary design optimization for sonic boom mitigation

    NASA Astrophysics Data System (ADS)

    Ozcer, Isik A.

    product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal, while keeping the drag penalty under feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and the half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, and non-linear span-wise twist and camber distribution. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and the wings are optimized to demonstrate sonic boom mitigation for a real aircraft. The final results predict 42% reduction in bow shock strength, 17% reduction in peak Δp, 22% reduction in pressure impulse, 10% reduction in footprint size, 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.
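Response surface methodology of the kind mentioned above fits a quadratic model to responses measured at DoE points and then optimizes the cheap fitted surface instead of the expensive simulation. The sketch below is illustrative only: the design is a 3×3 face-centered layout in two coded factors, and the response coefficients are invented stand-ins for a quantity such as pressure impulse.

```python
import numpy as np
from itertools import product

# face-centered design: all combinations of coded levels -1, 0, +1
pts = np.array(sorted(set(product([-1, 0, 1], repeat=2))), float)
x1, x2 = pts[:, 0], pts[:, 1]

# hypothetical noiseless response generated by a known quadratic
y = 3.0 + 0.5 * x1 - 0.8 * x2 + 0.1 * x1 * x2 + 0.6 * x1 ** 2 + 0.4 * x2 ** 2

# quadratic model matrix: 1, x1, x2, x1*x2, x1^2, x2^2
Xm = np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)

# stationary point of the fitted surface: solve grad = 0
A = np.array([[2 * beta[4], beta[3]],
              [beta[3], 2 * beta[5]]])
x_star = np.linalg.solve(A, -beta[1:3])
print(beta, x_star)
```

Because the data here are generated by the same quadratic form, the fit recovers the coefficients exactly; with real simulation data the residuals would indicate whether a quadratic surface is adequate.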

  12. Design Optimization of Irregular Cellular Structure for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Song, Guo-Hua; Jing, Shi-Kai; Zhao, Fang-Lei; Wang, Ye-Dong; Xing, Hao; Zhou, Jing-Tao

    2017-09-01

    Irregular cellular structures have great potential in the light-weight design field. However, research on optimizing irregular cellular structures has not yet been reported due to the difficulties in their modeling technology. Based on variable density topology optimization theory, an efficient method for optimizing the topology of irregular cellular structures fabricated through additive manufacturing processes is proposed. The proposed method utilizes tangent circles to automatically generate the main outline of the irregular cellular structure. The topological layout of each cell structure is optimized using the relative density information obtained from the proposed modified SIMP method. A mapping relationship between cell structure and relative density element is built to determine the diameter of each cell structure. The results show that the irregular cellular structure can be optimized with the proposed method. Simulation and experimental results are similar for the irregular cellular structure, indicating that the maximum deformation value obtained using the modified Solid Isotropic Microstructures with Penalization (SIMP) approach is 5.4×10-5 mm lower than that of the standard SIMP approach under the same external load. The proposed research provides guidance for the design of other irregular cellular structures.
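The SIMP interpolation at the heart of the variable density method above can be stated in one line: element stiffness is E(ρ) = E_min + ρ^p (E_0 − E_min), where the penalization exponent p (conventionally 3) makes intermediate densities structurally inefficient, so the optimizer is driven toward clear void/solid layouts. A minimal sketch, with conventional default values rather than the paper's settings:

```python
import numpy as np

def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    # SIMP interpolation: a density of 0.5 yields only 0.5**3 = 12.5% of
    # full stiffness, penalizing intermediate (gray) densities
    return Emin + rho ** p * (E0 - Emin)

rho = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(simp_modulus(rho))
```

The small floor E_min keeps the global stiffness matrix nonsingular where elements become void; the modified SIMP variant in the paper maps these relative densities to cell diameters instead of element stiffnesses.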

  13. PLS-optimal: a stepwise D-optimal design based on latent variables.

    PubMed

    Brandmaier, Stefan; Sahlin, Ullrika; Tetko, Igor V; Öberg, Tomas

    2012-04-23

    Several applications, such as risk assessment within REACH or drug discovery, require reliable methods for the design of experiments and efficient testing strategies. Keeping the number of experiments as low as possible is important from both a financial and an ethical point of view, as exhaustive testing of compounds requires significant financial resources and animal lives. With a large initial set of compounds, experimental design techniques can be used to select a representative subset for testing. Once measured, these compounds can be used to develop quantitative structure-activity relationship models to predict properties of the remaining compounds. This reduces the required resources and time. D-Optimal design is frequently used to select an optimal set of compounds by analyzing data variance. We developed a new sequential approach to apply a D-Optimal design to latent variables derived from a partial least squares (PLS) model instead of principal components. The stepwise procedure selects a new set of molecules to be measured after each previous measurement cycle. We show that application of the D-Optimal selection generates models with a significantly improved performance on four different data sets with end points relevant for REACH. Compared to those derived from principal components, PLS models derived from the selection on latent variables had a lower root-mean-square error and a higher Q2 and R2. This improvement is statistically significant, especially for the small number of compounds selected.
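The selection step described above can be approximated with a simple greedy algorithm: repeatedly add the candidate row whose inclusion maximizes det(XᵀX) over the chosen set. The sketch below is a hedged illustration on random stand-in "latent-variable scores"; the paper uses actual PLS scores and a proper D-optimal procedure, so `greedy_d_optimal` and its ridge term are this example's own simplifications.

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-8):
    # sequentially pick the candidate row that maximizes det(X_s' X_s);
    # the ridge keeps the determinant well-defined for small subsets
    n, p = X.shape
    chosen = []
    for _ in range(k):
        best_det, best_i = -np.inf, None
        for i in range(n):
            if i in chosen:
                continue
            S = X[chosen + [i]]
            d = np.linalg.det(S.T @ S + ridge * np.eye(p))
            if d > best_det:
                best_det, best_i = d, i
        chosen.append(best_i)
    return chosen

rng = np.random.default_rng(0)
scores = rng.normal(size=(50, 3))   # stand-in for PLS latent-variable scores
idx = greedy_d_optimal(scores, 6)   # 6 compounds to measure next
print(idx)
```

In the stepwise scheme of the paper, the PLS model (and hence the latent-variable scores) would be refit after each measurement cycle before the next selection round.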

  14. Theoretical design for the optimization of a material's geometry in diode-pumped high-energy Yb3+:YAG lasers and its experimental validation at 0.5-1 J.

    PubMed

    Jolly, Alain; Artigaut, Eric

    2004-11-10

    The geometry of ytterbium-doped active media in diode-pumped lasers can be calculated with the help of a few analytic expressions for the optimization of high-energy and high-efficiency Q-switched lasers. The first step in the optimization consists in the definition of a basic three-level model with which to estimate the energy to be extracted. In the second step, for validation purposes we use a side-pumped Yb3+:YAG slab at 2-kW peak pump power in the long-pulse mode of operation up to 1 J, and we Q switch it at reduced energies up to 100 mJ. The final step of this study provides fairly general relationships devoted to the geometric sizing of optimized slabs that will be of interest for the design of higher-energy ytterbium-doped Q-switched lasers.

  15. ODIN: Optimal design integration system. [reusable launch vehicle design

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.

    1975-01-01

    The report provides a summary of the Optimal Design Integration (ODIN) System as it exists at Langley Research Center. A discussion of the ODIN System, the executive program and the data base concepts are presented. Two examples illustrate the capabilities of the system which have been exploited. Appended to the report are a summary of abstracts for the ODIN library programs and a description of the use of the executive program in linking the library programs.

  16. Database Design for Structural Analysis and Design Optimization.

    DTIC Science & Technology

    1984-10-01

    C.C. Wu, Applied-Optimal Design Laboratory. Relational algebraic operations such as PROJECT, JOIN, and SELECT can be used to form new relations. Figure 2.1 shows a typical relational model of data. The data set contains the definition of mathematical 3-D surfaces of up to third order to which lines and grids may be projected. The surfaces are defined in

  17. Design optimization of an airbreathing aerospaceplane

    NASA Astrophysics Data System (ADS)

    Janovsky, R.; Staufenbiel, R.

    A new species of advanced space transportation vehicles, called aerospaceplanes, is under discussion, and challenging technology programs are on the way. These vehicles rely on airbreathing propulsion, which, compared to rocket propulsion, provides much higher overall efficiency. Aerospaceplanes will be able to take off and land horizontally and are partly or fully reusable. For this new kind of transportation system, the application of new design principles is necessary. Complex airbreathing propulsion systems have to be integrated into the airframe. Compared to conventional rocket systems, a spaceplane has to withstand higher aerodynamic and thermodynamic loads during the longer flight in a denser atmosphere. Therefore, the empty mass fraction of an aerospaceplane will be higher than that of a rocket system and will partly offset the higher propulsion efficiency. In this paper, the influence of the key design parameters on transport efficiency is determined for the first stage of a two-stage-to-orbit spaceplane with an airbreathing first stage. The configuration of the first stage is a special lifting body, called 'ELAC I'. As a measure of transport efficiency, the ratio of vehicle take-off mass to payload mass, the so-called Growth Factor, is chosen. Furthermore, the design of the first stage is optimized using a multidimensional optimization procedure to obtain a minimum Growth Factor for various size ratios between the second and first stages.

  18. An Optimal Pulse System Design by Multichannel Sensors Fusion.

    PubMed

    Wang, Dimin; Zhang, David; Lu, Guangming

    2016-03-01

    Pulse diagnosis, recognized as an important branch of traditional Chinese medicine (TCM), has a long history in health diagnosis. Certain features in the pulse are known to be related to physiological status and have been identified as biomarkers. In recent years, electronic equipment has been designed to obtain the valuable information contained in the pulse. A single-point pulse acquisition platform has the benefit of low cost and flexibility, but is time consuming in operation and not standardized in pulse location. A pulse system with a single-type sensor is easy to implement, but is limited in extracting sufficient pulse information. This paper proposes a novel system with an optimal design specialized for pulse diagnosis. We combine a pressure sensor with a photoelectric sensor array to form a multichannel sensor fusion structure. Then, the optimal pulse signal processing methods and sensor fusion strategy are introduced for feature extraction. Finally, the developed optimal pulse system and methods are tested on a pulse database acquired from healthy subjects and patients known to be afflicted with diabetes. The experimental results indicate that classification accuracy increases significantly under the optimal design, and demonstrate that the developed pulse system with multichannel sensor fusion is more effective than previous pulse acquisition platforms.

  19. A factorial design for optimizing a flow injection analysis system.

    PubMed

    Luna, J R; Ovalles, J F; León, A; Buchheister, M

    2000-05-01

    The use of a factorial design to explore the response of a flow injection (FI) system is described and illustrated by the FI spectrophotometric determination of paraquat. The response variable (absorbance) is explored as a function of two factors: flow rate and reaction-coil length. The study was found useful for detecting and estimating interactions among the factors that may affect the optimal conditions for maximal response in the optimization of the FI system, which is not possible with a univariate design. In addition, the study showed that factorial experiments enable economy of experimentation and yield results of high precision, because all of the data are used to calculate the effects.
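
    The effect and interaction estimates the abstract refers to can be illustrated with a toy two-level, two-factor (2^2) design. The absorbance values below are invented for illustration; only the arithmetic of effect estimation is the point.

```python
# Hypothetical 2^2 factorial design for an FI system: factors are flow rate
# and reaction-coil length, each coded -1 (low) / +1 (high); the response is
# absorbance. All response values are invented for illustration.
runs = [(-1, -1, 0.210),
        (+1, -1, 0.305),
        (-1, +1, 0.340),
        (+1, +1, 0.520)]

def effect(contrast):
    """Contrast sum divided by half the number of runs: the average change
    in response when the contrast goes from its low to its high level."""
    half = len(runs) // 2
    return sum(c * y for c, y in contrast) / half

flow_effect = effect([(f, y) for f, c, y in runs])      # main effect: flow rate
coil_effect = effect([(c, y) for f, c, y in runs])      # main effect: coil length
interaction = effect([(f * c, y) for f, c, y in runs])  # flow x coil interaction
```

    Note that the interaction contrast uses the product of the coded factor levels; a one-factor-at-a-time (univariate) search cannot estimate this term, which is the point the abstract makes.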

  20. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework that uses high performance computing to determine accurate solutions to the inverse optimization problem of fitting big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of the optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first-principles calculations to better describe the experimental data.
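
    The "mathematical regularization" step in the abstract can be sketched, in miniature, as Tikhonov-regularized least squares: minimize ||Ax - b||^2 + lam*||x||^2. The two-parameter solver below is a generic illustration with invented numbers, not the SIMPHONIES implementation.

```python
# Minimal sketch of Tikhonov regularization: fit parameters x to data b from a
# linear model A x by minimizing ||A x - b||^2 + lam * ||x||^2.
# Generic illustration with invented numbers; not the SIMPHONIES implementation.

def solve_2x2(m, v):
    """Solve the 2x2 linear system m x = v by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def tikhonov(A, b, lam):
    """Regularized normal equations (A^T A + lam I) x = A^T b, two parameters."""
    AtA = [[sum(row[i] * row[j] for row in A) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(row[i] * bk for row, bk in zip(A, b)) for i in range(2)]
    return solve_2x2(AtA, Atb)

# An ill-conditioned toy problem: the second parameter is barely observed.
A = [[1.0, 0.0], [0.0, 1e-3]]
b = [1.0, 1e-3]
x0 = tikhonov(A, b, 0.0)  # unregularized: fits both components exactly
x1 = tikhonov(A, b, 0.1)  # regularized: the poorly observed component is damped
```

    With lam = 0 both components are recovered exactly; with lam = 0.1 the poorly observed second component is damped toward zero rather than fit to noise, which is the trade the abstract's confidence regions quantify.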

  1. Design, optimization, and control of tensegrity structures

    NASA Astrophysics Data System (ADS)

    Masic, Milenko

    The contributions of this dissertation may be divided into four categories. The first category involves developing a systematic form-finding method for general and symmetric tensegrity structures. As an extension of the available results, different shape constraints are incorporated in the problem. Methods for treatment of these constraints are considered and proposed. A systematic formulation of the form-finding problem for symmetric tensegrity structures is introduced, and it uses the symmetry to reduce both the number of equations and the number of variables in the problem. The equilibrium analysis of modular tensegrities exploits their peculiar symmetry. The tensegrity similarity transformation completes the contributions in the area of enabling tools for tensegrity form-finding. The second group of contributions develops the methods for optimal mass-to-stiffness-ratio design of tensegrity structures. This technique represents the state-of-the-art for the static design of tensegrity structures. It is an extension of the results available for the topology optimization of truss structures. Besides guaranteeing that the final design satisfies the tensegrity paradigm, the problem constrains the structure from different modes of failure, which makes it very general. The open-loop control of the shape of modular tensegrities is the third contribution of the dissertation. This analytical result offers a closed form solution for the control of the reconfiguration of modular structures. Applications range from the deployment and stowing of large-scale space structures to the locomotion-inducing control for biologically inspired structures. The control algorithm is applicable regardless of the size of the structures, and it represents a very general result for a large class of tensegrities. Controlled deployments of large-scale tensegrity plates and tensegrity towers are shown as examples that demonstrate the full potential of this reconfiguration strategy. The last

  2. Computer optimization of landfill-cover design

    SciTech Connect

    Massmann, J.W.; Moore, C.A.

    1982-12-01

    A finite-difference computer program was developed to aid in optimizing landfill-cover design. The program was used to compare the methane yield from sand-covered and clay-covered landfills equipped with methane-recovery systems. The results of this comparison indicate that a clay cover can restrict air inflow into the landfill system, thus preventing oxygen poisoning of the methane-producing organisms. The practice of monitoring methane-to-air ratios in the pipelines of the recovery system to warn of oxygen infiltration into the fill material was shown to be ineffective in some situations. More reliable methods of forewarning of oxygen poisoning are suggested.

  3. Model Selection in Systems Biology Depends on Experimental Design

    PubMed Central

    Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.

    2014-01-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483

  4. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.
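
    The abstract's central claim, that the selected model can depend on the experiment performed, can be reproduced with a deliberately simple sketch. The "true" process, both candidate models, and both designs below are invented; model selection is by residual sum of squares, with both candidates wrong by construction.

```python
import math

def sse_linear(xs, ys):
    # Best fit y = a*x through the origin (closed-form least squares).
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return sum((y - a * x) ** 2 for x, y in zip(xs, ys))

def sse_constant(xs, ys):
    # Best fit y = c, i.e. the mean response.
    c = sum(ys) / len(ys)
    return sum((y - c) ** 2 for y in ys)

def select(xs):
    # The "true" process (unknown to the modeller): y = 5 + sqrt(x).
    ys = [5.0 + math.sqrt(x) for x in xs]
    return "linear" if sse_linear(xs, ys) < sse_constant(xs, ys) else "constant"

design_a = [0.01, 0.04, 0.09]        # samples near the origin
design_b = [400.0, 2500.0, 10000.0]  # samples far out
```

    Sampling near the origin selects the constant model, while sampling far out selects the proportional model, even though neither is correct: the design, not the models, decides the outcome.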

  5. Handling Qualities Optimization for Rotorcraft Conceptual Design

    NASA Technical Reports Server (NTRS)

    Lawrence, Ben; Theodore, Colin R.; Berger, Tom

    2016-01-01

    Over the past decade, NASA, under a succession of rotary-wing programs, has been moving towards coupling multiple discipline analyses in a rigorous, consistent manner to evaluate rotorcraft conceptual designs. Handling qualities is one of the component analyses to be included in a future NASA Multidisciplinary Analysis and Optimization framework for the conceptual design of VTOL aircraft. Similarly, the future vision for the capability of the Concept Design and Assessment Technology Area (CD&A-TA) of the U.S. Army Aviation Development Directorate also includes a handling qualities component. SIMPLI-FLYD is a tool jointly developed by NASA and the U.S. Army to perform modeling and analysis for the assessment of the flight dynamics and control aspects of the handling qualities of rotorcraft conceptual designs. An exploration of handling qualities analysis has been carried out using SIMPLI-FLYD in illustrative scenarios of a tiltrotor in forward flight and a single-main-rotor helicopter in hover. Using SIMPLI-FLYD and the conceptual design tool NDARC integrated into a single process, the effects of variations in design parameters such as tail or rotor size were evaluated in the form of margins to fixed- and rotary-wing handling qualities metrics, as well as the vehicle empty weight. The handling qualities design margins are shown to vary across the flight envelope due to both changing flight-dynamic and control characteristics and changing handling qualities specification requirements. The current SIMPLI-FLYD capability and future developments are discussed in the context of an overall rotorcraft conceptual design process.

  6. Inter-occasion variability in individual optimal design.

    PubMed

    Kristoffersson, Anders N; Friberg, Lena E; Nyberg, Joakim

    2015-12-01

    Inter-occasion variability (IOV) is important to consider in the development of a design where individual pharmacokinetic or pharmacodynamic parameters are of interest. IOV may adversely affect the precision of maximum a posteriori (MAP) estimated individual parameters, yet the influence of including IOV in optimal design for the estimation of individual parameters has not been investigated. In this work two methods of including IOV in the MAP Fisher information matrix (FIMMAP) are evaluated: (i) MAPocc, where the IOV is included as a fixed-effect deviation per occasion and individual, and (ii) POPocc, where the IOV is included as an occasion random effect. Sparse sampling schedules were designed for two test models and compared to scenarios where IOV is ignored, either by omitting known IOV (Omit) or by mimicking a situation where unknown IOV has inflated the IIV (Inflate). Accounting for IOV in the FIMMAP markedly affected the designs compared to ignoring IOV and, as evaluated by stochastic simulation and estimation, resulted in superior precision of the individual parameters. In addition, MAPocc and POPocc accurately predicted precision and shrinkage. For the investigated designs, the MAPocc method was on average slightly superior to POPocc and was less computationally intensive.

  7. A method for nonlinear optimization with discrete design variables

    NASA Technical Reports Server (NTRS)

    Olsen, Gregory R.; Vanderplaats, Garret N.

    1987-01-01

    A numerical method is presented for the solution of nonlinear discrete optimization problems. The applicability of discrete optimization to engineering design is discussed, and several standard structural optimization problems are solved using discrete design variables. The method uses approximation techniques to create subproblems suitable for linear mixed-integer programming methods. The method employs existing software for continuous optimization and integer programming.
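
    The abstract pairs continuous approximations with mixed-integer programming; as a minimal stand-in for the discrete part, the sketch below simply enumerates a small catalogue of candidate sizes. The two-bar sizing problem and all numbers are invented for illustration.

```python
# Toy discrete structural sizing (invented example): pick cross-sectional areas
# for two bars from a discrete stock catalogue, minimizing weight subject to a
# member-stress constraint. Exhaustive enumeration stands in for the paper's
# linearized mixed-integer subproblems.
from itertools import product

stock_areas = [1.0, 2.0, 4.0]  # available areas, cm^2
loads = [10.0, 30.0]           # axial force in each bar, kN
allow_stress = 10.0            # allowable stress, kN/cm^2
lengths = [100.0, 100.0]       # bar lengths, cm
density = 0.00785              # kg/cm^3 (steel)

best = None                    # (weight, areas) of the best feasible design
for areas in product(stock_areas, repeat=2):
    if any(p / a > allow_stress for p, a in zip(loads, areas)):
        continue               # infeasible: stress limit exceeded in some bar
    weight = sum(density * a * l for a, l in zip(areas, lengths))
    if best is None or weight < best[0]:
        best = (weight, areas)
```

    Enumeration is exact here but explodes combinatorially with the number of design variables, which is why the method in the paper resorts to approximate subproblems solved by mixed-integer programming instead.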
