Science.gov

Sample records for experimental design optimization

  1. Optimal experimental design strategies for detecting hormesis.

    PubMed

    Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee

    2011-12-01

Hormesis is a widely observed phenomenon in many branches of the life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when there are multiple design criteria or when there is model uncertainty with a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin in many situations.
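The maximin idea in this abstract can be illustrated with a toy calculation: among candidate designs scored against several criteria, the maximin-optimal design is the one whose worst-case efficiency is largest. The design names and efficiency values below are made up for illustration, not taken from the teratology study.

```python
# Per-criterion design efficiencies (in [0, 1]) for three hypothetical
# candidate designs; columns correspond to criteria (i), (ii), (iii).
candidate_effs = {
    "design_A": [0.95, 0.40, 0.80],  # excellent for (i), poor for (ii)
    "design_B": [0.75, 0.70, 0.72],  # balanced across all three criteria
    "design_C": [0.60, 0.85, 0.55],
}

# The maximin-optimal design maximizes the minimum efficiency.
maximin = max(candidate_effs, key=lambda d: min(candidate_effs[d]))
print(maximin)  # design_B: its worst-case efficiency (0.70) is the largest
```

A design that is best under a single criterion (here design_A under criterion (i)) can still lose under maximin because of its poor worst case.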

  2. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  3. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs to control spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  4. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  5. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density and strength, and how under- and over-firing affect ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay-based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
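Objective (5) can be made concrete with the smallest two-level orthogonal array, the Taguchi L4, which screens three factors in four runs instead of the full factorial's eight. This is a generic illustration, not an array taken from the paper.

```python
# Taguchi L4 orthogonal array: three two-level factors (coded -1/+1)
# screened in 4 runs instead of the full factorial's 2**3 = 8 runs.
L4 = [
    (-1, -1, -1),
    (-1, +1, +1),
    (+1, -1, +1),
    (+1, +1, -1),
]

# Each column is balanced (both levels appear equally often) and every
# pair of columns is orthogonal (their elementwise products sum to zero).
balanced = all(sum(row[c] for row in L4) == 0 for c in range(3))
orthogonal = all(
    sum(row[i] * row[j] for row in L4) == 0
    for i in range(3) for j in range(i + 1, 3)
)
print(balanced, orthogonal)  # True True
```

Orthogonality is what lets main effects be estimated independently from so few runs.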

  6. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

Memo No. 88-442, March 1988. "Process Model Construction and Optimization Using Statistical Experimental Design," by Emanuel Sachs and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic...

  7. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

Sponsoring/monitoring agency: Jean-Luc Cambier, Program Officer, Computational Mathematics, AFOSR/RTA... computational tools have been inadequate. Our goal has been to develop new mathematical formulations, estimation approaches, and approximation strategies... previous suboptimal approaches. Subject terms: computational mathematics; optimal experimental design; uncertainty quantification; Bayesian inference.

  8. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  9. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

This paper unifies some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response. The techniques have the potential for reducing the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.
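Finding a maximum or minimum of a second-order prediction equation, as in the abstract's final point, reduces to locating its stationary point. In one factor the algebra is a single line; the coefficients below are illustrative, not values from the report.

```python
# One-factor second-order prediction equation:
#   y = b0 + b1*x + b11*x**2
# Its stationary point is x* = -b1 / (2*b11); it is a maximum when b11 < 0.
b0, b1, b11 = 10.0, 4.0, -2.0   # illustrative fitted coefficients

x_star = -b1 / (2.0 * b11)
y_star = b0 + b1 * x_star + b11 * x_star**2
print(x_star, y_star)  # 1.0 12.0
```

With more factors the same idea applies with a gradient and Hessian, and one checks whether the stationary point falls inside the experimentation region.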

  10. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental ...). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.

  11. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

This study aimed to optimize, by means of an experimental design multivariate strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximization of the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data for the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug's anaesthetic effect with respect to the starting reference liposomal formulation, demonstrating its better therapeutic effectiveness.

  12. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2012-04-01

Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on acquisition of suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for source and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has proved to be an efficient technique to examine a wide range of possible surveys and select the one that gives superior resolution. Each configuration is associated with one value of the objective function that characterizes the quality of that particular design. Here, we describe the method used to optimize an experimental design. Then, we validate this new technique and explore the different issues of experimental design by simulating a CSEM survey with a realistic 1D layered model.
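The genetic-algorithm search described in this record can be caricatured in a few lines. Here the objective function is only a stand-in that rewards well-spread receiver offsets, whereas the real method scores each layout via linearized inverse theory; the grid, population size, and mutation scheme are all illustrative assumptions.

```python
import random

# Toy genetic-algorithm survey design: pick 3 receiver offsets (km) from a
# candidate grid to maximize a stand-in objective (smallest receiver spacing,
# so well-spread layouts score higher). Real CSEM design would instead score
# each layout through a linearized inversion of the subsurface model.
random.seed(0)
GRID = [0.5 * k for k in range(1, 21)]           # candidate offsets 0.5..10 km

def objective(layout):
    s = sorted(layout)
    return min(b - a for a, b in zip(s, s[1:]))  # smallest receiver spacing

def evolve(pop_size=30, gens=40):
    pop = [random.sample(GRID, 3) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(3)] = random.choice(GRID)  # mutate one
            children.append(child if len(set(child)) == 3 else p)
        pop = parents + children
    return max(pop, key=objective)

best = evolve()
print(sorted(best))
```

Because the fitter half is carried over each generation, the best objective value never decreases; the multi-objective variant in the abstract would instead maintain a Pareto front over several objective functions.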

  13. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
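The core idea of prediction deviation, models that all fit the observed data acceptably yet disagree about an unobserved condition, can be sketched by brute force for a one-parameter decay model. The model, synthetic data, tolerance, and grid below are illustrative assumptions, not the paper's formulation (which solves an optimization problem over model pairs).

```python
import math

# Grid-search sketch of "prediction deviation": among parameter values that
# fit the observed data acceptably, how far apart can predictions at an
# unobserved time be?
t_obs, y_obs = [1.0, 2.0], [0.60, 0.37]   # synthetic decay observations
t_new = 5.0                               # prediction time of interest
tol = 0.02                                # acceptable mean squared fit error

def model(b, t):
    return math.exp(-b * t)

def fit_error(b):
    return sum((model(b, t) - y) ** 2 for t, y in zip(t_obs, y_obs)) / len(t_obs)

grid = [0.01 * k for k in range(1, 201)]                 # b in (0, 2]
feasible = [b for b in grid if fit_error(b) <= tol]      # all acceptable fits
preds = [model(b, t_new) for b in feasible]
deviation = max(preds) - min(preds)                      # prediction deviation
print(round(deviation, 3))
```

A large deviation signals that the data have not yet constrained the prediction at t_new, which is exactly the situation a well-chosen additional experiment should reduce.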

  14. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.

  15. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, not least because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called locally optimal experimental design, is described together with a lesser-known approach which takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models for sediment concentration in seawater and sediment accretion on salt marshes, of different complexity, served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.
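The weighted least squares estimator at the heart of the toolbox has a closed form for a straight-line model; the Python sketch below is a generic illustration of that estimator, not the toolbox's MATLAB code, and the data and weights are made up.

```python
# Weighted least squares for a straight-line model y = p0 + p1*x,
# with per-observation weights w_i (conventionally 1 / sigma_i^2).
def wls_line(x, y, w):
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * sxx - sx * sx          # normal-equations determinant
    p1 = (sw * sxy - sx * sy) / det
    p0 = (sy - p1 * sx) / sw
    return p0, p1

# Exact data on y = 1 + 2x is recovered regardless of the weights.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
w = [4.0, 1.0, 1.0, 0.25]
print(wls_line(x, y, w))  # (1.0, 2.0)
```

In design optimization the same determinant (the information content of the normal equations) is what the choice of measurement locations seeks to maximize.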

  16. Optimal experimental design with the sigma point method.

    PubMed

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by minimising the parameter variances iteratively. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound on the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit for OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model.
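The traditional Fisher-information route that the sigma point method improves upon can be sketched for a hypothetical two-parameter decay model: build the FIM from parameter sensitivities and rank candidate sampling schedules by the D-criterion. The model, parameter values, noise assumption, and schedules below are all illustrative.

```python
import math

# D-optimality sketch for a hypothetical model y(t) = a * exp(-b*t).
# Each Jacobian row holds the sensitivities (dy/da, dy/db) at one sample
# time; under i.i.d. Gaussian noise the Fisher information matrix (FIM)
# is J^T J / sigma^2.
def fim(times, a, b, sigma=1.0):
    J = [[math.exp(-b * t), -a * t * math.exp(-b * t)] for t in times]
    m = [[0.0, 0.0], [0.0, 0.0]]
    for row in J:
        for i in range(2):
            for j in range(2):
                m[i][j] += row[i] * row[j] / sigma**2
    return m

def d_criterion(m):
    # det of the 2x2 FIM: larger det means a smaller confidence ellipsoid
    # (the Cramer-Rao lower bound on the parameter covariance).
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Compare two candidate 4-point sampling schedules for the same budget.
design_1 = [0.1, 0.2, 0.3, 0.4]   # samples clustered early
design_2 = [0.1, 0.8, 1.5, 3.0]   # samples spread across the decay
a, b = 2.0, 1.0
better = max([design_1, design_2], key=lambda d: d_criterion(fim(d, a, b)))
print(better)  # the spread-out schedule wins
```

The abstract's point is that for nonlinear models this FIM-based variance estimate is only a lower bound; the sigma point method propagates a set of deterministically chosen samples through the model to approximate the parameter statistics more faithfully.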

  17. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  18. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X. A.

    2011-12-01

Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on the acquisition of suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for source and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has proved to be an efficient technique to examine a wide range of possible surveys and select the one that gives superior resolution. Each particular design needs to be quantified. Different quantities have been used to estimate the "goodness" of a design, most of them being sensitive to the eigenvalues of the corresponding inversion problem. Here we show a comparison of results obtained using different objective functions. Then, we simulate a CSEM survey with a realistic 1D structure and discuss the optimum recording parameters determined by our method.

  19. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τpulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using Design of Experiments (DOE) methods: Taguchi methodology and response surface methodology (RSM). A design is first created using the MINITAB program, and the laser marking process is then performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  20. Optimization of the intravenous glucose tolerance test in T2DM patients using optimal experimental design.

    PubMed

    Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O

    2009-06-01

Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities for a more informative and/or less laborious study design of the insulin-modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin-modified IVGTT were evaluated: (1) glucose dose, (2) insulin infusion, (3) the combination of (1) and (2), (4) sampling times, and (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia, and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement with an optimal design compared to the basic design. The results showed that the design of the insulin-modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design, and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement, followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples per subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvements can be made to the design of the insulin-modified IVGTT and that the most important design factor was the placement of sample times, followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.

  1. Design and Experimental Implementation of Optimal Spacecraft Antenna Slews

    DTIC Science & Technology

    2013-12-01

...any spacecraft antenna configuration. Various software suites were used to perform thorough validation and verification of the Newton-Euler formulation developed herein. The antenna model was then utilized to solve an optimal control problem for a geostationary...

  2. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm-thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDSs. However, DDSs cannot stop large, fast-moving fragments. We have experimentally demonstrated one shrapnel mitigation technique, showing that it is possible to direct fast-moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDSs. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  3. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.

  4. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

In most areas of clinical and preclinical research, the required sample size determines the costs and effort of any project, and thus optimizing sample size is of primary importance. The experimental design of a dose-response study is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of measurements and subjects required to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem has so far prevented widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete, usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation.
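One of the dose-response functions mentioned, the four-parameter log-logistic curve, can be written down in a common parameterization. The parameter names (b slope, c lower limit, d upper limit, e ED50) and the values used below are illustrative conventions, not taken from the paper.

```python
import math

# Four-parameter log-logistic dose-response curve in a common toxicology
# parameterization: b = slope, c = lower limit, d = upper limit, e = ED50.
def log_logistic(dose, b, c, d, e):
    if dose == 0.0:
        return d  # control response tends to the upper asymptote for b > 0
    return c + (d - c) / (1.0 + math.exp(b * (math.log(dose) - math.log(e))))

# At dose == e (the ED50) the response is exactly halfway between c and d.
mid = log_logistic(1.0, b=2.0, c=0.0, d=1.0, e=1.0)
print(mid)  # 0.5
```

Because a D-optimal design maximizes the information about these four parameters, its support points cluster where the curve is most sensitive to them, which is why control plus about three well-placed dose levels suffice.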

  5. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    PubMed

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-10-16

The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components, pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w), was investigated with respect to the melting point properties of the lipstick formulation. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factors, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point), owing to its role in heat endurance. The quadratic polynomial model fit the experimental data sufficiently well.

  6. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
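
    The design logic above (maximize temperature sensitivities to the unknown properties, which shrinks the confidence intervals) can be sketched numerically. As a stand-in for the study's model, the block below uses the classical semi-infinite-solid solution for a constant surface heat flux, with assumed values for the flux and the two properties, and grid-searches the sensor depth maximizing a D-criterion built from the scaled sensitivities.

```python
import math

q = 5000.0  # assumed applied heat flux, W/m^2

def temp_rise(x, t, k, C):
    # semi-infinite solid under constant surface heat flux (classical solution)
    a = k / C                                     # thermal diffusivity
    return (2 * q / k) * math.sqrt(a * t / math.pi) * math.exp(-x * x / (4 * a * t)) \
           - (q * x / k) * math.erfc(x / (2.0 * math.sqrt(a * t)))

def sens(x, t, k, C, h=1e-4):
    # scaled sensitivity coefficients dT/d(ln k) and dT/d(ln C)
    dk = (temp_rise(x, t, k * (1 + h), C) - temp_rise(x, t, k * (1 - h), C)) / (2 * h)
    dC = (temp_rise(x, t, k, C * (1 + h)) - temp_rise(x, t, k, C * (1 - h))) / (2 * h)
    return dk, dC

def d_criterion(x, times, k=0.5, C=1.5e6):
    # det of the 2x2 information matrix accumulated over the sampling times
    m11 = m12 = m22 = 0.0
    for t in times:
        gk, gc = sens(x, t, k, C)
        m11 += gk * gk; m12 += gk * gc; m22 += gc * gc
    return m11 * m22 - m12 * m12

times = [60.0 * i for i in range(1, 11)]   # sample once a minute for 10 min
locs = [0.001 * i for i in range(0, 21)]   # candidate sensor depths, 0-20 mm
best_x = max(locs, key=lambda x: d_criterion(x, times))
print("best sensor depth (m):", best_x)
```

    At the heated surface the two sensitivities are perfectly correlated (both properties enter only through k*C there), so the criterion forces the sensor to an interior depth; the same confounding argument explains why a heat-flux boundary condition is needed for simultaneous estimation.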

  7. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  8. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors x₁ and x₂: one was a formulation factor (the amount of magnesium stearate) and the other was a processing factor (mixing time), respectively. Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate an effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g, 2.76 min (mixing time) for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g, 7.99 min for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence to obtain optimum formulations allowing for a systematic and reliable experimental design method.

  9. KL-optimal experimental design for discriminating between two growth models applied to a beef farm.

    PubMed

    Campos-Barreiro, Santiago; López-Fidalgo, Jesús

    2016-02-01

    The body mass growth of organisms is usually represented in terms of what are known as ontogenetic growth models, which represent the dependence of body mass on time. The paper is concerned with the problem of finding an optimal experimental design for discriminating between two competing mass growth models applied to a beef farm. T-optimality was first introduced for discrimination between models, but in this paper KL-optimality, based on the Kullback-Leibler distance, is used to deal with correlated observations since, in this case, observations on a particular animal are not independent.
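
    For independent Gaussian observations with common variance, the Kullback-Leibler distance between two candidate models reduces to their squared mean separation over 2*sigma^2, so a discriminating design concentrates observations where the curves disagree most. The sketch below fixes both parameter sets (all values hypothetical) and exhaustively picks the most discriminating observation days; the full KL-optimal criterion additionally minimizes over the rival model's parameters and accounts for correlation.

```python
import itertools, math

def gompertz(t, A=500.0, b=3.0, k=0.05):
    # hypothetical Gompertz growth curve (mass vs. time)
    return A * math.exp(-b * math.exp(-k * t))

def logistic(t, A=500.0, b=20.0, k=0.08):
    # hypothetical logistic growth curve
    return A / (1.0 + b * math.exp(-k * t))

def discrimination(times, sigma2=25.0):
    # Gaussian case: KL distance = squared mean separation / (2 * sigma^2)
    return sum((gompertz(t) - logistic(t)) ** 2 for t in times) / (2 * sigma2)

candidates = range(0, 121, 10)            # candidate observation days 0..120
best = max(itertools.combinations(candidates, 4), key=discrimination)
print("most discriminating observation times:", best)
```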

  10. Chemometric experimental design based optimization techniques in capillary electrophoresis: a critical review of modern applications.

    PubMed

    Hanrahan, Grady; Montes, Ruthy; Gomez, Frank A

    2008-01-01

    A critical review of recent developments in the use of chemometric experimental design based optimization techniques in capillary electrophoresis applications is presented. Current advances have led to enhanced separation capabilities of a wide range of analytes in such areas as biological, environmental, food technology, pharmaceutical, and medical analysis. Significant developments in design, detection methodology and applications from the last 5 years (2002-2007) are reported. Furthermore, future perspectives in the use of chemometric methodology in capillary electrophoresis are considered.

  11. Experimental design for optimizing drug release from silicone elastomer matrix and investigation of transdermal drug delivery.

    PubMed

    Snorradóttir, Bergthóra S; Gudnason, Pálmar I; Thorsteinsson, Freygardur; Másson, Már

    2011-04-18

    Silicone elastomers are commonly used for medical devices and external prostheses. Recently, there has been growing interest in silicone-based medical devices with enhanced function that release drugs from the elastomer matrix. In the current study, an experimental design approach was used to optimize the release properties of the model drug diclofenac from a medical silicone elastomer matrix, including a combination of four permeation enhancers as additives and allowing for constraints on the properties of the material. The D-optimal design included six factors and five responses describing material properties and release of the drug. The first experimental objective was screening, to investigate the main and interaction effects, based on 29 experiments. All excipients had a significant effect and were therefore included in the optimization, which also allowed the possible contribution of quadratic terms to the model and was based on 38 experiments. Screening and optimization of release and material properties resulted in the production of two optimized silicone membranes, which were tested for transdermal delivery. The results confirmed the validity of the model for the optimized membranes, which were used for further testing of transdermal drug delivery through heat-separated human skin. The optimization yielded an excipient/drug/silicone composition giving a cured elastomer with good tensile strength and a 4- to 7-fold increase in transdermal delivery relative to elastomer without excipients.

  12. Optimal experimental design for assessment of enzyme kinetics in a drug discovery screening environment.

    PubMed

    Sjögren, Erik; Nyberg, Joakim; Magnusson, Mats O; Lennernäs, Hans; Hooker, Andrew; Bredberg, Ulf

    2011-05-01

    A penalized expectation of determinant (ED)-optimal design with a discrete parameter distribution was used to find an optimal experimental design for assessment of enzyme kinetics in a screening environment. A data set of enzyme kinetic parameters (V(max) and K(m)) was collected from previously reported studies, and every V(max)/K(m) pair (n = 76) was taken to represent a unique drug compound. The design was restricted to 15 samples, an incubation time of up to 40 min, and starting concentrations (C(0)) for the incubation between 0.01 and 100 μM. The optimization was performed by finding the sample times and C(0) returning the lowest uncertainty (S.E.) of the model parameter estimates. Individual optimal designs, one general optimal design, and one pragmatic optimal design (OD) suitable for laboratory practice were obtained. In addition, a standard design (STD-D), representing a commonly applied approach for metabolic stability investigations, was constructed. Simulations were performed for OD and STD-D by using the Michaelis-Menten (MM) equation, and enzyme kinetic parameters were estimated with both MM and a monoexponential decay. OD generated a better result (relative standard error) for 99% of the compounds and an equal or better result [root mean square error (RMSE)] for 78% of the compounds in estimation of metabolic intrinsic clearance. Furthermore, high-quality estimates (RMSE < 30%) of both V(max) and K(m) could be obtained for a considerable number (26%) of the investigated compounds by using the suggested OD. The results presented in this study demonstrate that the output could generally be improved compared with that obtained from the standard approaches used today.
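
    The estimation step being optimized above can be illustrated end to end: simulate Michaelis-Menten substrate depletion at a fixed sampling design, then recover V(max) and K(m) by least squares. The sketch uses hypothetical kinetic values and a plain grid search in place of a proper nonlinear optimizer.

```python
def simulate(C0, times, Vmax, Km, dt=0.05):
    # Euler integration of Michaelis-Menten substrate depletion
    out, C, t = [], C0, 0.0
    for target in times:
        while t < target:
            C -= dt * Vmax * C / (Km + C)
            t += dt
        out.append(C)
    return out

times = [2, 5, 10, 20, 40]                 # sampling times, min
true_Vmax, true_Km = 1.2, 8.0              # hypothetical kinetic parameters
data = simulate(10.0, times, true_Vmax, true_Km)

# grid-search least-squares fit of (Vmax, Km)
best, best_sse = None, float("inf")
for Vmax in [0.2 * i for i in range(1, 31)]:
    for Km in [0.5 * j for j in range(1, 61)]:
        pred = simulate(10.0, times, Vmax, Km)
        sse = sum((p - d) ** 2 for p, d in zip(pred, data))
        if sse < best_sse:
            best, best_sse = (Vmax, Km), sse
print("estimated Vmax, Km:", best)
```

    With noise-free data the grid recovers the generating values exactly; the ED-optimal design question is which times and C(0) keep the estimates precise once realistic noise and a whole distribution of V(max)/K(m) pairs are considered.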

  13. An Artificial Intelligence Technique to Generate Self-Optimizing Experimental Designs.

    DTIC Science & Technology

    1983-02-01

    ...pattern or a binary chopping technique in the space of decision variables while carrying out a sequence of controlled experiments on the strategy... (OCR fragment of DTIC report AD-A127 764, "An Artificial Intelligence Technique to Generate Self-Optimizing Experimental Designs," Nicholas V...)

  14. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1987-10-31

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis.

  15. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  16. Optimization study on the formulation of roxithromycin dispersible tablet using experimental design.

    PubMed

    Weon, K Y; Lee, K T; Seo, S H

    2000-10-01

    This study set out to improve the physical and pharmaceutical characteristics of the present formulation using an appropriate experimental design. The work described here concerns the formulation of a dispersible tablet, made by direct compression, containing roxithromycin in the form of coated granules. A 2(3) factorial design was used as the screening model, and a central composite design (CCD) associated with response surface methodology was used as the optimization model to develop and optimize the formulation of the roxithromycin dispersible tablet. The three independent variables investigated were functional excipients: binder (X1), disintegrant (X2) and lubricant (X3). The effects of these variables were investigated on the following responses: hardness (Y1), friability (Y2) and disintegration time (Y3) of the tablet. Three replicates at the center levels of each design were used to independently estimate the experimental error and to detect any curvature in the response surface. This enabled the best formulations to be selected objectively. The order of effect magnitude on the response variables was X3 > X2 > X1 > X1*X2 > X2*X2 > X2*X3 > X3*X3 > X1*X3 > X1*X1, and model equations for each response variable were generated. Optimized compositions were computed from those model equations and confirmed by a subsequent demonstration study. This study has thus demonstrated the efficiency and effectiveness of a systematic formulation optimization process for developing a roxithromycin dispersible tablet formulation with a limited number of experiments.
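
    The screening-then-optimization workflow above rests on two standard design matrices: a 2^k factorial cube for screening and a central composite design (cube plus axial "star" points plus center replicates) for fitting the quadratic response surface. A minimal generator:

```python
import itertools

def central_composite(k, alpha=None, n_center=3):
    # full 2^k factorial cube, 2k axial (star) points, and center replicates
    alpha = alpha if alpha is not None else k ** 0.5
    cube = [list(p) for p in itertools.product((-1, 1), repeat=k)]
    star = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            star.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return cube + star + center

design = central_composite(3)
print(len(design), "runs")   # 8 cube + 6 star + 3 center = 17
```

    The three center replicates are what allow the pure-error estimate and the curvature check mentioned in the abstract; the axial points at distance alpha make the quadratic terms estimable.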

  17. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

    The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FL067 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free stream Mach number, M(sub infinity), of 2.55 as well as the design Mach number, M(sub infinity)=2.4. Data over a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were covered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identified the possibility of one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.

  18. Experimental characterization and multidisciplinary conceptual design optimization of a bendable load stiffened unmanned air vehicle wing

    NASA Astrophysics Data System (ADS)

    Jagdale, Vijay Narayan

    Demand for deployable MAVs and UAVs with wings designed to reduce aircraft storage volume led to the development of a bendable wing concept at the University of Florida (UF). The wing shows an ability to load stiffen in the flight load direction while remaining compliant in the opposite direction, enabling UAV storage inside smaller packing volumes. From the design perspective, when the wing shape parameters are treated as design variables, the performance requirements (high aerodynamic efficiency, structural stability under aggressive flight loads, and the desired compliant nature to prevent breaking while stored) in general conflict with each other. Creep deformation induced by long-term storage and its effect on the wing flight characteristics are additional considerations. Experimental characterization of candidate bendable UAV wings is performed in order to demonstrate and understand aerodynamic and structural behavior of the bendable load-stiffened wing under flight loads and while the wings are stored inside a canister for long duration, in the process identifying some important wing shape parameters. A multidisciplinary, multiobjective design optimization approach is utilized for conceptual design of a 24 inch span and 7 inch root chord bendable wing. Aerodynamic performance of the wing is studied using the extended vortex lattice method based Athena Vortex Lattice (AVL) program. An arc length method based nonlinear FEA routine in ABAQUS is used to evaluate the structural performance of the wing and to determine the maximum flying velocity that the wing can withstand without buckling or failing under aggressive flight loads. An analytical approach is used to study the stresses developed in the composite wing during storage, and the Tsai-Wu criterion is used to check failure of the composite wing due to the rolling stresses to determine the minimum safe storage diameter.
Multidisciplinary wing shape and layup optimization is performed using an elitist non-dominated sorting

  19. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of the CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics and its potential for reservoir monitoring.
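
    The heart of NSGA-II is non-dominated sorting: ranking candidate designs into Pareto fronts across the objective functions instead of collapsing them into one score. A minimal sketch of first-front extraction, with hypothetical survey candidates scored on two objectives to minimize (parameter uncertainty and number of receivers deployed):

```python
def dominates(a, b):
    # minimization: a dominates b if no worse in every objective, better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # the non-dominated set: the first "front" NSGA-II would rank
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical candidate surveys: (parameter uncertainty, receivers deployed)
surveys = [(0.9, 4), (0.5, 8), (0.3, 16), (0.6, 6), (0.4, 16), (1.0, 2)]
print(pareto_front(surveys))
```

    The full algorithm iterates this ranking inside a GA (with crowding-distance selection, crossover and mutation); the front it returns is the cost-versus-information trade-off curve from which a survey is finally chosen.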

  20. Medium optimization of antifungal activity production by Bacillus amyloliquefaciens using statistical experimental design.

    PubMed

    Mezghanni, Héla; Khedher, Saoussen Ben; Tounsi, Slim; Zouari, Nabil

    2012-01-01

    In order to overproduce biofungicide agents by Bacillus amyloliquefaciens BLB371, a suitable culture medium was optimized using response surface methodology. A Plackett-Burman design and a central composite design were employed for experimental design and analysis of the results. Peptone, sucrose, and yeast extract were found to significantly influence antifungal activity production, and their optimal concentrations were, respectively, 20 g/L, 25 g/L, and 4.5 g/L. The corresponding biofungicide production was 250 AU/mL, a 56% improvement in antifungal component production over a previously used medium (160 AU/mL). Moreover, our results indicated that a deficiency of the minerals CuSO(4), FeCl(3) · 6H(2)O, Na(2)MoO(4), KI, ZnSO(4) · 7H(2)O, H(3)BO(3), and C(6)H(8)O(7) in the optimized culture medium was not crucial for biofungicide production by Bacillus amyloliquefaciens BLB371, which is interesting from a practical point of view, particularly for low-cost production and use of the biofungicide for the control of agricultural fungal pests.
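
    The Plackett-Burman screening step used above relies on a two-level orthogonal design in which every pair of columns is balanced, so many medium components can be screened for main effects in very few runs. The classic 12-run design is generated by cyclically shifting a single generator row and appending an all-minus run:

```python
def plackett_burman12():
    # classic 12-run design from the cyclic generator row (Plackett & Burman, 1946)
    gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman12()
# orthogonality check: every pair of columns is balanced
for i in range(11):
    for j in range(i + 1, 11):
        assert sum(r[i] * r[j] for r in design) == 0
print(len(design), "runs for up to", len(design[0]), "factors")
```

    Twelve runs screen up to eleven factors; the few significant ones (here peptone, sucrose and yeast extract) then go into the central composite design for quadratic optimization.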

  1. Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.

    PubMed

    Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P

    2017-03-01

    We present an integrated framework for the online optimal experimental re-design applied to parallel nonlinear dynamic processes that aims to precisely estimate the parameter set of macro kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model to new strains, mutants, or products. In biosciences, this is especially important as model identification is a long and laborious process which is continuing to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully-automated liquid handling robots; one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running using the information generated by periodical parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by a 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards a more efficient computer aided bioprocess development. Biotechnol. Bioeng. 2017;114: 610-619. © 2016 Wiley Periodicals, Inc.

  2. Optimizing the spectrofluorimetric determination of cefdinir through a Taguchi experimental design approach.

    PubMed

    Abou-Taleb, Noura Hemdan; El-Wasseef, Dalia Rashad; El-Sherbiny, Dina Tawfik; El-Ashry, Saadia Mohamed

    2016-05-01

    The aim of this work is to optimize a spectrofluorimetric method for the determination of cefdinir (CFN) using the Taguchi method. The proposed method is based on the oxidative coupling reaction of CFN and cerium(IV) sulfate. The quenching effect of CFN on the fluorescence of the produced cerous ions is measured at an emission wavelength (λ(em)) of 358 nm after excitation (λ(ex)) at 301 nm. The Taguchi orthogonal array L9 (3(4)) was designed to determine the optimum reaction conditions. The results were analyzed using the signal-to-noise (S/N) ratio and analysis of variance (ANOVA). The optimal experimental conditions obtained from this study were 1 mL of 0.2% MBTH, 0.4 mL of 0.25% Ce(IV), a reaction time of 10 min and methanol as the diluting solvent. The calibration plot displayed a good linear relationship over a range of 0.5-10.0 µg/mL. The proposed method was successfully applied to the determination of CFN in bulk powder and pharmaceutical dosage forms. The results are in good agreement with those obtained using the comparison method. Finally, the Taguchi method provided a systematic and efficient methodology for this optimization, with considerably less effort than would be required for other optimization techniques.
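
    A Taguchi L9 (3^4) analysis proceeds by running the nine array rows, converting each response to an S/N ratio, and picking, for each factor, the level with the best mean S/N. The sketch below uses the standard L9 array with entirely hypothetical responses and the larger-the-better S/N criterion.

```python
import math

# standard L9(3^4) orthogonal array (levels coded 0, 1, 2)
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]

# hypothetical responses, one per run
y = [52.0, 61.0, 55.0, 70.0, 66.0, 58.0, 63.0, 74.0, 69.0]

def sn_larger_is_better(v):
    # Taguchi S/N ratio for a response to maximize (single replicate)
    return -10.0 * math.log10((1.0 / v) ** 2)

sn = [sn_larger_is_better(v) for v in y]

# mean S/N at each level of each factor; keep the level with the highest mean
optimum = []
for f in range(4):
    means = [sum(s for run, s in zip(L9, sn) if run[f] == lvl) / 3
             for lvl in range(3)]
    optimum.append(max(range(3), key=lambda l: means[l]))
print("best level per factor:", optimum)
```

    Orthogonality of the array is what lets each factor's level means be compared independently; ANOVA on the same S/N values then tells which factors matter significantly.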

  3. Experimental design of an optimal phase duration control strategy used in batch biological wastewater treatment.

    PubMed

    Pavgelj, N B; Hvala, N; Kocijan, J; Ros, M; Subelj, M; Music, G; Strmcnik, S

    2001-01-01

    The paper presents the design of an algorithm used in the control of a sequencing batch reactor (SBR) for wastewater treatment. The algorithm performs on-line optimization of the batch phase durations, which is needed because of the variable influent wastewater. Compared to operation with fixed phase times, this kind of control strategy improves the treatment quality and reduces energy consumption. The designed control algorithm is based on following the course of some simple indirect process variables (i.e. redox potential, dissolved oxygen concentration and pH) and automatically recognizing the characteristic patterns in their time profiles. The algorithm acts on filtered on-line signals and is based on heuristic rules. The control strategy was developed and tested on a laboratory pilot plant. To facilitate the experimentation, the pilot plant was supplemented by a computer-supported experimental environment that enabled: (i) easy access to all data (on-line signals, laboratory measurements, batch parameters) needed for the design of the algorithm, and (ii) the immediate application of the algorithm designed off-line in the Matlab package to real-time control. When tested on the pilot plant, the control strategy demonstrated good agreement between the proposed completion times and the actual terminations of the desired biodegradation processes.

  4. Effect of an experimental design for evaluating the nonlinear optimal formulation of theophylline tablets using a bootstrap resampling technique.

    PubMed

    Arai, Hiroaki; Suzuki, Tatsuya; Kaseda, Chosei; Takayama, Kozo

    2009-06-01

    The optimal solutions of theophylline tablet formulations based on datasets from 4 experimental designs (Box-Behnken design, central composite design, D-optimal design, and full factorial design) were calculated by the response surface method incorporating multivariate spline interpolation (RSM(S)). The reliability of these solutions was evaluated by a bootstrap (BS) resampling technique. The optimal solutions derived from the Box-Behnken, D-optimal, and full factorial design datasets were similar, and the distributions of the BS optimal solutions calculated for these datasets were symmetrical. Thus, the accuracy and reproducibility of the optimal solutions could be evaluated quantitatively from the deviations of these distributions. However, the distribution of the BS optimal solutions calculated for the central composite design dataset was largely asymmetrical, and basic statistics could not be computed for it. The reason for this problem was considered to be the mixing of global and local optima. Therefore, self-organizing map (SOM) clustering was applied to identify the global optimal solutions. The BS optimal solutions were divided into 4 clusters by SOM clustering, the accuracy and reproducibility of the optimal solutions in each cluster were quantitatively evaluated, and the cluster containing the global optima was identified. SOM clustering was therefore considered to reinforce the BS resampling method for evaluating the reliability of optimal solutions irrespective of the dataset style.
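
    The bootstrap idea above is: resample the dataset, refit the response surface, recompute the optimum, and use the spread of the resulting optima as a reliability measure. A minimal sketch with a hypothetical one-factor quadratic response (fit by ordinary least squares rather than spline interpolation) and a pairs bootstrap:

```python
import random, statistics

def fit_quadratic(xs, ys):
    # least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for i in range(3):                       # Gauss-Jordan with partial pivoting
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [v - f * w for v, w in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

random.seed(1)
xs = [i * 0.5 for i in range(13)]                          # factor levels 0..6
ys = [10 + 4 * x - 0.8 * x * x + random.gauss(0, 0.5) for x in xs]

a, b, c = fit_quadratic(xs, ys)
opt = -b / (2 * c)                          # stationary point of the fitted curve

# pairs bootstrap: resample (x, y) pairs, refit, recompute the optimum
boots = []
for _ in range(500):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    a2, b2, c2 = fit_quadratic([xs[i] for i in idx], [ys[i] for i in idx])
    boots.append(-b2 / (2 * c2))
print("optimum:", round(opt, 2), "bootstrap sd:", round(statistics.stdev(boots), 2))
```

    A tight, symmetric bootstrap distribution signals a reliable optimum; a skewed or multimodal one is exactly the symptom (mixed global and local optima) that motivated the SOM clustering step in the abstract.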

  5. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application.

    PubMed

    Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar

    2014-01-01

    Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the poor bioavailability issue with the drug. A response surface methodology approach was applied for the optimization of transfersomes, using a Box-Behnken experimental design. Phospholipon(®) 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as the barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, an entrapment efficiency of 91.00%±4.90%, and a transdermal flux of 6.5±1.1 μg/cm(2)/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of the amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes and an ethanolic phosphate buffered saline. A differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy showed enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, as compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride
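
    The Box-Behnken design used above places runs at the midpoints of the edges of the factor cube (every pair of factors at ±1 with the rest at 0) plus center replicates, which keeps all runs away from the extreme corners. A minimal generator:

```python
import itertools

def box_behnken(k, n_center=3):
    # edge midpoints: +/-1 in each pair of factors, 0 elsewhere, plus centers
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs += [[0] * k for _ in range(n_center)]
    return runs

design = box_behnken(3)
print(len(design), "runs")   # 3 pairs * 4 sign combinations + 3 centers = 15
```

    For the three factors here (lipid amount, edge activator amount, sonication time) this gives 15 runs, enough to fit the full quadratic model for each response.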

  6. Doehlert experimental design applied to optimization of light emitting textile structures

    NASA Astrophysics Data System (ADS)

    Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.

    2016-07-01

    A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) for the treatment of dermatologic diseases such as actinic keratosis (AK). A successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into the textile so as to emit the injected light laterally. Previous studies confirmed that the light power could be locally controlled by modifying the radius of the POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm2) and to guarantee the best possible light delivery homogeneity over the LEF; these two goals are often contradictory. Fifteen experiments were carried out with a Doehlert experimental design involving response surface methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal setup of the warp yarn tensions.
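
    A Doehlert design places runs on a uniform shell around a center point; for two factors the shell is a hexagon, giving seven runs (more factors add further shells, which is how a study reaches run counts like the fifteen above). A minimal two-factor generator in coded units:

```python
import math

def doehlert2():
    # two-factor Doehlert (uniform shell) design: center plus hexagon vertices
    pts = [(0.0, 0.0)]
    for i in range(6):
        ang = math.pi * i / 3.0
        pts.append((round(math.cos(ang), 3), round(math.sin(ang), 3)))
    return pts

design = doehlert2()
print(len(design), "runs")   # center + 6 hexagon vertices = 7
```

    The uniform spacing of the shell is the design's selling point: each factor is tested at a different number of levels, and the design can be displaced to an adjacent region by reusing existing runs.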

  7. Optimization and enhancement of soil bioremediation by composting using the experimental design technique.

    PubMed

    Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni

    2010-06-01

    The objective of this study was the application of the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting, using a low-cost material (compost from the Organic Fraction of Municipal Solid Waste) as the amendment and pyrene as the model pollutant. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) Organic Matter h(-1)). Stable compost permitted almost complete degradation of pyrene in a short time (10 days). The results indicated that compost stability is a key parameter for optimizing PAH biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of the process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) Organic Matter h(-1)) and soil:compost mixing ratio, respectively.

  8. Mixed culture optimization for marigold flower ensilage via experimental design and response surface methodology.

    PubMed

    Navarrete-Bolaños, José Luis; Jiménez-Islas, Hugo; Botello-Alvarez, Enrique; Rico-Martínez, Ramiro

    2003-04-09

    Endogenous microorganisms isolated from the marigold flower (Tagetes erecta) were studied to understand the events taking place during its ensilage. Studies of the cellulase enzymatic activity and the ensilage process were undertaken. In both studies, the use of approximate second-order models and multiple linear regression, within the context of an experimental mixture design using the response surface methodology as optimization strategy, determined that the microorganisms Flavobacterium IIb, Acinetobacter anitratus, and Rhizopus nigricans are the most significant in marigold flower ensilage and exhibit high cellulase activity. A mixed culture comprised of 9.8% Flavobacterium IIb, 41% A. anitratus, and 49.2% R. nigricans used during ensilage resulted in an increased yield of total xanthophylls extracted of 24.94 g/kg of dry weight, compared with 12.92 g/kg for the uninoculated control ensilage.

  9. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    NASA Astrophysics Data System (ADS)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the data used and on the criterion chosen to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities of special interest. This choice influences the quality of the model output (also for unmeasured quantities) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional, time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we developed a statistical model for measurements of phosphate and dissolved organic phosphorus, including variances and correlations that vary with the time and location of the measurements. We compared the resulting estimates of model output and parameters for different criteria. A further question is whether (and which) additional measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and in the model output itself with respect to the uncertainty in the measurement data, using the (Fisher) information matrix. 
Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
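
    The Fisher-information machinery mentioned above can be sketched for a generic nonlinear model: given the sensitivity (Jacobian) of the model output with respect to the parameters and the measurement variances, the information matrix is F = J' Σ⁻¹ J, the parameter covariance is approximated by F⁻¹, and det(F) is the D-optimality criterion. The exponential-decay model and all numbers below are hypothetical stand-ins, not the biogeochemical model of the paper:

```python
import math

def fisher_information(times, a, b, sigma):
    """F = J' diag(1/sigma_i^2) J for the toy model y(t) = a * exp(-b * t).
    Model choice and numbers are illustrative only."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for t, s in zip(times, sigma):
        g = [math.exp(-b * t), -a * t * math.exp(-b * t)]  # dy/da, dy/db
        w = 1.0 / (s * s)
        for i in range(2):
            for j in range(2):
                F[i][j] += w * g[i] * g[j]
    return F

def inv2(F):
    """Inverse of a 2x2 matrix: approximate parameter covariance F^-1."""
    d = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return [[F[1][1] / d, -F[0][1] / d], [-F[1][0] / d, F[0][0] / d]]

times = [0.0, 1.0, 2.0, 4.0]
F = fisher_information(times, a=2.0, b=0.5, sigma=[0.1] * 4)
cov = inv2(F)
d_criterion = F[0][0] * F[1][1] - F[0][1] * F[1][0]
```

    Adding a candidate measurement can only increase det(F), which is exactly how the information content of an additional observation is quantified when choosing new measurement locations and times.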

  10. Development and optimization of quercetin-loaded PLGA nanoparticles by experimental design

    PubMed Central

    TEFAS, LUCIA RUXANDRA; TOMUŢĂ, IOAN; ACHIM, MARCELA; VLASE, LAURIAN

    2015-01-01

    Background and aims Quercetin is a flavonoid with good antioxidant activity, and exhibits various important pharmacological effects. The aim of the present work was to study the influence of formulation factors on the physicochemical properties of quercetin-loaded polymeric nanoparticles in order to optimize the formulation. Materials and methods The nanoparticles were prepared by the nanoprecipitation method. A 3-factor, 3-level Box-Behnken design was employed in this study considering poly(D,L-lactic-co-glycolic) acid (PLGA) concentration, polyvinyl alcohol (PVA) concentration and the stirring speed as independent variables. The responses were particle size, polydispersity index, zeta potential and encapsulation efficiency. Results The PLGA concentration seemed to be the most important factor influencing quercetin-nanoparticle characteristics. Increasing PLGA concentration led to an increase in particle size, as well as encapsulation efficiency. On the other hand, it exhibited a negative influence on the polydispersity index and zeta potential. The PVA concentration and the stirring speed had only a slight influence on particle size and polydispersity index. However, PVA concentration had an important negative effect on the encapsulation efficiency. Based on the results obtained, an optimized formulation was prepared, and the experimental values were comparable to the predicted ones. Conclusions The overall results indicated that PLGA concentration was the main factor influencing particle size, while entrapment efficiency was predominantly affected by the PVA concentration. PMID:26528074
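
    A three-factor, three-level Box-Behnken design of the kind used above runs a 2² factorial on each pair of factors while holding the remaining factor at its centre level, plus centre-point replicates. A sketch of the generic coded construction (the study's actual factor names and levels are not modelled):

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Coded Box-Behnken design: a 2^2 factorial on each pair of factors,
    remaining factors held at the centre level, plus centre replicates."""
    runs = []
    for i, j in combinations(range(k), 2):
        for li, lj in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = li, lj
            runs.append(run)
    runs.extend([0] * k for _ in range(n_center))
    return runs

design = box_behnken(3)  # 12 edge runs + 3 centre points = 15 runs
```

    The coded levels (-1, 0, +1) would then be mapped onto the real factor ranges (here, PLGA concentration, PVA concentration and stirring speed) before running the experiments.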

  11. Optimizing indomethacin-loaded chitosan nanoparticle size, encapsulation, and release using Box-Behnken experimental design.

    PubMed

    Abul Kalam, Mohd; Khan, Abdul Arif; Khan, Shahanavaj; Almalik, Abdulaziz; Alshamsan, Aws

    2016-06-01

    Indomethacin chitosan nanoparticles (NPs) were developed by ionotropic gelation and optimized with respect to chitosan concentration, tripolyphosphate (TPP) concentration and stirring time using a 3-factor, 3-level Box-Behnken experimental design. The optimal concentrations of chitosan (A) and TPP (B) were found to be 0.6 mg/mL and 0.4 mg/mL with 120 min stirring time (C), under the applied constraints of minimizing particle size (R1) and maximizing encapsulation efficiency (R2) and drug release (R3). Based on the 3D response surface plots obtained, factors A, B and C had a synergistic effect on R1, while factor A had a negative impact on R2 and R3. The interaction AB was negative on R1 and R2 but positive on R3. The interaction AC had a synergistic effect on R1 and R3, while the same combination had a negative effect on R2. The interaction BC was positive on all responses. NPs were found in the size range of 321-675 nm with zeta potentials of +25 to +32 mV after 6 months of storage. Encapsulation, drug release and drug content were in the ranges of 56-79%, 48-73% and 98-99%, respectively. In vitro drug release data were fitted to different kinetic models, and the pattern of drug release followed Higuchi matrix-type kinetics.

  12. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application

    PubMed Central

    Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar

    2014-01-01

    Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the poor bioavailability issue with the drug. A response surface methodology experimental design was applied for the optimization of transfersomes, using Box-Behnken experimental design. Phospholipon® 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as a barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, with an entrapment efficiency of 91.00%±4.90%, and transdermal flux of 6.5±1.1 μg/cm2/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes, and an ethanolic phosphate buffer saline. Differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy proved an enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, as compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride

  13. Determination of pharmaceuticals in drinking water by CD-modified MEKC: separation optimization using experimental design.

    PubMed

    Drover, Vincent J; Bottaro, Christina S

    2008-12-01

    A suite of 12 widely used pharmaceuticals (ibuprofen, diclofenac, naproxen, bezafibrate, gemfibrozil, ofloxacin, norfloxacin, carbamazepine, primidone, sulphamethazine, sulphadimethoxine and sulphamethoxazole) commonly found in environmental waters were separated by highly sulphated CD-modified MEKC (CD-MEKC) with UV detection. An experimental design method, face-centred composite design, was employed to minimize run time without sacrificing resolution. Using an optimized BGE composed of 10 mM ammonium hydrogen phosphate, pH 11.5, 69 mM SDS, 6 mg/mL sulphated beta-CD and 8.5% v/v isopropanol, a separation voltage of 30 kV and a 48.5 cm x 50 microm id bare silica capillary at 30 degrees C allowed baseline separation of the 12 analytes in a total analysis time of 6.7 min. Instrument LODs in the low milligram per litre range were obtained, and when combined with offline preconcentration by SPE, LODs were between 4 and 30 microg/L.
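
    The face-centred composite design used for this separation optimization combines a two-level factorial core, axial points on each factor axis placed at the faces of the cube (alpha = 1), and centre replicates. A generic coded construction (run counts only; the study's actual BGE factor settings are not modelled):

```python
from itertools import product

def face_centred_ccd(k, n_center=3):
    """Face-centred central composite design in coded units (alpha = 1)."""
    runs = [list(p) for p in product((-1, 1), repeat=k)]   # factorial core
    for i in range(k):                                     # axial (face) points
        for a in (-1, 1):
            run = [0] * k
            run[i] = a
            runs.append(run)
    runs.extend([0] * k for _ in range(n_center))          # centre replicates
    return runs

design = face_centred_ccd(3)  # 8 factorial + 6 axial + 3 centre = 17 runs
```

    Keeping alpha = 1 confines every run to the original factor ranges, which is convenient when extreme settings (e.g. very high surfactant or organic-modifier levels) are physically or chemically infeasible.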

  14. Design and optimization of an experimental bioregenerative life support system with higher plants and silkworms

    NASA Astrophysics Data System (ADS)

    Hu, Enzhu; Bartsev, Sergey I.; Zhao, Ming; Liu, Professor Hong

    The conceptual scheme of an experimental bioregenerative life support system (BLSS) for planetary exploration was designed, consisting of four elements: human metabolism, higher plants, silkworms and waste treatment. Fifteen kinds of higher plants, such as wheat, rice, soybean, lettuce and mulberry, were selected as the regenerative component of the BLSS, providing the crew with air, water and vegetable food. Silkworms, which produce animal nutrition for the crew, were fed mulberry leaves during the first three instars and lettuce leaves during the last two. The inedible biomass of the higher plants, human wastes and silkworm feces were composted into a soil-like substrate, which can be reused for higher-plant cultivation. Salt, sugar and some household materials such as soap and shampoo would be provided from outside. To support the steady state of the BLSS, the same amount and elementary composition of dehydrated wastes were removed periodically. The balance of matter flows between the BLSS components was described by a system of algebraic equations. The mass flows between the components were optimized in EXCEL spreadsheets using Solver; the numerical method used in this study was Newton's method.
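
    The closing step above solves the flow-balance equations with Newton's method. A minimal sketch of multivariate Newton iteration on a toy two-flow balance (the two equations are purely illustrative, not the BLSS mass-balance system):

```python
def newton2(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a 2-equation system: x <- x - J(x)^-1 f(x)."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        (a, b), (c, d) = jac(x, y)
        det = a * d - b * c
        # Solve J * delta = -f for the update step (2x2 inverse by hand).
        dx = (-f1 * d + f2 * b) / det
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# Toy balance: total flow x + y = 10 and a nonlinear coupling x * y = 21.
f = lambda x, y: (x + y - 10.0, x * y - 21.0)
jac = lambda x, y: ((1.0, 1.0), (y, x))
root = newton2(f, jac, (1.0, 8.0))
```

    With an analytic Jacobian the iteration converges quadratically near a root, which is why Newton's method suits small, well-conditioned balance systems like the one described.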

  15. Degradation of caffeine by photo-Fenton process: optimization of treatment conditions using experimental design.

    PubMed

    Trovó, Alam G; Silva, Tatiane F S; Gomes, Oswaldo; Machado, Antonio E H; Neto, Waldomiro Borges; Muller, Paulo S; Daniel, Daniela

    2013-01-01

    The degradation of caffeine in different kinds of effluents via the photo-Fenton process was investigated at lab scale and in a solar pilot plant. The treatment conditions (caffeine, Fe(2+) and H(2)O(2) concentrations) were defined by experimental design. The optimized conditions for each variable, obtained using the response factor (% mineralization), were: 52.0 mg L(-1) caffeine, 10.0 mg L(-1) Fe(2+) and 42.0 mg L(-1) H(2)O(2) (replaced in kinetic experiments). Under these conditions, in ultrapure water (UW), the caffeine concentration reached the quantitation limit (0.76 mg L(-1)) after 20 min, and 78% mineralization was obtained after 120 min of reaction. Using the same conditions, the matrix influence (surface water, SW, and sewage treatment plant effluent, STP) on caffeine degradation was also evaluated. Total removal of caffeine in SW was reached in the same time as in UW (after 20 min), while 40 min were necessary in STP. Although lower mineralization rates were verified for high organic loads, under the same operational conditions less H(2)O(2) was necessary to mineralize the dissolved organic carbon as the initial organic load increased. A high efficiency of the photo-Fenton process was also observed in caffeine degradation by solar photocatalysis using a CPC reactor, as well as intermediates of low toxicity, demonstrating that the photo-Fenton process can be a viable alternative for caffeine removal from wastewater.

  16. Numerical and Experimental Approach for the Optimal Design of a Dual Plate Under Ballistic Impact

    NASA Astrophysics Data System (ADS)

    Yoo, Jeonghoon; Chung, Dong-Teak; Park, Myung Soo

    To predict the behavior of a dual plate composed of 5052 aluminum and 1002 cold-rolled steel under ballistic impact, numerical and experimental approaches are attempted. For an accurate numerical simulation of the impact phenomena, the appropriate selection of key parameter values based on numerical or experimental tests is critical. This study focuses not only on the optimization technique using numerical simulation but also on the numerical and experimental procedures needed to obtain the required parameter values for the simulation. The Johnson-Cook model is used to simulate the mechanical behavior, and simplified experimental and numerical approaches are performed to obtain the material properties of the model. An element erosion scheme for robust simulation of the ballistic impact problem is applied by adjusting the element erosion criteria of each material based on numerical and experimental results. Adequate mesh size and aspect ratio are chosen based on parametric studies. Plastic energy is suggested as a response representing the strength of the plate for optimization under dynamic loading. The optimized thickness of the dual plate is obtained to resist the ballistic impact without penetration as well as to minimize the total weight.
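
    The Johnson-Cook flow-stress model referenced above multiplies a strain-hardening term, a strain-rate term and a thermal-softening term. A sketch of the constitutive equation itself (the parameter values are illustrative placeholders, not the calibrated constants for the study's aluminum/steel plates):

```python
import math

def johnson_cook_stress(A, B, n, C, m, eps_p, rate_ratio, T_star):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p^n) * (1 + C*ln(rate_ratio)) * (1 - T_star^m)
    eps_p:      equivalent plastic strain
    rate_ratio: strain rate / reference strain rate
    T_star:     homologous temperature (T - Troom) / (Tmelt - Troom)"""
    return ((A + B * eps_p ** n)
            * (1.0 + C * math.log(rate_ratio))
            * (1.0 - T_star ** m))

# Illustrative parameters only (units: MPa for A and B).
sigma0 = johnson_cook_stress(A=260.0, B=400.0, n=0.35, C=0.015, m=1.0,
                             eps_p=0.0, rate_ratio=1.0, T_star=0.0)
```

    At zero plastic strain, the reference strain rate and room temperature, the model reduces to the static yield stress A, which is a convenient sanity check when calibrating the remaining constants from split-Hopkinson-bar or similar tests.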

  17. Optimization of scaffold design for bone tissue engineering: A computational and experimental study.

    PubMed

    Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R

    2014-04-01

    In bone tissue engineering, the scaffold has not only to allow the diffusion of cells, nutrients and oxygen but also to provide adequate mechanical support. One way to ensure the scaffold has the right properties is to use computational tools to design it, coupled with additive manufacturing to build the scaffold to the resulting optimized design specifications. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering, and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of the porosity target, 40% for strut size and 55% for throat size in the building direction, and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of this kind of computational algorithm to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering.

  18. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    PubMed

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O(2) uptake rates, CO(2) evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started from around 10 for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process.

  19. A new multiresponse optimization approach in combination with a D-Optimal experimental design for the determination of biogenic amines in fish by HPLC-FLD.

    PubMed

    Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A

    2016-11-16

    A new strategy for multiresponse optimization in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. Firstly, the extraction from a solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design makes it possible to optimize the ten experimental variables, reducing the experimental effort needed by a factor of 67 while guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. Then, the validated models are used to predict the response (peak area) of the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows, to a certain extent, the large and uneven effect of the pretreatment on the responses. The percentiles are then calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L(-1) for cadaverine or 497 μg L(-1) for histamine in solvent, and 0.07 mg kg(-1) and 14.81 mg kg(-1) in fish (probability of false positive equal to 0.05), respectively.
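
    A D-optimal design selects runs from a candidate set so as to maximize det(X'X) for the assumed model, which is what lets ten mixed-level factors be studied in a small fraction of the full factorial. For the real problem this is done with exchange algorithms in specialised software; the criterion itself can be sketched exhaustively for a tiny hypothetical linear model over a 3x3 candidate grid:

```python
from itertools import combinations

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def d_optimal_exhaustive(candidates, n_runs):
    """Pick the n_runs candidate points maximizing det(X'X) for the model
    y = b0 + b1*x1 + b2*x2. Exhaustive search is fine for tiny candidate
    sets; real D-optimal software uses exchange algorithms instead."""
    best_subset, best_det = None, -1.0
    for subset in combinations(candidates, n_runs):
        X = [[1.0, x1, x2] for x1, x2 in subset]
        XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        d = det3(XtX)
        if d > best_det:
            best_subset, best_det = subset, d
    return best_subset, best_det

grid = [(x1, x2) for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)]
best_design, best_det = d_optimal_exhaustive(grid, 3)
```

    Maximizing det(X'X) minimizes the volume of the joint confidence ellipsoid of the coefficient estimates, which is the sense in which the reduced design "guarantees the quality of the estimates" in the abstract above.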

  20. Optimal experimental design for nano-particle atom-counting from high-resolution STEM images.

    PubMed

    De Backer, A; De Wael, A; Gonnissen, J; Van Aert, S

    2015-04-01

    In the present paper, the principles of detection theory are used to quantify the probability of error for atom-counting from high-resolution scanning transmission electron microscopy (HR STEM) images. Binary and multiple hypothesis testing have been investigated in order to determine the limits to the precision with which the number of atoms in a projected atomic column can be estimated. The probability of error has been calculated when using STEM images, scattering cross-sections or peak intensities as the criterion to count atoms. Based on this analysis, we conclude that scattering cross-sections perform almost as well as images and perform better than peak intensities. Furthermore, the optimal STEM detector design can be derived for atom-counting using the expression for the probability of error. We show that for very thin objects LAADF is optimal and that for thicker objects the optimal inner detector angle increases.
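
    For binary hypothesis testing between two adjacent atom counts whose measured criterion (e.g. a scattering cross-section) is modelled as Gaussian with equal width, the minimum probability of error follows from the separation of the two distributions. A generic sketch of that textbook result (the numbers are hypothetical, not the paper's):

```python
import math

def prob_error_binary(mu0, mu1, sigma):
    """Minimum probability of error for equiprobable Gaussian hypotheses
    N(mu0, sigma^2) vs N(mu1, sigma^2) with a midpoint threshold:
    Pe = Q(|mu1 - mu0| / (2*sigma)), where Q(x) = 0.5 * erfc(x / sqrt(2))."""
    delta = abs(mu1 - mu0)
    return 0.5 * math.erfc(delta / (2.0 * sigma * math.sqrt(2.0)))

pe_overlap = prob_error_binary(100.0, 100.0, 5.0)    # identical -> chance, 0.5
pe_close = prob_error_binary(100.0, 110.0, 5.0)      # 2-sigma separation
pe_separated = prob_error_binary(100.0, 130.0, 5.0)  # 6-sigma separation
```

    Optimizing detector geometry then amounts to choosing the configuration that maximizes the separation-to-width ratio of the count distributions, i.e. minimizes this Pe.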

  1. Bioslurry phase remediation of chlorpyrifos contaminated soil: process evaluation and optimization by Taguchi design of experiments (DOE) methodology.

    PubMed

    Venkata Mohan, S; Sirisha, K; Sreenivasa Rao, R; Sarma, P N

    2007-10-01

    Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence of eight biotic and abiotic factors (substrate loading rate, slurry phase pH, slurry phase dissolved oxygen (DO), soil:water ratio, temperature, soil microflora load, application of bioaugmentation and humic substance concentration) on the bioremediation of soil-bound chlorpyrifos in a bioslurry phase reactor. The selected eight factors were considered at three levels (18 experiments) in the experimental design. Among the selected factors, substrate loading rate showed a significant influence on the bioremediation process. The optimum operating conditions derived by the methodology enhanced chlorpyrifos degradation from 1479.99 to 2458.33 microg/g (an overall enhancement of 39.82%). The proposed method provided a systematic mathematical approach to understanding the complex bioremediation process and optimizing the near-optimum design parameters with only a few well-defined experimental sets.
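
    A Taguchi analysis of this kind assigns factors to the columns of an orthogonal array and ranks them by the range of their level means. A sketch using the standard L9(3^4) array with synthetic responses in which the first factor dominates (the factors and response values are hypothetical, not those of the bioslurry study, which used a larger L18 array):

```python
# Standard L9 orthogonal array: 9 runs, four 3-level factors (levels 0, 1, 2).
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

# Synthetic responses: factor 0 has a strong effect, factor 1 a weak one,
# factors 2 and 3 none at all.
y = [10.0 * row[0] + 1.0 * row[1] + 20.0 for row in L9]

def level_means(array, responses, factor):
    """Mean response at each level of one factor (Taguchi main effect)."""
    means = []
    for level in range(3):
        vals = [r for row, r in zip(array, responses) if row[factor] == level]
        means.append(sum(vals) / len(vals))
    return means

# Range of level means per factor: the usual Taguchi ranking statistic.
effect_range = [max(level_means(L9, y, f)) - min(level_means(L9, y, f))
                for f in range(4)]
```

    Because every level of every factor appears equally often against all levels of the other factors, the inactive factors average out to a flat main effect, which is exactly how the array isolates the dominant factor (here factor 0, playing the role the substrate loading rate played in the study).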

  2. Design and optimization of an experimental test bench for the study of impulsive fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Russo, S.; Krastev, V. K.; Jannelli, E.; Falcucci, G.

    2016-06-01

    In this work, the design and optimization of a test bench for the experimental characterization of impulsive water-entry problems are presented. Currently, the majority of experimental apparatus allow impact tests only under specific conditions. Our test bench allows testing of both rigid and compliant bodies, and permits experiments on floating or sinking structures, in free fall or under dynamic motion control. The experimental apparatus is characterized by the adoption of accelerometers, encoders, position sensors and, above all, FBG (fiber Bragg grating) sensors that, together with a high-speed camera, provide accurate and fast data acquisition for the dissection of structural deformations and hydrodynamic loadings under a broad set of experimental conditions.

  3. A Resampling Based Approach to Optimal Experimental Design for Computer Analysis of a Complex System

    SciTech Connect

    Rutherford, Brian

    1999-08-04

    The investigation of a complex system is often performed using computer-generated response data supplemented by system and component test results where possible. Analysts rely on an efficient use of limited experimental resources to test the physical system, evaluate the models and assure (to the extent possible) that the models accurately simulate the system under investigation. The general problem considered here is one where only a restricted number of system simulations (or physical tests) can be performed to provide the additional data necessary to accomplish the project objectives. The levels of variables used for defining input scenarios, for setting system parameters and for initializing other experimental options must be selected in an efficient way. The use of computer algorithms to support experimental design in complex problems has been a topic of recent research in statistics and engineering. This paper describes a resampling-based approach to formulating this design. An example is provided illustrating in two dimensions how the algorithm works and indicating its potential on larger problems. The results show that the proposed approach has the characteristics desirable of an algorithmic approach on the simple examples. Further experimentation is needed to evaluate its performance on larger problems.

  4. An experimental evaluation of a helicopter rotor section designed by numerical optimization

    NASA Technical Reports Server (NTRS)

    Hicks, R. M.; Mccroskey, W. J.

    1980-01-01

    The wind tunnel performance of a 10-percent-thick helicopter rotor section designed by numerical optimization is presented. The model was tested at Mach numbers from 0.2 to 0.84, with Reynolds numbers ranging from 1,900,000 at Mach 0.2 to 4,000,000 at Mach numbers above 0.5. The airfoil section exhibited maximum lift coefficients greater than 1.3 at Mach numbers below 0.45 and a drag divergence Mach number of 0.82 for lift coefficients near 0. A moderate 'drag creep' is observed at low lift coefficients for Mach numbers greater than 0.6.

  5. Optimal design of disc-type magneto-rheological brake for mid-sized motorcycle: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-08-01

    In this paper, a disc-type magnetorheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space available for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, the total mass of the MR brake, and the cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rates. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, its mechanical performance in terms of response time, braking torque and cruising temperature is experimentally evaluated.
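
    A braking-torque calculation based on the Herschel-Bulkley model integrates the shear stress tau = tau_y + K * gamma_dot^n over the disc face, with the shear rate approximated as gamma_dot = r*omega/g across the fluid gap. A numerical sketch for a single disc face (the dimensions and fluid constants are illustrative placeholders, not the optimized motorcycle-brake values):

```python
import math

def hb_stress(tau_y, K, n, gamma_dot):
    """Herschel-Bulkley shear stress: tau = tau_y + K * gamma_dot^n."""
    return tau_y + K * gamma_dot ** n

def disc_face_torque(tau_y, K, n, omega, gap, r_in, r_out, steps=20000):
    """Torque on one disc face: T = integral of 2*pi*r^2 * tau(r*omega/gap) dr
    from r_in to r_out, approximated with a midpoint rule."""
    dr = (r_out - r_in) / steps
    total = 0.0
    for i in range(steps):
        r = r_in + (i + 0.5) * dr
        total += 2.0 * math.pi * r * r * hb_stress(tau_y, K, n, r * omega / gap) * dr
    return total

# Illustrative numbers: the field-dependent yield stress tau_y is what the
# coil current controls, raising the on-state torque above the off-state one.
t_off = disc_face_torque(tau_y=0.0,     K=0.1, n=1.0, omega=100.0,
                         gap=1e-3, r_in=0.05, r_out=0.10)
t_on  = disc_face_torque(tau_y=30000.0, K=0.1, n=1.0, omega=100.0,
                         gap=1e-3, r_in=0.05, r_out=0.10)
```

    With tau_y = 0 and n = 1 the model reduces to a Newtonian fluid, for which the integral has a closed form, giving a handy check on the numerical quadrature before the shear-thinning exponent is introduced.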

  6. Experimental validation of a magnetorheological energy absorber design optimized for shock and impact loads

    NASA Astrophysics Data System (ADS)

    Singh, Harinder J.; Hu, Wei; Wereley, Norman M.; Glass, William

    2014-12-01

    A linear stroke adaptive magnetorheological energy absorber (MREA) was designed, fabricated and tested for intense impact conditions with piston velocities up to 8 m s-1. The performance of the MREA was characterized using the dynamic range, defined as the ratio of the maximum on-state MREA force to the off-state MREA force. Design optimization techniques were employed to maximize the dynamic range at high impact velocities so that the MREA maintained good control authority. Geometrical parameters of the MREA were optimized by evaluating MREA performance on the basis of a Bingham-plastic analysis incorporating minor losses (BPM analysis). Computational fluid dynamics and magnetic FE analyses were conducted to verify the passive and controllable MREA forces, respectively. Subsequently, high-speed drop testing (0-4.5 m s-1 at 0 A) was conducted for quantitative comparison with the numerical simulations. Refinements to the nonlinear BPM analysis were carried out to improve prediction of MREA performance.

  7. A novel experimental design method to optimize hydrophilic matrix formulations with drug release profiles and mechanical properties.

    PubMed

    Choi, Du Hyung; Lim, Jun Yeul; Shin, Sangmun; Choi, Won Jun; Jeong, Seong Hoon; Lee, Sangkil

    2014-10-01

    To investigate the effects of hydrophilic polymers on the matrix system, an experimental design method was developed that integrates response surface methodology with time-series modeling. Moreover, the relationships among polymers in the matrix system were studied through the evaluation of physical properties including water uptake, mass loss, diffusion, and gelling index. A mixture simplex lattice design was proposed considering eight input control factors: polyethylene glycol 6000 (x1), polyethylene oxide (PEO) N-10 (x2), PEO 301 (x3), PEO coagulant (x4), PEO 303 (x5), hydroxypropyl methylcellulose (HPMC) 100SR (x6), HPMC 4000SR (x7), and HPMC 10(5) SR (x8). With this modeling, optimal formulations were obtained for each of the four types of targets. Based on drug release profiles, four factors (x1, x2, x3, and x8) were significant in the optimal formulations, while the other four input factors (x4, x5, x6, and x7) were not. Moreover, the optimization results were analyzed in terms of estimated values, target values, absolute biases, and relative biases based on observed times for the drug release rates with four different targets. The results showed that the optimal solutions matched the target values consistently, with small biases. On the basis of the physical properties of the optimal solutions, the type and ratio of the hydrophilic polymers and the relationships between polymers significantly influenced the physical properties of the system and drug release. This experimental design method is very useful for formulating a matrix system with optimal drug release, and it can clearly reveal the relationships between excipients and their effects on the system through extensive and intensive evaluations.
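
    A {q, m} simplex-lattice mixture design takes all blends whose component proportions are multiples of 1/m and sum to one, respecting the mixture constraint that the eight polymer fractions above must satisfy. A generic generator in coded proportions (q and m below are small examples, not the study's eight-component lattice):

```python
def simplex_lattice(q, m):
    """All q-component mixtures with proportions i/m summing to 1:
    the {q, m} simplex-lattice design, C(m+q-1, q-1) points in total."""
    points = []
    def build(prefix, remaining, slots):
        if slots == 1:
            points.append(prefix + [remaining / m])
            return
        for i in range(remaining + 1):
            build(prefix + [i / m], remaining - i, slots - 1)
    build([], m, q)
    return points

design = simplex_lattice(3, 2)  # C(4, 2) = 6 candidate blends
```

    The lattice includes the pure components and the binary midpoints, which is what allows the fitted mixture model to separate single-polymer effects from polymer-polymer interactions.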

  8. PVA-PEG physically cross-linked hydrogel film as a wound dressing: experimental design and optimization.

    PubMed

    Ahmed, Afnan Sh; Mandal, Uttam Kumar; Taher, Muhammad; Susanti, Deny; Jaffri, Juliana Md

    2017-04-05

    The development of hydrogel films as wound dressings is of great interest owing to their biological tissue-like nature. Polyvinyl alcohol/polyethylene glycol (PVA/PEG) hydrogels loaded with asiaticoside, a standardized rich fraction of Centella asiatica, were successfully developed using the freeze-thaw method. Response surface methodology with a Box-Behnken experimental design was employed to optimize the hydrogels. The hydrogels were characterized and optimized by gel fraction, swelling behavior, water vapor transmission rate and mechanical strength. The formulation with 8% PVA, 5% PEG 400 and five consecutive freeze-thaw cycles was selected as the optimized formulation and was further characterized by drug release, rheology, morphology, cytotoxicity and microbial studies. The optimized formulation showed more than 90% drug release at 12 hours. Rheological measurements showed that the formulation has viscoelastic behavior and remains stable upon storage. Cell culture studies confirmed the biocompatible nature of the optimized hydrogel formulation. In the microbial limit tests, the optimized hydrogel showed no microbial growth. The optimized PVA/PEG hydrogel developed using the freeze-thaw method was swellable, elastic and safe, and it can be considered a promising new wound dressing formulation.

  9. Simultaneous production of nisin and lactic acid from cheese whey: optimization of fermentation conditions through statistically based experimental designs.

    PubMed

    Liu, Chuanbin; Liu, Yan; Liao, Wei; Wen, Zhiyou; Chen, Shulin

    2004-01-01

    A biorefinery process that utilizes cheese whey as substrate to simultaneously produce nisin, a natural food preservative, and lactic acid, a raw material for biopolymer production, was studied. The conditions for nisin biosynthesis and lactic acid coproduction by Lactococcus lactis subsp. lactis (ATCC 11454) in a whey-based medium were optimized using statistically based experimental designs. A Plackett-Burman design was applied to screen seven parameters for factors significant to the production of nisin and lactic acid. Nutrient supplements, including yeast extract, MgSO4 and KH2PO4, were found to be the significant factors affecting nisin and lactic acid formation. As a follow-up, a central composite design was applied to optimize these factors. Second-order polynomial models were developed to quantify the relationship between nisin and lactic acid production and the variables, and the optimal values of these variables were determined. Finally, a verification experiment confirmed the optimal values predicted by the models: the experimental results agreed well with the model predictions, yielding 19.3 g/L of lactic acid and 92.9 mg/L of nisin.
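A Plackett-Burman screening design of the kind applied to the seven parameters can be built from a single generator row. The sketch below is illustrative (not the authors' code): it constructs the classic 8-run design by cyclically shifting the standard N=8 generator and appending an all-minus run, then checks column orthogonality.

```python
def plackett_burman_8():
    """Plackett-Burman screening design for up to 7 two-level
    factors in 8 runs, built from the classic N=8 generator row
    by cyclic shifts plus a final all-minus run."""
    gen = [1, 1, 1, -1, 1, -1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(7)]
    rows.append([-1] * 7)
    return rows

design = plackett_burman_8()
# Orthogonality: every pair of columns has a zero dot product,
# so each main effect is estimated independently of the others.
cols = list(zip(*design))
print(all(sum(a * b for a, b in zip(c1, c2)) == 0
          for i, c1 in enumerate(cols) for c2 in cols[i + 1:]))  # True
```

With 8 runs and 7 columns the design estimates main effects only; interactions are aliased, which is why a central composite design was needed as the follow-up step.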

  10. Optimization of the azo dye Procion Red H-EXL degradation by Fenton's reagent using experimental design.

    PubMed

    Rodrigues, Carmen S D; Madeira, Luis M; Boaventura, Rui A R

    2009-05-30

    Chemical oxidation by Fenton's reagent of a reactive azo dye (Procion Deep Red H-EXL gran) solution was optimized using the experimental design methodology. The variables considered for the oxidative process optimization were the temperature and the initial concentrations of hydrogen peroxide and ferrous ion, for a dye concentration of 100 mg/L at pH 3.5, the pH being fixed after some preliminary runs. Experiments were carried out according to a central composite design. The methodology employed allowed the statistically significant effects and interactions of the considered variables on the process response, i.e. the total organic carbon (TOC) reduction after 120 min of reaction, to be evaluated and identified. A quadratic model with good fit to the experimental data over the domain analysed was developed and used to plot the response surface curves and to perform the process optimization. It was concluded that temperature and ferrous ion concentration are the only variables that affect TOC removal; owing to their cross-interaction, the effect of each variable depends on the value of the other, and can thus affect the process response positively or negatively.
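The quadratic (second-order) response surface model referred to here is an ordinary least-squares fit once the factor settings are expanded into intercept, linear, interaction and squared columns. A minimal sketch on synthetic data (the helper and the test surface are made up; only the model form matches the abstract):

```python
import numpy as np

def quadratic_design_matrix(X):
    """Expand factor settings X (n runs x k factors) into the full
    second-order model: intercept, linear, interaction, square terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Synthetic check: recover a known quadratic surface exactly
# (noiseless response, so least squares reproduces the coefficients).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = 3 + 1.5 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + X[:, 0] ** 2
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(np.allclose(beta, [3, 1.5, -2, 0.5, 1, 0]))  # True
```

In the paper's setting the three factors would be temperature, [H2O2] and [Fe2+], and the fitted cross-term is what produces the sign-changing interaction effect described above.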

  11. Experimental design method to the weld bead geometry optimization for hybrid laser-MAG welding in a narrow chamfer configuration

    NASA Astrophysics Data System (ADS)

    Bidi, Lyes; Le Masson, Philippe; Cicala, Eugen; Primault, Christophe

    2017-03-01

    The work presented in this paper relates to the optimization of the operating parameters of hybrid laser-MAG welding by the experimental design approach. This process combines a laser beam with a MAG torch to increase the productivity and reliability of the chamfer-filling operation, performed in several passes over the entire height of the chamfer. Each pass deposits 2 mm of metal and must provide sufficient lateral penetration, about 0.2 mm. The experimental design method was used, on the one hand, to estimate the effects of the operating parameters and their interactions on the lateral penetration and, on the other hand, to provide a mathematical model relating the welding parameters to the lateral-penetration objective function. Furthermore, this study sought the set of optimum parameters satisfying a constraint on weld bead quality: simultaneously obtaining a total lateral penetration greater than 0.4 mm and an H/L ratio less than 0.6. To this end, a multi-objective optimization of the weld bead (for both response functions) was carried out using two categories of two-level experimental designs: a complete experimental design (CED) with 32 tests and a fractional experimental design (FED) with 8 tests. A comparative analysis of the two types of designs identified the advantages and disadvantages of each.
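The 32-test complete design and 8-test fractional design correspond to a full 2^5 factorial and a 2^(5-2) fraction of it. The sketch below is a hedged illustration: the generators D = A·B and E = A·C are assumptions for the example, since the paper does not state which defining relations it used.

```python
from itertools import product

def full_factorial(k):
    """All 2^k two-level combinations, coded -1/+1."""
    return [list(run) for run in product((-1, 1), repeat=k)]

def fractional_factorial_2_5m2():
    """A 2^(5-2) fraction of the 2^5 design: run base factors
    A, B, C as a full 2^3 and set D = A*B, E = A*C (hypothetical
    generators chosen for illustration)."""
    return [[a, b, c, a * b, a * c] for a, b, c in product((-1, 1), repeat=3)]

print(len(full_factorial(5)))             # 32
print(len(fractional_factorial_2_5m2()))  # 8
```

Every fractional run is one of the 32 full-factorial runs; the price of the 4x saving is that some interactions become aliased with main effects, which is exactly the trade-off the comparative analysis in the paper examines.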

  12. Stepwise optimization approach for improving LC-MS/MS analysis of zwitterionic antiepileptic drugs with implementation of experimental design.

    PubMed

    Kostić, Nađa; Dotsikas, Yannis; Malenović, Anđelija; Jančić Stojanović, Biljana; Rakić, Tijana; Ivanović, Darko; Medenica, Mirjana

    2013-07-01

    In this article, a step-by-step optimization procedure for improving analyte response with implementation of experimental design is described. Zwitterionic antiepileptics, namely vigabatrin, pregabalin and gabapentin, were chosen as model compounds to undergo chloroformate-mediated derivatization followed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) analysis. Application of a planned stepwise optimization procedure allowed responses of analytes, expressed as areas and signal-to-noise ratios, to be improved, enabling achievement of lower limit of detection values. Results from the current study demonstrate that optimization of parameters such as scan time, geometry of ion source, sheath and auxiliary gas pressure, capillary temperature, collision pressure and mobile phase composition can have a positive impact on sensitivity of LC-MS/MS methods. Optimization of LC and MS parameters led to a total increment of 53.9%, 83.3% and 95.7% in areas of derivatized vigabatrin, pregabalin and gabapentin, respectively, while for signal-to-noise values, an improvement of 140.0%, 93.6% and 124.0% was achieved, compared to autotune settings. After defining the final optimal conditions, a time-segmented method was validated for the determination of mentioned drugs in plasma. The method proved to be accurate and precise with excellent linearity for the tested concentration range (40.0 ng ml⁻¹ to 10.0 × 10³ ng ml⁻¹).

  13. Molecular identification of potential denitrifying bacteria and use of D-optimal mixture experimental design for the optimization of denitrification process.

    PubMed

    Ben Taheur, Fadia; Fdhila, Kais; Elabed, Hamouda; Bouguerra, Amel; Kouidhi, Bochra; Bakhrouf, Amina; Chaieb, Kamel

    2016-04-01

    Three bacterial strains (TE1, TD3 and FB2) were isolated from date palm (degla), pistachio and barley. The presence of nitrate reductase (narG) and nitrite reductase (nirS and nirK) genes in the selected strains was detected by PCR, and molecular identification based on 16S rDNA sequencing was applied to identify the positive strains. In addition, a D-optimal mixture experimental design was used to determine the optimal formulation of the bacteria for the denitrification process. The strains harboring denitrification genes were identified as TE1, Agrococcus sp LN828197; TD3, Cronobacter sakazakii LN828198; and FB2, Pediococcus pentosaceus LN828199. PCR results revealed that all strains carried the nirS gene; however, only C. sakazakii LN828198 and Agrococcus sp LN828197 harbored the nirK and narG genes, respectively. Moreover, the studied bacteria were able to form biofilm on abiotic surfaces to different degrees. Process optimization showed that the most significant reduction of nitrate was 100%, with 14.98% COD consumption and 5.57 mg/l nitrite accumulation. The response values were optimized, and the most favorable combination was 78.79% C. sakazakii LN828198, 21.21% P. pentosaceus LN828199 and 0% Agrococcus sp LN828197 (curve values).
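A D-optimal mixture design like the one used here selects runs that maximize the determinant of the information matrix X'X. The toy sketch below is purely illustrative (a three-component candidate set and brute-force search; production software uses exchange algorithms): scores are rounded so that ties resolve to the first subset found.

```python
import numpy as np
from itertools import combinations

def d_score(X):
    """D-optimality criterion: determinant of the information matrix X'X."""
    return np.linalg.det(X.T @ X)

# Hypothetical candidate mixtures of three strains (proportions sum to 1):
# 3 vertices, 3 binary blends, and the centroid.
candidates = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
    [1 / 3, 1 / 3, 1 / 3],
])

# Exhaustive search for the 4-run design maximizing det(X'X) under a
# first-degree Scheffe mixture model (model columns = the proportions).
best = max(combinations(range(len(candidates)), 4),
           key=lambda idx: round(d_score(candidates[list(idx)]), 9))
print(sorted(best))  # [0, 1, 2, 3]: the three vertices plus a binary blend
```

The three pure blends plus one binary blend win because the vertices contribute an identity information matrix and any fourth point v adds 1 + |v|^2 to the determinant, which a binary blend maximizes among the remaining candidates.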

  14. Application of statistical experimental design for optimization of silver nanoparticles biosynthesis by a nanofactory Streptomyces viridochromogenes.

    PubMed

    El-Naggar, Noura El-Ahmady; Abdelwahed, Nayera A M

    2014-01-01

    A central composite design was chosen to determine the combined effects of four process variables (AgNO3 concentration, incubation period, pH and inoculum size) on the extracellular biosynthesis of silver nanoparticles (AgNPs) by Streptomyces viridochromogenes. Statistical analysis of the results showed that incubation period, initial pH and inoculum size each had a significant individual effect (P<0.05) on the biosynthesis of silver nanoparticles. The maximum biosynthesis was achieved at a concentration of 0.5% (v/v) of 1 mM AgNO3, an incubation period of 96 h, an initial pH of 9 and an inoculum size of 2% (v/v). After optimization, the biosynthesis of silver nanoparticles improved approximately 5-fold compared with the unoptimized conditions. Silver nanoparticle generation by reduction of aqueous Ag+ ions with the culture supernatant of S. viridochromogenes was quite fast: silver nanoparticles formed immediately upon addition of AgNO3 solution (1 mM) to the cell-free supernatant. Initial characterization of the silver nanoparticles was performed by visual observation of the color change from yellow to intense brown. UV-visible spectrophotometry of the surface plasmon resonance showed a single absorption peak at 400 nm, confirming the presence of silver nanoparticles. Fourier transform infrared spectroscopy provided evidence for proteins as possible reducing and capping agents stabilizing the nanoparticles. Transmission electron microscopy revealed the extracellular formation of spherical silver nanoparticles in the size range of 2.15-7.27 nm. Compared with the cell-free supernatant, the biosynthesized AgNPs showed superior antimicrobial activity against Gram-negative and Gram-positive bacterial strains and Candida albicans.
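The central composite design used for the four variables has a standard three-part structure: a 2^k factorial core, 2k axial ("star") runs, and replicated center runs. A hypothetical sketch in coded units (the helper, alpha value and center-run count are assumptions, not taken from the paper):

```python
from itertools import product

def central_composite(k, alpha=2.0, n_center=4):
    """Central composite design in coded units: a 2^k factorial core,
    2k axial (star) runs at +/-alpha on each axis, and n_center
    replicated center runs."""
    runs = [list(r) for r in product((-1, 1), repeat=k)]
    for i in range(k):
        for a in (-alpha, alpha):
            run = [0.0] * k
            run[i] = a
            runs.append(run)
    runs += [[0.0] * k for _ in range(n_center)]
    return runs

design = central_composite(4)
print(len(design))  # 16 factorial + 8 axial + 4 center = 28
```

The axial runs let squared terms be estimated, and the repeated center runs give a pure-error estimate for lack-of-fit testing.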

  15. Optimal experimental design for improving the estimation of growth parameters of Lactobacillus viridescens from data under non-isothermal conditions.

    PubMed

    Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges

    2017-01-02

    In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data and secondary models are then fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach reduces the experimental workload and costs and improves model identifiability, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to compare, for each approach, the number of experimental data points, the time needed, and the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental growth data points). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental growth data points), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square root secondary model were used to describe the microbial growth, with the parameters b and Tmin (±95% confidence interval) estimated from the experimental data. The parameters obtained from the TSM approach were b=0.0290 (±0.0020) [1/(h^0.5 °C)] and Tmin=-1.33 (±1.26) [°C], with R²=0.986 and RMSE=0.581, and the parameters obtained with the OED approach were b=0.0316 (±0.0013) [1/(h^0.5 °C)] and Tmin=-0.24 (±0.55) [°C], with R²=0.990 and RMSE=0.436. The OED approach thus yielded narrower confidence intervals from far fewer data points and much less experimental time.
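The square-root secondary model named above becomes linear after a square-root transform, so b and Tmin can be recovered by simple regression. The sketch below fits noiseless synthetic data generated from the paper's OED estimates (b = 0.0316, Tmin = -0.24); the temperature grid is a made-up illustration, not the paper's data.

```python
import numpy as np

# Ratkowsky square-root secondary model: sqrt(mu_max) = b * (T - Tmin).
b_true, Tmin_true = 0.0316, -0.24
T = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 30.0])
mu = (b_true * (T - Tmin_true)) ** 2   # synthetic growth rates

# sqrt(mu) is linear in T: slope = b, intercept = -b * Tmin.
slope, intercept = np.polyfit(T, np.sqrt(mu), 1)
b_hat, Tmin_hat = slope, -intercept / slope
print(round(b_hat, 4), round(Tmin_hat, 2))  # 0.0316 -0.24
```

In the TSM approach this fit is the second step, performed on b-values from isothermal runs; in the OED approach the same parameters enter the simultaneous fit against non-isothermal profiles.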

  16. Experimental design to optimize an Haemophilus influenzae type b conjugate vaccine made with hydrazide-derivatized tetanus toxoid.

    PubMed

    Laferriere, Craig; Ravenscroft, Neil; Wilson, Seanette; Combrink, Jill; Gordon, Lizelle; Petre, Jean

    2011-10-01

    The introduction of type b Haemophilus influenzae conjugate vaccines into routine vaccination schedules has significantly reduced the burden of this disease; however, widespread use in developing countries is constrained by vaccine costs, and there is a need for a simple and high-yielding manufacturing process. The vaccine is composed of purified capsular polysaccharide conjugated to an immunogenic carrier protein. To improve the yield and rate of the reductive amination conjugation reaction used to make this vaccine, some of the carboxyl groups of the carrier protein, tetanus toxoid, were modified to hydrazides, which are more reactive than the ε-amine of lysine. Other reaction parameters, including the ratio of the reactants, the size of the polysaccharide, the temperature and the salt concentration, were also investigated. Experimental design was used to minimize the number of experiments required to optimize all these parameters to obtain conjugate in high yield with target characteristics. It was found that increasing the reactant ratio and decreasing the size of the polysaccharide increased the polysaccharide:protein mass ratio in the product, while temperature and salt concentration did not improve this ratio. These results are consistent with a diffusion-controlled rate-limiting step in the conjugation reaction. Excessive modification of tetanus toxoid with hydrazide was correlated with reduced yield and lower free polysaccharide; this was attributed to a greater tendency for precipitation, possibly due to changes in the isoelectric point. Experimental design and multiple regression helped identify key parameters to control and thereby optimize this conjugation reaction.

  17. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    PubMed

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function.
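The desirability function reviewed here maps each response onto a [0, 1] scale and combines the individual scores by a geometric mean. A minimal sketch of the Derringer-style one-sided transform (the thresholds and response values below are invented for illustration):

```python
def desirability_larger(y, lo, hi, s=1.0):
    """One-sided 'larger-is-better' desirability: 0 at or below lo,
    1 at or above hi, and a power ramp (exponent s) in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** s

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any d = 0
    drives the overall score to 0, vetoing that candidate."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Two hypothetical responses: recovery (%) and peak resolution.
d1 = desirability_larger(85.0, 70.0, 90.0)   # 0.75
d2 = desirability_larger(2.0, 1.5, 2.5)      # 0.5
print(round(overall_desirability([d1, d2]), 4))  # 0.6124
```

The geometric mean is the key design choice: unlike an arithmetic mean, it forbids trading a completely unacceptable response against excellent ones.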

  18. Optimization of low-cost medium for very high gravity ethanol fermentations by Saccharomyces cerevisiae using statistical experimental designs.

    PubMed

    Pereira, Francisco B; Guimarães, Pedro M R; Teixeira, José A; Domingues, Lucília

    2010-10-01

    Statistical experimental designs were used to develop a medium based on corn steep liquor (CSL) and other low-cost nutrient sources for high-performance very high gravity (VHG) ethanol fermentations by Saccharomyces cerevisiae. The critical nutrients were initially selected according to a Plackett-Burman design and the optimized medium composition (44.3 g/L CSL; 2.3 g/L urea; 3.8 g/L MgSO₄·7H₂O; 0.03 g/L CuSO₄·5H₂O) for maximum ethanol production by the laboratory strain CEN.PK 113-7D was obtained by response surface methodology, based on a three-level four-factor Box-Behnken design. The optimization process resulted in significantly enhanced final ethanol titre, productivity and yeast viability in batch VHG fermentations (up to 330 g/L glucose) with CEN.PK 113-7D and with industrial strain PE-2, which is used for bio-ethanol production in Brazil. Strain PE-2 was able to produce 18.6 ± 0.5% (v/v) ethanol with a corresponding productivity of 2.4 ± 0.1 g/L/h. This study provides valuable insights into cost-effective nutritional supplementation of industrial fuel ethanol VHG fermentations.

  19. Statistical experimental design optimization of rhamsan gum production by Sphingomonas sp. CGMCC 6833.

    PubMed

    Xu, Xiao-Ying; Dong, Shu-Hao; Li, Sha; Chen, Xiao-Ye; Wu, Ding; Xu, Hong

    2015-04-01

    Rhamsan gum is a water-soluble exopolysaccharide produced by species of Sphingomonas bacteria. The optimal fermentation medium for rhamsan gum production by Sphingomonas sp. CGMCC 6833 was explored. Single-factor experiments indicated that glucose, soybean meal, K₂HPO₄ and MnSO₄ compose the optimal medium, with an initial pH of 7.5. To find the ideal culture conditions for rhamsan gum production in shake flask culture, response surface methodology was employed, from which the following optimal levels were derived: 5.38 g/L soybean meal, 5.71 g/L K₂HPO₄ and 0.32 g/L MnSO₄. Under the optimized fermentation conditions the rhamsan gum yield reached 19.58 ± 1.23 g/L, 42.09% higher than with the initial medium (13.78 ± 1.38 g/L). Optimizing the fermentation medium thus substantially enhanced rhamsan gum production.

  20. Limit of detection of ¹⁵N by gas chromatography-atomic emission detection: Optimization using an experimental design

    SciTech Connect

    Deruaz, D.; Bannier, A.; Pionchon, C.

    1995-08-01

    This paper deals with the optimal conditions for the detection of ¹⁵N, determined using a four-factor experimental design from [2-¹³C,1,3-¹⁵N]caffeine measured with an atomic emission detector (AED) coupled to gas chromatography (GC). Owing to its photodiode array, the AED can simultaneously detect several elements using their specific emission lines within a wavelength range of 50 nm; the emissions of ¹⁵N and ¹⁴N are thus detected simultaneously at 420.17 nm and 421.46 nm, respectively. Four independent experimental factors were tested: (1) helium flow rate (plasma gas); (2) methane pressure (reactant gas); (3) oxygen pressure; (4) hydrogen pressure. All four gases had a significant influence on the analytical response of ¹⁵N. The linearity of the detection was determined using ¹⁵N amounts ranging from 1.52 pg to 19 ng under the optimal conditions obtained from the experimental design. The limit of detection was studied using different methods: 1.9 pg/s according to the IUPAC (International Union of Pure and Applied Chemistry) method, 2.3 pg/s by the method of Quimby and Sullivan, and 29 pg/s by that of Oppenheimer. For each determination an internal standard, 1-isobutyl-3,7-dimethylxanthine, was used. The results clearly demonstrate that GC-AED is sensitive and selective enough to detect and measure ¹⁵N-labelled molecules after gas chromatographic separation.
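The IUPAC detection-limit convention cited above is a simple calculation: k times the standard deviation of blank measurements divided by the calibration slope, with k = 3. A sketch with invented blank readings and slope (purely illustrative; not the paper's data):

```python
import numpy as np

def lod_iupac(blank_signals, slope, k=3.0):
    """IUPAC-style limit of detection: k times the sample standard
    deviation of blank measurements divided by the calibration
    slope (k = 3 by convention)."""
    s_blank = np.std(blank_signals, ddof=1)
    return k * s_blank / slope

# Hypothetical blank readings and calibration slope (signal per pg/s).
blanks = [10.2, 9.8, 10.1, 9.9, 10.0]
slope = 0.25
print(round(lod_iupac(blanks, slope), 2))  # 1.9
```

The alternative methods mentioned (Quimby-Sullivan, Oppenheimer) differ mainly in how the blank noise term is estimated, which is why the three limits quoted in the abstract disagree.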

  1. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need to validate these methods, as a way of effecting greater and accelerated acceptance of formal optimization by practicing design engineers, is described. The range of validation strategies is defined, including comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and, finally, experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described, including: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum-weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum-weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  2. Optimization of Magnetosome Production and Growth by the Magnetotactic Vibrio Magnetovibrio blakemorei Strain MV-1 through a Statistics-Based Experimental Design

    PubMed Central

    Silva, Karen T.; Leão, Pedro E.; Abreu, Fernanda; López, Jimmy A.; Gutarra, Melissa L.; Farina, Marcos; Bazylinski, Dennis A.; Freire, Denise M. G.

    2013-01-01

    The growth and magnetosome production of the marine magnetotactic vibrio Magnetovibrio blakemorei strain MV-1 were optimized through a statistics-based experimental factorial design. In the optimized growth medium, maximum magnetite yields of 64.3 mg/liter in batch cultures and 26 mg/liter in a bioreactor were obtained. PMID:23396329

  3. Ultrasonic assisted removal of sunset yellow from aqueous solution by zinc hydroxide nanoparticle loaded activated carbon: Optimized experimental design.

    PubMed

    Roosta, M; Ghaedi, M; Sahraei, R; Purkait, M K

    2015-01-01

    The efficiency of zinc hydroxide nanoparticles loaded on activated carbon (Zn(OH)2-NP-AC) in the removal of sunset yellow (SY) from aqueous solutions by ultrasonic-assisted adsorption was investigated. The nanomaterial was characterized using techniques such as SEM, XRD and UV-vis spectrophotometry. A central composite design (CCD) was used for the optimization of the significant factors using response surface methodology (RSM). Under the best conditions (5.2 min of sonication, pH 3, 0.023 g of adsorbent and 30 mg L⁻¹ of SY), the Langmuir model fitted the experimental equilibrium data well. The small amount of adsorbent proposed (0.023 g) achieves successful removal of SY (>97%) in a short time (about 5 min) with high adsorption capacity (83-114 mg g⁻¹).

  4. An experimental design approach to optimize an amperometric immunoassay on a screen printed electrode for Clostridium tetani antibody determination.

    PubMed

    Patris, Stéphanie; Vandeput, Marie; Kenfack, Gersonie Momo; Mertens, Dominique; Dejaegher, Bieke; Kauffmann, Jean-Michel

    2016-03-15

    An immunoassay for the determination of anti-tetani antibodies has been developed using a screen printed electrode (SPE) as the solid support for toxoid (antigen) immobilization. The assay was performed in guinea pig serum; the immunoreaction and the subsequent amperometric detection occurred directly on the SPE surface. The assay consisted of spiking the anti-tetani sample directly onto the toxoid-modified SPE and then depositing a second antibody, an HRP-labeled anti-immunoglobulin G, onto the biosensor. Subsequent amperometric detection was realized by spiking 10 µL of a hydroquinone (HQ) solution into 40 µL of buffer solution containing hydrogen peroxide. An experimental design approach was implemented for the optimization of the immunoassay: the variables of interest, such as bovine serum albumin (BSA) concentration, incubation times and labeled-antibody dilution, were optimized with the aid of response surface methodology using a circumscribed central composite design (CCCD). Two factors exhibited the greatest impact on the response, namely the anti-tetani incubation time and the dilution factor of the labeled antibody: to maximize the response, the dilution factor should be small and the anti-tetani antibody incubation time long. The BSA concentration and the HRP-anti-IgG incubation time had very limited influence. Under the optimized conditions, the immunoassay had a limit of detection of 0.011 IU/mL and a limit of quantification of 0.012 IU/mL, below the protective human antibody limit of 0.06 IU/mL.

  5. Heterogeneity of the gut microbiome in mice: guidelines for optimizing experimental design

    PubMed Central

    Laukens, Debby; Brinkman, Brigitta M.; Raes, Jeroen; De Vos, Martine; Vandenabeele, Peter

    2015-01-01

    Targeted manipulation of the gut flora is increasingly being recognized as a means to improve human health. Yet, the temporal dynamics and intra- and interindividual heterogeneity of the microbiome represent experimental limitations, especially in human cross-sectional studies. Therefore, rodent models represent an invaluable tool to study the host–microbiota interface. Progress in technical and computational tools to investigate the composition and function of the microbiome has opened a new era of research and we gradually begin to understand the parameters that influence variation of host-associated microbial communities. To isolate true effects from confounding factors, it is essential to include such parameters in model intervention studies. Also, explicit journal instructions to include essential information on animal experiments are mandatory. The purpose of this review is to summarize the factors that influence microbiota composition in mice and to provide guidelines to improve the reproducibility of animal experiments. PMID:26323480

  6. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  7. Optimization of chitosan nanoparticles for colon tumors using experimental design methodology.

    PubMed

    Jain, Anekant; Jain, Sanjay K

    2016-12-01

    Purpose: Colon-specific drug delivery systems (CDDS) can improve the bioavailability of drugs administered by the oral route. A novel oral formulation of ligand-coupled chitosan nanoparticles bearing 5-Fluorouracil (5FU), encapsulated in enteric-coated pellets, was investigated for CDDS. Method: The effects of polymer concentration, drug concentration, stirring time and stirring speed on the encapsulation efficiency and size of the nanoparticles were evaluated, and the optimum formulation was obtained by response surface methodology. Using the experimental data, analysis of variance was carried out to develop linear empirical models, and polynomial models were then developed for parametric analysis. To target the nanoparticles to the hyaluronic acid (HA) receptors present on colon tumors, HA-coupled nanoparticles were tested for their efficacy in vivo; these were encapsulated in pellets and enteric coated to release the drug in the colon. Results: Drug release studies under conditions mimicking stomach-to-colon transit showed that the drug was protected from release in the physiological environment of the stomach and small intestine. The relatively high local drug concentration with prolonged exposure time offers the potential to enhance anti-tumor efficacy with low systemic toxicity in the treatment of colon cancer. Conclusions: HA-coupled nanoparticles can be considered potential candidates for targeted drug delivery and are anticipated to be promising in the treatment of colorectal cancer.

  8. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 1. Screening of optimal extraction conditions using a D-optimal experimental design.

    PubMed

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A D-optimal design was constructed to optimize allergen extraction efficiency simultaneously from roasted, non-roasted, defatted and non-defatted almond, hazelnut, peanut and pistachio flours, using three non-denaturing aqueous (phosphate, borate and carbonate) buffers at various conditions of ionic strength, buffer-to-protein ratio, extraction temperature and extraction duration. Statistical analysis showed that roasting and omission of defatting significantly lowered protein recovery for all nuts. Increasing the temperature and the buffer-to-protein ratio during extraction significantly increased protein recovery, whereas increasing the extraction time had no significant impact. The impact of the three buffers on protein recovery varied significantly among the nuts. Depending on the extraction conditions, protein recovery varied from 19% to 95% for peanut, 31% to 73% for almond, 17% to 64% for pistachio, and 27% to 88% for hazelnut. The buffer type and ionic strength modulated the protein and immunoglobulin E binding profiles of the extracts, and high protein recovery did not always correlate with high immunoreactivity.

  9. Optimization of Wear Behavior of Magnesium Alloy AZ91 Hybrid Composites Using Taguchi Experimental Design

    NASA Astrophysics Data System (ADS)

    Girish, B. M.; Satish, B. M.; Sarapure, Sadanand; Basawaraj

    2016-06-01

    In the present paper, the statistical investigation of the wear behavior of magnesium alloy (AZ91) hybrid metal matrix composites using the Taguchi technique is reported. The composites were reinforced with SiC and graphite particles of average size 37 μm, and the specimens were processed by the stir casting route. Dry sliding wear of the hybrid composites was tested on a pin-on-disk tribometer under dry conditions at different normal loads (20, 40, and 60 N), sliding speeds (1.047, 1.57, and 2.09 m/s), and compositions (1, 2, and 3 wt pct each of SiC and graphite). The design-of-experiments approach using the Taguchi technique was employed to statistically analyze the wear behavior of the hybrid composites; the signal-to-noise ratio and analysis of variance were used to investigate the influence of the parameters on the wear rate.
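The Taguchi signal-to-noise ratio mentioned above has a closed form; for a response like wear rate, where smaller is better, it is -10·log10 of the mean squared response. A minimal sketch (the wear values are invented for illustration):

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi signal-to-noise ratio for a 'smaller is better'
    response such as wear rate: -10 * log10(mean of y^2) over the
    replicate measurements of one factor setting."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical wear rates (mm^3/m) from replicate runs of one setting.
print(round(sn_smaller_is_better([0.012, 0.010, 0.011]), 2))  # 39.15
```

The setting with the highest S/N ratio is preferred, since maximizing -10·log10(mean y²) simultaneously penalizes both a high mean wear rate and high run-to-run scatter.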

  10. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...

  11. Development and Optimization of HPLC Analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in Pharmaceutical Dosage Forms Using Experimental Design.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2016-11-01

    A new simple, sensitive, rapid and accurate gradient reversed-phase high-performance liquid chromatography method with photodiode array detection (RP-HPLC-DAD) was developed and validated for simultaneous analysis of Metronidazole (MNZ), Spiramycin (SPY), Diloxanide furoate (DIX) and Cliquinol (CLQ) using statistical experimental design. Initially, a resolution V fractional factorial design was used to screen five independent factors: column temperature (°C), pH, phosphate buffer concentration (mM), flow rate (ml/min) and the initial fraction of mobile phase B (%). Analysis of variance identified pH, flow rate and the initial fraction of mobile phase B as significant. The optimum separation conditions determined with the aid of a central composite design were: (1) initial mobile phase composition: phosphate buffer/methanol (50/50, v/v), (2) phosphate buffer concentration (50 mM), (3) pH (4.72), (4) column temperature 30°C and (5) mobile phase flow rate (0.8 ml min(-1)). Excellent linearity was observed for all of the standard calibration curves, and the correlation coefficients were above 0.9999. Limits of detection for all of the analyzed compounds ranged between 0.02 and 0.11 μg ml(-1); limits of quantitation ranged between 0.06 and 0.33 μg ml(-1). The proposed method showed good prediction ability. The optimized method was validated according to ICH guidelines. Three commercially available tablets were analyzed, showing good % recovery and %RSD.
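
    A resolution V fractional factorial for five two-level factors, like the screening design described above, can be generated from a defining relation; a minimal sketch, with the five coded columns standing in for column temperature, pH, buffer concentration, flow rate, and initial %B:

```python
from itertools import product

# 16-run 2^(5-1) fractional factorial with generator E = A*B*C*D
# (defining relation I = ABCDE, which gives resolution V).
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d  # the fifth factor is aliased only with the 4-way interaction
    runs.append((a, b, c, d, e))
```

    Because the shortest word in the defining relation has length five, main effects are aliased only with four-factor interactions, which is what makes the design suitable for screening.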

  12. Optimization of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design

    NASA Astrophysics Data System (ADS)

    Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.

    2016-04-01

    Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Studying the impact of several factors simultaneously is a time-consuming and costly process. Experimental design methods can minimize the number of experiments and find locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode, RDE, was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, the L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. Scanning electron microscopy and energy dispersive x-ray spectroscopy verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. Taguchi analysis identified the mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate at an electrode rotation rate of 1000 rpm as the optimum conditions.

  13. An experimental design approach for hydrothermal synthesis of NaYF4: Yb3+, Tm3+ upconversion microcrystal: UV emission optimization

    NASA Astrophysics Data System (ADS)

    Kaviani Darani, Masoume; Bastani, Saeed; Ghahari, Mehdi; Kardar, Pooneh

    2015-11-01

    Ultraviolet (UV) emissions of hydrothermally synthesized NaYF4: Yb3+, Tm3+ upconversion crystals were optimized using response surface methodology experimental designs. Each design comprised 9 runs in which two factors, (1) Tm3+ ion concentration and (2) pH value, were investigated for 3 different ligands. Taking the UV upconversion emissions as responses, their intensities were maximized separately. XRD and SEM were used to study crystal structure and morphology, and fluorescence spectroscopy was used to characterize the luminescence properties. In the photoluminescence spectra, emissions centered at 347, 364, 452, 478, 648 and 803 nm were observed. The results show that increasing each design factor up to an optimum value increased the emission intensity, which then declined. Each design yielded a suggested optimum for maximizing the UV emission.

  14. Optimized design for PIGMI

    SciTech Connect

    Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.

    1980-01-01

    PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 μs at a 60-Hz repetition rate, resulting in a 100-μA average beam current. The total cost of the accelerator is estimated to be approximately $10 million.

  15. Multidisciplinary design and optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. This paper outlines techniques for computing these influences as system design derivatives useful to both judgmental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering optimizations and incorporate their design tools.

  16. Optimal Flow Control Design

    NASA Technical Reports Server (NTRS)

    Allan, Brian; Owens, Lewis

    2010-01-01

    In support of the Blended-Wing-Body aircraft concept, a new flow control hybrid vane/jet design has been developed for use in a boundary-layer-ingesting (BLI) offset inlet in transonic flows. This inlet flow control is designed to minimize the engine fan-face distortion levels and the first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. This concept represents a potentially enabling technology for quieter and more environmentally friendly transport aircraft. An optimum vane design was found by minimizing the engine fan-face distortion, DC60, and the first five Fourier harmonic half amplitudes, while maximizing the total pressure recovery. The optimal vane design was then used in a BLI inlet wind tunnel experiment at NASA Langley's 0.3-meter transonic cryogenic tunnel. The experimental results demonstrated an 80-percent decrease in DPCPavg, the average circumferential distortion level, at an inlet mass flow rate corresponding to the middle of the operational range at the cruise condition. Even though the vanes were designed at a single inlet mass flow rate, they performed very well over the entire inlet mass flow range tested in the wind tunnel experiment with the addition of a small amount of jet flow control. While the circumferential distortion was decreased, the radial distortion on the outer rings at the aerodynamic interface plane (AIP) increased. This was a result of the large boundary layer being redistributed from the bottom of the AIP in the baseline case to the outer edges of the AIP when using the vortex generator (VG) vane flow control. The hybrid approach leverages the strengths of vane and jet flow control devices, increasing inlet performance over a broader operational range with a significant reduction in mass flow requirements.

  17. Improvement of production of citric acid from oil palm empty fruit bunches: optimization of media by statistical experimental designs.

    PubMed

    Bari, Md Niamul; Alam, Md Zahangir; Muyibi, Suleyman A; Jamal, Parveen; Abdullah-Al-Mamun

    2009-06-01

    A sequential optimization based on statistical design and one-factor-at-a-time (OFAT) method was employed to optimize the media constituents for the improvement of citric acid production from oil palm empty fruit bunches (EFB) through solid state bioconversion using Aspergillus niger IBO-103MNB. The results obtained from the Plackett-Burman design indicated that the co-substrate (sucrose), stimulator (methanol) and minerals (Zn, Cu, Mn and Mg) were found to be the major factors for further optimization. Based on the OFAT method, the selected medium constituents and inoculum concentration were optimized by the central composite design (CCD) under the response surface methodology (RSM). The statistical analysis showed that the optimum media containing 6.4% (w/w) of sucrose, 9% (v/w) of minerals and 15.5% (v/w) of inoculum gave the maximum production of citric acid (337.94 g/kg of dry EFB). The analysis showed that sucrose (p<0.0011) and mineral solution (p<0.0061) were more significant compared to inoculum concentration (p<0.0127) for the citric acid production.
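
    The 12-run Plackett-Burman design used for screening can be constructed by cyclically shifting a standard generator row and appending an all-minus run; a sketch (the mapping of the 11 columns to media components is left generic):

```python
import numpy as np

# Standard first row for the N = 12 Plackett-Burman design.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

rows = [np.roll(gen, k) for k in range(11)]  # 11 cyclic shifts of the generator
rows.append(-np.ones(11, dtype=int))         # final run with all factors low
X = np.array(rows)                           # 12 runs x 11 two-level columns
```

    The construction guarantees that every column is balanced and every pair of columns is orthogonal, which is what lets up to 11 main effects be screened in only 12 runs.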

  18. Optimization of digital designs

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)

    2009-01-01

    An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.

  19. Optimal control model predictions of system performance and attention allocation and their experimental validation in a display design study

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Govindaraj, T.

    1980-01-01

    The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.

  20. Optimal experimental design for the detection of light atoms from high-resolution scanning transmission electron microscopy images

    SciTech Connect

    Gonnissen, J.; De Backer, A.; Martinez, G. T.; Van Aert, S.; Dekker, A. J. den; Rosenauer, A.; Sijbers, J.

    2014-08-11

    We report an innovative method to explore the optimal experimental settings for detecting light atoms from scanning transmission electron microscopy (STEM) images. Since light elements play a key role in many technologically important materials, such as lithium-battery devices or hydrogen storage applications, much effort has been made to optimize the STEM technique in order to detect light elements. Classical performance criteria, such as contrast or signal-to-noise ratio, are therefore often discussed with the aim of improving direct visual interpretability. However, when images are interpreted quantitatively, one needs an alternative criterion, which we derive based on statistical detection theory. Using realistic simulations of technologically important materials, we demonstrate the benefits of the proposed method and compare the results with existing approaches.

  1. "Real-time" disintegration analysis and D-optimal experimental design for the optimization of diclofenac sodium fast-dissolving films.

    PubMed

    El-Malah, Yasser; Nazzal, Sami

    2013-01-01

    The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a tertiary mixture of pullulan, polyvinylpyrrolidone and hypromellose. Disintegration studies were performed in real-time by probe spectroscopy to detect the onset of film disintegration. Tensile strength and elastic modulus of the films were measured by texture analysis. Disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium, based on the optimized composition, was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film was found to commence almost immediately, with 50% of the drug released within one minute. Tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.
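
    The idea behind a D-optimal design, choosing the runs that maximize the information determinant det(X'X), can be shown in miniature. This toy version uses a one-variable quadratic model on a grid rather than the paper's three-component mixture, and finds the exact 3-point optimum by brute force:

```python
import numpy as np
from itertools import combinations

grid = np.linspace(-1, 1, 21)              # candidate design points
f = lambda x: np.array([1.0, x, x * x])    # model terms: intercept, x, x^2

best_det, best_pts = -np.inf, None
for pts in combinations(grid, 3):          # every 3-point candidate design
    X = np.array([f(x) for x in pts])
    d = np.linalg.det(X.T @ X)             # D-criterion: information determinant
    if d > best_det:
        best_det, best_pts = d, pts
```

    For a quadratic model on [-1, 1] the search recovers the textbook answer, the endpoints plus the midpoint; real D-optimal software replaces the brute-force loop with exchange algorithms over a constrained (e.g. mixture) candidate set.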

  2. Greater enhancement of Bacillus subtilis spore yields in submerged cultures by optimization of medium composition through statistical experimental designs.

    PubMed

    Chen, Zhen-Min; Li, Qing; Liu, Hua-Mei; Yu, Na; Xie, Tian-Jian; Yang, Ming-Yuan; Shen, Ping; Chen, Xiang-Dong

    2010-02-01

    Bacillus subtilis spore preparations are promising probiotics and biocontrol agents, which can be used in plants, animals, and humans. The aim of this work was to optimize the nutritional conditions using a statistical approach for the production of B. subtilis (WHK-Z12) spores. Our preliminary experiments showed that corn starch, corn flour, and wheat bran were the best carbon sources. Using a Plackett-Burman design, corn steep liquor, soybean flour, and yeast extract were found to be the best nitrogen source ingredients for enhancing spore production and were studied for further optimization using a central composite design. The key medium components in our optimized medium were 16.18 g/l of corn steep liquor, 17.53 g/l of soybean flour, and 8.14 g/l of yeast extract. The improved medium produced spore yields as high as 1.52 ± 0.06 × 10(10) spores/ml under flask cultivation conditions, and 1.56 ± 0.07 × 10(10) spores/ml could be achieved in a 30-l fermenter after 40 h of cultivation. To the best of our knowledge, these results compare favorably to the documented spore yields produced by B. subtilis strains.

  3. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of the related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616
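
    The kind of quick optimality check such an equivalence theorem enables can be illustrated in the classical (non-copula) setting. By the Kiefer-Wolfowitz equivalence theorem, a design is D-optimal iff its standardized prediction variance d(x) never exceeds the number of model parameters p; for a quadratic model on [-1, 1] the equal-weight design on {-1, 0, 1} passes this check with p = 3:

```python
import numpy as np

f = lambda x: np.array([1.0, x, x * x])        # quadratic regression terms
support, weights = [-1.0, 0.0, 1.0], [1/3, 1/3, 1/3]

# Information matrix of the candidate design.
M = sum(w * np.outer(f(x), f(x)) for x, w in zip(support, weights))
Minv = np.linalg.inv(M)
d = lambda x: f(x) @ Minv @ f(x)               # standardized prediction variance

grid = np.linspace(-1, 1, 401)
dmax = max(d(x) for x in grid)                 # should not exceed p = 3
```

    The bound is attained exactly at the support points, which certifies D-optimality; the copula-model theorem in the abstract plays the analogous role for dependence parameters.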

  4. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI

    PubMed Central

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert

    2016-01-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. PMID:26804778

  5. Removal of cobalt ions from aqueous solutions by polymer assisted ultrafiltration using experimental design approach. part 1: optimization of complexation conditions.

    PubMed

    Cojocaru, Corneliu; Zakrzewska-Trznadel, Grazyna; Jaworska, Agnieszka

    2009-09-30

    The polymer assisted ultrafiltration process combines the selectivity of the chelating agent with the filtration ability of the membrane, acting in synergy. Such a hybrid process (complexation-ultrafiltration) is influenced by several factors, and the application of experimental design for process optimization using a reduced number of experiments is therefore of great importance. The present work deals with the investigation and optimization of cobalt ion removal from aqueous solutions by polymer enhanced ultrafiltration using an experimental design and response surface methodological approach. Polyethyleneimine was used as the chelating agent for cobalt complexation, and the ultrafiltration experiments were carried out in dead-end operating mode using a flat-sheet membrane made from regenerated cellulose. The aim of this part of the experiments was to find optimal conditions for cobalt complexation, i.e. the influence of the initial concentration of cobalt in the feed solution, the polymer/metal ratio and the pH of the feed solution on the rejection efficiency and binding capacity of the polymer. In this respect, a central composite design was used for planning the experiments and for construction of second-order response surface models applicable for predictions. Analysis of variance was employed for statistical validation of the regression models. The optimum conditions for a maximum rejection efficiency of 96.65% were determined experimentally by the gradient method and found to be: [Co(2+)](0)=65 mg/L, polymer/metal ratio=5.88 and pH 6.84.
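
    A central composite design like the one used here combines a two-level factorial with axial ("star") and center points; a generic sketch in coded units for the three factors studied (initial Co concentration, polymer/metal ratio, pH), where the rotatable alpha and the number of center replicates are standard textbook choices, not values from the paper:

```python
import numpy as np
from itertools import product

k = 3
alpha = (2 ** k) ** 0.25                                  # rotatability: (n_factorial)^(1/4)
factorial = list(product((-1.0, 1.0), repeat=k))          # 8 cube points
axial = [tuple(a if j == i else 0.0 for j in range(k))    # 6 star points on the axes
         for i in range(k) for a in (-alpha, alpha)]
center = [(0.0, 0.0, 0.0)] * 6                            # replicated center points
ccd = np.array(factorial + axial + center)                # 20 runs x 3 coded factors
```

    The axial points let pure quadratic terms be estimated, and the center replicates provide an estimate of pure error for the lack-of-fit test.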

  6. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  7. Optimization of photocatalytic degradation of biodiesel using TiO2/H2O2 by experimental design.

    PubMed

    Ambrosio, Elizangela; Lucca, Diego L; Garcia, Maicon H B; de Souza, Maísa T F; de S Freitas, Thábata K F; de Souza, Renata P; Visentainer, Jesuí V; Garcia, Juliana C

    2017-03-01

    This study reports on the investigation of the photodegradation of biodiesel (B100) in contact with water using TiO2/H2O2. The TiO2 was characterized by X-ray diffraction analysis (XRD), pH point of zero charge (pHpzc) and textural analysis. The results of the experiments were fitted to a quadratic polynomial model developed using response surface methodology (RSM) to optimize the parameters. Using three factors, three levels, and the Box-Behnken design of experiments technique, 15 sets of experiments were designed considering the effective ranges of the influential parameters. The responses of those parameters were optimized using computational techniques. After 24 h of irradiation under an Hg vapor lamp, removal of 22.0% of the oils and greases (OG) and a 33.54% reduction in the total fatty acid methyl ester (FAME) concentration were observed in the aqueous phase, as determined using gas chromatography coupled with flame ionization detection (GC/FID). The estimated time for the FAMEs to undergo base-catalyzed hydrolysis is at least 3 years (1095 days); after photocatalytic treatment using TiO2/H2O2, a 33.54% reduction in FAMEs was achieved in only 1 day.
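
    The 15-run, three-factor, three-level Box-Behnken design referred to above places runs at the midpoints of the cube edges plus center replicates; a sketch in coded units:

```python
import numpy as np
from itertools import combinations, product

k = 3
runs = []
for i, j in combinations(range(k), 2):        # each pair of factors in turn
    for a, b in product((-1, 1), repeat=2):   # 2x2 factorial on that pair
        point = [0] * k
        point[i], point[j] = a, b             # third factor held at its center level
        runs.append(point)
runs += [[0, 0, 0]] * 3                       # 3 center-point replicates
bbd = np.array(runs)                          # 15 runs x 3 coded factors
```

    Because no run sets all factors to their extremes simultaneously, the design avoids the cube corners, which is often convenient when extreme combinations are costly or physically risky.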

  8. Optimizing the coagulation process in a drinking water treatment plant -- comparison between traditional and statistical experimental design jar tests.

    PubMed

    Zainal-Abideen, M; Aris, A; Yusof, F; Abdul-Majid, Z; Selamat, A; Omar, S I

    2012-01-01

    In this study of coagulation operation, a comparison was made between the optimum jar test values for pH, coagulant and coagulant aid obtained from traditional methods (an adjusted one-factor-at-a-time (OFAT) method) and with central composite design (the standard design of response surface methodology (RSM)). Alum (coagulant) and polymer (coagulant aid) were used to treat a water source with very low pH and high aluminium concentration at Sri-Gading water treatment plant (WTP) Malaysia. The optimum conditions for these factors were chosen when the final turbidity, pH after coagulation and residual aluminium were within 0-5 NTU, 6.5-7.5 and 0-0.20 mg/l respectively. Traditional and RSM jar tests were conducted to find their respective optimum coagulation conditions. It was observed that the optimum dose for alum obtained through the traditional method was 12 mg/l, while the value for polymer was set constant at 0.020 mg/l. Through RSM optimization, the optimum dose for alum was 7 mg/l and for polymer was 0.004 mg/l. Optimum pH for the coagulation operation obtained through traditional methods and RSM was 7.6. The final turbidity, pH after coagulation and residual aluminium recorded were all within acceptable limits. The RSM method was demonstrated to be an appropriate approach for the optimization and was validated by a further test.
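
    Once RSM has fitted a second-order model y = b0 + b'x + x'Bx to the jar-test data, the candidate optimum is the stationary point x* = -B^(-1)b/2, where the gradient vanishes. A sketch with made-up coefficients (x1 and x2 standing in for coded alum dose and pH; these are not the plant's fitted values):

```python
import numpy as np

# Hypothetical fitted second-order model in two coded variables.
b0 = 2.0
b = np.array([-0.8, 0.4])            # linear coefficients
B = np.array([[1.0, 0.2],            # symmetric matrix of quadratic and
              [0.2, 0.5]])           # interaction coefficients

# Stationary point: gradient b + 2*B*x = 0  =>  x* = solve(-2B, b).
x_star = np.linalg.solve(-2.0 * B, b)
y_star = b0 + b @ x_star + x_star @ B @ x_star
```

    Whether x* is a minimum, maximum, or saddle is read off the eigenvalues of B; here B is positive definite, so the stationary point is a minimum (e.g. of residual turbidity).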

  9. Optimization of ultrasound assisted dispersive liquid-liquid microextraction of six antidepressants in human plasma using experimental design.

    PubMed

    Fernández, P; Taboada, V; Regenjo, M; Morales, L; Alvarez, I; Carro, A M; Lorenzo, R A

    2016-05-30

    A simple ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) method is presented for the simultaneous determination of six second-generation antidepressants in plasma by ultra performance liquid chromatography with photodiode array detection (UPLC-PDA). The main factors that potentially affect DLLME were optimized by a screening design followed by a response surface design and desirability functions. The optimal conditions were 2.5 mL of acetonitrile as dispersant solvent, 0.2 mL of chloroform as extractant solvent, 3 min of ultrasound stirring and an extraction pH of 9.8. Under optimized conditions, the UPLC-PDA method showed good separation of the antidepressants in 2.5 min and good linearity in the range of 0.02-4 μg mL(-1), with determination coefficients higher than 0.998. The limits of detection were in the range 4-5 ng mL(-1). The method precision (n=5) was evaluated, showing relative standard deviations (RSD) lower than 8.1% for all compounds. The average recoveries ranged from 92.5% for fluoxetine to 110% for mirtazapine. The applicability of DLLME/UPLC-PDA was successfully tested on twenty-nine plasma samples from antidepressant consumers. Real samples were analyzed by the proposed method and the results were compared with those obtained by a liquid-liquid extraction-gas chromatography-mass spectrometry (LLE-GC-MS) method. The results confirmed the presence of venlafaxine in most cases (19 cases), followed by sertraline (3 cases) and fluoxetine (3 cases) at concentrations below toxic levels.

  10. Experimental design and husbandry.

    PubMed

    Festing, M F

    1997-01-01

    Rodent gerontology experiments should be carefully designed and correctly analyzed so as to provide the maximum amount of information for the minimum amount of work. There are five criteria for a "good" experimental design. These are applicable both to in vivo and in vitro experiments: (1) The experiment should be unbiased so that it is possible to make a true comparison between treatment groups in the knowledge that no one group has a more favorable "environment." (2) The experiment should have high precision so that if there is a true treatment effect there will be a good chance of detecting it. This is obtained by selecting uniform material such as isogenic strains, which are free of pathogenic microorganisms, and by using randomized block experimental designs. It can also be increased by increasing the number of observations. However, increasing the size of the experiment beyond a certain point will only marginally increase precision. (3) The experiment should have a wide range of applicability so it should be designed to explore the sensitivity of the observed experimental treatment effect to other variables such as the strain, sex, diet, husbandry, and age of the animals. With in vitro data, variables such as media composition and incubation times may also be important. The importance of such variables can often be evaluated efficiently using "factorial" experimental designs, without any substantial increase in the overall number of animals. (4) The experiment should be simple so that there is little chance of groups becoming muddled. Generally, formal experimental designs that are planned before the work starts should be used. (5) The experiment should provide the ability to calculate uncertainty. In other words, it should be capable of being statistically analyzed so that the level of confidence in the results can be quantified.

  11. Optimizing experimental design using the house mouse (Mus musculus L.) as a model for determining grain feeding preferences.

    PubMed

    Fuerst, E Patrick; Morris, Craig F; Dasgupta, Nairanjana; McLean, Derek J

    2013-10-01

    There is little research evaluating flavor preferences among wheat varieties. We previously demonstrated that mice exert very strong preferences when given binary mixtures of wheat varieties. We plan to utilize mice to identify wheat genes associated with flavor, and then relate this back to human preferences. Here we explore the effects of experimental design including the number of days (from 1 to 4) and number of mice (from 2 to 15) in order to identify designs that provide significant statistical inferences while minimizing requirements for labor and animals. When mice expressed a significant preference between 2 wheat varieties, increasing the number of days (for a given number of mice) increased the significance level (decreased P-values) for their preference, as expected, but with diminishing benefit as more days were added. However, increasing the number of mice (for a given number of days) provided a more dramatic log-linear decrease in P-values and thus increased statistical power. In conclusion, when evaluating mouse feeding preferences in binary mixtures of grain, an efficient experimental design would emphasize fewer days rather than fewer animals thus shortening the experiment duration and reducing the overall requirement for labor and animals.
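
    The log-linear drop in P-values as mice are added can be illustrated with a one-sided binomial test of preference; the true preference proportion of 0.8 is a hypothetical value for illustration, not a figure from the study:

```python
from math import comb

def binom_p_upper(k, n, p0=0.5):
    """One-sided exact binomial P-value: P(X >= k) under the null p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# P-value when the observed count equals its expectation under a true
# preference of 0.8, for increasing numbers of mice.
pvals = {n: binom_p_upper(round(0.8 * n), n) for n in (5, 10, 15)}
```

    Each added mouse multiplies the expected P-value by a roughly constant factor, so significance improves geometrically with the number of animals, consistent with the abstract's recommendation to add mice rather than days.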

  12. Use of experimental designs for the optimization of stir bar sorptive extraction coupled to GC-MS/MS and comprehensive validation for the quantification of pesticides in freshwaters.

    PubMed

    Assoumani, A; Margoum, C; Guillemain, C; Coquery, M

    2014-04-01

    Although experimental design is a powerful tool, it is rarely used for the development of analytical methods for the determination of organic contaminants in the environment. When investigated factors are interdependent, this methodology allows studying efficiently not only their effects on the response but also the effects of their interactions. A complete and didactic chemometric study is described herein for the optimization of an analytical method involving stir bar sorptive extraction followed by thermal desorption coupled with gas chromatography and tandem mass spectrometry for the rapid quantification of several pesticides in freshwaters. We studied, under controlled conditions, the effects of thermal desorption parameters and the effects of their interactions on the desorption efficiency. The desorption time, temperature, flow, and the injector temperature were optimized through a screening design and a Box-Behnken design. The two sequential designs allowed establishing an optimum set of conditions for maximum response. Then, we present the comprehensive validation and the determination of measurement uncertainty of the optimized method. Limits of quantification determined in different natural waters were in the range of 2.5 to 50 ng L(-1), and recoveries were between 90 and 104 %, depending on the pesticide. The whole method uncertainty, assessed at three concentration levels under intra-laboratory reproducibility conditions, was below 25 % for all tested pesticides. Hence, we optimized and validated a robust analytical method to quantify the target pesticides at low concentration levels in freshwater samples, with a simple, fast, and solventless desorption step.

  13. An orbital angular momentum radio communication system optimized by intensity controlled masks effectively: Theoretical design and experimental verification

    SciTech Connect

    Gao, Xinlu; Huang, Shanguo; Wei, Yongfeng; Zhai, Wensheng; Xu, Wenjing; Yin, Shan; Gu, Wanyi; Zhou, Jing

    2014-12-15

    A system for generating and receiving orbital angular momentum (OAM) radio beams, collectively formed by two circular array antennas (CAAs) and effectively optimized by two intensity controlled masks, is proposed and experimentally investigated. The scheme is effective in blocking unwanted OAM modes and enhancing the power of the received radio signals, which results in a capacity gain for the system and an extended transmission distance for the OAM radio beams. The operation principle of the intensity controlled masks, which can be regarded as both collimator and filter, is feasible and simple to realize. Numerical simulations of the intensity and phase distributions at each key cross-sectional plane of the radio beams demonstrate the collimating effect. The experimental results match well with the theoretical analysis: the receive distance of the OAM radio beam at a radio frequency (RF) of 20 GHz is extended up to 200 times the wavelength of the RF signals, 5 times the originally measured distance. The presented proof-of-concept experiment demonstrates the feasibility of the system.

  14. Optimization of a supercritical fluid extraction/reaction methodology for the analysis of castor oil using experimental design.

    PubMed

    Turner, Charlotta; Whitehand, Linda C; Nguyen, Tasha; McKeon, Thomas

    2004-01-14

    The aim of this work was to optimize a supercritical fluid extraction (SFE)/enzymatic reaction process for the determination of the fatty acid composition of castor seeds. A lipase from Candida antarctica (Novozyme 435) was used to catalyze the methanolysis reaction in supercritical carbon dioxide (SC-CO(2)). A Box-Behnken statistical design was used to evaluate the effects of pressure (200-400 bar), temperature (40-80 °C), methanol concentration (1-5 vol %), and water concentration (0.02-0.18 vol %) on the yield of methylated castor oil. Response surfaces were plotted, and these, together with results from some additional experiments, produced optimal extraction/reaction conditions for SC-CO(2) at 300 bar and 80 °C, with 7 vol % methanol and 0.02 vol % water. These conditions were used for the determination of the castor oil content, expressed as fatty acid methyl esters (FAMEs), in castor seeds. The results obtained were similar to those obtained using conventional methodology based on solvent extraction followed by chemical transmethylation. It was concluded that the methodology developed could be used for the determination of the castor oil content as well as the composition of individual FAMEs in castor seeds.

  15. Optimization of the ultrasonic assisted removal of methylene blue by gold nanoparticles loaded on activated carbon using experimental design methodology.

    PubMed

    Roosta, M; Ghaedi, M; Daneshfar, A; Sahraei, R; Asghari, A

    2014-01-01

    The present study focused on the removal of methylene blue (MB) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using techniques such as SEM, XRD, and BET. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature, and sonication time (min) on MB removal were studied using a central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin, and Dubinin-Radushkevich models shows the suitability and applicability of the Langmuir model. Analysis of the experimental adsorption data with various kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models shows the applicability of the pseudo-second-order equation. A small amount of the proposed adsorbent (0.01 g) achieves successful removal of MB (RE > 95%) in a short time (1.6 min) with high adsorption capacity (104-185 mg g(-1)).
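
    The desirability-function step mentioned above (Derringer-Suich style, combined with RSM) reduces several responses to a single score: each response is mapped onto [0, 1] and the geometric mean is maximized over the design space. A minimal sketch with hypothetical response values and limits (none of these numbers are the paper's):

```python
import math

def d_larger_is_better(y, low, target, weight=1.0):
    """Derringer-Suich one-sided desirability: 0 at/below `low`,
    1 at/above `target`, a power-law ramp in between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities; any zero vetoes the run."""
    if any(d == 0.0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# hypothetical responses for one experimental run:
# removal efficiency (%) and adsorption capacity (mg/g)
d1 = d_larger_is_better(95.0, low=50.0, target=100.0)   # removal
d2 = d_larger_is_better(150.0, low=0.0, target=185.0)   # capacity
D = overall_desirability([d1, d2])
```

    The optimizer would then pick the factor settings whose predicted responses maximize `D`; the geometric mean (rather than an arithmetic one) ensures that a run failing badly on any single response cannot score well overall.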

  16. Optimization of photocatalytic degradation of methyl blue using silver ion doped titanium dioxide by combination of experimental design and response surface approach.

    PubMed

    Sahoo, C; Gupta, A K

    2012-05-15

    Photocatalytic degradation of methyl blue (MYB) was studied using Ag(+)-doped TiO(2) under UV irradiation in a batch reactor. Catalyst dose, initial dye concentration, and pH of the reaction mixture were found to influence the degradation process most. The degradation was effective over the ranges of catalyst dose (0.5-1.5 g/L), initial dye concentration (25-100 ppm), and reaction-mixture pH (5-9). Using the three-factor, three-level Box-Behnken design of experiments, 15 sets of experiments were designed over the effective ranges of the influential parameters. The results of the experiments were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing the functional relationship between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values of the parameters were an Ag(+)-doped TiO(2) dose of 0.99 g/L, an initial MYB concentration of 57.68 ppm, and a reaction-mixture pH of 7.76. Under the optimal conditions, the predicted decolorization and mineralization rates of MYB were 95.97% and 80.33%, respectively. Regression analysis with R(2) values >0.99 showed goodness of fit of the experimental results with the predicted values.

  17. Dynamic modeling, experimental evaluation, optimal design and control of integrated fuel cell system and hybrid energy systems for building demands

    NASA Astrophysics Data System (ADS)

    Nguyen, Gia Luong Huu

    Based on the obtained experimental data, the research studied the control of airflow to regulate the temperature of reactors within the fuel processor. The dynamic model provided a platform to test the dynamic response for different control gains. With sufficient sensing and appropriate control, a rapid response that maintained the reactor temperature despite an increase in power was possible. The third part of the research studied the use of a fuel cell in conjunction with photovoltaic panels and energy storage to provide electricity for buildings. This research developed an optimization framework to determine the size of each device in the hybrid energy system that satisfies the electrical demands of buildings at the lowest cost. The advantage of pairing the fuel cell with photovoltaics and energy storage was the ability to operate the fuel cell at baseload at night, thus reducing the need for large battery systems to shift solar power produced during the day to the night. In addition, the dispatchability of the fuel cell provided an extra degree of freedom to handle unforeseen disturbances. An operation framework based on model predictive control showed that the method is suitable for optimizing the dispatch of the hybrid energy system.

  18. Improved Titanium Billet Inspection Sensitivity through Optimized Phased Array Design, Part II: Experimental Validation and Comparative Study with Multizone

    SciTech Connect

    Hassan, W.; Vensel, F.; Knowles, B.

    2006-03-06

    The inspection of critical rotating components of aircraft engines has made important advances over the last decade. The development of Phased Array (PA) inspection capability for the billet and forging materials used in the manufacturing of critical engine rotating components has been a priority for Honeywell Aerospace. Demonstrating improved PA inspection system sensitivity over what is currently used at the inspection houses is a critical step in the development of this technology and its introduction to the supply base as a production inspection. As described in Part I (in these proceedings), a new phased array transducer was designed and manufactured for optimal inspection of eight-inch-diameter Ti-6Al-4V billets. After confirming that the transducer was manufactured in accordance with the design specifications, a validation study was conducted to assess the sensitivity improvement of the PA inspection (PAI) over the current capability of Multi-zone (MZ) inspection. The results of this study confirm a significant (≈6 dB in FBH # sensitivity) improvement of PAI sensitivity over that of MZI.

  19. Experimental design approach to the optimization of ultrasonic degradation of alachlor and enhancement of treated water biodegradability.

    PubMed

    Torres, Ricardo A; Mosteo, Rosa; Pétrier, Christian; Pulgarin, Cesar

    2009-03-01

    This work presents the application of experimental design to the ultrasonic degradation of alachlor, a pesticide classified as a priority substance by the European Commission within the scope of the Water Framework Directive. The effects of electrical power (20-80 W), pH (3-10), and substrate concentration (10-50 mg L(-1)) were evaluated. At a confidence level of 90%, pH showed little effect on the initial degradation rate of alachlor, whereas electrical power, pollutant concentration, and the interaction of these two parameters were significant. A reduced model taking into account the significant variables and their interactions showed good correlation with the experimental results. Additional experiments conducted in natural and deionised water indicated that alachlor degradation by ultrasound is practically unaffected by the presence of potential •OH radical scavengers: bicarbonate, sulphate, chloride, and oxalic acid. In both cases, alachlor was readily eliminated (approximately 75 min). However, after 4 h of treatment only 20% of the initial TOC was removed, showing that alachlor by-products are recalcitrant to the ultrasonic action. Biodegradability tests (BOD5/COD) carried out during the treatment indicated that the ultrasonic system noticeably increases the biodegradability of the initial solution.

  20. Interfacial modification to optimize stainless steel photoanode design for flexible dye sensitized solar cells: an experimental and numerical modeling approach

    NASA Astrophysics Data System (ADS)

    Salehi Taleghani, Sara; Zamani Meymian, Mohammad Reza; Ameri, Mohsen

    2016-10-01

    In the present research, we report the fabrication, experimental characterization, and theoretical analysis of semi- and fully flexible dye-sensitized solar cells (DSSCs) manufactured on bare and roughened stainless steel type 304 (SS304) substrates. The morphological, optical, and electrical characterizations confirm the advantage of roughened SS304 over bare SS304 and even over common transparent conducting oxides (TCOs). A significant enhancement of about 51% in power conversion efficiency is obtained for the flexible device (5.51%) based on the roughened SS304 substrate compared to the bare SS304. The effect of roughening the SS304 substrates on electrical transport characteristics is also investigated by means of numerical modeling, with regard to the metal-semiconductor and interfacial resistance arising from the contact between the metallic substrate and the nanocrystalline semiconductor. The numerical modeling results provide a reliable theoretical backbone for the experimental implications and highlight the stronger effect of series resistance, compared to the Schottky barrier, in lowering the fill factor of the SS304-based DSSCs. The findings of the present study nominate roughened SS304 as a promising replacement for conventional DSSC substrates, as well as introducing a highly accurate modeling framework to design and diagnose treated metallic or non-metallic DSSCs.

  1. Optimization of a pharmaceutical freeze-dried product and its process using an experimental design approach and innovative process analyzers.

    PubMed

    De Beer, T R M; Wiggenhorn, M; Hawe, A; Kasper, J C; Almeida, A; Quinten, T; Friess, W; Winter, G; Vervaet, C; Remon, J P

    2011-02-15

    The aim of the present study was to examine the possibilities and advantages of using recently introduced in-line spectroscopic process analyzers (Raman, NIR, and plasma emission spectroscopy), within well-designed experiments, for the optimization of a pharmaceutical formulation and its freeze-drying process. The formulation under investigation was a mannitol (crystalline bulking agent)-sucrose (lyo- and cryoprotector) excipient system. The effects of two formulation variables (mannitol/sucrose ratio and amount of NaCl) and three process variables (freezing rate, annealing temperature, and secondary drying temperature) upon several critical process and product responses (onset and duration of ice crystallization, onset and duration of mannitol crystallization, duration of primary drying, residual moisture content, and amount of mannitol hemi-hydrate in the end product) were examined using a design of experiments (DOE) methodology. A 2-level fractional factorial design (2(5-1) = 16 experiments + 3 center points = 19 experiments) was employed. All experiments were monitored in-line using Raman, NIR, and plasma emission spectroscopy, which supply continuous process and product information during freeze-drying. Off-line X-ray powder diffraction analysis and Karl Fischer titration were performed to determine the morphology and residual moisture content of the end product, respectively. The results showed that, besides the findings previously described in De Beer et al., Anal. Chem. 81 (2009) 7639-7649, Raman and NIR spectroscopy are able to monitor the product behavior throughout the complete annealing step during freeze-drying. The DOE approach allowed prediction of the optimum combination of process and formulation parameters leading to the desired responses. Applying a mannitol/sucrose ratio of 4, without adding NaCl and processing the formulation without an annealing step, using a freezing rate of 0.9°C/min and a secondary drying temperature of 40°C resulted in
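
    The 2(5-1) fractional factorial with center points described above can be generated by running a full 2(4) factorial in four factors and aliasing the fifth onto their product. The defining relation E = ABCD assumed in the sketch below is the standard resolution-V choice; the abstract does not state which relation the authors used.

```python
from itertools import product

def half_fraction_2_5(center_points=3):
    """2^(5-1) fractional factorial in coded units: a full 2^4 factorial in
    factors A-D, with the fifth factor aliased via the assumed defining
    relation E = A*B*C*D (resolution V), plus replicated center points."""
    runs = [(a, b, c, d, a * b * c * d)
            for a, b, c, d in product((-1, 1), repeat=4)]
    runs += [(0, 0, 0, 0, 0)] * center_points
    return runs

design = half_fraction_2_5()  # 16 factorial runs + 3 center points = 19
```

    Every factorial row satisfies A*B*C*D*E = +1, which is exactly what halves the 32-run full factorial while keeping main effects unconfounded with two-factor interactions.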

  2. The Box-Behnken experimental design for the optimization of the electrocatalytic treatment of wastewaters with high concentrations of phenol and organic matter.

    PubMed

    GilPavas, Edison; Betancourt, Alejandra; Angulo, Mónica; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Angel

    2009-01-01

    In this work, the Box-Behnken experimental design (BBD) was applied to optimize the parameters of the electrocatalytic degradation of wastewaters from a phenolic resins industry located in the suburbs of Medellin (Colombia). The direct and the oxidant-assisted electro-oxidation experiments were carried out in a laboratory-scale batch cell reactor, with monopolar configuration and electrodes made of graphite (anode) and titanium (cathode). A multifactorial experimental design was proposed, including the following experimental variables: initial phenol concentration, conductivity, and pH. The direct electro-oxidation process reached ca. 88% phenol degradation, 38% mineralization (TOC), 52% Chemical Oxygen Demand (COD) degradation, and an increase in water biodegradability of 13%. The synergetic effect of the electro-oxidation process and the respective oxidant agent (Fenton reagent, potassium permanganate, or sodium persulfate) led to a significant increase in the rate of the degradation process. At the optimized variable values, it was possible to reach ca. 99% phenol degradation and 80% TOC and 88% COD degradation. A kinetic study was carried out, which included the identification of the intermediate compounds generated during the oxidation process.

  3. Optimization of process variables for the biosynthesis of silver nanoparticles by Aspergillus wentii using statistical experimental design

    NASA Astrophysics Data System (ADS)

    Biswas, Supratim; Mulaba-Bafubiandi, Antoine F.

    2016-12-01

    The present scientific endeavour focuses on the optimization of process parameters using a central composite design towards the development of an efficient technique for the biosynthesis of silver nanoparticles. The combined effects of three process variables (days of fermentation, duration of incubation, and concentration of AgNO3) upon the extracellular biological synthesis of silver nanoparticles (AgNPs) by Aspergillus wentii NCIM 667 were studied. A single absorption peak at 455 nm in the UV-visible spectrum confirmed the presence of silver nanoparticles. Fourier transform infrared spectroscopic analysis indicated the presence of proteins acting as reducing agents for the formation of AgNPs. High-resolution transmission electron microscopy revealed spherical AgNPs of 15-40 nm in size. The biologically formed AgNPs showed higher antimicrobial activity against gram-negative than gram-positive bacterial strains. The biosynthesized nanoparticles also exhibit photocatalytic activity, degrading almost all (88%) of the organic dye methyl orange within 5 h of exposure to sunlight.

  4. Thermal and optical design analyses, optimizations, and experimental verification for a novel glare-free LED lamp for household applications.

    PubMed

    Khan, M Nisa

    2015-07-20

    Light-emitting diode (LED) technologies are undergoing very fast developments to enable household lamp products with improved energy efficiency and lighting properties at lower cost. Although many LED replacement lamps are claimed to provide similar or better lighting quality at lower electrical wattage compared with general-purpose incumbent lamps, certain lighting characteristics important to human vision are neglected in this comparison, which include glare-free illumination and omnidirectional or sufficiently broad light distribution with adequate homogeneity. In this paper, we comprehensively investigate the thermal and lighting performance and trade-offs for several commercial LED replacement lamps for the most popular Edison incandescent bulb. We present simulations and analyses for thermal and optical performance trade-offs for various LED lamps at the chip and module granularity levels. In addition, we present a novel, glare-free, and production-friendly LED lamp design optimized to produce very desirable light distribution properties as demonstrated by our simulation results, some of which are verified by experiments.

  5. Adsorption of phenol onto activated carbon from Rhazya stricta: determination of the optimal experimental parameters using factorial design

    NASA Astrophysics Data System (ADS)

    Hegazy, A. K.; Abdel-Ghani, N. T.; El-Chaghaby, G. A.

    2014-09-01

    A novel activated carbon was prepared from Rhazya stricta leaves and was successfully used as an adsorbent for phenol removal from aqueous solution. The prepared activated carbon was characterized by FTIR and SEM analysis. Three factors (namely, temperature, pH, and adsorbent dose) were screened to study their effect on the adsorption of phenol by R. stricta activated carbon. A 2(3) full factorial design was employed for optimizing the adsorption process. The removal of phenol by adsorption onto R. stricta carbon reached 85 % at a solution pH of 3, an adsorbent dose of 0.5 g/l, and a temperature of 45 °C. The temperature and adsorbent weight had a positive effect on the phenol removal percentage when changed from low to high; the opposite was true for the initial solution pH. The main-effects results showed that the three studied factors significantly affected phenol removal by R. stricta carbon at the 95 % confidence level. The interaction effects revealed that the interaction between temperature and pH had the most significant effect on the removal percentage of phenol by R. stricta activated carbon. The present work showed that carbon prepared from a low-cost, natural material, R. stricta leaves, is a good adsorbent for the removal of phenol from aqueous solution.
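
    In a two-level full factorial such as the 2(3) design above, the main effect of a factor is simply the mean response at its high level minus the mean at its low level. A sketch with hypothetical removal percentages, chosen only to mirror the signs reported above (temperature and dose positive, pH negative); the paper's actual data are not given in the abstract:

```python
from itertools import product

def main_effects(levels, response):
    """Main effect of each factor in a two-level factorial:
    mean response at the +1 level minus mean response at the -1 level."""
    k = len(levels[0])
    effects = []
    for f in range(k):
        hi = [y for x, y in zip(levels, response) if x[f] == 1]
        lo = [y for x, y in zip(levels, response) if x[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# coded 2^3 design: factors (temperature, pH, adsorbent dose)
design = list(product((-1, 1), repeat=3))
removal = [40, 55, 30, 45, 60, 75, 50, 70]  # hypothetical % removal values
effects = main_effects(design, removal)     # [temp, pH, dose] effects
```

    With these illustrative numbers the temperature and dose effects come out positive and the pH effect negative, matching the qualitative conclusion of the abstract.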

  6. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
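
    Among the design families listed above, the Taguchi method relies on orthogonal arrays; the smallest, L4(2^3), screens three two-level factors in four runs. The sketch below hard-codes L4 and checks the defining property of orthogonality (every pair of columns contains each level combination equally often):

```python
# Taguchi L4(2^3) orthogonal array: 4 runs, three two-level factors.
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

def is_orthogonal(array):
    """True if every pair of columns contains each level combination
    the same number of times (the balance property of orthogonal arrays)."""
    k = len(array[0])
    for i in range(k):
        for j in range(i + 1, k):
            pairs = [(row[i], row[j]) for row in array]
            counts = {p: pairs.count(p) for p in set(pairs)}
            if len(counts) != 4 or len(set(counts.values())) != 1:
                return False
    return True
```

    Orthogonality is what lets each main effect be estimated independently of the others from so few runs; a non-orthogonal four-run layout would confound the factors.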

  7. Optimization of the separation of a group of triazine herbicides by micellar capillary electrophoresis using experimental design and artificial neural networks.

    PubMed

    Frías-García, Sergio; Sánchez, M Jesús; Rodríguez-Delgado, Miguel Angel

    2004-04-01

    The micellar electrokinetic chromatography (MEKC) separation of a group of triazine compounds was optimized using a combination of experimental design (ED) and an artificial neural network (ANN). Different variables affecting the separation were selected and used as inputs to the ANN. A chromatographic exponential function (CEF) combining resolution and separation time was used as the output to obtain optimal separation conditions. An optimized buffer (19.3 mM sodium borate, 15.4 mM disodium hydrogen phosphate, 28.4 mM SDS, pH 9.45, and 7.5% 1-propanol) provides the best separation with regard to resolution and separation time. In addition, an analysis of variance (ANOVA) approach to the MEKC separation, using the same variables, was developed, and the superior capability of the ED-ANN combination for optimizing the analytical methodology was demonstrated by comparing the results obtained from both approaches. To validate the proposed method, analytical parameters such as repeatability and day-to-day precision were calculated. Finally, the optimized method was applied to the determination of these compounds in spiked and nonspiked ground water samples.

  8. Cr(VI) transport via a supported ionic liquid membrane containing CYPHOS IL101 as carrier: system analysis and optimization through experimental design strategies.

    PubMed

    Rodríguez de San Miguel, Eduardo; Vital, Xóchitl; de Gyves, Josefina

    2014-05-30

    Chromium(VI) transport through a supported liquid membrane (SLM) system containing the commercial ionic liquid CYPHOS IL101 as carrier was studied. A reducing stripping phase was used as a means to increase recovery and to simultaneously transform Cr(VI) into a less toxic residue for disposal or reuse. General functions describing the time-dependent evolution of the metal fractions in the cell compartments were defined and used in data evaluation. An experimental design strategy, using factorial and central-composite design matrices, was applied to assess the influence of the extractant, NaOH, and citrate concentrations in the different phases, while a desirability function scheme allowed the synchronized optimization of depletion and recovery of the analyte. The mechanism of chromium permeation was analyzed and discussed to contribute to the understanding of the transfer process. The influence of metal concentration was evaluated as well. The presence of different interfering ions (Ca(2+), Al(3+), NO3(-), SO4(2-), and Cl(-)) at several Cr(VI):interfering-ion ratios was studied through a Plackett-Burman experimental design matrix. Under optimized conditions, 90% recovery was obtained from a feed solution containing 7 mg L(-1) of Cr(VI) in 0.01 mol dm(-3) HCl medium after 5 h of pertraction.

  9. OPTIMAL NETWORK TOPOLOGY DESIGN

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
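
    The search procedure described above — generate candidate link subsets in increasing order of total cost and return the first one that forms an acceptable network — can be sketched as follows (Python rather than the original PASCAL; connectivity is the only acceptance test here, and the link data are hypothetical):

```python
from itertools import chain, combinations

def connects_all(links, stations):
    """Graph search over the chosen links to test whether every
    station is reachable from every other (simple connectivity)."""
    adj = {s: set() for s in stations}
    for _cost, u, v in links:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(stations))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == set(stations)

def cheapest_topology(links, stations):
    """Enumerate link subsets in increasing total cost; the first subset
    forming a connected network is the cost-optimal topology (brute force)."""
    subsets = chain.from_iterable(
        combinations(links, r) for r in range(len(links) + 1))
    for subset in sorted(subsets, key=lambda s: sum(c for c, _, _ in s)):
        if connects_all(subset, stations):
            return subset
    return None

# hypothetical link list: (cost, station_a, station_b)
links = [(4, "A", "B"), (1, "A", "C"), (2, "B", "C"), (7, "A", "B")]
best = cheapest_topology(links, {"A", "B", "C"})
```

    The original program generates subsets lazily in cost order rather than sorting them all, which matters when the component set is large; this brute-force sketch preserves only the first-acceptable-is-optimal idea.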

  10. Dispersive liquid-liquid microextraction of quinolones in porcine blood: Optimization of extraction procedure and CE separation using experimental design.

    PubMed

    Vera-Candioti, Luciana; Teglia, Carla M; Cámara, María S

    2016-10-01

    A dispersive liquid-liquid microextraction procedure was developed to extract nine fluoroquinolones from porcine blood, six of which were quantified using a univariate calibration method. Extraction parameters, including the type and volume of the extraction and dispersive solvents and the pH, were optimized using full factorial and central composite designs. The optimum extraction parameters were a mixture of 250 μL dichloromethane (extraction solvent) and 1250 μL ACN (dispersive solvent) in 500 μL of porcine blood adjusted to pH 6.80. After shaking and centrifugation, the upper phase was transferred into a glass tube and evaporated under a N2 stream. The residue was resuspended in 50 μL of water-ACN (70:30, v/v) and analyzed by a CE method with DAD under optimum separation conditions. Consequently, a tenfold enrichment factor can potentially be reached with the pretreatment, taking into account the ratio of initial sample volume to final extract volume. Optimum separation conditions were as follows: BGE solution containing equal amounts of sodium borate (Na2B4O7) and di-sodium hydrogen phosphate (Na2HPO4) at a final concentration of 23 mmol/L, containing 0.2% of poly(diallyldimethylammonium chloride) and adjusted to pH 7.80. Separation was performed by applying a negative potential of 25 kV; the cartridge was maintained at 25.0°C and the electropherograms were recorded at 275 nm for 4 min. Hydrodynamic injection was performed at the cathode by applying a pressure of 50 mbar for 10 s.

  11. Optimization of the combined ultrasonic assisted/adsorption method for the removal of malachite green by gold nanoparticles loaded on activated carbon: experimental design.

    PubMed

    Roosta, M; Ghaedi, M; Shokri, N; Daneshfar, A; Sahraei, R; Asghari, A

    2014-01-24

    The present study applied experimental design optimization to the removal of malachite green (MG) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using techniques such as FESEM, TEM, BET, and UV-vis measurements. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature, and sonication time on MG removal were studied using a central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin, and Dubinin-Radushkevich models shows the suitability and applicability of the Langmuir model. The applicability of kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models was tested against the experimental data; the pseudo-second-order and intraparticle diffusion models control the kinetics of the adsorption process. A small amount of the proposed adsorbent (0.015 g) achieves successful removal of MG (RE > 99%) in a short time (4.4 min) with high adsorption capacity (140-172 mg g(-1)).
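
    The pseudo-second-order kinetic test mentioned above is usually done on the linearized form t/q_t = 1/(k2*qe^2) + t/qe, so a straight-line fit of t/q_t against t recovers qe from the slope and k2 from the intercept. A self-checking sketch on synthetic data (the qe and k2 values below are assumptions for illustration, not the paper's results):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# synthetic uptake curve from an assumed pseudo-second-order model:
# q_t = k2*qe^2*t / (1 + k2*qe*t), with qe = 150 mg/g, k2 = 0.05 g/(mg*min)
qe_true, k2_true = 150.0, 0.05
times = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
qt = [k2_true * qe_true**2 * t / (1 + k2_true * qe_true * t) for t in times]

# linearized form: t/q_t = 1/(k2*qe^2) + t/qe  ->  slope = 1/qe
slope, intercept = linear_fit(times, [t / q for t, q in zip(times, qt)])
qe_est = 1.0 / slope
k2_est = 1.0 / (intercept * qe_est**2)
```

    Because the synthetic data lie exactly on the model curve, the fit recovers the assumed qe and k2; with real data, the quality of this straight-line fit (R^2) is what the abstract uses to judge model applicability.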

  13. Application of the statistical experimental design to optimize mine-impacted water (MIW) remediation using shrimp-shell.

    PubMed

    Núñez-Gómez, Dámaris; Alves, Alcione Aparecida de Almeida; Lapolli, Flavio Rubens; Lobo-Recio, María A

    2017-01-01

    Mine-impacted water (MIW) is one of the most serious mining problems and has a highly negative impact on water resources and aquatic life. The main characteristics of MIW are a low pH (between 2 and 4) and high concentrations of SO4(2-) and metal ions (Cd, Cu, Ni, Pb, Zn, Fe, Al, Cr, Mn, Mg, etc.), many of which are toxic to ecosystems and human life. Shrimp shell was selected as a MIW treatment agent because it is a low-cost metal-sorbent biopolymer with a high chitin content and contains calcium carbonate, an acid-neutralizing agent. To determine the best metal-removal conditions, a statistical study using statistical planning was carried out. Thus, the objective of this work was to identify the degree of influence and dependence of the shrimp-shell content for the removal of Fe, Al, Mn, Co, and Ni from MIW. In this study, a central composite rotational experimental design (CCRD) with a quadruplicate at the midpoint (2(2)) was used to evaluate the joint influence of two formulation variables: agitation and shrimp-shell content. The statistical results showed a significant influence (p < 0.05) of the agitation variable for Fe and Ni removal (linear and quadratic form, respectively) and of the shrimp-shell content variable for Mn (linear form) and Al and Co (linear and quadratic form) removal. Analysis of variance (ANOVA) for Al, Co, and Ni removal showed that the model is valid at the 95% confidence level and that no adjustment was needed within the evaluated ranges of agitation (0-251.5 rpm) and shrimp-shell content (1.2-12.8 g L(-1)). The model required adjustment to the 90% and 75% confidence levels for Fe and Mn removal, respectively. In terms of pollutant-removal efficiency, the best experimental values of the variables were determined to be 188 rpm and 9.36 g L(-1) of shrimp shells.
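
    The CCRD used above combines a 2(2) factorial with axial (star) points and replicated center runs; rotatability requires the axial distance alpha = 2^(k/4), i.e. sqrt(2) for two factors. A sketch in coded units, with the quadruplicated midpoint from the abstract:

```python
import math
from itertools import product

def ccrd_two_factors(center_points=4):
    """Central composite rotational design for two factors in coded units:
    2^2 factorial corners, four axial (star) points at alpha = 2**(2/4)
    = sqrt(2) for rotatability, and replicated center points."""
    alpha = math.sqrt(2.0)
    corners = [tuple(float(v) for v in p) for p in product((-1, 1), repeat=2)]
    axial = [(alpha, 0.0), (-alpha, 0.0), (0.0, alpha), (0.0, -alpha)]
    center = [(0.0, 0.0)] * center_points
    return corners + axial + center

design = ccrd_two_factors()  # 4 corners + 4 axial + 4 center = 12 runs
```

    With alpha = sqrt(2) every non-center point lies on the same circle, which is what makes the prediction variance of the fitted quadratic model depend only on the distance from the design center. Mapping coded levels back to, e.g., agitation (0-251.5 rpm) is a linear rescaling of each column.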

  14. Removal of Mefenamic acid from aqueous solutions by oxidative process: Optimization through experimental design and HPLC/UV analysis.

    PubMed

    Colombo, Renata; Ferreira, Tanare C R; Ferreira, Renato A; Lanza, Marcos R V

    2016-02-01

    Mefenamic acid (MEF) is a non-steroidal anti-inflammatory drug indicated for relief of mild to moderate pain and for the treatment of primary dysmenorrhea. The presence of MEF in raw and sewage waters has been detected worldwide at concentrations exceeding the predicted no-effect concentration. In this study, using experimental designs, different oxidative processes (H2O2, H2O2/UV, Fenton and photo-Fenton) were simultaneously evaluated for MEF degradation efficiency. The influence and interaction effects of the most important variables in the oxidative process (concentration and addition mode of hydrogen peroxide, concentration and type of catalyst, pH, reaction period and presence/absence of light) were investigated. The parameters were determined based on the maximum efficiency, to save time and minimize the consumption of reagents. According to the results, the photo-Fenton process is the best procedure to remove the drug from water. The most efficient degradation, promoting 95% MEF removal, was obtained with a reaction mixture containing 1.005 mmol L(-1) of ferrioxalate and 17.5 mmol L(-1) of hydrogen peroxide added at the start of the reaction, at pH 6.1 and with 60 min of degradation. The development and validation of a rapid and efficient qualitative and quantitative HPLC/UV methodology for detecting this pollutant in aqueous solution is also reported. The method can be applied in the quality control of water that is generated and/or treated in municipal or industrial wastewater treatment plants.

  15. Isolation, identification and characterization of a novel Rhodococcus sp. strain in biodegradation of tetrahydrofuran and its medium optimization using sequential statistics-based experimental designs.

    PubMed

    Yao, Yanlai; Lv, Zhenmei; Min, Hang; Lv, Zhenhua; Jiao, Huipeng

    2009-06-01

    Statistics-based experimental designs were applied to optimize the culture conditions for tetrahydrofuran (THF) degradation by a newly isolated Rhodococcus sp. YYL that tolerates high THF concentrations. Single-factor experiments were undertaken to determine the optimum range of each of four factors (initial pH and concentrations of K(2)HPO(4).3H(2)O, NH(4)Cl and yeast extract), and these factors were subsequently optimized using response surface methodology. The Plackett-Burman design was used to identify three trace elements (Mg(2+), Zn(2+) and Fe(2+)) that significantly increased the THF degradation rate. The optimum conditions were found to be: 1.80 g/L NH(4)Cl, 0.81 g/L K(2)HPO(4).3H(2)O, 0.06 g/L yeast extract, 0.40 g/L MgSO(4).7H(2)O, 0.006 g/L ZnSO(4).7H(2)O, 0.024 g/L FeSO(4).7H(2)O, and an initial pH of 8.26. Under these optimized conditions, the maximum THF degradation rate of Rhodococcus sp. YYL increased to 137.60 mg THF h(-1) g(-1) dry weight, nearly five times that of the previously described THF-degrading Rhodococcus strain.
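The Plackett-Burman screening step mentioned above can be illustrated with the standard 12-run construction (a generic sketch; the generator row is the textbook one, not anything specific to this study):

```python
def plackett_burman12():
    """12-run Plackett-Burman screening design (handles up to 11 factors),
    built by cyclically shifting the standard generator row and appending
    a final row of -1s. Columns come out mutually orthogonal."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

d = plackett_burman12()
col = lambda j: [r[j] for r in d]
print(len(d), sum(a * b for a, b in zip(col(0), col(1))))  # 12 runs, orthogonal columns
```

Each factor (e.g. a trace element) is assigned to one column; unassigned columns estimate error.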

  16. Experimental design and optimization of leaching process for recovery of valuable chemical elements (U, La, V, Mo, Yb and Th) from low-grade uranium ore.

    PubMed

    Zakrzewska-Koltuniewicz, Grażyna; Herdzik-Koniecko, Irena; Cojocaru, Corneliu; Chajduk, Ewelina

    2014-06-30

    The paper deals with experimental design and optimization of the leaching process of uranium and associated metals from low-grade Polish ores. The chemical elements of interest for extraction from the ore were U, La, V, Mo, Yb and Th. Sulphuric acid has been used as the leaching reagent. Based on the design of experiments, second-order regression models have been constructed to approximate the leaching efficiency of the elements. Graphical illustrations using 3-D surface plots have been employed in order to identify the main, quadratic and interaction effects of the factors. The multi-objective optimization method based on the desirability approach has been applied in this study. The optimum conditions have been determined as P=5 bar, T=120 °C and t=90 min. Under these optimal conditions, the overall extraction performance is 81.43% (for U), 64.24% (for La), 98.38% (for V), 43.69% (for Yb), 76.89% (for Mo) and 97.00% (for Th).
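The desirability approach behind this multi-objective optimization can be sketched with Derringer-type larger-is-better functions; the 0-100% bounds are an illustrative assumption, while the recoveries are the ones reported at the optimum:

```python
import math

def desirability(y, lo, hi):
    """Larger-is-better Derringer desirability: 0 at or below lo,
    1 at or above hi, linear in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def overall(recoveries, lo=0.0, hi=100.0):
    """Overall desirability = geometric mean of the individual ones."""
    ds = [desirability(y, lo, hi) for y in recoveries]
    return math.prod(ds) ** (1 / len(ds))

# Recoveries reported at the optimum (U, La, V, Yb, Mo, Th), in %:
print(round(overall([81.43, 64.24, 98.38, 43.69, 76.89, 97.00]), 3))
```

The optimizer then searches P, T and t for the settings that maximize this single score.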

  17. Optimal optoacoustic detector design

    NASA Technical Reports Server (NTRS)

    Rosengren, L.-G.

    1975-01-01

    Optoacoustic detectors are used to measure pressure changes occurring in enclosed gases, liquids, or solids excited by intensity- or frequency-modulated electromagnetic radiation. Radiation absorption spectra, collisional relaxation rates, substance compositions, and reactions can be determined from the time behavior of these pressure changes. Very successful measurements of gaseous air pollutants have, for instance, been performed by using detectors of this type together with different lasers. The measuring instrument consisting of the radiation source, modulator, optoacoustic detector, etc., is often called a spectrophone. In the present paper, a thorough optoacoustic detector optimization analysis, based upon a review of its theory of operation, is introduced. New quantitative rules and suggestions are presented explaining how to design detectors with maximal pressure responsivity, maximal overall sensitivity, and minimal background signal.

  18. Structural Optimization in automotive design

    NASA Technical Reports Server (NTRS)

    Bennett, J. A.; Botkin, M. E.

    1984-01-01

    Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.

  19. Optimization and validation of a HPLC method for simultaneous determination of aflatoxin B1, B2, G1, G2, ochratoxin A and zearalenone using an experimental design.

    PubMed

    Rahmani, Anosheh; Selamat, Jinap; Soleimany, Farhang

    2011-01-01

    A reversed-phase HPLC optimization strategy is presented for investigating the separation and retention behavior of aflatoxin B1, B2, G1, G2, ochratoxin A and zearalenone simultaneously. A fractional factorial design (FFD) was used to screen the significance of the effects of seven independent variables on the chromatographic responses. The independent variables used were: (X1) column oven temperature (20-40°C), (X2) flow rate (0.8-1.2 ml/min), (X3) acid concentration in aqueous phase (0-2%), (X4) organic solvent percentage at the beginning (40-50%), and (X5) at the end (50-60%) of the gradient mobile phase, as well as (X6) ratio of methanol/acetonitrile at the beginning (1-4) and (X7) at the end (0-1) of the gradient mobile phase. The responses of the chromatographic analysis were the resolution of the mycotoxin peaks and the HPLC run time. A central composite design (CCD) using response surface methodology (RSM) was then carried out for optimization of the most significant factors by multiple regression models for the response variables. The proposed optimal method, using 40°C oven temperature, 1 ml/min flow rate, 0.1% acetic acid concentration in aqueous phase, 41% organic phase (beginning), 60% organic phase (end), 1.92 ratio of methanol to acetonitrile (beginning) and 0.2 ratio (end) for X1-X7, respectively, showed good prediction ability between the experimental data and predictive values throughout the studied parameter space. Finally, the optimized method was validated by measuring the linearity, sensitivity, accuracy and precision parameters, and has been applied successfully to the analysis of spiked cereal samples.
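Screening seven variables without a 2^7 = 128-run full factorial is exactly what fractional factorials buy; a generic sketch of the standard 2^(7-4) resolution-III construction (eight runs, with the usual generators D=AB, E=AC, F=BC, G=ABC; not the paper's actual FFD):

```python
from itertools import product

def frac_factorial_8run():
    """2^(7-4) resolution-III screening design: 8 runs for 7 factors.
    Columns D..G are generated as products of the base columns A, B, C."""
    runs = []
    for a, b, c in product([-1, 1], repeat=3):
        runs.append([a, b, c, a * b, a * c, b * c, a * b * c])
    return runs

d = frac_factorial_8run()
print(len(d))  # 8 runs instead of 2**7 = 128
```

The price of the reduction is aliasing: main effects are confounded with two-factor interactions, which is acceptable for screening.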

  20. Optimal design and experimental verification of a magnetically actuated optical image stabilization system for cameras in mobile phones

    NASA Astrophysics Data System (ADS)

    Chiu, Chi-Wei; Chao, Paul C.-P.; Kao, Nicholas Y.-Y.; Young, Fu-Kuan

    2008-04-01

    A novel miniaturized optical image stabilizer (OIS) is proposed, which is installed inside the limited inner space of a mobile phone. The relation between the VCM electromagnetic force inside the OIS and the applied voltage is first established via an equivalent circuit and further validated by a finite element model. Various dimensions of the VCMs are optimized by a genetic algorithm (GA) to maximize sensitivities while also achieving high uniformity of the magnetic flux intensity.
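A GA of the kind used for the VCM dimension optimization can be sketched on a toy fitness function; the fitness model, bounds and GA settings below are illustrative assumptions, not the authors' sensitivity model:

```python
import random

random.seed(0)

# Toy stand-in for a sensitivity objective with its optimum at (3, 5).
def fitness(x, y):
    return -((x - 3.0) ** 2 + (y - 5.0) ** 2)

def ga(pop_size=40, gens=60, bounds=(0.0, 10.0)):
    lo, hi = bounds
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (x1, y1), (x2, y2) = random.sample(parents, 2)
            x = (x1 + x2) / 2 + random.gauss(0, 0.1)  # crossover + mutation
            y = (y1 + y2) / 2 + random.gauss(0, 0.1)
            children.append((min(max(x, lo), hi), min(max(y, lo), hi)))
        pop = parents + children
    return max(pop, key=lambda ind: fitness(*ind))

best = ga()
print(best)  # converges near (3, 5)
```

In the actual design problem the fitness would be evaluated from the equivalent-circuit or finite element model.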

  1. Optimal experimental design for filter exchange imaging: Apparent exchange rate measurements in the healthy brain and in intracranial tumors

    PubMed Central

    Szczepankiewicz, Filip; van Westen, Danielle; Englund, Elisabet; C Sundgren, Pia; Lätt, Jimmy; Ståhlberg, Freddy; Nilsson, Markus

    2016-01-01

    Purpose Filter exchange imaging (FEXI) is sensitive to the rate of diffusional water exchange, which depends, e.g., on the cell membrane permeability. The aim was to optimize and analyze the ability of FEXI to infer differences in the apparent exchange rate (AXR) in the brain between two populations. Methods A FEXI protocol was optimized for minimal measurement variance in the AXR. The AXR variance was investigated by test‐retest acquisitions in six brain regions in 18 healthy volunteers. Preoperative FEXI data and postoperative microphotos were obtained in six meningiomas and five astrocytomas. Results Protocol optimization reduced the coefficient of variation of AXR by approximately 40%. Test‐retest AXR values were heterogeneous across normal brain regions, from 0.3 ± 0.2 s−1 in the corpus callosum to 1.8 ± 0.3 s−1 in the frontal white matter. According to analysis of statistical power, in all brain regions except one, group differences of 0.3–0.5 s−1 in the AXR can be inferred using 5 to 10 subjects per group. An AXR difference of this magnitude was observed between meningiomas (0.6 ± 0.1 s−1) and astrocytomas (1.0 ± 0.3 s−1). Conclusions With the optimized protocol, FEXI has the ability to infer relevant differences in the AXR between two populations for small group sizes. Magn Reson Med 77:1104–1114, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial‐NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial and no modifications or adaptations are made. PMID:26968557
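The group-size claim can be illustrated with the standard two-sample normal-approximation sample-size formula; alpha = 0.05 and power = 0.80 are assumptions here, while the AXR difference and SD are within the ranges reported above:

```python
import math

def n_per_group(delta, sd, alpha_z=1.96, power_z=0.84):
    """Subjects per group needed to detect a mean difference `delta`
    between two groups with common SD `sd`, using the normal
    approximation n = 2 * (z_(alpha/2) + z_beta)^2 * sd^2 / delta^2."""
    return math.ceil(2 * ((alpha_z + power_z) ** 2) * sd ** 2 / delta ** 2)

# E.g., an AXR difference of 0.4 s^-1 with an SD of 0.3 s^-1:
print(n_per_group(0.4, 0.3))  # 9 subjects per group
```

The result lands inside the 5-to-10-subjects range quoted in the abstract.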

  2. MultiSimplex and experimental design as chemometric tools to optimize a SPE-HPLC-UV method for the determination of eprosartan in human plasma samples.

    PubMed

    Ferreirós, N; Iriarte, G; Alonso, R M; Jiménez, R M

    2006-05-15

    A chemometric approach was applied for the optimization of the extraction and separation of the antihypertensive drug eprosartan from human plasma samples. The MultiSimplex program was used to optimize the HPLC-UV method because of the number of experimental and response variables to be studied. The measured responses were the corrected area, the separation of the eprosartan chromatographic peak from plasma interference peaks, and the retention time of the analyte. The use of an Atlantis dC18, 100 mm x 3.9 mm i.d. chromatographic column with 0.026% trifluoroacetic acid (TFA) in the organic phase and 0.031% TFA in the aqueous phase, an initial composition of 80% aqueous phase in the mobile phase, an acetonitrile gradient steepness of 3% during gradient elution with a flow rate of 1.25 mL/min, and a column temperature of 35+/-0.2 degrees C allowed the separation of eprosartan and irbesartan (used as internal standard) from plasma endogenous compounds. In the solid-phase extraction procedure, experimental design was used in order to achieve a maximum recovery percentage. First, the significant variables were chosen by means of a fractional factorial design; then, a central composite design was run to obtain the most adequate values of the significant variables. Thus, the extraction procedure for spiked human plasma samples was carried out using C8 cartridges, phosphate buffer pH 2 as conditioning agent, a drying step of 10 min, a washing step with methanol-phosphate buffer (20:80, v/v) and methanol as eluent. The SPE-HPLC-UV method developed allowed the separation and quantitation of eprosartan from human plasma samples with adequate resolution and a total analysis time of 1 h.
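MultiSimplex implements sequential simplex optimization; the core move, reflecting the worst vertex through the centroid of the others, can be sketched on a toy response surface (the response function below is a stand-in, not the chromatographic system):

```python
# Reflection-only sequential simplex (Spendley-style), maximizing a toy
# two-variable response with its optimum at (2, 1).
def response(pt):
    x, y = pt
    return -((x - 2.0) ** 2 + (y - 1.0) ** 2)

def simplex_step(vertices):
    """Drop the worst vertex and replace it with its reflection through
    the centroid of the remaining vertices."""
    vs = sorted(vertices, key=response)           # worst vertex first
    worst, rest = vs[0], vs[1:]
    cx = sum(p[0] for p in rest) / len(rest)
    cy = sum(p[1] for p in rest) / len(rest)
    return rest + [(2 * cx - worst[0], 2 * cy - worst[1])]

simplex = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
for _ in range(30):
    simplex = simplex_step(simplex)
best = max(simplex, key=response)
print(best)
```

A fixed-size simplex like this walks toward the optimum and then circles it; practical variants add expansion and contraction moves to refine the final answer.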

  3. Design Optimization Toolkit: Users' Manual

    SciTech Connect

    Aguilo Valentin, Miguel Alejandro

    2014-07-01

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  4. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO RAY MIXTURE.

    EPA Science Inventory

    Risk assessors are becoming increasingly aware of the importance of assessing interactions between chemicals in a mixture. Most traditional designs for evaluating interactions are prohibitive when the number of chemicals in the mixture is large. However, evaluation of interacti...

  5. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
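The optimizer's income-versus-cost trade-off can be sketched with toy process models; both model functions below are illustrative stand-ins, not the patented chemical looping models:

```python
# Pick the operating parameter that maximizes predicted income minus
# predicted cost over a candidate grid (toy stand-in models).
def income(load):
    """Revenue rises with load but saturates."""
    return 100 * load / (load + 20)

def cost(load):
    """Operating cost grows quadratically with load."""
    return 0.01 * load ** 2

candidates = range(0, 101)
best = max(candidates, key=lambda x: income(x) - cost(x))
print(best, round(income(best) - cost(best), 2))
```

A real optimizer would replace the grid search with a constrained solver driven by the plant's process models.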

  6. Optimal design of isotope labeling experiments.

    PubMed

    Yang, Hong; Mandy, Dominic E; Libourel, Igor G L

    2014-01-01

    Stable isotope labeling experiments (ILE) constitute a powerful methodology for estimating metabolic fluxes. An optimal label design for such an experiment is necessary to maximize the precision with which fluxes can be determined. But often, precision gained in the determination of one flux comes at the expense of the precision of other fluxes, and an appropriate label design therefore foremost depends on the question the investigator wants to address. One could liken ILE to shadows that metabolism casts on products. Optimal label design is the placement of the lamp; creating clear shadows for some parts of metabolism and obscuring others. An optimal isotope label design is influenced by: (1) the network structure; (2) the true flux values; (3) the available label measurements; and, (4) commercially available substrates. The first two aspects are dictated by nature and constrain any optimal design. The second two aspects are suitable design parameters. To create an optimal label design, an explicit optimization criterion needs to be formulated. This usually is a property of the flux covariance matrix, which can be augmented by weighting label substrate cost. An optimal design is found by using such a criterion as an objective function for an optimizer. This chapter uses a simple elementary metabolite units (EMU) representation of the TCA cycle to illustrate the process of experimental design of isotope labeled substrates.
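A common choice for the covariance-based criterion mentioned above is D-optimality, maximizing the determinant of the Fisher information matrix; a minimal sketch with assumed toy sensitivity matrices for two candidate label designs:

```python
import numpy as np

def d_criterion(J):
    """D-optimality score: det of the information matrix J^T J, where J
    holds sensitivities of label measurements (rows) to fluxes (cols)."""
    return np.linalg.det(J.T @ J)

# Toy sensitivity matrices for two candidate label designs (assumed values):
J_a = np.array([[1.0, 0.1], [0.1, 1.0], [0.5, 0.5]])  # fluxes well separated
J_b = np.array([[1.0, 0.9], [0.9, 1.0], [1.0, 1.0]])  # fluxes nearly confounded
print(d_criterion(J_a) > d_criterion(J_b))
```

Design A scores higher because its measurements disentangle the two fluxes, which is the quantitative version of "placing the lamp" well.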

  7. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  8. Factorial design optimization of experimental variables in preconcentration of carbamates pesticides in water samples using solid phase extraction and liquid chromatography-electrospray-mass spectrometry determination.

    PubMed

    Latrous El Atrache, Latifa; Ben Sghaier, Rafika; Bejaoui Kefi, Bochra; Haldys, Violette; Dachraoui, Mohamed; Tortajada, Jeanine

    2013-12-15

    An experimental design was applied for the optimization of the extraction of carbamate pesticides from surface water samples. Solid-phase extraction (SPE) of the carbamate compounds and their determination by liquid chromatography coupled to an electrospray mass spectrometry detector were considered. A two-level full factorial design 2(k) was used for selecting the variables that affected the extraction procedure. Eluent and sample volumes were statistically the most significant parameters. These significant variables were optimized using a Doehlert matrix. The developed SPE method included 200 mg of C-18 sorbent, 143.5 mL of water sample and 5.5 mL of acetonitrile in the elution step. For validation of the technique, accuracy, precision, detection and quantification limits, linearity, sensitivity and selectivity were evaluated. Extraction recovery percentages of all the carbamates were above 90%, with relative standard deviations (R.S.D.) in the range of 3-11%. The extraction method was selective, and the detection and quantification limits were between 0.1 and 0.5 µg L(-1), and 1 and 3 µg L(-1), respectively.
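Main-effect estimation from a two-level full factorial, as in the screening step above, can be sketched as follows; the recovery responses are made-up illustrative numbers, not the study's data:

```python
from itertools import product

# Coded 2^2 full factorial in eluent volume (A) and sample volume (B),
# with illustrative recovery responses (%) for each run.
runs = list(product([-1, 1], repeat=2))  # (A, B)
y = {(-1, -1): 62.0, (1, -1): 78.0, (-1, 1): 70.0, (1, 1): 94.0}

def effect(factor):
    """Main effect = mean response at +1 minus mean response at -1."""
    hi = [y[r] for r in runs if r[factor] == 1]
    lo = [y[r] for r in runs if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(effect(0), effect(1))  # eluent-volume and sample-volume effects
```

Factors with the largest effects (here both) are the ones carried forward into the Doehlert optimization.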

  9. Experimental design based response surface methodology optimization of ultrasonic assisted adsorption of safaranin O by tin sulfide nanoparticle loaded on activated carbon.

    PubMed

    Roosta, M; Ghaedi, M; Daneshfar, A; Sahraei, R

    2014-03-25

    In this research, the adsorption rate of safranine O (SO) onto tin sulfide nanoparticle loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by different techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions, set as 4 min of sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L(-1) SO, made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g(-1)). A good agreement between experimental and predicted data was observed in this study. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows that the Langmuir model is a good and suitable model for evaluating the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption processes were well described by pseudo-second-order and intraparticle diffusion models.
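A Langmuir fit of the kind used for the equilibrium data can be sketched via the common linearization Ce/qe = Ce/qmax + 1/(KL*qmax); the data below are synthetic, generated from assumed qmax and KL values, not the paper's measurements:

```python
# Synthetic equilibrium data from an assumed Langmuir isotherm:
Ce = [1.0, 3.0, 6.0, 10.0, 15.0]              # mg/L at equilibrium
qmax_true, KL_true = 50.0, 0.8
qe = [qmax_true * KL_true * c / (1 + KL_true * c) for c in Ce]  # mg/g

# Linear least squares on Ce/qe vs Ce: slope = 1/qmax, intercept = 1/(KL*qmax).
x, yv = Ce, [c / q for c, q in zip(Ce, qe)]
n = len(x)
mx, my = sum(x) / n, sum(yv) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, yv)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx

qmax, KL = 1 / slope, slope / intercept
print(round(qmax, 2), round(KL, 2))  # recovers qmax = 50.0, KL = 0.8
```

With real data the fit quality (R^2) across the four candidate isotherms is what singles out the Langmuir model.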

  10. Experimental design based response surface methodology optimization of ultrasonic assisted adsorption of safaranin O by tin sulfide nanoparticle loaded on activated carbon

    NASA Astrophysics Data System (ADS)

    Roosta, M.; Ghaedi, M.; Daneshfar, A.; Sahraei, R.

    2014-03-01

    In this research, the adsorption rate of safranine O (SO) onto tin sulfide nanoparticle loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by different techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions, set as 4 min of sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L-1 SO, made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g-1). A good agreement between experimental and predicted data was observed in this study. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows that the Langmuir model is a good and suitable model for evaluating the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption processes were well described by pseudo-second-order and intraparticle diffusion models.

  11. Optimization of experimental parameters based on the Taguchi robust design for the formation of zinc oxide nanocrystals by solvothermal method

    SciTech Connect

    Yiamsawas, Doungporn; Boonpavanitchakul, Kanittha; Kangwansupamonkon, Wiyong

    2011-05-15

    Research highlights: → Taguchi robust design can be applied to study ZnO nanocrystal growth. → Spherical-like and rod-like ZnO nanocrystals can be obtained from the solvothermal method. → The [NaOH]/[Zn(2+)] ratio plays the most important role in the aspect ratio of the prepared ZnO. -- Abstract: Zinc oxide (ZnO) nanoparticles and nanorods were successfully synthesized by a solvothermal process. Taguchi robust design was applied to study the factors that result in stronger ZnO nanocrystal growth. The factors studied were the molar concentration ratio of sodium hydroxide to zinc acetate, the amount of polymer template and the molecular weight of the polymer template. Transmission electron microscopy and X-ray diffraction were used to analyze the experimental results. The results show that the concentration ratio of sodium hydroxide to zinc acetate has the greatest effect on ZnO nanocrystal growth.
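Taguchi analysis ranks factors via signal-to-noise ratios computed per factor level; a minimal larger-the-better sketch (the aspect-ratio measurements below are illustrative, not from the paper):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better signal-to-noise ratio (dB):
    -10 * log10(mean(1 / y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

# Replicated aspect-ratio measurements at two [NaOH]/[Zn2+] levels
# (illustrative numbers):
low = [1.1, 1.3, 1.2]
high = [4.8, 5.2, 5.0]
print(sn_larger_is_better(low) < sn_larger_is_better(high))  # higher level wins
```

The factor whose level-to-level S/N spread is largest (here the concentration ratio, per the abstract) is the dominant one.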

  12. An experimental design approach for optimizing polycyclic aromatic hydrocarbon analysis in contaminated soil by pyrolyser-gas chromatography-mass spectrometry.

    PubMed

    Buco, S; Moragues, M; Sergent, M; Doumenq, P; Mille, G

    2007-06-01

    Pyrolyser-gas chromatography-mass spectrometry was used to analyze polycyclic aromatic hydrocarbons in contaminated soil without preliminary extraction. Experimental research methodology was used to obtain optimal performance of the system. After determination of the main factors (desorption time, Curie point temperature, carrier gas flow), modeling was done using a Box-Behnken matrix. Study of the response surface led to factor values that optimize the experimental response and achieve better chromatographic results.
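The Box-Behnken matrix used here has a simple combinatorial structure; a generic coded construction for three factors (such as desorption time, Curie point temperature and carrier gas flow), with an assumed three center points:

```python
from itertools import combinations, product

def box_behnken(k=3, n_center=3):
    """Coded Box-Behnken design: +/-1 combinations on each pair of
    factors with the remaining factors held at 0, plus center points."""
    runs = []
    for pair in combinations(range(k), 2):
        for levels in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[pair[0]], pt[pair[1]] = levels
            runs.append(pt)
    runs += [[0] * k for _ in range(n_center)]
    return runs

d = box_behnken()
print(len(d))  # 3 pairs x 4 sign combinations + 3 centers = 15 runs
```

Unlike a central composite design, no run sits at an extreme corner, which is convenient when corner settings are physically risky.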

  13. Optimization of ultrasound-assisted dispersive solid-phase microextraction based on nanoparticles followed by spectrophotometry for the simultaneous determination of dyes using experimental design.

    PubMed

    Asfaram, Arash; Ghaedi, Mehrorang; Goudarzi, Alireza

    2016-09-01

    A simple, low cost and ultrasensitive method for the simultaneous preconcentration and determination of trace amount of auramine-O and malachite green in aqueous media following accumulation on novel and lower toxicity nanomaterials by ultrasound-assisted dispersive solid phase micro-extraction (UA-DSPME) procedure combined with spectrophotometric has been described. The Mn doped ZnS nanoparticles loaded on activated carbon were characterized by Field emission scanning electron microscopy (FE-SEM), particle size distribution, X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR) analyses and subsequently were used as green and efficient material for dyes accumulation. Contribution of experimental variables such as ultrasonic time, ultrasonic temperature, adsorbent mass, vortex time, ionic strength, pH and elution volume were optimized through experimental design, and while the preconcentrated analytes were efficiently eluted by acetone. Preliminary Plackett-Burman design was applied for selection of most significant factors and giving useful information about their main and interaction part of significant variables like ultrasonic time, adsorbent mass, elution volume and pH were obtained by central composite design combined with response surface analysis and optimum experimental conditions was set at pH of 8.0, 1.2mg of adsorbent, 150μL eluent and 3.7min sonication. Under optimized conditions, the average recoveries (five replicates) for two dyes (spiked at 500.0ngmL(-1)) changes in the range of 92.80-97.70% with acceptable RSD% less than 4.0% over a linear range of 3.0-5000.0ngmL(-1) for the AO and MG in water samples with regression coefficients (R(2)) of 0.9975 and 0.9977, respectively. Acceptable limits of detection of 0.91 and 0.61ngmL(-1) for AO and MG, respectively and high accuracy and repeatability are unique advantages of present method to improve the figures of merit for their accurate determination at trace level in complicated

  14. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design

    PubMed Central

    YANG, YU; BAI, WENKUN; CHEN, YINI; LIN, YANDUAN; HU, BING

    2015-01-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximize the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm2; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3)(6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained.

  15. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design.

    PubMed

    Yang, Y U; Bai, Wenkun; Chen, Yini; Lin, Yanduan; Hu, Bing

    2015-11-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximize the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm(2); frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3)(6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained.

  16. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing edge flap position to achieve a design lift coefficient for a three-element airfoil.
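The surrogate idea can be sketched in one dimension: fit a cheap response surface to a handful of observations, then optimize the surrogate instead of rerunning the experiment. The flap positions and lift values below are illustrative assumptions, not the wind-tunnel data:

```python
import numpy as np

# A few (assumed) observations of lift coefficient vs. flap position:
flap = np.array([0.0, 2.0, 4.0, 6.0, 8.0])        # deg
lift = np.array([1.10, 1.28, 1.38, 1.40, 1.34])   # illustrative CL values

# Quadratic response-surface surrogate, then optimize it on a dense grid.
coeffs = np.polyfit(flap, lift, 2)
grid = np.linspace(0.0, 8.0, 801)
pred = np.polyval(coeffs, grid)
opt = float(grid[np.argmax(pred)])
print(round(opt, 2))  # surrogate-optimal flap position (deg)
```

The validation step in the paper's framework would then bound how far this surrogate optimum can sit from the true experimental optimum.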

  17. Pathway Design, Engineering, and Optimization.

    PubMed

    Garcia-Ruiz, Eva; HamediRad, Mohammad; Zhao, Huimin

    2016-09-16

    The microbial metabolic versatility found in nature has inspired scientists to create microorganisms capable of producing value-added compounds. Many endeavors have been made to transfer and/or combine pathways, existing or even engineered enzymes with new function to tractable microorganisms to generate new metabolic routes for drug, biofuel, and specialty chemical production. However, the success of these pathways can be impeded by different complications from an inherent failure of the pathway to cell perturbations. Pursuing ways to overcome these shortcomings, a wide variety of strategies have been developed. This chapter will review the computational algorithms and experimental tools used to design efficient metabolic routes, and construct and optimize biochemical pathways to produce chemicals of high interest.

  18. Nanoparticle-Laden Contact Lens for Controlled Ocular Delivery of Prednisolone: Formulation Optimization Using Statistical Experimental Design.

    PubMed

    ElShaer, Amr; Mustafa, Shelan; Kasar, Mohamad; Thapa, Sapana; Ghatora, Baljit; Alany, Raid G

    2016-04-20

    The human eye is one of the most accessible organs in the body; nonetheless, its physiology and associated precorneal factors such as nasolacrimal drainage, blinking, tear film, tear turnover, and induced lacrimation significantly decrease the residence time of any foreign substance, including pharmaceutical dosage forms. Soft contact lenses are promising delivery devices that can sustain the drug release and prolong residence time by acting as a geometric barrier to drug diffusion to tear fluid. This study investigates experimental parameters such as the composition of polymer mixtures, the stabilizer, and the amount of active pharmaceutical ingredient in the preparation of a polymeric drug delivery system for the topical ocular administration of prednisolone. To achieve this goal, prednisolone-loaded poly(lactic-co-glycolic acid) (PLGA) nanoparticles were prepared by the single emulsion solvent evaporation method. Prednisolone was quantified using a validated high performance liquid chromatography (HPLC) method. Nanoparticle size was mostly affected by the amount of co-polymer (PLGA) used, whereas drug load was mostly affected by the amount of prednisolone (API) used. Longer homogenization time along with a higher amount of API yielded the smallest nanoparticles. The nanoparticles prepared had an average particle size of 347.1 ± 11.9 nm with a polydispersity index of 0.081. The nanoparticles were then incorporated into the contact lens mixture before lens preparation. Clear and transparent contact lenses were successfully prepared. When the nanoparticle (NP)-loaded contact lenses were compared with control (NP-free) contact lenses, hydration decreased by 2% (31.2% ± 1.25% hydration for the 0.2 g NP-loaded lenses) and light transmission by 8% (94.5% for NP-free lenses vs. 86.23% for the 0.2 g NP-incorporated lenses). The wettability of the contact lenses remained within the desired value (<90°) even upon incorporation of the NP. NP alone and

  20. Optimized solar module design

    NASA Technical Reports Server (NTRS)

    Santala, T.; Sabol, R.; Carbajal, B. G.

    1978-01-01

    The minimum cost per unit of power output from flat plate solar modules can most likely be achieved through efficient packaging of higher efficiency solar cells. This paper outlines a module optimization method that is broadly applicable, and illustrates the potential results achievable with a specific high efficiency tandem junction (TJ) cell. A mathematical model is used to assess the impact of various factors influencing the encapsulated cell and packing efficiency. The optimization of the packing efficiency is demonstrated. The effect of encapsulated cell and packing efficiency on the module add-on cost is shown in nomograph form.

  1. True Experimental Design.

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    1991-01-01

    This poem, with stanzas in limerick form, refers humorously to the many threats to validity posed by problems in research design, including problems of sample selection, data collection, and data analysis. (SLD)

  2. Design optimization of transonic airfoils

    NASA Technical Reports Server (NTRS)

    Joh, C.-Y.; Grossman, B.; Haftka, R. T.

    1991-01-01

    Numerical optimization procedures were considered for the design of airfoils in transonic flow based on the transonic small disturbance (TSD) and Euler equations. A sequential approximation optimization technique was implemented with an accurate approximation of the wave drag based on Nixon's coordinate straining approach. A modification of the Euler surface boundary conditions was implemented in order to compute design sensitivities efficiently without remeshing the grid. Two effective design procedures producing converged designs in approximately 10 global iterations were developed: interchanging the roles of the objective function and constraint, and direct lift maximization with move limits fixed as absolute values of the design variables.

  3. Optimizing exchanger design early

    SciTech Connect

    Lacunza, M.; Vaschetti, G.; Campana, H.

    1987-08-01

    It is not practical for process engineers and designers to make a rigorous economic evaluation of each component of a process, owing to the loss of time and money. It is, however, very helpful to have a method for quick design evaluation of heat exchangers, considering their important contribution to the total fixed investment in a process plant. This article is devoted to that subject, and the authors present a method that has been proven in several design cases. Linking rigorous design procedures with a quick cost-estimation method provides a good technique for obtaining the right heat exchanger. The cost will be appropriate, sometimes not the lowest because of design restrictions, but a good approach to the optimum at an early process design stage. The authors show the influence of the design variables of a shell-and-tube heat exchanger on capital investment, taking into account the general limiting factors of the process, such as thermodynamics, operability, and corrosion, and/or the mechanical design of the calculated unit. The latter is a special consideration for countries with no access to industrial technology or with difficulties in obtaining certain construction materials or equipment.

  4. Winglet design using multidisciplinary design optimization techniques

    NASA Astrophysics Data System (ADS)

    Elham, Ali; van Tooren, Michel J. L.

    2014-10-01

    A quasi-three-dimensional aerodynamic solver is integrated with a semi-analytical structural weight estimation method inside a multidisciplinary design optimization framework to design and optimize a winglet for a passenger aircraft. The winglet is optimized for minimum drag and minimum structural weight. The Pareto front between these two objective functions is found by applying a genetic algorithm. The aircraft minimum take-off weight and the aircraft minimum direct operating cost are used to select the best winglets among those on the Pareto front.
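
    The two-objective trade-off described here can be sketched in a few lines: given candidate designs scored on two minimization objectives (the numbers below are invented, not the paper's drag/weight data), keep only the non-dominated set.

```python
# Illustrative sketch (invented numbers): extract the Pareto front of
# candidate designs evaluated on two minimization objectives,
# e.g. drag count and structural weight.
def pareto_front(points):
    """Return the non-dominated subset, minimizing both objectives."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

designs = [(92.0, 410.0), (95.5, 380.0), (90.2, 455.0), (97.0, 500.0),
           (93.1, 395.0), (99.0, 370.0)]
print(pareto_front(designs))  # (97.0, 500.0) is dominated and drops out
```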

  5. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
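
    The "survival of the fittest" loop described here can be sketched generically. This toy is not the authors' tool: the single-variable fitness function and all parameters are invented to show the selection/crossover/mutation cycle only.

```python
# Minimal illustrative genetic algorithm: the fittest half survives each
# generation; children are produced by blend crossover plus Gaussian mutation.
import random

random.seed(1)

def fitness(x):
    # Toy objective standing in for the multi-objective habitat-wall score.
    return -(x - 0.7) ** 2

def evolve(pop_size=30, generations=60):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # fittest half survives
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = 0.5 * (a + b)                 # crossover: blend parents
            child += random.gauss(0.0, 0.02)      # mutation
            children.append(min(1.0, max(0.0, child)))
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near the optimum at x = 0.7
```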

  6. Computational design and optimization of energy materials

    NASA Astrophysics Data System (ADS)

    Chan, Maria

    The use of density functional theory (DFT) to understand and improve energy materials for diverse applications - including energy storage, thermal management, catalysis, and photovoltaics - is widespread. The further step of using high-throughput DFT calculations to design materials has led to an acceleration in materials discovery and development. Due to various limitations in DFT, including accuracy and computational cost, however, it is important to leverage effective models and, in some cases, experimental information to aid the design process. In this talk, I will discuss efforts in the design and optimization of energy materials using a combination of effective models, DFT, machine learning, and experimental information.

  7. Determination of opiates in whole blood and vitreous humor: a study of the matrix effect and an experimental design to optimize conditions for the enzymatic hydrolysis of glucuronides.

    PubMed

    Sanches, Livia Rentas; Seulin, Saskia Carolina; Leyton, Vilma; Paranhos, Beatriz Aparecida Passos Bismara; Pasqualucci, Carlos Augusto; Muñoz, Daniel Romero; Osselton, Michael David; Yonamine, Mauricio

    2012-04-01

    Undoubtedly, whole blood and vitreous humor have been biological samples of great importance in forensic toxicology. The determination of opiates and their metabolites has been essential for better interpretation of toxicological findings. This report describes the application of experimental design and response surface methodology to optimize conditions for enzymatic hydrolysis of morphine-3-glucuronide and morphine-6-glucuronide. The analytes (free morphine, 6-acetylmorphine and codeine) were extracted from the samples using solid-phase extraction on mixed-mode cartridges, followed by derivatization to their trimethylsilyl derivatives. The extracts were analysed by gas chromatography-mass spectrometry with electron ionization and full scan mode. The method was validated for both specimens (whole blood and vitreous humor). A significant matrix effect was found by applying the F-test. Different recovery values were also found (82% on average for whole blood and 100% on average for vitreous humor). The calibration curves were linear for all analytes in the concentration range of 10-1,500 ng/mL. The limits of detection ranged from 2.0 to 5.0 ng/mL. The method was applied to a case in which a victim presented with a previous history of opiate use.

  8. Multidisciplinary design optimization using response surface analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1992-01-01

    Aerospace conceptual vehicle design is a complex process which involves multidisciplinary studies of configuration and technology options considering many parameters at many values. NASA Langley's Vehicle Analysis Branch (VAB) has detailed computerized analysis capabilities in most of the key disciplines required by advanced vehicle design. Given a configuration, the capability exists to quickly determine its performance and lifecycle cost. The next step in vehicle design is to determine the settings of the design parameters that optimize the performance characteristics. The typical approach to design optimization is experience-based, trial-and-error variation of many parameters one at a time, where possible combinations usually number in the thousands. This approach can lead either to a very long and expensive design process or to premature termination of the design process due to budget and/or schedule pressures. Furthermore, a one-variable-at-a-time approach cannot account for the interactions that occur among parts of systems and among disciplines. As a result, the vehicle design may be far from optimal. Advanced multidisciplinary design optimization (MDO) methods are needed to direct the search in an efficient and intelligent manner in order to drastically reduce the number of candidate designs to be evaluated. The payoffs in terms of enhanced performance and reduced cost are significant. A literature review yields two such advanced MDO methods used in aerospace design optimization: Taguchi methods and response surface methods. Taguchi methods provide a systematic and efficient approach to design optimization for performance and cost. The response surface method (RSM), however, leads to a better, more accurate exploration of the parameter space and to estimated optimum conditions with a small expenditure on experimental data. These two methods are described.
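
    The response-surface idea contrasted here with one-at-a-time search can be illustrated minimally: fit a second-order model to a handful of runs, then locate its stationary point analytically. The data below are synthetic (an exact quadratic), not from the VAB studies.

```python
# Hedged sketch of the response-surface method in one design variable:
# least-squares fit of a quadratic, then the analytic optimum.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # design-variable settings
y = np.array([5.0, 2.0, 1.0, 2.0, 5.0])   # measured responses (synthetic)

a, b, c = np.polyfit(x, y, 2)              # y ~ a*x^2 + b*x + c
x_opt = -b / (2.0 * a)                     # stationary point of the surface
print(f"fitted model: {a:.2f} x^2 + {b:.2f} x + {c:.2f}, optimum at x = {x_opt:.2f}")
```

With more variables the same idea applies: a full second-order polynomial is fitted to a designed set of runs, and the stationary point of the fitted surface estimates the optimum settings.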

  9. Experimental Design: Review and Comment.

    DTIC Science & Technology

    1984-02-01

    and early work in the subject was done by Wald (1943), Hotelling (1944), and Elfving (1952). The major contributions to the area, however, were made by...Kiefer (1958, 1959) and Kiefer and Wolfowitz (1959, 1960), who synthesized and greatly extended the previous work. Although the ideas of optimal...design theory is the general equivalence theorem (Kiefer and Wolfowitz 1960), which links D- and G-optimality. The theorem is phrased in terms of
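
    As a toy illustration of the D-optimality this record mentions (not taken from the cited works): designs for a simple quadratic regression on [-1, 1] can be compared by the determinant of the information matrix X'X, and the classical D-optimal support is {-1, 0, 1}.

```python
# Sketch of the D-optimality criterion, assuming the simple model
# y = b0 + b1*x + b2*x^2 on [-1, 1]; the comparison design is illustrative.
import numpy as np

def info_det(points):
    """det(X'X) for the quadratic-regression design matrix at the given points."""
    X = np.vander(np.asarray(points, dtype=float), 3, increasing=True)
    return np.linalg.det(X.T @ X)

# The support {-1, 0, 1} is the classical D-optimal design for this model;
# a design squeezed into the interior is markedly less informative.
print(info_det([-1.0, 0.0, 1.0]))
print(info_det([-0.5, 0.0, 0.5]))
```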

  10. Advanced transport design using multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Barnum, Jennifer; Bathras, Curt; Beene, Kirk; Bush, Michael; Kaupin, Glenn; Lowe, Steve; Sobieski, Ian; Tingen, Kelly; Wells, Douglas

    1991-01-01

    This paper describes the results of the first implementation of multidisciplinary design optimization (MDO) techniques by undergraduates in a design course. The objective of the work was to design a civilian transport aircraft of the Boeing 777 class. The first half of the two-semester design course consisted of the application of traditional sizing methods and techniques to form a baseline aircraft. MDO techniques were then applied to this baseline design. This paper describes the evolution of the design with special emphasis on the application of MDO techniques, and presents the results of four iterations through the design space. Minimization of take-off gross weight was the goal of the optimization process. The resultant aircraft derived from the MDO procedure weighed approximately 13,382 lbs (2.57 percent) less than the baseline aircraft.

  11. Design optimization of space structures

    NASA Astrophysics Data System (ADS)

    Felippa, Carlos

    1991-11-01

    The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design domain under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through a homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase, the method was implemented as the combination of two computer programs: a finite element module and an optimization driver. In the second phase, the focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be specially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.

  12. Simulation design for microalgal protein optimization

    PubMed Central

    Imamoglu, Esra

    2015-01-01

    A method for designing the operating parameters (surface light intensity, operating temperature and agitation rate) was proposed for microalgal protein production. Furthermore, a quadratic model was established and validated (R² > 0.90) with experimental data. It was recorded that temperature and agitation rate were slightly interdependent. The microalgal protein performance could be estimated using the simulated experimental setup and procedure developed in this study. The results also show a holistic approach that opens a new avenue for simulation design in microalgal protein optimization. PMID:26418695

  13. Simple and Sensitive UPLC-MS/MS Method for High-Throughput Analysis of Ibrutinib in Rat Plasma: Optimization by Box-Behnken Experimental Design.

    PubMed

    2016-04-07

    Ibrutinib was the first Bruton's tyrosine kinase inhibitor approved by the U.S. Food and Drug Administration (FDA) for the treatment of mantle cell lymphoma, chronic lymphocytic leukemia, and Waldenström macroglobulinemia. The aim of this study was to develop a UPLC-tandem MS method for the high-throughput analysis of ibrutinib in rat plasma samples. The chromatographic conditions were optimized by implementation of the Box-Behnken experimental design. Both ibrutinib and the internal standard (vilazodone; IS) were separated within 2 min using a mobile phase of 0.1% formic acid in acetonitrile and 0.1% formic acid in 10 mM ammonium acetate in a ratio of 80:20, eluted at a flow rate of 0.250 mL/min. A simple protein precipitation method was used for the sample cleanup procedure. The detection was performed in electrospray ionization (ESI) positive mode using multiple reaction monitoring of the ion transitions m/z 441.16 > 84.02 for ibrutinib and m/z 442.17 > 155.02 for the IS. All calibration curves were linear in the concentration range of 0.35 to 400 ng/mL (r² ≥ 0.997) with a lower limit of quantification of only 0.35 ng/mL. All validation parameter results were within the acceptance criteria of international regulatory guidelines. The developed assay was successfully applied in a pharmacokinetic study of a novel ibrutinib self-nanoemulsifying drug-delivery system formulation.
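
    A minimal sketch of how a three-factor Box-Behnken design, of the kind used above to optimize chromatographic conditions, is constructed in coded units. The factor interpretation (e.g. flow rate, organic fraction, buffer concentration) is hypothetical, not the study's exact settings.

```python
# Hedged sketch: runs of a k-factor Box-Behnken design in coded units.
# Each pair of factors is taken to the +/-1 corners of its square while all
# other factors sit at the center (0); center replicates are appended.
from itertools import combinations

def box_behnken(k, n_center=3):
    """Coded (-1/0/+1) runs: edge midpoints of the cube plus center replicates."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(tuple(run))
    runs += [(0,) * k] * n_center
    return runs

design = box_behnken(3)
print(len(design))  # 12 factorial runs + 3 center points = 15
```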

  14. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function, because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to them, indicating that we need not account for interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study, we demonstrate that SED is subject to the principle of diminishing returns: the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all.

  15. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, experimental design is nowadays not as well known and as widely applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To that end, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].

  16. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization; handle multiple design points and trade-offs via insight into the entire design space; can easily perform tasks in parallel; and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting the model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.

  17. Optimization methods for alternative energy system design

    NASA Astrophysics Data System (ADS)

    Reinhardt, Michael Henry

    An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the design of alternative energy systems are discrete (e.g., numbers of photovoltaic modules, thermal panels, layers of glazing in windows), the literature shows that the optimization methods used historically for design utilize continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of the life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, which reduce the overall UA of a 22 ft electric shuttle bus from 488 to 202 Btu/hr-F, can eliminate the need for fossil fuel heating systems when operating in the northeast United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints, such as being environmentally friendly, in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m² deep-bed dryer of 0.4 m depth, a UTC array consisting of five 1.1 m² panels and a photovoltaic array consisting of one 0.25 m² panel produce the most dry coffee per dollar invested in the system.

  18. Parameters optimization using experimental design for headspace solid phase micro-extraction analysis of short-chain chlorinated paraffins in waters under the European water framework directive.

    PubMed

    Gandolfi, F; Malleret, L; Sergent, M; Doumenq, P

    2015-08-07

    The water framework directives (WFD 2000/60/EC and 2013/39/EU) force European countries to monitor the quality of their aquatic environment. Among the priority hazardous substances targeted by the WFD, short-chain chlorinated paraffins C10-C13 (SCCPs) still represent an analytical challenge, because few laboratories are nowadays able to analyze them. Moreover, an annual average quality standard as low as 0.4 μg L⁻¹ was set for SCCPs in surface water. Therefore, to test for compliance, sensitive and reliable methods for the analysis of SCCPs in water are required. The aim of this work was to address this issue by evaluating automated solid phase micro-extraction (SPME) combined on line with gas chromatography-electron capture negative ionization mass spectrometry (GC/ECNI-MS). Fiber polymer, extraction mode, ionic strength, extraction temperature and time were the most significant thermodynamic and kinetic parameters studied. To determine suitable working ranges for the factors, the extraction conditions were first studied using a classical one-factor-at-a-time approach. Then a mixed-level factorial 3×2³ design was performed, in order to identify the most influential parameters and to estimate potential interaction effects between them. The most influential factors, i.e. extraction temperature and duration, were optimized by using a second experimental design, in order to maximize the chromatographic response. At the close of the study, a method involving headspace SPME (HS-SPME) coupled to GC/ECNI-MS is proposed. The optimum extraction conditions were sample temperature 90°C, extraction time 80 min, with the PDMS 100 μm fiber and desorption at 250°C during 2 min. A linear response from 0.2 ng mL⁻¹ to 10 ng mL⁻¹ with r² = 0.99, and limits of detection and quantification of 4 pg mL⁻¹ and 120 pg mL⁻¹ respectively in MilliQ water, were achieved. The method proved to be applicable in different types of waters and show key advantages, such
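
    The mixed-level factorial screen described above can be enumerated directly as a cross-product of factor levels. The factor names and level labels below are illustrative stand-ins, not the paper's exact settings.

```python
# Illustrative sketch: enumerating a 3 x 2 x 2 x 2 mixed-level full factorial,
# of the kind used to screen SPME parameters. Levels are hypothetical.
from itertools import product

fiber = ["PDMS 100um", "PDMS/DVB", "CAR/PDMS"]   # one 3-level factor
mode = ["headspace", "immersion"]                # three 2-level factors
salt = ["no NaCl", "NaCl added"]
temp = ["40C", "90C"]

runs = list(product(fiber, mode, salt, temp))
print(len(runs))  # 3 * 2 * 2 * 2 = 24 runs
```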

  19. Heat Sink Design and Optimization

    DTIC Science & Technology

    2015-12-01

    hot surfaces to cooler ambient air. Typically, the fins are oriented in a way to permit a natural convection air draft to flow upward through...main objective. Heat transfer from the heat sink consists of radiation and convection from both the intra-fin passages and the unshielded... Keywords: natural convection; radiation; design; modeling; optimization.

  20. A Tutorial on Adaptive Design Optimization

    PubMed Central

    Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.

    2013-01-01

    Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275

  1. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in the number of function evaluations required. Thus, the computational effort can be unacceptable when complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of search variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
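
    The search strategy described here can be sketched as a simulated-annealing loop over a small set of Fourier amplitude coefficients. The error function below is a stand-in surface with many local minima (the suboptimal designs mentioned above); a real application would evaluate a finite-element eigen-solver instead:

    ```python
    import math
    import random

    random.seed(1)

    # Stand-in for the eigenfrequency-matching error surface: a
    # Rastrigin-style function of three Fourier amplitude coefficients,
    # with many local minima and a global minimum at (0, 0, 0).
    def design_error(a):
        return sum(x * x - math.cos(4 * math.pi * x) + 1 for x in a)

    a = [random.uniform(-2, 2) for _ in range(3)]   # initial amplitude guess
    e = e0 = design_error(a)
    best, e_best = list(a), e
    T = 2.0
    for _ in range(20000):
        cand = [x + random.gauss(0, 0.1) for x in a]
        ec = design_error(cand)
        # Metropolis rule: accept improvements, and occasionally accept
        # uphill moves so the search can escape local minima.
        if ec < e or random.random() < math.exp(-(ec - e) / T):
            a, e = cand, ec
            if e < e_best:
                best, e_best = list(a), e
        T *= 0.9997                                 # slow geometric cooling

    print(e_best, best)
    ```

    Optimizing three amplitude coefficients instead of dozens of local geometric parameters is exactly the dimensionality reduction the abstract argues for.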

  2. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  3. Design optimization of LiNi0.6Co0.2Mn0.2O2/graphite lithium-ion cells based on simulation and experimental data

    NASA Astrophysics Data System (ADS)

    Appiah, Williams Agyei; Park, Joonam; Song, Seonghyun; Byun, Seoungwoo; Ryou, Myung-Hyun; Lee, Yong Min

    2016-07-01

    LiNi0.6Co0.2Mn0.2O2 cathodes of different thicknesses and porosities are prepared and tested in order to optimize the design of lithium-ion cells. A mathematical model for simulating multiple types of particles with different contact resistances in a single electrode is adopted to study the effects of the different cathode thicknesses and porosities on lithium-ion transport using the nonlinear least squares technique. The model is used to optimize the design of LiNi0.6Co0.2Mn0.2O2/graphite lithium-ion cells by employing it to generate a number of Ragone plots. The cells are optimized for cathode porosity and thickness, while the anode porosity, anode-to-cathode capacity ratio, separator thickness and porosity, and electrolyte salt concentration are held constant. Optimization is performed for discharge times ranging from 10 h to 5 min. Using the Levenberg-Marquardt method as a fitting technique, accounting for multiple particles with different contact resistances, and employing a rate-dependent solid-phase diffusion coefficient results in good agreement between the simulated and experimentally determined discharge curves. The optimized parameters obtained from this study should serve as a guide for the battery industry, as well as for researchers, in determining the optimal cell design for different applications.

  4. Research on optimization-based design

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Parkinson, A. R.; Free, J. C.

    1989-01-01

    Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.

  5. Optimization of the enantioseparation of a diaryl-pyrazole sulfonamide derivative by capillary electrophoresis in a dual CD mode using experimental design.

    PubMed

    Rogez-Florent, Tiphaine; Foulon, Catherine; Six, Perrine; Goossens, Laurence; Danel, Cécile; Goossens, Jean-François

    2014-10-01

    A CE method using dual cationic and neutral cyclodextrins (CD) was optimized for the enantiomeric separation of a compound presenting a diaryl sulfonamide group. Preliminary studies were performed to select the optimal CDs and the pH of the BGE. Two CDs (amino-β-CD and β-CD) were selected to separate the enantiomers in a 67 mM phosphate buffer at pH 7.4. However, the repeatability of the analyses obtained on a bare fused-silica capillary was not acceptable, owing to the adsorption of the amino-β-CD onto the capillary. To prevent this, a dynamic coating of the capillary was used, employing five layers of ionic polymers (poly(diallyldimethylammonium) chloride (PDADMAC) and poly(sodium 4-styrenesulfonate)). The efficiency of the coating was assessed by measuring the EOF stability. Repeatability of the injections was obtained when an intermediate coating with PDADMAC was performed between each run. Second, this enantioseparation method was optimized using a central composite circumscribed design including three factors: the amino-β-CD and β-CD concentrations and the percentage of methanol. Under the optimal conditions (i.e. 16.6 mM of amino-β-CD, 2.6 mM of β-CD, 0% MeOH in 67 mM phosphate buffer (pH 7.4) as BGE, cathodic injection 0.5 psi, 5 s, separation voltage 15 kV and a temperature of 15°C), complete enantioresolution of the analyte was obtained. It is worth mentioning that the design of experiments (DOE) protocol employed showed a significant interaction between the CDs, highlighting the utility of DOE in method development. Finally, small variations in the ionic-polymer concentrations did not significantly influence the EOF, confirming the robustness of the coating method.
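
    A central composite circumscribed (CCC) design like the one used here is straightforward to generate in coded units. This sketch assumes the rotatable axial distance alpha = (2^k)^(1/4) and an illustrative number of center replicates; mapping coded levels back to the two CD concentrations and %MeOH is up to the analyst:

    ```python
    from itertools import product

    def ccc_design(k, n_center=1):
        """Central composite circumscribed (CCC) design in coded units.

        Factorial corners at +/-1, axial (star) points at +/-alpha with
        the rotatable choice alpha = (2**k) ** 0.25, plus centre replicates.
        """
        alpha = (2 ** k) ** 0.25
        corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
        axial = []
        for i in range(k):
            for s in (-alpha, alpha):
                pt = [0.0] * k
                pt[i] = s
                axial.append(pt)
        centers = [[0.0] * k for _ in range(n_center)]
        return corners + axial + centers

    # Three factors, e.g. two CD concentrations and %MeOH (coded units).
    runs = ccc_design(3, n_center=3)
    print(len(runs))   # 8 corner + 6 axial + 3 centre = 17 runs
    ```

    The axial points beyond +/-1 are what distinguish the circumscribed variant from a face-centered (CCF) design and allow curvature to be estimated.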

  6. An experimental study of an ultra-mobile vehicle for off-road transportation. Appendix 2. Dissertation. Kinematic optimal design of a six-legged walking machine

    NASA Astrophysics Data System (ADS)

    McGhee, R. B.; Waldron, K. J.; Song, S. M.

    1985-05-01

    Chapter 2 is a review of previous work in the following two areas: The mechanical structure of walking machines and walking gaits. In Chapter 3, the mathematical and graphical background for gait analysis is presented. The gait selection problem in different types of terrain is also discussed. Detailed studies of the major gaits used in level walking are presented. In Chapter 4, gaits for walking on gradients and methods to improve stability are studied. Also, gaits which may be used in crossing three major obstacle types are studied. In Chapter 5, the design of leg geometries based on four-bar linkages is discussed. Major techniques to optimize leg linkages for optimal walking volume are introduced. In Chapter 6, the design of a different leg geometry, based on a pantograph mechanism, is presented. A theoretical background of the motion characteristics of pantographs is given first. In Chapter 7, some other related items of the leg design are discussed. One of these is the foot-ankle system. A few conceptual passive foot-ankle systems are introduced. The second is a numerical method to find the shortest crank for a four-finitely-separated-position-synthesis problem. The shortest crank usually results in a crank rocker, which is the most desirable linkage type in many applications. Finally, in Chapter 8, the research work presented in this dissertation is evaluated and the future development of walking machines is discussed.

  7. Computational Optimization of a Natural Laminar Flow Experimental Wing Glove

    NASA Technical Reports Server (NTRS)

    Hartshorn, Fletcher

    2012-01-01

    Computational optimization of a natural laminar flow experimental wing glove mounted on a business jet is presented and discussed. The process of designing a laminar flow wing glove starts with creating an optimized two-dimensional airfoil and then lofting it into a three-dimensional wing glove section. The airfoil design process does not consider three-dimensional flow effects such as cross flow due to wing sweep, or engine and body interference. Therefore, once an initial glove geometry is created from the airfoil, the three-dimensional wing glove has to be optimized to ensure that the desired extent of laminar flow is maintained over the entire glove. TRANAIR, a non-linear full potential solver with a coupled boundary layer code, was used as the main tool in the design and optimization process of the three-dimensional glove shape. The optimization process uses the Class-Shape-Transformation method to perturb the geometry, with geometric constraints that allow for a 2-in clearance from the main wing. The three-dimensional glove shape was optimized with the objective of having a spanwise uniform pressure distribution that matches the optimized two-dimensional pressure distribution as closely as possible. Results show that, with the appropriate inputs, the optimizer is able to match the two-dimensional pressure distribution practically across the entire span of the wing glove. This gives the experiment a much higher probability of achieving a large extent of natural laminar flow in flight.

  8. Applications of Experimental Design to the Optimization of Microextraction Sample Preparation Parameters for the Analysis of Pesticide Residues in Fruits and Vegetables.

    PubMed

    Abdulra'uf, Lukman Bola; Sirhan, Ala Yahya; Tan, Guan Huat

    2015-01-01

    Sample preparation has been identified as the most important step in analytical chemistry and has been tagged as the bottleneck of analytical methodology. The current trend is aimed at developing cost-effective, miniaturized, simplified, and environmentally friendly sample preparation techniques. The fundamentals and applications of multivariate statistical techniques for the optimization of microextraction sample preparation and chromatographic analysis of pesticide residues are described in this review. The use of Plackett-Burman, Doehlert matrix, and Box-Behnken designs is discussed. As observed in this review, a number of analytical chemists have combined chemometrics and microextraction techniques, which has helped to streamline sample preparation and improve sample throughput.
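
    As a sketch of the first of these screening designs, the classical 12-run Plackett-Burman array can be built from its standard cyclic generator; the assignment of real factors to its columns is up to the analyst:

    ```python
    def plackett_burman_12():
        """12-run Plackett-Burman screening design (up to 11 two-level factors).

        Built from the classical N=12 cyclic generator: each of the first 11
        rows is a cyclic shift of the generator; the last row is all -1.
        """
        gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
        rows = [gen[-i:] + gen[:-i] for i in range(11)]
        rows.append([-1] * 11)
        return rows

    design = plackett_burman_12()
    print(len(design), len(design[0]))   # 12 rows x 11 columns
    ```

    Every column is balanced (six +1, six -1) and every pair of columns is orthogonal, which is why main effects of up to 11 factors can be screened in only 12 runs.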

  9. Development of a novel pH sensor based upon Janus Green B immobilized on triacetyl cellulose membrane: Experimental design and optimization.

    PubMed

    Chamkouri, Narges; Niazi, Ali; Zare-Shahabadi, Vali

    2016-03-05

    A novel pH optical sensor was prepared by immobilizing an azo dye, Janus Green B, on a triacetylcellulose membrane. The conditions of the dye solution used in the immobilization step, including dye concentration, pH, and immobilization time, were considered and optimized using the Box-Behnken design. The proposed sensor showed good behavior and precision (RSD<5%) in the pH range of 2.0-10.0. Advantages of this optical sensor include on-line applicability, no leakage, long-term stability (more than 6 months), fast response time (less than 1 min), and high selectivity and sensitivity, as well as good reversibility and reproducibility.

  10. Development of a novel pH sensor based upon Janus Green B immobilized on triacetyl cellulose membrane: Experimental design and optimization

    NASA Astrophysics Data System (ADS)

    Chamkouri, Narges; Niazi, Ali; Zare-Shahabadi, Vali

    2016-03-01

    A novel pH optical sensor was prepared by immobilizing an azo dye, Janus Green B, on a triacetylcellulose membrane. The conditions of the dye solution used in the immobilization step, including dye concentration, pH, and immobilization time, were considered and optimized using the Box-Behnken design. The proposed sensor showed good behavior and precision (RSD < 5%) in the pH range of 2.0-10.0. Advantages of this optical sensor include on-line applicability, no leakage, long-term stability (more than 6 months), fast response time (less than 1 min), and high selectivity and sensitivity, as well as good reversibility and reproducibility.

  11. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  12. Optimization of headspace solid-phase microextraction by means of an experimental design for the determination of methyl tert.-butyl ether in water by gas chromatography-flame ionization detection.

    PubMed

    Dron, Julien; Garcia, Rosa; Millán, Esmeralda

    2002-07-19

    A procedure for the determination of methyl tert.-butyl ether (MTBE) in water by headspace solid-phase microextraction (HS-SPME) has been developed. The analysis was carried out by gas chromatography with flame ionization detection. The extraction procedure, using a 65-microm poly(dimethylsiloxane)-divinylbenzene SPME fiber, was optimized following an experimental design approach. A fractional factorial design was applied for screening and a central composite design for optimizing the significant variables. Extraction temperature and sodium chloride concentration were significant variables, and 20 degrees C and 300 g/l, respectively, were chosen for the best extraction response. Under these conditions, an extraction time of 5 min was sufficient to extract MTBE. The linear calibration range for MTBE was 5-500 microg/l and the detection limit 0.45 microg/l. The relative standard deviation, for seven replicates of 250 microg/l MTBE in water, was 6.3%.
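
    A fractional factorial screening plan of the kind used here can be generated by aliasing the last factor to the product of the others (defining relation I = ABCD). The 2^(4-1) example below is illustrative; which four SPME variables are assigned to the columns is the analyst's choice:

    ```python
    from itertools import product

    def half_fraction(k):
        """2**(k-1) fractional factorial: full factorial in the first k-1
        factors, with the last factor aliased to their product (I = AB...K)."""
        runs = []
        for base in product([-1, 1], repeat=k - 1):
            last = 1
            for x in base:
                last *= x
            runs.append(list(base) + [last])
        return runs

    # Half-fraction screening plan for 4 two-level factors: 2^(4-1) runs.
    plan = half_fraction(4)
    print(len(plan))   # 8 runs instead of 16
    ```

    The price of halving the run count is aliasing: each main effect is confounded with a three-factor interaction, which is usually acceptable at the screening stage.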

  13. Optimal design of airlift fermenters

    SciTech Connect

    Moresi, M.

    1981-11-01

    In this article, a model of a draft-tube airlift fermenter (ALF), based on perfect back-mixing of the liquid and plug flow for the gas bubbles, has been developed to optimize the design and operation of fermentation units at different working capacities. With reference to a whey fermentation by yeasts, the economic optimization has led to a slim ALF with an aspect ratio of about 15. As far as the power expended per unit of oxygen transferred is concerned, the responses of the model are highly influenced by kLa. However, a safer use of the model has been suggested in order to assess the feasibility of the fermentation process under study. (Refs. 39).

  14. Experimental design for the optimization and robustness testing of a liquid chromatography tandem mass spectrometry method for the trace analysis of the potentially genotoxic 1,3-diisopropylurea.

    PubMed

    Székely, György; Henriques, Bruno; Gil, Marco; Alvarez, Carlos

    2014-09-01

    This paper discusses a design of experiments (DoE) assisted optimization and robustness testing of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the trace analysis of the potentially genotoxic 1,3-diisopropylurea (IPU) impurity in mometasone furoate glucocorticosteroid. Compared to conventional trial-and-error method development, DoE is a cost-effective and systematic approach to system optimization in which the effects of multiple parameters and parameter interactions on a given response are considered. The LC and MS factors were studied simultaneously: flow (F), gradient (G), injection volume (V(inj)), cone voltage (E(con)), and collision energy (E(col)). The optimization was carried out with respect to four responses: separation of peaks (Sep), peak area (A(p)), length of the analysis (T), and the signal-to-noise ratio (S/N). An optimization central composite face-centered (CCF) DoE was conducted, leading to the early discovery of a carry-over effect, which was further investigated in order to establish the maximum injectable sample load. A second DoE was conducted in order to obtain the optimal LC-MS/MS method. As part of the validation of the obtained method, its robustness was determined by conducting a fractional factorial DoE of resolution III, wherein column temperature and quadrupole resolution were considered as additional factors. The method utilizes a common Phenomenex Gemini NX C-18 HPLC analytical column with electrospray ionization and a triple quadrupole mass detector in multiple reaction monitoring (MRM) mode, resulting in short analyses with a 10-min runtime. The high sensitivity and low limit of quantification (LOQ) were achieved by (1) MRM mode (instead of single ion monitoring) and (2) avoiding the drawbacks of derivatization (incomplete reaction and time-consuming sample preparation).
Quantitatively, the DoE method development strategy resulted in the robust trace analysis of IPU at 1.25 ng/mL absolute concentration

  15. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structures are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first order reliability analysis and second order reliability analysis, followed by simulation technique that
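
    The Monte Carlo step described above amounts to sampling material strength and load from their fitted distributions and counting how often the load exceeds the strength. A minimal sketch, using a normal strength/load model with made-up numbers rather than the Kevlar® 49 data:

    ```python
    import random

    random.seed(42)

    # Minimal Monte Carlo reliability sketch: strength R and load effect S
    # are modeled as independent normal variables; failure occurs when
    # S > R.  The parameters below are illustrative only.
    def failure_probability(mu_R, sd_R, mu_S, sd_S, n=200_000):
        fails = 0
        for _ in range(n):
            R = random.gauss(mu_R, sd_R)   # sampled material strength
            S = random.gauss(mu_S, sd_S)   # sampled load effect
            if S > R:
                fails += 1
        return fails / n

    pf = failure_probability(mu_R=100.0, sd_R=10.0, mu_S=60.0, sd_S=15.0)
    print(pf)   # close to the analytic value of about 0.0133 for these numbers
    ```

    In RBDO, an estimate like `pf` (or an analytic approximation from first- or second-order reliability methods) enters the optimization as a reliability constraint alongside the usual performance constraints.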

  16. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  17. Ultra-high performance liquid chromatographic determination of levofloxacin in human plasma and prostate tissue with use of experimental design optimization procedures.

    PubMed

    Szerkus, O; Jacyna, J; Wiczling, P; Gibas, A; Sieczkowski, M; Siluk, D; Matuszewski, M; Kaliszan, R; Markuszewski, M J

    2016-09-01

    Fluoroquinolones are considered the gold standard for the prevention of bacterial infections after transrectal ultrasound-guided prostate biopsy. However, recent studies reported that fluoroquinolone-resistant bacterial strains are responsible for a gradually increasing number of infections after transrectal prostate biopsy. In daily clinical practice, antibacterial efficacy is evaluated only in vitro, by measuring the reaction of bacteria with an antimicrobial agent in culture media (i.e. calculation of the minimal inhibitory concentration). Such an approach, however, has no relation to the treated tissue characteristics and might be highly misleading. Thus, the objective of this study was to develop, with the use of a Design of Experiments approach, a reliable, specific and sensitive ultra-high performance liquid chromatography-diode array detection method for the quantitative analysis of levofloxacin in plasma and prostate tissue samples obtained from patients undergoing prostate biopsy. Moreover, a correlation study between concentrations observed in plasma samples vs prostatic tissue samples was performed, resulting in a better understanding, evaluation and optimization of the fluoroquinolone-based antimicrobial prophylaxis during transrectal ultrasound-guided prostate biopsy. A Box-Behnken design was employed to optimize the chromatographic conditions of the isocratic elution program in order to obtain the desired retention time, peak symmetry and resolution of the levofloxacin and ciprofloxacin (internal standard) peaks. A fractional factorial design 2(4-1) with four center points was used for screening of the significant factors affecting levofloxacin extraction from the prostatic tissue. Due to the limited number of tissue samples, the prostatic sample preparation procedure was further optimized using a central composite design. The Design of Experiments approach was also utilized for the evaluation of parameter robustness. The method was found linear over the range of 0.030-10μg/mL for human

  18. Experimental Design For Photoresist Characterization

    NASA Astrophysics Data System (ADS)

    Luckock, Larry

    1987-04-01

    In processing a semiconductor product (from discrete devices up to the most complex products produced) we find more photolithographic steps in wafer fabrication than any other kind of process step. Thus, the success of a semiconductor manufacturer hinges on the optimization of its photolithographic processes. Yet we find few companies that have taken the time to properly characterize this critical operation; they are sitting in the "passenger's seat", waiting to see what will come out, hoping that the yields will improve someday. There is no "black magic" involved in setting up a process at its optimum conditions (i.e. minimum sensitivity to all variables at the same time). This paper gives an example of a real-world situation: optimizing a photolithographic process by the use of a properly designed experiment, followed by adequate multidimensional analysis of the data. Basic SPC practices like plotting control charts will not, by themselves, improve yields; the control charts are, however, among the necessary tools used in the determination of the process capability and in the formulation of the problems to be addressed. The example we shall consider is the twofold objective of shifting the process average, while tightening the variance, of polysilicon line widths. This goal was identified from a Pareto analysis of yield-limiting mechanisms, plus inspection of the control charts. A key issue in a characterization of this type of process is the number of interactions between variables; this example rules out two-level full factorial and three-level fractional factorial designs (which cannot detect all of the interactions). We arrive at an experiment with five factors at five levels each. A full factorial design for five factors at five levels would require 3125 wafers. 
    Instead, we will use a design that allows us to run this experiment with only 25 wafers, for a significant reduction in time, materials and manufacturing interruption in order to complete the
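
    A 25-run plan for five factors at five levels can be obtained as a strength-2 orthogonal array over Z5. The construction below is one classical way to build such an array and is not necessarily the exact design used in the paper:

    ```python
    def oa25(n_factors=5):
        """25-run orthogonal array for up to 6 factors at 5 levels (strength 2).

        Rows are indexed by (a, b) in Z5 x Z5; beyond the columns a and b,
        factor j takes the level (a + j*b) mod 5.  Every pair of columns
        then contains each of the 25 level combinations exactly once.
        """
        runs = []
        for a in range(5):
            for b in range(5):
                row = [a, b] + [(a + j * b) % 5 for j in range(1, n_factors - 1)]
                runs.append(row)
        return runs

    design = oa25(5)
    print(len(design))   # 25 runs instead of 5**5 = 3125
    ```

    Because any two columns are jointly balanced, all main effects (and the two-factor projections) remain estimable from just 25 wafers.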

  19. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member sizes and layout of a truss are predicted, given the joint locations and loads.
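
    In their simplest form, optimality-criteria methods for trusses reduce to the fully-stressed design resizing rule A_new = A * |sigma| / sigma_allow. The two-bar sketch below uses illustrative numbers, not the paper's examples, and converges in a single resize because the truss is statically determinate (member forces are fixed by equilibrium):

    ```python
    # Fully-stressed design via the classical optimality-criterion resizing
    # rule, sketched on a statically determinate two-bar truss where
    # stress = F / A.  All numbers are illustrative.

    SIGMA_ALLOW = 250.0e6        # allowable stress, Pa
    forces = [80e3, -120e3]      # member axial forces, N (tension +, compression -)
    areas = [1.0e-3, 1.0e-3]     # starting cross-sectional areas, m^2

    for it in range(10):
        stresses = [f / a for f, a in zip(forces, areas)]
        new = [a * abs(s) / SIGMA_ALLOW for a, s in zip(areas, stresses)]
        if max(abs(n - a) / a for n, a in zip(new, areas)) < 1e-9:
            break
        areas = new

    # For a determinate truss the rule converges in one resize:
    # A_i = |F_i| / sigma_allow.
    print([round(a * 1e6, 2) for a in areas])   # mm^2: [320.0, 480.0]
    ```

    For indeterminate structures the forces redistribute after each resize, so the loop genuinely iterates; the appeal of optimality criteria is that very few such iterations are typically needed.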

  20. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on the animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or changes in variation are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend/working-day cycle, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles which may be affected by husbandry procedures. Other issues to be considered are the consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, the olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including its capabilities for monitoring the biological and physicochemical environment.

  1. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  2. A hydrometallurgical process for the recovery of terbium from fluorescent lamps: Experimental design, optimization of acid leaching process and process analysis.

    PubMed

    Innocenzi, Valentina; Ippolito, Nicolò Maria; De Michelis, Ida; Medici, Franco; Vegliò, Francesco

    2016-12-15

    The recovery of terbium and rare earths from the fluorescent powders of exhausted lamps by acid leaching with hydrochloric acid was the objective of this study. In order to investigate the factors affecting leaching, a series of experiments was performed according to a full factorial plan with four variables at two levels (2(4)). The factors studied were temperature, concentration of acid, pulp density and leaching time. The experimental conditions for terbium dissolution were optimized by statistical analysis. The results showed that temperature and pulp density were significant, with a positive and a negative effect, respectively. The empirical mathematical model deduced from the experimental data demonstrated that the terbium content was completely dissolved under the following conditions: 90 °C, 2 M hydrochloric acid and 5% pulp density; when the pulp density was 15%, an extraction of 83% could be obtained at 90 °C and 5 M hydrochloric acid. Finally, a flow sheet for the recovery of the rare earth elements was proposed. The process was tested and simulated with commercial software for chemical processes. The mass balance of the process was calculated: from 1 ton of initial powder it was possible to obtain around 160 kg of a rare earth concentrate with a purity of 99%. The main rare earth element in the final product was yttrium oxide (86.43%), followed by cerium oxide (4.11%), lanthanum oxide (3.18%), europium oxide (3.08%) and terbium oxide (2.20%). The estimated total recovery of the rare earth elements was around 70% for yttrium and europium and 80% for the other rare earths.
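
    Main effects from a two-level full factorial such as this one are simply differences of averages between the high and low settings of each factor. The response values below come from a toy model (chosen so that temperature helps and pulp density hurts, echoing the reported signs), not from the paper's leaching data:

    ```python
    from itertools import product

    # Main-effect estimation from a 2^4 full factorial plan.
    factors = ["temp", "acid", "pulp", "time"]
    runs = [dict(zip(factors, levels)) for levels in product([-1, 1], repeat=4)]

    def toy_yield(run):
        # Synthetic "truth" for illustration: temperature helps, pulp hurts.
        return 70 + 10 * run["temp"] + 3 * run["acid"] - 8 * run["pulp"] + 1 * run["time"]

    ys = [toy_yield(r) for r in runs]

    effects = {}
    for f in factors:
        hi = [y for r, y in zip(runs, ys) if r[f] == +1]
        lo = [y for r, y in zip(runs, ys) if r[f] == -1]
        effects[f] = sum(hi) / len(hi) - sum(lo) / len(lo)

    print(effects)   # effects: temp +20, acid +6, pulp -16, time +2
    ```

    Because the design is balanced, each effect equals twice the corresponding model coefficient, and the signs immediately identify temperature as beneficial and pulp density as detrimental, as in the study.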

  3. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  4. A novel homocystine-agarose adsorbent for separation and preconcentration of nickel in table salt and baking soda using factorial design optimization of the experimental conditions.

    PubMed

    Hashemi, Payman; Rahmani, Zohreh

    2006-02-28

    Homocystine was, for the first time, chemically linked to a highly cross-linked agarose support (Novarose) to be employed as a chelating adsorbent for the preconcentration and AAS determination of nickel in table salt and baking soda. Nickel is quantitatively adsorbed on a small column packed with 0.25 ml of the adsorbent in a pH range of 5.5-6.5 and simply eluted with 5 ml of a 1 mol l(-1) hydrochloric acid solution. A factorial design was used for optimization of the effects of five different variables on the recovery of nickel. The results indicated that the factors of flow rate and column length, and the interaction between pH and sample volume, are significant. Under the optimized conditions, the column could tolerate salt concentrations up to 0.5 mol l(-1) and sample volumes beyond 500 ml. Matrix ions of Mg(2+) and Ca(2+), at a concentration of 200 mg l(-1), and potentially interfering ions of Cd(2+), Cu(2+), Zn(2+) and Mn(2+), at a concentration of 10 mg l(-1), did not have a significant effect on the analyte's signal. Preconcentration factors up to 100 and a detection limit of 0.49 μg l(-1), corresponding to an enrichment volume of 500 ml, were obtained for the determination of the analyte by flame AAS. Application of the method to the determination of natural and spiked nickel in table salt and baking soda solutions resulted in quantitative recoveries. Direct ETAAS determination of nickel in the same samples was not possible because of the high background observed.

  5. Sequential experimental design based generalised ANOVA

    SciTech Connect

    Chakraborty, Souvik Chowdhury, Rajib

    2016-07-15

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  6. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  7. Optimized IR synchrotron beamline design.

    PubMed

    Moreno, Thierry

    2015-09-01

    Synchrotron infrared beamlines are powerful tools for performing spectroscopy on microscopic length scales, but they require working with large bending-magnet source apertures in order to provide intense photon beams to the experiments. Many infrared beamlines use a single toroidal mirror to focus the source emission, which, for large apertures, generates beams with significant geometrical aberrations resulting from the shape of the source and the beamline optics. In this paper, an optical layout optimized for synchrotron infrared beamlines, which almost entirely removes the geometrical aberrations of the source, is presented and analyzed. This layout is already operational on the IR beamline of the Brazilian synchrotron. An infrared beamline design based on a SOLEIL bending-magnet source is given as an example, which could be useful for future IR beamline improvements at this facility.

  8. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
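The idea of scoring a candidate design by an expected utility can be made concrete in a toy linear-Gaussian setting, where the utility is the posterior precision of the parameter being monitored. This is a generic illustration of Bayesian design, not the paper's SHM-specific performance function; all values are assumptions:

```python
# Toy Bayesian design: observe y = x * theta + noise and pick the design x
# (e.g. a sensor placement/gain) that most sharpens the posterior on theta.
prior_var = 1.0    # prior variance of theta (assumed)
noise_var = 0.25   # measurement noise variance (assumed)

def posterior_variance(x):
    """Conjugate Gaussian update: precisions add for a linear observation."""
    return 1.0 / (1.0 / prior_var + x * x / noise_var)

candidates = [0.2, 0.5, 1.0, 2.0]
best = min(candidates, key=posterior_variance)
print(best, posterior_variance(best))  # the most informative design wins
```

In a realistic SHM problem the utility would fold in detection performance and cost, and the expectation would be taken over uncertain damage states rather than computed in closed form.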

  9. Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort

    PubMed Central

    Jeschek, Markus; Gerngross, Daniel; Panke, Sven

    2016-01-01

    Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore, empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization, thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability of the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways. PMID:27029461

  10. Optimal design of biaxial tensile cruciform specimens

    NASA Astrophysics Data System (ADS)

    Demmerle, S.; Boehler, J. P.

    1993-01-01

    For experimental investigations of the mechanical behaviour of rolled sheet metals under biaxial stress states, flat cruciform specimens are mostly used. By means of empirical methods, different specimen geometries have been proposed in the literature. In order to evaluate the suitability of a specimen design, a mathematically well defined criterion is developed, based on the standard deviations of the stress values in the test section. Applied with the finite element method, the criterion is employed to optimize the shape of biaxial cruciform specimens for isotropic elastic materials. Furthermore, the performance of the resulting optimized specimen design is investigated for off-axes tests on anisotropic materials. To this end, for the first time, an original testing device, consisting of hinged fixtures with knife edges at each arm of the specimen, is applied to the biaxial test. The results indicate the decisive superiority of the optimized specimens for proper testing of isotropic materials, as well as the paramount importance of the proposed off-axes testing technique for biaxial tests on anisotropic materials.

  11. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  12. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
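For a linear model the Fisher information matrix reduces to X'X (up to the noise variance), so the D- and E-criteria compared in this abstract can be evaluated directly. A sketch for a straight-line model y = theta0 + theta1*t with two candidate sets of sampling times; the times themselves are illustrative, not from the paper:

```python
import numpy as np

def fim(times):
    """Fisher information matrix X'X for y = theta0 + theta1 * t (unit noise)."""
    X = np.column_stack([np.ones(len(times)), np.asarray(times, float)])
    return X.T @ X

spread = [0.0, 5.0, 10.0]     # well-separated sampling times
clustered = [4.0, 5.0, 6.0]   # nearly coincident sampling times

# D-criterion: maximize det(FIM); E-criterion: maximize the smallest eigenvalue.
d_spread, d_clustered = np.linalg.det(fim(spread)), np.linalg.det(fim(clustered))
e_spread = np.linalg.eigvalsh(fim(spread)).min()
e_clustered = np.linalg.eigvalsh(fim(clustered)).min()

print(d_spread, d_clustered)   # spread times win under D (about 150 vs 6)
print(e_spread > e_clustered)  # ...and under E as well
```

Larger information (by either criterion) translates into smaller asymptotic standard errors for the parameter estimates, which is exactly the comparison the paper carries out with its SE-optimal criterion.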

  13. Program Aids Analysis And Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1994-01-01

    NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) is a computer program developed to combine NETS (MSC-21588), a neural-network application program, with CONMIN (Constrained Function Minimization, ARC-10836), an optimization program. It enables the user to reach a nearly optimal design. The design is then used as a starting point in the normal optimization process, possibly enabling the user to converge to the optimal solution in significantly fewer iterations. NETS/PROSSS is written in the C language and FORTRAN 77.

  14. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
A framework for performing reliability based design optimization under epistemic uncertainty is also developed.

  15. WE-AB-BRB-01: Development of a Probe-Format Graphite Calorimeter for Practical Clinical Dosimetry: Numerical Design Optimization, Prototyping, and Experimental Proof-Of-Concept

    SciTech Connect

    Renaud, J; Seuntjens, J; Sarfehnia, A

    2015-06-15

    Purpose: In this work, the feasibility of performing absolute dose to water measurements using a constant temperature graphite probe calorimeter (GPC) in a clinical environment is established. Methods: A numerical design optimization study was conducted by simulating the heat transfer in the GPC resulting from irradiation using a finite element method software package. The choice of device shape, dimensions, and materials was made to minimize the heat loss in the sensitive volume of the GPC. The resulting design, which incorporates a novel aerogel-based thermal insulator and 15 temperature-sensitive resistors capable of both Joule heating and measuring temperature, was constructed in house. A software-based process controller was developed to stabilize the temperatures of the GPC's constituent graphite components to within a few tens of µK. This control system enables the GPC to operate in either the quasi-adiabatic or the isothermal mode, two well-known and independent calorimetry techniques. Absorbed dose to water measurements were made using these two methods under standard conditions in a 6 MV 1000 MU/min photon beam and subsequently compared against TG-51 derived values. Results: Compared to an expected dose to water of 76.9 cGy/100 MU, the average GPC-measured doses were 76.5 ± 0.5 and 76.9 ± 0.5 cGy/100 MU for the adiabatic and isothermal modes, respectively. The Monte Carlo calculated graphite to water dose conversion was 1.013, and the adiabatic heat loss correction was 1.003. With an overall uncertainty of about 1%, the most significant contributions were the specific heat capacity (type B, 0.8%) and the repeatability (type A, 0.6%). Conclusion: While the quasi-adiabatic mode of operation had been validated in previous work, this is the first time that the GPC has been successfully used isothermally. This proof-of-concept will serve as the basis for further study into the GPC's application to small fields and MRI-linac dosimetry.

  16. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  17. Optimal control concepts in design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.

    1987-01-01

    A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.

  18. Design optimization of a portable, micro-hydrokinetic turbine

    NASA Astrophysics Data System (ADS)

    Schleicher, W. Chris

    Marine and hydrokinetic (MHK) technology is a growing field that encompasses many different types of turbomachinery that operate on the kinetic energy of water. Micro-hydrokinetics are a subset of MHK technology comprising units designed to produce less than 100 kW of power. A propeller-type hydrokinetic turbine is investigated as a solution for a portable micro-hydrokinetic turbine with the needs of the United States Marine Corps in mind, as well as future commercial applications. This dissertation investigates using a response surface optimization methodology to create optimal turbine blade designs under many operating conditions. The field of hydrokinetics is introduced. The finite volume method is used to solve the Reynolds-Averaged Navier-Stokes equations with the k-ω Shear Stress Transport model for different propeller-type hydrokinetic turbines. The adaptive response surface optimization methodology is introduced as related to hydrokinetic turbines and is benchmarked with complex algebraic functions. The optimization method is further studied to characterize how the size of the experimental design affects its ability to find optimum conditions; a large deviation between experimental design points was found to be preferable. Different propeller hydrokinetic turbines were designed and compared with other forms of turbomachinery. The rapid simulations usually under-predicted performance compared to the refined simulations, and for some other designs they drastically over-predicted performance. The optimization method was also used to optimize a modular pump-turbine, verifying that the optimization works for other hydro turbine designs.
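The response surface idea, fitting a cheap surrogate to a few expensive evaluations and then optimizing the surrogate, can be illustrated on an algebraic test function, in the same spirit as the benchmarking mentioned above. The function and sample points are illustrative assumptions, not the dissertation's CFD cases:

```python
import numpy as np

def expensive(x):
    """Stand-in for a costly CFD evaluation; true optimum at x = 2."""
    return (x - 2.0) ** 2 + 1.0

# Evaluate the 'simulation' at a small experimental design...
xs = np.array([0.0, 2.5, 5.0])
ys = expensive(xs)

# ...fit a quadratic response surface y = a*x^2 + b*x + c by least squares...
a, b, c = np.polyfit(xs, ys, deg=2)

# ...and take the surrogate's stationary point as the predicted optimum.
x_opt = -b / (2.0 * a)
print(x_opt)  # recovers 2.0, since the test function is itself quadratic
```

An adaptive variant would now re-sample around x_opt and refit, shrinking the design region each iteration; the dissertation's finding suggests starting with widely spaced design points.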

  19. Topology Optimization for Architected Materials Design

    NASA Astrophysics Data System (ADS)

    Osanov, Mikhail; Guest, James K.

    2016-07-01

    Advanced manufacturing processes provide a tremendous opportunity to fabricate materials with precisely defined architectures. To fully leverage these capabilities, however, materials architectures must be optimally designed according to the target application, base material used, and specifics of the fabrication process. Computational topology optimization offers a systematic, mathematically driven framework for navigating this new design challenge. The design problem is posed and solved formally as an optimization problem with unit cell and upscaling mechanics embedded within this formulation. This article briefly reviews the key requirements to apply topology optimization to materials architecture design and discusses several fundamental findings related to optimization of elastic, thermal, and fluidic properties in periodic materials. Emerging areas related to topology optimization for manufacturability and manufacturing variations, nonlinear mechanics, and multiscale design are also discussed.

  20. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  1. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction: it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that the G-optimal design cannot really be found in practice with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
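The G-criterion can be evaluated cheaply for an ordinary linear model, where the prediction variance at x is x'(X'X)^(-1)x; the sketch below uses that in place of the much costlier kriging variance the abstract discusses, and the two candidate designs are illustrative:

```python
import numpy as np

def max_prediction_variance(design, grid):
    """G-criterion for y = b0 + b1*x: max over the region of x'(X'X)^(-1)x."""
    X = np.column_stack([np.ones(len(design)), np.asarray(design, float)])
    M_inv = np.linalg.inv(X.T @ X)
    G = np.column_stack([np.ones(len(grid)), grid])
    return max(g @ M_inv @ g for g in G)

grid = np.linspace(-1.0, 1.0, 201)   # design region, discretized
endpoints = [-1.0, 1.0]              # spread design
clustered = [-0.1, 0.1]              # clustered design

print(max_prediction_variance(endpoints, grid))  # small worst-case variance
print(max_prediction_variance(clustered, grid))  # much larger worst-case variance
```

Even in this toy setting the cost driver the abstract describes is visible: every candidate design requires a maximization over the whole grid, which is what becomes prohibitive for empirical kriging variances in high dimensions.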

  2. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The optimization formulation is described in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  3. Web-based tools for finding optimal designs in biomedical studies

    PubMed Central

    Wong, Weng Kee

    2013-01-01

    Experimental costs are rising and applications of optimal design ideas are increasingly applied in many disciplines. However, the theory for constructing optimal designs can be esoteric and its implementation can be difficult. To help practitioners have easier access to optimal designs and better appreciate design issues, we present a web site at http://optimal-design.biostat.ucla.edu/optimal/ capable of generating different types of tailor-made optimal designs for popular models in the biological sciences. This site also evaluates various efficiencies of a user-specified design and so enables practitioners to appreciate robustness properties of the design before implementation. PMID:23806678

  4. Optimal Designs of Staggered Dean Vortex Micromixers

    PubMed Central

    Chen, Jyh Jian; Chen, Chun Huei; Shie, Shian Ruei

    2011-01-01

    A novel parallel laminar micromixer with a two-dimensional staggered Dean Vortex micromixer is optimized and fabricated in our study. Dean vortices induced by centrifugal forces in curved rectangular channels cause fluids to produce secondary flows. The split-and-recombination (SAR) structures of the flow channels and the impinging effects result in the reduction of the diffusion distance of two fluids. Three different designs of a curved channel micromixer are introduced to evaluate the mixing performance of the designed micromixer. Mixing performances are demonstrated by means of a pH indicator using an optical microscope and fluorescent particles via a confocal microscope at different flow rates corresponding to Reynolds numbers (Re) ranging from 0.5 to 50. The comparison between the experimental data and numerical results shows a very reasonable agreement. At a Re of 50, the mixing length at the sixth segment, corresponding to the downstream distance of 21.0 mm, can be achieved in a distance 4 times shorter than when the Re equals 1. An optimization of this micromixer is performed with two geometric parameters. These are the angle between the lines from the center to two intersections of two consecutive curved channels, θ, and the angle between two lines of the centers of three consecutive curved channels, ϕ. It can be found that the maximal mixing index is related to the maximal value of the sum of θ and ϕ, which is equal to 139.82°. PMID:21747691

  5. Experimental Eavesdropping Based on Optimal Quantum Cloning

    NASA Astrophysics Data System (ADS)

    Bartkiewicz, Karol; Lemr, Karel; Černoch, Antonín; Soubusta, Jan; Miranowicz, Adam

    2013-04-01

    The security of quantum cryptography is guaranteed by the no-cloning theorem, which implies that an eavesdropper copying transmitted qubits in unknown states causes their disturbance. Nevertheless, in real cryptographic systems some level of disturbance has to be allowed to cover, e.g., transmission losses. An eavesdropper can attack such systems by replacing a noisy channel with a better one and by performing approximate cloning of transmitted qubits, which disturbs them but below the noise level assumed by legitimate users. We experimentally demonstrate such symmetric individual eavesdropping on the quantum key distribution protocols of Bennett and Brassard (BB84) and the trine-state spherical code of Renes (R04) with two-level probes prepared using a recently developed photonic multifunctional quantum cloner [Lemr et al., Phys. Rev. A 85, 050307(R) (2012)]. We demonstrate that our optimal cloning device, with its high success rate, makes eavesdropping possible by hiding it in the usual transmission losses. We believe that this experiment can stimulate the quest for other operational applications of quantum cloning.

  6. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.

  7. Optimization, an Important Stage of Engineering Design

    ERIC Educational Resources Information Center

    Kelley, Todd R.

    2010-01-01

    A number of leaders in technology education have indicated that a major difference between the technological design process and the engineering design process is analysis and optimization. The analysis stage of the engineering design process is when mathematical models and scientific principles are employed to help the designer predict design…

  8. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use.

  9. Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique

    NASA Astrophysics Data System (ADS)

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2014-01-01

    The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relatively fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase, hence reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.
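Taguchi designs rest on orthogonal arrays; a two-level array equivalent to the standard L8 can be built from a 2^3 full factorial plus its interaction columns, whose coded ±1 columns are mutually orthogonal. A generic sketch of the array construction, not the paper's filter-design procedure:

```python
from itertools import product

# Base 2^3 full factorial in coded -1/+1 levels.
runs = list(product([-1, +1], repeat=3))

# Seven columns: a, b, ab, c, ac, bc, abc -- an L8-equivalent orthogonal array,
# so up to seven two-level factors can be screened in just eight runs.
def row(a, b, c):
    return [a, b, a * b, c, a * c, b * c, a * b * c]

array = [row(a, b, c) for a, b, c in runs]

# Orthogonality: every column is balanced, and every pair of distinct columns
# is uncorrelated (zero dot product), so main effects can be estimated cleanly.
cols = list(zip(*array))
assert all(sum(col) == 0 for col in cols)
assert all(sum(x * y for x, y in zip(cols[i], cols[j])) == 0
           for i in range(7) for j in range(i + 1, 7))
print(len(array), "runs x", len(cols), "columns")
```

Assigning all seven columns to factors confounds main effects with interactions; Taguchi practice accepts this trade-off in exchange for the small run count.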

  10. D-optimal experimental design coupled with parallel factor analysis 2 decomposition: a useful tool in the determination of triazines in oranges by programmed temperature vaporization-gas chromatography-mass spectrometry when using dispersive solid-phase extraction.

    PubMed

    Herrero, A; Ortiz, M C; Sarabia, L A

    2013-05-03

    The determination of triazines in oranges using a GC-MS system coupled to a programmed temperature vaporizer (PTV) inlet is performed in the context of legislation. Both the pretreatment (using a Quick Easy Cheap Effective Rugged and Safe (QuEChERS) procedure) and the injection steps are optimized using D-optimal experimental designs to reduce the experimental effort. The relatively dirty extracts obtained and the elution time shifts make it necessary to use a PARAFAC2 decomposition to solve these two usual problems in chromatographic determinations. The "second-order advantage" of the PARAFAC2 decomposition allows unequivocal identification according to document SANCO/12495/2011 (taking into account the tolerances for relative retention time and the relative abundance of the diagnostic ions), avoiding false negatives even in the presence of unknown co-eluents. The detection limits (CCα) found, from 0.51 to 1.05 µg kg(-1), are far below the maximum residue levels (MRLs) established by the European Union for simazine, atrazine, terbuthylazine, ametryn, simetryn, prometryn and terbutryn in oranges. No MRL violations were found in the commercial oranges analyzed.

  11. Optimal design of structures with buckling constraints.

    NASA Technical Reports Server (NTRS)

    Kiusalaas, J.

    1973-01-01

    The paper presents an iterative, finite element method for minimum weight design of structures with respect to buckling constraints. The redesign equation is derived from the optimality criterion, as opposed to a numerical search procedure, and can handle problems that are characterized by the existence of two fundamental buckling modes at the optimal design. Application of the method is illustrated by beam and orthogonal frame design problems.

  12. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture that involves a tight coupling between optimization and analysis is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  13. Singularities in Optimal Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1992-01-01

    Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.

  15. Optimization design of electromagnetic shielding composites

    NASA Astrophysics Data System (ADS)

    Qu, Zhaoming; Wang, Qingguo; Qin, Siliang; Hu, Xiaofeng

    2013-03-01

    A physical model of the effective electromagnetic parameters of composites, together with prediction formulas for the composites' shielding effectiveness and reflectivity, was derived based on micromechanics, the variational principle and electromagnetic wave transmission theory. Multi-objective optimization design of multilayer composites was then carried out using a genetic algorithm. The optimized results indicate that the material proportioning with the greatest absorption can be obtained while a minimum shielding effectiveness is still satisfied in a given frequency band. The validity of the optimization design model was verified, and the scheme has theoretical value and practical significance for the design of high-efficiency shielding composites.

  16. Topology optimization design of space rectangular mirror

    NASA Astrophysics Data System (ADS)

    Qu, Yanjun; Wang, Wei; Liu, Bei; Li, Xupeng

    2016-10-01

    A conceptual lightweight rectangular mirror is designed based on the theory of topology optimization, and its specific structural dimensions are determined through sensitivity analysis and size optimization. Under gravity loading along the optical axis, finite element analysis shows that the topology-optimized reflectors supported at six peripheral points are superior to mirrors designed by the traditional method in lightweight ratio, structural stiffness and reflective surface accuracy. This suggests that the lightweighting method presented here is effective and has potential value for the design of rectangular reflectors.

  17. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
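The structure of the problem described above can be sketched with a toy constrained search over the same three design parameters. Every coefficient, bound, and the 60 kW demand below are invented for illustration; the study's actual cost and simulation models are far richer:

```python
def petroleum(engine_kw, split):
    # invented annual petroleum use: the heat engine covers (1 - split)
    return (1 - split) * engine_kw * 0.02

def life_cycle_cost(battery_kg, engine_kw, split):
    # invented cost coefficients for battery mass, engine rating and fuel
    return 5 * battery_kg + 80 * engine_kw + 400 * petroleum(engine_kw, split)

best = None
for battery_kg in range(50, 301, 10):
    for engine_kw in range(10, 81, 5):
        for split in (0.2, 0.4, 0.6, 0.8):
            # constraint: the two sources together must meet a 60 kW peak
            if 0.3 * battery_kg + engine_kw < 60:
                continue
            c = life_cycle_cost(battery_kg, engine_kw, split)
            if best is None or c < best[0]:
                best = (c, battery_kg, engine_kw, split)

print(best)   # (cost, battery weight, engine rating, power split)
```

Even this toy version shows the abstract's fourth and fifth points: the constraint on meeting power demand, not the smooth cost terms, is what shapes the optimum.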

  18. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential fit has been compared with linear, reciprocal and quadratic fits on four structural-analysis test problems. Such approximations are attractive in structural optimization because they reduce the number of exact analyses, which involve computationally expensive finite element analysis.
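A minimal sketch of a one-point exponential approximation (not the authors' formulation, which is multivariable): fit g̃(x) = a·xᵇ to the value and slope of an exact response at a single point. For stress-like responses that scale as 1/A it is exact, which is one reason it can beat a linear fit:

```python
def fit_exponential(g0, dg0, x0):
    """Fit a * x**b from the value g0 and derivative dg0 at x0."""
    b = x0 * dg0 / g0          # exponent matches the local log-slope
    a = g0 / x0 ** b           # scale matches the value at x0
    return a, b

# Exact "response": stress in a bar under unit load, sigma = 1/A.
g  = lambda A: 1.0 / A
dg = lambda A: -1.0 / A ** 2

a, b = fit_exponential(g(2.0), dg(2.0), 2.0)
approx = lambda A: a * A ** b
for A in (1.0, 3.0, 5.0):
    print(A, g(A), approx(A))   # the exponential fit reproduces 1/A exactly
```

A linear fit at A = 2 would already be off by over 30% at A = 1; the exponential fit recovers b = -1 and is exact for this reciprocal response.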

  19. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)

  20. Light Experimental Supercruiser Conceptual Design

    DTIC Science & Technology

    1976-07-01

    [Extraction fragment: the record preserves only part of a distribution statement ("with a definitely related Government procurement operation, the United States Government thereby incurs no responsibility nor any obligation..."), a list of figures (landing performance; global persistence; specific excess power at 1 g, all for configuration 985-213), and point design weights for Model 985-213 (19.7 feet, 9.3 feet): design mission 13,600 pounds; overload mission 16,780 pounds; operating weight truncated.]

  1. Recurring sequence-structure motifs in (βα)8-barrel proteins and experimental optimization of a chimeric protein designed based on such motifs.

    PubMed

    Wang, Jichao; Zhang, Tongchuan; Liu, Ruicun; Song, Meilin; Wang, Juncheng; Hong, Jiong; Chen, Quan; Liu, Haiyan

    2017-02-01

    An interesting way of generating novel artificial proteins is to combine sequence motifs from natural proteins, mimicking the evolutionary path suggested by natural proteins comprising recurring motifs. We analyzed the βα and αβ modules of TIM barrel proteins by structure alignment-based sequence clustering. A number of preferred motifs were identified. A chimeric TIM was designed by using recurring elements as mutually compatible interfaces. The foldability of the designed TIM protein was then significantly improved by six rounds of directed evolution. The melting temperature has been improved by more than 20°C. A variety of characteristics suggested that the resulting protein is well-folded. Our analysis provided a library of peptide motifs that is potentially useful for different protein engineering studies. The protein engineering strategy of using recurring motifs as interfaces to connect partial natural proteins may be applied to other protein folds.

  2. Optimized heavy ion beam probing for International Thermonuclear Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Melnikov, A. V.; Eliseev, L. G.

    1999-01-01

    An international working group developed the conceptual design of a heavy ion beam probe (HIBP) diagnostic for the International Thermonuclear Experimental Reactor (ITER), intended for measuring the plasma potential profile in the gradient region. We have now optimized it through accurate analysis of the probing trajectories and by varying the positions of the injection and detection points. The optimization reduces the energy of the Tl+ beam from 5.6 to 3.4 MeV for the standard ITER regime. A detector line starting at the plasma edge and moving toward the center can recover the outer part of the radial potential profile by varying the beam energy. The observable radial interval is slightly increased, to 0.76<ρ<1 from the initial 0.8<ρ<1, which covers the density-gradient region more reliably. The almost twofold reduction of the beam energy is the critical point: it significantly decreases the size of the accelerator and energy analyzer, the cost of the equipment, and the impact of the diagnostic on the machine. The optimized HIBP design can therefore be realized in ITER.

  3. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    PubMed

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  4. Interaction prediction optimization in multidisciplinary design optimization problems.

    PubMed

    Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei

    2014-01-01

    The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, CO converges poorly when the coupling between disciplines is high dimensional, and the individual discipline objectives cannot be considered in each discipline optimization problem. In this paper, a large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used for controlling subsystems and coordinating the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, IPO has a system level and a subsystem level. The interaction design variables (shared design variables and linking design variables) are handled at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of the design variables are exchanged between the system and subsystem levels. The compatibility constraints are replaced with enhanced compatibility constraints to reduce the dimension of the design variables appearing in them. Two examples are presented to show the potential application of IPO to MDO.

  5. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  6. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  7. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  8. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine and compared to earlier methods. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
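A minimal DE/rand/1/bin sketch in pure Python — not the paper's Navier-Stokes-coupled code, just the evolutionary strategy it names, shown on the multimodal Rastrigin test function (an assumption for demonstration):

```python
import math, random

def rastrigin(x):
    # classic multimodal test function; global minimum 0 at the origin
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def differential_evolution(f, bounds, pop_size=30, weight=0.8, cr=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)          # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + weight * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)      # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            f_trial = f(trial)
            if f_trial <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
print(x_best, f_best)
```

In the paper the role of `rastrigin` is played by a full flow solver, which is why reducing the number of evaluations (e.g., via neural-network surrogates) matters so much.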

  9. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.

  10. Optimal design of artificial reefs for sturgeon

    NASA Astrophysics Data System (ADS)

    Yarbrough, Cody; Cotel, Aline; Kleinheksel, Abby

    2015-11-01

    The Detroit River, part of a busy corridor between Lakes Huron and Erie, was extensively modified to create deep shipping channels, resulting in a loss of spawning habitat for lake sturgeon and other native fish (Caswell et al. 2004, Bennion and Manny 2011). Under the U.S.-Canada Great Lakes Water Quality Agreement, there are remediation plans to construct fish spawning reefs to help with historic habitat losses and degraded fish populations, specifically sturgeon. To determine optimal reef design, experimental work has been undertaken. Different sizes and shapes of reefs are tested for a given set of physical conditions, such as flow depth and flow velocity, matching the relevant dimensionless parameters dominating the flow physics. The physical conditions are matched with the natural conditions encountered in the Detroit River. Using Particle Image Velocimetry, Acoustic Doppler Velocimetry and dye studies, flow structures, vorticity and velocity gradients at selected locations have been identified and quantified to allow comparison with field observations and numerical model results. Preliminary results are helping identify the design features to be implemented in the next phase of reef construction. Sponsored by NOAA.

  11. Geometric methods for optimal sensor design.

    PubMed

    Belabbas, M-A

    2016-01-01

    The Kalman-Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design.

  12. Geometric methods for optimal sensor design

    PubMed Central

    Belabbas, M.-A.

    2016-01-01

    The Kalman–Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design. PMID:26997885

  13. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; design problems which include discrete variables therefore cannot be studied with them. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
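The GA mechanics described above can be illustrated on a toy mixed-discrete problem: one integer gene (engine count) alongside a continuous one. The objective, encoding, and all numbers are invented for the example, not taken from any launch-vehicle study:

```python
import random

rng = random.Random(0)

def fitness(ind):
    engines, thrust = ind                  # one discrete, one continuous gene
    # invented objective: 4 engines and a thrust setting of 0.6 are ideal
    return -((engines - 4) ** 2) - 10 * (thrust - 0.6) ** 2

def random_individual():
    return (rng.randint(1, 8), rng.random())

def crossover(p1, p2):
    # exchange genes between the two parents
    return (p1[0], p2[1]) if rng.random() < 0.5 else (p2[0], p1[1])

def mutate(ind):
    engines, thrust = ind
    if rng.random() < 0.2:                 # re-draw the discrete gene
        engines = rng.randint(1, 8)
    if rng.random() < 0.2:                 # perturb the continuous gene
        thrust = min(1.0, max(0.0, thrust + rng.gauss(0.0, 0.1)))
    return (engines, thrust)

population = [random_individual() for _ in range(40)]
for _ in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]              # truncation selection (elitist)
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print(best)
```

Note that only `fitness` values drive the search: no gradients appear anywhere, which is exactly why the integer gene poses no difficulty.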

  14. An expert system for optimal gear design

    SciTech Connect

    Lin, K.C.

    1988-01-01

    By properly developing the mathematical model, numerical optimization can be used to seek the best solution for a given set of geometric constraints. The process of determining the non-geometric design variables is automated by using symbolic computation. This gear-design system is built according to the AGMA standards and a survey of gear-design experts. The recommendations of gear designers and the information provided by AGMA standards are integrated into knowledge bases and data bases. By providing fast information retrieval and design guidelines, this expert system greatly streamlines the spur gear design process. The concept of separating the design space into geometric and non-geometric variables can also be applied to the design process for general mechanical elements. The expert-system technique is used to simulate a human designer in determining the non-geometric parameters, while numerical optimization is used to identify the best geometric solution. The combination of the expert-system technique with numerical optimization essentially eliminates the deficiencies of both methods and thus provides a better way of modeling the engineering design process.

  15. Optimization of Hydrothermal and Diluted Acid Pretreatments of Tunisian Luffa cylindrica (L.) Fibers for 2G Bioethanol Production through the Cubic Central Composite Experimental Design CCD: Response Surface Methodology

    PubMed Central

    Ziadi, Manel; Ben Hassen-Trabelsi, Aida; Mekni, Sabrine; Aïssi, Balkiss; Alaya, Marwen; Bergaoui, Latifa; Hamdi, Moktar

    2017-01-01

    This paper opens up a new issue dealing with Luffa cylindrica (LC) lignocellulosic biomass recovery in order to produce 2G bioethanol. LC fibers are composed of three principal fractions, namely, α-cellulose (45.80% ± 1.3), hemicelluloses (20.76% ± 0.3), and lignins (13.15% ± 0.6). The duration and temperature of the hydrothermal and diluted acid pretreatments of LC fibers were optimized through the cubic central composite experimental design (CCD). Pretreatment optimization was monitored via the determination of reducing sugars. Then, the feasibility of the 2G bioethanol process was tested by means of three successive steps, namely, LC fibers hydrothermal pretreatment performed at 96°C during 54 minutes, enzymatic saccharification carried out by means of a commercial enzyme AP2, and alcoholic fermentation fulfilled with Saccharomyces cerevisiae. LC fibers hydrothermal pretreatment liberated 33.55 g/kg of reducing sugars. Enzymatic hydrolysis allowed achieving 59.4 g/kg of reducing sugars. The conversion yield of reducing sugar to ethanol was 88.66%. After the distillation step, the concentration of ethanol was 1.58% with a volumetric yield of about 70%. PMID:28243606
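The run layout of a two-factor central composite design like the one used above can be sketched in coded units (the mapping to real temperatures and durations, and the number of centre replicates, are assumptions for illustration):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=3):
    """Coded-unit CCD: 2^k factorial + 2k axial points + centre replicates."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable choice of axial distance
    factorial = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s                     # one factor at +/- alpha, rest at 0
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

design = central_composite(2)             # two factors: temperature, duration
for run in design:
    print(run)
```

The eleven runs (4 factorial, 4 axial at ±√2, 3 centre) supply enough distinct levels per factor to fit the full quadratic response surface that RSM then optimizes.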

  16. Optimal design of reverse osmosis module networks

    SciTech Connect

    Maskan, F.; Wiley, D.E.; Johnston, L.P.M.; Clements, D.J.

    2000-05-01

    The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, capital cost for the process units, and operating costs associated with energy consumption and maintenance. The optimization of several dual-stage reverse osmosis systems was investigated and the results compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.

  17. Optimal Design of Tests with Dichotomous and Polytomous Items.

    ERIC Educational Resources Information Center

    Berger, Martijn P. F.

    1998-01-01

    Reviews some results on optimal design of tests with items of dichotomous and polytomous response formats and offers rules and guidelines for optimal test assembly. Discusses the problem of optimal test design for two optimality criteria. (Author/SLD)

  18. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
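A stripped-down sketch of the inquiry step described above: among candidate measurement locations, choose the one whose predicted outcome distribution has maximum entropy. The binary-outcome likelihood, parameter set, and prior below are all invented for illustration:

```python
import math

thetas = [0.2, 0.5, 0.8]            # candidate model parameters
prior = [1 / 3, 1 / 3, 1 / 3]       # current posterior over the models

def p_success(theta, x):
    # toy likelihood of observing a "hit" when measuring at location x
    return 1.0 / (1.0 + math.exp(-10 * (x - theta)))

def outcome_entropy(x):
    # entropy of the posterior-predictive outcome distribution at x
    p = sum(w * p_success(t, x) for t, w in zip(thetas, prior))
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

candidates = [i / 10 for i in range(11)]
best_x = max(candidates, key=outcome_entropy)
print(best_x, outcome_entropy(best_x))
```

The selected location is the one where the models disagree most about the outcome, so the measurement is maximally informative on average; nested entropy sampling replaces this brute-force scan over `candidates` when the experiment space is high dimensional.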

  19. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design

  20. Torsional ultrasonic transducer computational design optimization.

    PubMed

    Melchor, J; Rus, G

    2014-09-01

    A torsional piezoelectric ultrasonic sensor design is proposed in this paper and computationally tested and optimized to measure shear stiffness properties of soft tissue. These are correlated with a number of pathologies, such as tumors and hepatic lesions, because whereas compressibility is predominantly governed by the fluid phase of the tissue, shear stiffness depends on the stroma micro-architecture, which is directly affected by those pathologies. However, diagnostic tools to quantify shear stiffness are currently not well developed. The first contribution is a new typology of design adapted to quasifluids. A second contribution is the procedure for design optimization, for which an analytical estimate of the Robust Probability Of Detection (RPOD) is presented for use as the optimality criterion. The RPOD is formulated probabilistically to maximize the probability of detecting the smallest possible pathology while minimizing the effect of noise. The resulting optimal transducer has a resonance frequency of 28 kHz.

  1. Vehicle systems design optimization study

    NASA Technical Reports Server (NTRS)

    Gilmour, J. L.

    1980-01-01

    The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to those of current internal combustion engine vehicles. The necessary modification of the base vehicle can be accomplished without major changes to the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criterion requires the placement of batteries at the front of the vehicle even when the central tunnel is used to house some of them. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given vehicle size.
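    The weight-distribution reasoning above amounts to a simple moment balance about the axles. The sketch below illustrates the calculation; all masses, positions, and the wheelbase are purely illustrative numbers, not values from the study.

```python
# Hypothetical sketch: front/rear axle load split for an EV layout.
# Component masses (kg) and longitudinal positions x (m) are measured
# from the front axle; all numbers are illustrative, not from the study.

def axle_split(components, wheelbase):
    """Return (front_fraction, rear_fraction) of total weight."""
    total = sum(m for m, _ in components)
    # Moment balance about the rear axle gives the front-axle load.
    front = sum(m * (wheelbase - x) for m, x in components) / wheelbase
    return front / total, (total - front) / total

layout = [
    (150.0, 0.2),   # front motor + drive near the front axle
    (180.0, 0.5),   # front battery pack under the hood
    (140.0, 2.1),   # rear battery pack under the cargo area
    (700.0, 1.1),   # body, passengers, everything else near mid-wheelbase
]
front, rear = axle_split(layout, wheelbase=2.4)
print(f"{front:.0%}/{rear:.0%}")   # -> 58%/42%, inside the 53/47-62/38 band
```

    Splitting the battery mass between the hood and the cargo area is what pulls the split into the desired range; putting all 320 kg of batteries at the rear would push the distribution outside it.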

  2. Aircraft design optimization with multidisciplinary performance criteria

    NASA Technical Reports Server (NTRS)

    Morris, Stephen; Kroo, Ilan

    1989-01-01

    The method described here for aircraft design optimization with dynamic response considerations provides an inexpensive means of integrating dynamics into aircraft preliminary design. By defining a dynamic performance index that can be added to a conventional objective function, a designer can investigate the trade-off between performance and handling (as measured by the vehicle's unforced response). The procedure is formulated to permit the use of control system gains as design variables, but does not require full-state feedback. The examples discussed here show how such an approach can lead to significant improvements in the design as compared with the more common sequential design of system and control law.

  3. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to find the optimal solution subject to constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
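    The sample-fit-search pipeline described above can be sketched generically. In this assumed illustration (not the paper's code), a cheap analytic function stands in for the CFD evaluation, random sampling stands in for uniform design, a quadratic response surface is fitted by least squares, and a small genetic algorithm searches the surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expensive" evaluation standing in for a CFD run.
def pressure_loss(x):          # x = (area_ratio, length); names illustrative
    return (x[0] - 0.6) ** 2 + 2.0 * (x[1] - 0.3) ** 2 + 0.1 * x[0] * x[1]

# 1) Sample the design domain (random sampling in lieu of uniform design).
X = rng.uniform(0.0, 1.0, size=(25, 2))
y = np.array([pressure_loss(x) for x in X])

# 2) Fit a quadratic response surface y ~ b0 + b.x + x'Qx by least squares.
def features(x):
    return np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

A = np.array([features(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
surrogate = lambda x: features(x) @ beta

# 3) Genetic algorithm on the cheap surrogate: parents drawn from the best
#    half, blend crossover, Gaussian mutation, elitism.
pop = rng.uniform(0.0, 1.0, size=(40, 2))
for _ in range(60):
    fit = np.array([surrogate(p) for p in pop])
    order = np.argsort(fit)
    elite = pop[order[:4]]
    children = []
    while len(children) < len(pop) - len(elite):
        i, j = rng.integers(0, 20, size=2)
        a, b = pop[order[i]], pop[order[j]]
        w = rng.uniform(size=2)
        child = w * a + (1 - w) * b + rng.normal(0, 0.02, size=2)
        children.append(np.clip(child, 0.0, 1.0))
    pop = np.vstack([elite] + children)

best = min(pop, key=surrogate)
print(best)   # should land near the true optimum around (0.6, 0.3)
```

    Because every GA evaluation hits the surrogate rather than the flow solver, the search cost is decoupled from the CFD cost, which is the main point of the surrogate-based approach.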

  4. A design optimization methodology for Li+ batteries

    NASA Astrophysics Data System (ADS)

    Golmon, Stephanie; Maute, Kurt; Dunn, Martin L.

    2014-05-01

    Design optimization of functionally graded battery electrodes is shown to improve the usable energy capacity of Li batteries, as predicted by computational simulations, by numerically optimizing the electrode porosities and particle radii. A multi-scale battery model which accounts for nonlinear transient transport processes, electrochemical reactions, and mechanical deformations is used to predict the usable energy storage capacity of the battery over a range of discharge rates. A multi-objective formulation of the design problem is introduced to maximize the usable capacity over a range of discharge rates while limiting the mechanical stresses. The optimization problem is solved via gradient-based optimization. A LiMn2O4 cathode is simulated with a PEO-LiCF3SO3 electrolyte and both a Li foil (half-cell) and a LiC6 anode. Studies were performed on both half- and full-cell configurations, resulting in distinctly different optimal electrode designs. The numerical results show that the highest discharge rate drives the simulations and that the optimal designs are dominated by Li+ transport rates. The results also suggest that spatially varying electrode porosities and active particle sizes provide an efficient approach to improving the power-to-energy density of Li+ batteries. For the half-cell configuration, the optimal design improves the discharge capacity by 29%, while for the full cell the discharge capacity was improved by 61% relative to an initial design with a uniform electrode structure. Most of the improvement in capacity was due to the spatially varying porosity, with up to 5% of the gains attributed to the particle radii design variables.

  5. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    Multiobjective programming techniques are important in the design of complex structural systems whose quality depends on a number of different and often conflicting objective functions that cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system, and the trajectory optimization of a cantilever beam, are considered. Solutions of these multicriteria design problems are attempted using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It was observed that the game theory approach required the greatest computational effort, but in all cases it yielded better optimum solutions with a proper balance of the various objective functions.
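    The global criterion method named above can be illustrated on a toy two-objective problem. The objectives below (a "mass" that grows with the design variable and a "deflection" that shrinks with it) are invented for illustration; each objective is first minimized alone to get its ideal value, and the compromise design minimizes the sum of squared relative deviations from those ideals.

```python
# Toy illustration (assumed, not from the paper) of the global criterion
# method for two conflicting objectives of a single design variable x.

def f1(x):          # e.g. structural mass, grows with size x
    return x

def f2(x):          # e.g. tip deflection, shrinks with size x
    return 1.0 / x

xs = [0.1 + 1.9 * i / 10000 for i in range(10001)]   # design grid on [0.1, 2.0]
f1_star = min(f1(x) for x in xs)     # ideal value of each objective alone
f2_star = min(f2(x) for x in xs)

def global_criterion(x, p=2):
    # Sum of p-th powers of relative deviations from the ideal point.
    return (((f1(x) - f1_star) / f1_star) ** p
            + ((f2(x) - f2_star) / f2_star) ** p)

x_best = min(xs, key=global_criterion)
print(round(x_best, 3))   # a compromise between x = 0.1 (mass) and x = 2.0
```

    Changing the exponent p, or switching to a weighted or lexicographic formulation, moves the compromise point along the Pareto front, which is exactly the kind of method-to-method comparison the study performs.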

  6. Removal of cobalt ions from aqueous solutions by polymer assisted ultrafiltration using experimental design approach: part 2: Optimization of hydrodynamic conditions for a crossflow ultrafiltration module with rotating part.

    PubMed

    Cojocaru, Corneliu; Zakrzewska-Trznadel, Grazyna; Miskiewicz, Agnieszka

    2009-09-30

    The application of shear-enhanced crossflow ultrafiltration for the separation of cobalt ions from synthetic wastewaters by prior complexation with polyethyleneimine has been investigated via an experimental design approach. The hydrodynamic conditions in a module with a tubular metallic membrane were planned according to a full factorial design in order to determine the main and interaction effects of the process factors on permeate flux and cumulative flux decline. The turbulent flow induced by rotation of the inner cylinder in the module leads to an increase in permeate flux, normalized flux, and membrane permeability, as well as to a reduction in permeate flux decline. In addition, the rotation produced a self-cleaning effect as a result of the reduction of the estimated polymer layer thickness on the membrane surface. The optimal hydrodynamic conditions in the module were determined by response surface methodology and an overlay contour plot, and are as follows: DeltaP=70 kPa, Q(R)=108 L/h and W=2800 rpm. Under these conditions the maximal permeate flux and the minimal flux decline were observed.
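    The full factorial analysis described above can be sketched in a few lines. Factor names loosely follow the abstract (pressure dP, recirculation flow Q, rotation speed W), but the coded levels and the responses are synthetic, not the study's data; the response stands in for permeate flux.

```python
from itertools import product
from math import prod

# Sketch of a 2^3 full factorial: 8 runs, factors coded -1/+1.
factors = ["dP", "Q", "W"]
runs = list(product([-1, 1], repeat=3))
# Synthetic responses generated from a known model: strong W effect,
# smaller dP and Q effects, and a dP*W interaction.
response = [50 + 5 * dp + 2 * q + 12 * w + 3 * dp * w
            for dp, q, w in runs]

def effect(indices):
    """Mean response where the product of the coded columns is +1,
    minus the mean where it is -1 (main effect or interaction)."""
    signs = [prod(run[i] for i in indices) for run in runs]
    hi = sum(y for s, y in zip(signs, response) if s > 0) / 4
    lo = sum(y for s, y in zip(signs, response) if s < 0) / 4
    return hi - lo

main = {name: effect([i]) for i, name in enumerate(factors)}
inter_dp_w = effect([0, 2])
print(main, inter_dp_w)   # recovers the model coefficients doubled: 10, 4, 24, 6
```

    Because the design is a full factorial, every main effect and every interaction is estimated without aliasing, which is why the study can separate rotation effects from pressure effects cleanly.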

  7. Optimization of a novel method for determination of benzene, toluene, ethylbenzene, and xylenes in hair and waste water samples by carbon nanotubes reinforced sol-gel based hollow fiber solid phase microextraction and gas chromatography using factorial experimental design.

    PubMed

    Es'haghi, Zarrin; Ebrahimi, Mahmoud; Hosseini, Mohammad-Saeid

    2011-05-27

    A novel design of solid-phase microextraction fiber containing carbon-nanotube-reinforced sol-gel, protected by a polypropylene hollow fiber (HF-SPME), was developed for the pre-concentration and determination of BTEX in environmental waste water and human hair samples. The method was validated, and satisfactory results with high pre-concentration factors were obtained. In the present study an orthogonal array experimental design (OAD) procedure with an OA(16) (4(4)) matrix was applied to study the effect of four factors influencing the HF-SPME method efficiency: stirring speed, volume of the adsorption organic solvent, and extraction and desorption times of the sample solution; the effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance (ANOVA) was employed to estimate the main significant factors and their percentage contributions to extraction. Calibration curves were plotted using ten spiking levels of BTEX in the concentration range of 0.02-30,000 ng/mL, with correlation coefficients (r) of 0.989-0.9991 for the analytes. Under the optimized extraction conditions, the method showed good linearity (0.3-20,000 ng/L), repeatability, low limits of detection (0.49-0.7 ng/L) and excellent pre-concentration factors (185-1872). The optimized conditions were then applied to the analysis of BTEX compounds in real samples.

  8. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of the best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy are illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  9. Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.

    1998-01-01

    This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
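    D-optimal designs choose runs that maximize det(X'X) for the assumed model. A common construction is a Fedorov-style exchange; the sketch below is an assumption for illustration (not the paper's procedure), picking 6 runs from a candidate grid for a one-factor quadratic model.

```python
import numpy as np

# Illustrative D-optimal search: greedy coordinate exchange maximizing
# log det(X'X) for a quadratic model in one factor on [-1, 1].

def model_row(x):
    return np.array([1.0, x, x * x])     # intercept, linear, quadratic terms

candidates = np.linspace(-1.0, 1.0, 21)
n = 6
rng = np.random.default_rng(1)
design = list(rng.choice(candidates, size=n, replace=False))

def logdet(points):
    X = np.array([model_row(x) for x in points])
    sign, val = np.linalg.slogdet(X.T @ X)
    return val if sign > 0 else -np.inf

improved = True
while improved:                      # sweep until no exchange helps
    improved = False
    for i in range(n):
        for c in candidates:
            trial = design.copy()
            trial[i] = c
            if logdet(trial) > logdet(design) + 1e-12:
                design = trial
                improved = True

print(sorted(design))   # tends to pile replicates onto -1, 0, and +1
```

    For a quadratic model the information-maximizing support points are the interval ends and the center, so the exchange concentrates the budget there rather than spreading runs evenly, which is the qualitative difference between D-optimal and space-filling designs.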

  10. Design optimization of rod shaped IPMC actuator

    NASA Astrophysics Data System (ADS)

    Ruiz, S. A.; Mead, B.; Yun, H.; Yim, W.; Kim, K. J.

    2013-04-01

    Ionic polymer-metal composites (IPMCs) are among the best-known electro-active polymers, owing to the large deformation they produce under a relatively low applied voltage. IPMCs have been acknowledged as potential candidates for biomedical applications such as cardiac catheters and surgical probes; however, there is still no mass manufacturing of IPMCs. This study intends to provide a theoretical framework that could be used to design practical-purpose IPMCs depending on the end user's interest. By explicitly coupling electrostatics, transport phenomena, and solid mechanics, design optimization is conducted in simulation to provide conceptual motivation for future designs. A multi-physics analysis of three-dimensional cylinder- and tube-type IPMCs provides physically accurate, time-dependent end-effector displacements for a given voltage source. Simulations are conducted with the finite element method and validated against empirical evidence. An in-depth understanding of the physical coupling yields optimal design parameters that cannot be obtained from a standard electro-mechanical coupling. These parameters are altered in order to determine optimal designs for end-effector displacement, maximum force, and improved mobility with limited voltage magnitude. Design alterations are made to the electrode patterns to provide greater mobility, to the electrode size for efficient bending, and to the Nafion diameter for improved force. The results of this study provide optimal design parameters of the IPMC for different applications.

  11. Optimization-based controller design for rotorcraft

    NASA Technical Reports Server (NTRS)

    Tsing, N.-K.; Fan, M. K. H.; Barlow, J.; Tits, A. L.; Tischler, M. B.

    1993-01-01

    An optimization-based methodology for linear control system design is outlined by considering the design of a controller for a UH-60 rotorcraft in hover. A wide range of design specifications is taken into account: internal stability, decoupling between longitudinal and lateral motions, handling qualities, and rejection of wind gusts. These specifications are investigated while taking into account physical limitations in the swashplate displacements and rates of displacement. The methodology relies crucially on user-machine interaction for tradeoff exploration.

  12. Spacecraft design optimization using Taguchi analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1991-01-01

    The quality engineering methods of Dr. Genichi Taguchi, employing design of experiments, are important statistical tools for designing high-quality systems at reduced cost. The Taguchi method was utilized to study several simultaneous parameter-level variations of a lunar aerobrake structure to arrive at the lightest-weight configuration. Finite element analysis was used to analyze the unique experimental aerobrake configurations selected by the Taguchi method. Important design parameters affecting weight and global buckling were identified, and the lowest-weight design configuration was selected.
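    The orthogonal arrays underlying the Taguchi method can be written out directly. The snippet below lists the standard two-level L8 array (not the aerobrake study's actual matrix) and checks the balance property that makes 8 runs sufficient to screen up to 7 factors.

```python
from itertools import product

# The standard L8 (2^7) orthogonal array: 8 runs, 7 two-level columns.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

# Orthogonality check: for every pair of columns, each of the four level
# combinations (1,1), (1,2), (2,1), (2,2) appears exactly twice.
for a in range(7):
    for b in range(a + 1, 7):
        pairs = [(row[a], row[b]) for row in L8]
        assert all(pairs.count(p) == 2 for p in product([1, 2], repeat=2))
print("L8 is pairwise balanced")
```

    This balance is what lets main effects be estimated by simple level-wise averages of a response (here, aerobrake weight) with only 8 finite element runs instead of the full 2^7 = 128.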

  13. Regression analysis as a design optimization tool

    NASA Technical Reports Server (NTRS)

    Perley, R.

    1984-01-01

    The optimization concepts are described in relation to an overall design process, as opposed to a detailed part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.

  14. Lens design: optimization with Global Explorer

    NASA Astrophysics Data System (ADS)

    Isshiki, Masaki

    2013-02-01

    The damped least squares (DLS) optimization method was essentially complete by the late 1960s and has since dominated local optimization technology. Afterwards, various efforts were made to achieve global optimization; these methods emerged after 1990, and Global Explorer (GE), invented by the author, was one of them, finding plural solutions, each of which is a local minimum of the merit function. The robustness of the designed lens is as important a factor as its performance, and both requirements are balanced in the optimization process of GE2 (the second version of GE). An idea is also proposed to modify GE2 for aspherical lens systems. A design example is shown.
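    The DLS iteration at the heart of local lens optimization can be sketched on a toy merit function. The two "lens variables" and residuals below are invented for illustration, not a real lens model; the update solves the damped normal equations at each step.

```python
import numpy as np

# Minimal damped least squares (Levenberg-style) iteration on a toy
# merit function with two design variables.

def residuals(x):
    # Two nonlinear "aberration" terms to be driven to zero (invented).
    return np.array([x[0] ** 2 + x[1] - 1.0,
                     x[0] - x[1] ** 2 + 0.5])

def jacobian(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, -2.0 * x[1]]])

x = np.array([2.0, 2.0])            # starting design
lam = 1e-2                          # damping factor
for _ in range(50):
    r, J = residuals(x), jacobian(x)
    # Damped normal equations: (J'J + lam*I) dx = -J'r
    dx = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    x = x + dx

print(x, residuals(x))   # residuals driven to (numerically) zero
```

    The damping term keeps each step well-conditioned and short when the Jacobian is nearly singular; as lam shrinks the update approaches a Gauss-Newton step, and as it grows the update approaches steepest descent, which is why DLS is robust across the poorly scaled merit functions typical of lens design.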

  15. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism.

  16. Application of an optimization method to high performance propeller designs

    NASA Technical Reports Server (NTRS)

    Li, K. C.; Stefko, G. L.

    1984-01-01

    The application of an optimization method to determine the propeller blade twist distribution which maximizes propeller efficiency is presented. The optimization employs a previously developed method which has been improved to include the effects of blade drag, camber and thickness. Before the optimization portion of the computer code is used, comparisons of calculated propeller efficiencies and power coefficients are made with experimental data for one NACA propeller at Mach numbers in the range of 0.24 to 0.50 and another NACA propeller at a Mach number of 0.71 to validate the propeller aerodynamic analysis portion of the computer code. Then comparisons of calculated propeller efficiencies for the optimized and the original propellers show the benefits of the optimization method in improving propeller performance. This method can be applied to the aerodynamic design of propellers having straight, swept, or nonplanar propeller blades.

  17. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

    As storage data density in hard-disk drives (HDDs) increases at constant or shrinking sizes, precision positioning of HDD heads becomes a more relevant issue in ensuring that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads, by finding the optimal placement of base plate and piezoelectric material for high-precision positioning of HDD heads. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness and avoidance of resonance phenomena. These requirements are achieved by applying formulations that maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results confirming the feasibility of this approach.

  18. Multidisciplinary Optimization Methods for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Weston, R. P.; Zang, T. A.

    1997-01-01

    An overview of multidisciplinary optimization (MDO) methodology and two applications of this methodology to the preliminary design phase are presented. These applications are being undertaken to improve, develop, validate and demonstrate MDO methods. Each is presented to illustrate different aspects of this methodology. The first application is an MDO preliminary design problem for defining the geometry and structure of an aerospike nozzle of a linear aerospike rocket engine. The second application demonstrates the use of the Framework for Interdisciplinary Design Optimization (FIDO), which is a computational environment system, by solving a preliminary design problem for a High-Speed Civil Transport (HSCT). The two sample problems illustrate the advantages to performing preliminary design with an MDO process.

  19. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    This work presents a new type of controller: an adaptive tracking controller that employs dynamic optimization of the current controller action for temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, comprising mass and heat balance equations, and then add cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for cooling action is specified as both a steady state and a design specification. Later, a correction is made to the dynamics in which the material balance is manipulated to use feed concentration as a system parameter, as an adaptive control measure to avoid actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation, which forms the adaptive control measure.

  20. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework and include four analysis routines to estimate the lift and drag of a supersonic airfoil, as well as a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area-rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, including local and global methods as well as gradient-based and gradient-free techniques.

  1. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of the Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by the planning efforts of the project's various research and quality control teams, which are captured annually as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate the adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  2. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
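    The signal-to-noise measure described above rewards settings that are both on target and insensitive to noise. The sketch below uses the standard smaller-the-better form on made-up replicate data (not from the report): two settings with the same mean response but different spread get clearly different S/N ratios.

```python
import math

# Taguchi smaller-the-better signal-to-noise ratio on replicate data.
def sn_smaller_the_better(ys):
    # SN = -10 * log10( mean(y^2) ); a larger SN is better, penalizing
    # both a high mean response and high variation across noise conditions.
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

setting_a = [1.9, 2.1, 2.0, 2.2]    # response under four noise conditions
setting_b = [1.5, 2.6, 1.4, 2.7]    # same mean (2.05) but far more spread

print(sn_smaller_the_better(setting_a), sn_smaller_the_better(setting_b))
```

    Setting A scores higher even though both settings average 2.05, which is the robust-design point: the chosen parameter combination should perform well across the noise factors, not just on average.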

  3. Optimal Experiment Design for Thermal Characterization of Functionally Graded Materials

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    The purpose of the project was to investigate methods to accurately verify that designed materials meet thermal specifications. The project involved heat transfer calculations and optimization studies, and no laboratory experiments were performed. One part of the research involved the study of materials in which conduction heat transfer predominates. Results include techniques to choose among several experimental designs, and protocols for determining the optimum experimental conditions for the determination of thermal properties. Metal foam materials, in which both conduction and radiation heat transfer are present, were also studied. Results of this work include procedures to optimize the design of experiments to accurately measure both conductive and radiative thermal properties. Detailed results in the form of three journal papers have been appended to this report.

  4. The optimal design of standard gearsets

    NASA Technical Reports Server (NTRS)

    Savage, M.; Coy, J. J.; Townsend, D. P.

    1983-01-01

    A design procedure for sizing standard involute spur gearsets is presented. The procedure is applied to find the optimal design for two examples - an external gear mesh with a ratio of 5:1 and an internal gear mesh with a ratio of 5:1. In the procedure, the gear mesh is designed to minimize the center distance for a given gear ratio, pressure angle, pinion torque, and allowable tooth strengths. From the methodology presented, a design space may be formulated for either external gear contact or for internal contact. The design space includes kinematics considerations of involute interference, tip fouling, and contact ratio. Also included are design constraints based on bending fatigue in the pinion fillet and Hertzian contact pressure in the full load region and at the gear tip where scoring is possible. This design space is two dimensional, giving the gear mesh center distance as a function of diametral pitch and the number of pinion teeth. The constraint equations were identified for kinematic interference, fillet bending fatigue, pitting fatigue, and scoring pressure, which define the optimal design space for a given gear design. The locus of equal size optimum designs was identified as the straight line through the origin which has the least slope in the design region.

  5. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  6. Fatigue reliability based optimal design of planar compliant micropositioning stages

    NASA Astrophysics Data System (ADS)

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.
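The stress-strength interference method mentioned above computes reliability as the probability that strength exceeds stress. A minimal sketch, assuming independent normally distributed fatigue stress and strength; the numbers are illustrative, not the paper's:

```python
from math import erf, sqrt

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Reliability R = P(strength > stress) for independent normal stress
    and strength (classical stress-strength interference)."""
    # The margin M = strength - stress is normal with:
    mu_m = mu_strength - mu_stress
    sd_m = sqrt(sd_strength**2 + sd_stress**2)
    z = mu_m / sd_m  # reliability index
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Illustrative values: fatigue strength 300 +/- 20 MPa,
# maximum equivalent fatigue stress 240 +/- 15 MPa
R = interference_reliability(300, 20, 240, 15)
```

In a reliability-based optimization, a constraint such as `R >= 0.999` would replace the deterministic safety-factor check.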

  7. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  8. Application of Optimal Designs to Item Calibration

    PubMed Central

    Lu, Hung-Yi

    2014-01-01

    In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the attrition speed of the items is extremely fast, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient. PMID:25188318
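For the two-parameter logistic (2PL) model, an optimal calibration design maximizes a function of the Fisher information about the item parameters. A minimal sketch of the D-optimality idea; the item parameters and examinee abilities below are illustrative:

```python
import numpy as np

def item_info(theta, a, b):
    """Fisher information about the parameters (a, b) of a 2PL item
    contributed by one examinee at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    g = np.array([theta - b, -a])  # gradient of the logit w.r.t. (a, b)
    return p * (1.0 - p) * np.outer(g, g)

def design_criterion(thetas, a=1.2, b=0.0):
    """D-optimality: log-determinant of the summed information matrix."""
    total = sum(item_info(t, a, b) for t in thetas)
    return np.linalg.slogdet(total)[1]

# Calibrating with examinees spread around the item difficulty is more
# informative than a cluster far from it:
spread = design_criterion([-1.0, 1.0])
cluster = design_criterion([3.0, 3.1])
```

An adaptive sequential scheme, as in the study, would route each item to examinees whose abilities maximize this criterion given the current parameter estimates.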

  9. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  10. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
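A merit function of the kind described can be sketched as a surrogate prediction minus a reward for distance to already-sampled points, so that a large weight favors improving the approximation and a small weight favors improving the solution. The surrogate, sample set, and weights below are illustrative, not the paper's:

```python
import numpy as np

def surrogate(x):
    """Cheap algebraic approximation standing in for an expensive objective."""
    return (x - 0.6) ** 2

samples = np.array([0.0, 0.5, 1.0])  # points where the true objective was run

def merit(x, rho):
    """Surrogate value minus rho times the distance to the nearest sample:
    rho = 0 purely exploits the surrogate minimum; large rho favors points
    where the approximation is poorly informed."""
    d = np.min(np.abs(x - samples))
    return surrogate(x) - rho * d

grid = np.linspace(0, 1, 101)
exploit = grid[np.argmin([merit(x, rho=0.0) for x in grid])]  # near 0.6
explore = grid[np.argmin([merit(x, rho=5.0) for x in grid])]  # far from samples
```

The next expensive evaluation would be placed at the merit minimizer, and the result used to refit the surrogate.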

  11. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
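A canonical GA of the sort described can be sketched in a few lines; the bit-string encoding, operator rates, and toy multimodal figure of merit below are illustrative stand-ins for an instrument performance model:

```python
import math
import random

def canonical_ga(fitness, n_bits=16, pop_size=40, generations=60,
                 p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal canonical GA: bit-string individuals, binary tournament
    selection, one-point crossover, bit-flip mutation. `fitness` maps a
    bit list to a figure of merit to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                nxt.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

def merit(bits):
    """Toy figure of merit with many local optima: decode the bits to
    x in [0, 1] and score a multimodal curve."""
    x = sum(b << i for i, b in enumerate(bits)) / (2 ** len(bits) - 1)
    return math.sin(5 * math.pi * x) ** 6

best = canonical_ga(merit)
```

The population-based search is what makes the method tolerant of the local optima the abstract mentions: no single design has to climb out of a basin on its own.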

  12. Optimal Shape Design of a Plane Diffuser in Turbulent Flow

    NASA Astrophysics Data System (ADS)

    Lim, Seokhyun; Choi, Haecheon

    2000-11-01

    Stratford (1959) experimentally designed an optimal shape of plane diffuser for maximum pressure recovery by having zero skin friction throughout the region of pressure rise. In the present study, we apply an algorithm of optimal shape design developed by Pironneau (1973, 1974) and Cabuk & Modi (1992) to a diffuser in turbulent flow, and show that maintaining zero skin friction in the pressure-rise region is an optimal condition for maximum pressure recovery at the diffuser exit. For the turbulence model, we use the k-ɛ-v^2-f model of Durbin (1995), which is known to accurately predict flows with separation. Our results with this model agree well with the previous experimental and LES results for a diffuser shape tested by Obi et al. (1993). From this initial shape, an optimal diffuser shape for maximum pressure recovery is obtained through an iterative procedure. The optimal diffuser has indeed zero skin friction throughout the pressure-rise region, and thus there is no separation in the flow. For the optimal diffuser shape obtained, an LES is being conducted to investigate the turbulence characteristics near the zero-skin-friction wall. A preliminary result of the LES will also be presented.

  13. Branch target buffer design and optimization

    NASA Technical Reports Server (NTRS)

    Perleberg, Chris H.; Smith, Alan J.

    1993-01-01

    Consideration is given to two major issues in the design of branch target buffers (BTBs), with the goal of achieving maximum performance for a given number of bits allocated to the BTB design. The first issue is BTB management; the second is what information to keep in the BTB. A number of solutions to these problems are reviewed, and various optimizations in the design of BTBs are discussed. Design target miss ratios for BTBs are developed, making it possible to estimate the performance of BTBs for real workloads.
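The management questions raised above, what to keep in the BTB and how branches compete for entries, can be illustrated with a minimal direct-mapped BTB and a hand-made branch trace; the addresses and table size are illustrative:

```python
class BranchTargetBuffer:
    """Minimal direct-mapped BTB mapping a branch instruction address to
    its predicted target. A lookup misses if the slot holds a different
    branch (or nothing)."""
    def __init__(self, n_entries=4):
        self.n = n_entries
        self.tags = [None] * n_entries     # branch address stored per slot
        self.targets = [None] * n_entries  # predicted target per slot

    def _index(self, pc):
        return (pc >> 2) % self.n  # word-aligned code: drop low address bits

    def lookup(self, pc):
        i = self._index(pc)
        return self.targets[i] if self.tags[i] == pc else None

    def update(self, pc, target):
        i = self._index(pc)  # direct-mapped: may evict another branch
        self.tags[i], self.targets[i] = pc, target

btb = BranchTargetBuffer(n_entries=4)
trace = [(0x40, 0x100), (0x44, 0x200), (0x40, 0x100), (0x44, 0x200)]
misses = 0
for pc, target in trace:
    if btb.lookup(pc) is None:  # miss-ratio accounting, as for design targets
        misses += 1
    btb.update(pc, target)
miss_ratio = misses / len(trace)
```

Running real workload traces through such a model, at various sizes and with various stored information, is how design-target miss ratios like those in the paper are developed.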

  14. Application of optimal design methodologies in clinical pharmacology experiments.

    PubMed

    Ogungbenro, Kayode; Dokoumetzidis, Aristides; Aarons, Leon

    2009-01-01

    Pharmacokinetic and pharmacodynamic data are often analysed by mixed-effects modelling techniques (also known as population analysis), which have become a standard tool in the pharmaceutical industry for drug development. The last 10 years have witnessed considerable interest in the application of experimental design theory to population pharmacokinetic and pharmacodynamic experiments. Design of population pharmacokinetic experiments involves selection and careful balancing of a number of design factors. Optimal design theory uses prior information about the model and parameter estimates to optimize a function of the Fisher information matrix to obtain the best combination of the design factors. This paper provides a review of the different approaches that have been described in the literature for optimal design of population pharmacokinetic and pharmacodynamic experiments. It describes the options that are available and highlights some of the issues that could be of concern in practical application. It also discusses areas of application of optimal design theory in clinical pharmacology experiments. It is expected that as awareness of the benefits of this approach increases, more people will embrace it, ultimately leading to more efficient population pharmacokinetic and pharmacodynamic experiments and helping to reduce both the cost and time of drug development.
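The core computation in optimal design, maximizing a function of the Fisher information matrix over candidate design points, can be sketched for a fixed-effects simplification of the population setting. The one-compartment bolus model, parameter values, and candidate sampling times below are illustrative, not from the review:

```python
import numpy as np
from itertools import combinations

def fim(times, dose=100.0, V=10.0, k=0.2):
    """Fisher information matrix for (V, k) in a one-compartment bolus
    model C(t) = (dose/V) * exp(-k*t), with additive unit-variance error."""
    M = np.zeros((2, 2))
    for t in times:
        c = (dose / V) * np.exp(-k * t)
        g = np.array([-c / V, -t * c])  # dC/dV and dC/dk at time t
        M += np.outer(g, g)
    return M

# D-optimal choice of 2 sampling times from a candidate grid (hours):
# maximize det(FIM), i.e. minimize the volume of the confidence ellipsoid.
candidates = [0.5, 1, 2, 4, 8, 12, 24]
best_times = max(combinations(candidates, 2),
                 key=lambda ts: np.linalg.det(fim(ts)))
```

The population versions reviewed in the paper optimize an analogous criterion built from the population Fisher information matrix, with sampling times, numbers of subjects, and group structure as design factors.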

  15. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained

  16. Experimental Testing of Dynamically Optimized Photoelectron Beams

    SciTech Connect

    Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; Di Pirro, G.; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Vicario, C.; Serafini, L.; Jones, S.

    2006-11-27

    We discuss the design of and initial results from an experiment in space-charge dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC photoinjector. The scheme under study employs the tendency of intense electron beams to rearrange to produce uniform density, giving a nearly ideal beam from the viewpoint of space charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that this new regime of operating photoinjector may be the preferred method of obtaining highest brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence for nearly uniformly filled ellipsoidal charge distributions has been recorded.

  17. Experimental Testing of Dynamically Optimized Photoelectron Beams

    NASA Astrophysics Data System (ADS)

    Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; Di Pirro, G.; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Serafini, L.; Vicario, C.; Jones, S.

    2006-11-01

    We discuss the design of and initial results from an experiment in space-charge dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC photoinjector. The scheme under study employs the tendency of intense electron beams to rearrange to produce uniform density, giving a nearly ideal beam from the viewpoint of space charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that this new regime of operating photoinjector may be the preferred method of obtaining highest brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence for nearly uniformly filled ellipsoidal charge distributions has been recorded.

  18. Integrated structural-aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Kao, P. J.; Grossman, B.; Polen, D.; Sobieszczanski-Sobieski, J.

    1988-01-01

    This paper focuses on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration, with emphasis on the major difficulty associated with multidisciplinary design optimization processes, their enormous computational costs. Methods are presented for reducing this computational burden through the development of efficient methods for cross-sensitivity calculations and the implementation of approximate optimization procedures. Utilizing a modular sensitivity analysis approach, it is shown that the sensitivities can be computed without the expensive calculation of the derivatives of the aerodynamic influence coefficient matrix, and the derivatives of the structural flexibility matrix. The same process is used to efficiently evaluate the sensitivities of the wing divergence constraint, which should be particularly useful, not only in problems of complete integrated aircraft design, but also in aeroelastic tailoring applications.

  19. Multidisciplinary Concurrent Design Optimization via the Internet

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand

    2001-01-01

    A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.

  20. Complex optimization for big computational and experimental neutron datasets

    NASA Astrophysics Data System (ADS)

    Bao, Feng; Archibald, Richard; Niedziela, Jennifer; Bansal, Dipanshu; Delaire, Olivier

    2016-12-01

    We present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. We use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  1. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
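The life-cycle cost-of-energy criterion described above can be sketched as the present value of all costs divided by the discounted energy delivered, with module degradation included. The cost and output figures below are illustrative, not JPL's:

```python
def lifecycle_energy_cost(capital, annual_om, annual_kwh,
                          years, discount=0.08, degradation=0.01):
    """Life-cycle cost of energy: present value of capital plus O&M costs,
    divided by the discounted energy delivered, with module output
    degrading by a fixed fraction each year."""
    pv_cost = capital
    pv_energy = 0.0
    for y in range(1, years + 1):
        df = (1 + discount) ** -y
        pv_cost += annual_om * df
        pv_energy += annual_kwh * (1 - degradation) ** (y - 1) * df
    return pv_cost / pv_energy  # $ per kWh

# Illustrative system: $12000 installed, $150/yr O&M, 9000 kWh in year one
cost_20 = lifecycle_energy_cost(12000, 150, 9000, years=20)
cost_30 = lifecycle_energy_cost(12000, 150, 9000, years=30)
```

Comparing this life-cycle energy cost against the marginal cost of energy in each successive year is what determines the logical plant lifetime in the method described.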

  2. Optimal radar waveform design for moving target

    NASA Astrophysics Data System (ADS)

    Zhu, Binqi; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao

    2016-07-01

    An optimal radar waveform-design method is proposed to detect moving targets in the presence of clutter and noise. The clutter is split into moving and static parts. Radar moving-target/clutter models are introduced and combined with the Neyman-Pearson criterion to design optimal waveforms. Results show that the optimal waveform for a moving target differs from that for a static target. A combination of simple-frequency signals can produce maximum detectability under different noise power spectral density conditions. Simulations show that our algorithm greatly improves the signal-to-clutter-plus-noise ratio of the radar system. Therefore, this algorithm may be preferable for moving-target detection when prior information on clutter and noise is available.

  3. MDO can help resolve the designer's dilemma. [multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.

    1991-01-01

    Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from advances in computer software and hardware. Expected benefits of this methodology are a rational, mathematically consistent approach to hypersonic aircraft design, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.

  4. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. The result of the design sensitivity analysis is used to carry out design optimization of a built-up structure.

  5. Design of Optimally Robust Control Systems.

    DTIC Science & Technology

    1980-01-01

    approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double

  6. Design Optimization of Structural Health Monitoring Systems

    SciTech Connect

    Flynn, Eric B.

    2014-03-06

    Sensor networks drive decisions. Approach: design networks to minimize the expected total cost (in a statistical sense, i.e. Bayes risk) associated with making wrong decisions and with installing, maintaining, and running the sensor network itself. Search for optimal solutions using a Monte-Carlo-sampling-adapted genetic algorithm. Applications include structural health monitoring and surveillance.
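The expected-total-cost objective described above can be sketched for a toy network-sizing decision. All probabilities and costs below are illustrative assumptions, and the search here is a simple sweep rather than the Monte-Carlo-adapted GA of the abstract:

```python
def expected_total_cost(n_sensors, p_detect=0.6, p_damage=0.05,
                        cost_sensor=2000.0, cost_missed=1_000_000.0,
                        cost_false_alarm=10_000.0, p_fa_each=0.01):
    """Bayes-risk-style objective: hardware cost plus the expected cost of
    wrong decisions. Damage is missed only if every sensor misses it; a
    false alarm occurs if any sensor fires with no damage present."""
    p_miss = (1 - p_detect) ** n_sensors
    p_false = 1 - (1 - p_fa_each) ** n_sensors
    return (n_sensors * cost_sensor
            + p_damage * p_miss * cost_missed
            + (1 - p_damage) * p_false * cost_false_alarm)

# Sweep network sizes; the optimum balances sensor cost against risk:
# too few sensors miss damage, too many cost more and raise false alarms.
best_n = min(range(1, 11), key=expected_total_cost)
```

In a realistic problem the decision variables would also include sensor placement and type, which is where the genetic-algorithm search becomes necessary.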

  7. Experimental Testing of Dynamically Optimized Photoelectron Beams

    NASA Astrophysics Data System (ADS)

    Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; Pirro, G. Di; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Serafini, L.; Vicario, C.

    2007-09-01

    We discuss the design of and initial results from an experiment in space-charge dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC (located at INFN-LNF, Frascati) photoinjector. The scheme under study employs the natural tendency of intense electron beams to configure themselves to produce a uniform density, giving a nearly ideal beam from the viewpoint of space charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that the existing infrastructure at SPARC is nearly ideal for the proposed tests, and that this new regime of operating photoinjector may be the preferred method of obtaining highest brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence for nearly uniformly filled ellipsoidal charge distributions has been recorded.

  8. Experimental Testing of Dynamically Optimized Photoelectron Beams

    NASA Astrophysics Data System (ADS)

    Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; di Pirro, G.; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Serafini, L.; Vicario, C.

    We discuss the design of and initial results from an experiment in space-charge dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC (located at INFN-LNF, Frascati) photoinjector. The scheme under study employs the natural tendency of intense electron beams to configure themselves to produce a uniform density, giving a nearly ideal beam from the viewpoint of space charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that the existing infrastructure at SPARC is nearly ideal for the proposed tests, and that this new regime of operating photoinjector may be the preferred method of obtaining highest brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence for nearly uniformly filled ellipsoidal charge distributions has been recorded.

  9. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The main goal of research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated from statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology of the airplanes in those weight databases. If any new structural technology is to be pursued or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimate must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or a nacelle/fuselage piece. Since responses to changes in geometry are essential in the conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period, a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.

  10. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than traditional methods.

  11. Discrete optimization of isolator locations for vibration isolation systems: An analytical and experimental investigation

    SciTech Connect

    Ponslet, E.R.; Eldred, M.S.

    1996-05-17

    An analytical and experimental study is conducted to investigate the effect of isolator locations on the effectiveness of vibration isolation systems. The study uses isolators with fixed properties and evaluates potential improvements to the isolation system that can be achieved by optimizing isolator locations. Because the available locations for the isolators are discrete in this application, a Genetic Algorithm (GA) is used as the optimization method. The system is modeled in MATLAB and coupled with the GA available in the DAKOTA optimization toolkit under development at Sandia National Laboratories. Design constraints dictated by hardware and experimental limitations are implemented through penalty function techniques. A series of GA runs reveal difficulties in the search on this heavily constrained, multimodal, discrete problem. However, the GA runs provide a variety of optimized designs with predicted performance from 30 to 70 times better than a baseline configuration. An alternate approach is also tested on this problem: it uses continuous optimization, followed by rounding of the solution to neighboring discrete configurations. Results show that this approach leads to either infeasible or poor designs. Finally, a number of optimized designs obtained from the GA searches are tested in the laboratory and compared to the baseline design. These experimental results show a 7 to 46 times improvement in vibration isolation from the baseline configuration.

  12. Aircraft family design using enhanced collaborative optimization

    NASA Astrophysics Data System (ADS)

    Roth, Brian Douglas

    Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component

  13. Multidisciplinary Design Optimization on Conceptual Design of Aero-engine

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-bo; Wang, Zhan-xue; Zhou, Li; Liu, Zeng-wen

    2016-06-01

    In order to obtain better integrated performance of an aero-engine during the conceptual design stage, multiple disciplines such as aerodynamics, structures, weight, and aircraft mission must be considered. Unfortunately, the couplings between these disciplines make the problem difficult to model or solve by conventional methods. MDO (Multidisciplinary Design Optimization) methodology, which handles inter-disciplinary couplings well, is adopted to solve this coupled problem. Approximation methods, optimization methods, coordination methods, and modeling methods for the MDO framework are analyzed in depth. To obtain a more efficient MDO framework, an improved CSSO (Concurrent Subspace Optimization) strategy based on DOE (Design Of Experiment) and RSM (Response Surface Model) methods is proposed in this paper, and an improved DE (Differential Evolution) algorithm is recommended to solve the system-level and discipline-level optimization problems in the MDO framework. The improved CSSO strategy and DE algorithm are evaluated on a numerical test problem. The results show that the efficiency of the improved methods proposed in this paper is significantly increased. The coupled problem of VCE (Variable Cycle Engine) conceptual design is solved using the improved CSSO strategy, and the design parameters it yields are better than the original ones. The integrated performance of the VCE is significantly improved.
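
    The DOE-plus-RSM surrogate loop that the improved CSSO strategy builds on can be sketched in a few lines. The sketch below is illustrative only: a hypothetical quadratic stand-in for the discipline analysis, plain NumPy, and a bare-bones differential evolution loop, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_analysis(X):
        # Stand-in for a costly coupled-discipline analysis (hypothetical quadratic).
        X = np.atleast_2d(X)
        return (X[:, 0] - 1.0) ** 2 + 2.0 * (X[:, 1] + 0.5) ** 2

    # 1) DOE: sample design points in the box [-2, 2]^2.
    X = rng.uniform(-2.0, 2.0, size=(30, 2))
    y = expensive_analysis(X)

    # 2) RSM: fit a full quadratic response surface by least squares.
    def basis(X):
        X = np.atleast_2d(X)
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

    def surrogate(x):
        return float(basis(x) @ coef)

    # 3) DE: minimize the cheap surrogate with a basic differential-evolution loop.
    pop = rng.uniform(-2.0, 2.0, size=(20, 2))
    for _ in range(200):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.clip(a + 0.8 * (b - c), -2.0, 2.0)
            if surrogate(trial) < surrogate(pop[i]):
                pop[i] = trial
    best = min(pop, key=surrogate)
    ```

    The surrogate is cheap to evaluate, so the DE loop can afford many generations; in a real CSSO framework each discipline would fit its own response surface from its own DOE samples.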

  14. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses a simulation program to aid in the design of enzyme kinetics experiments (includes sample runs). Concentration versus time profiles of any subset or all nine states of the reactions can be displayed with or without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  15. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.

  16. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
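
    As an illustration of the fractional-factorial idea, the sketch below uses a standard L4(2^3) orthogonal array: three two-level factors are studied in four trials instead of the eight a full factorial would need. The factor labels and responses are invented for the example.

    ```python
    import numpy as np

    # L4(2^3) orthogonal array for three two-level factors A, B, C.
    L4 = np.array([
        [0, 0, 0],
        [0, 1, 1],
        [1, 0, 1],
        [1, 1, 0],
    ])

    # Assumed responses from the four trials (e.g. a measured strength).
    y = np.array([20.0, 24.0, 29.0, 33.0])

    # Main effect of each factor: mean response at level 1 minus level 0.
    effects = np.array([y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean()
                        for j in range(3)])

    # Taguchi "larger is better" signal-to-noise ratio for a set of replicates.
    def sn_larger_is_better(reps):
        reps = np.asarray(reps, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / reps ** 2))
    ```

    With the assumed responses, factor A dominates and factor C has no effect; this simple main-effects arithmetic is part of what makes the Taguchi analysis easy to teach.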

  17. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size, and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area, titled morphing as an independent variable, formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.

  18. Optimal design of a space power system

    NASA Technical Reports Server (NTRS)

    Chun, Young W.; Braun, James F.

    1990-01-01

    The aerospace industry, like many other industries, regularly applies optimization techniques to develop designs which reduce cost, maximize performance, and minimize weight. The desire to minimize weight is of particular importance in space-related products since the costs of launch are directly related to payload weight, and launch vehicle capabilities often limit the allowable weight of a component or system. With these concerns in mind, this paper presents the optimization of a space-based power generation system for minimum mass. The goal of this work is to demonstrate the use of optimization techniques on a realistic and practical engineering system. The power system described uses thermoelectric devices to convert heat into electricity. The heat source for the system is a nuclear reactor. Waste heat is rejected from the system to space by a radiator.

  19. Generalized mathematical models in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, Panos Y.; Rao, J. R. Jagannatha

    1989-01-01

    The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations, yet what has been identified previously as being just related to solution procedures is now being extended to study modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.

  20. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes the crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests were performed after optimization. The experimental results show that the mass of each optimized spring is reduced by 16.2%, while the reliability increases from 94.5% to 99.9%. The experimental results verify the correctness and feasibility of this reliability optimization design method.
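
    The paper's specific rate-adaptation formulas are not given in the abstract; the sketch below shows one common way crossover and mutation rates can be varied over generations to delay premature convergence. The schedule shapes and numerical ranges are illustrative assumptions, not the authors' method.

    ```python
    # Illustrative schedule only: keep exploration high early and shift toward
    # exploitation later by varying the crossover rate pc linearly and the
    # mutation rate pm geometrically with the generation counter t.
    def adaptive_rates(t, t_max, pc=(0.9, 0.6), pm=(0.10, 0.01)):
        frac = t / t_max
        pc_t = pc[0] + (pc[1] - pc[0]) * frac       # 0.9 -> 0.6 linearly
        pm_t = pm[0] * (pm[1] / pm[0]) ** frac      # 0.10 -> 0.01 geometrically
        return pc_t, pm_t
    ```

    A GA main loop would call this once per generation and feed the returned rates to its crossover and mutation operators.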

  1. Optimal Design of Non-equilibrium Experiments for Genetic Network Interrogation.

    PubMed

    Adoteye, Kaska; Banks, H T; Flores, Kevin B

    2015-02-01

    Many experimental systems in biology, especially synthetic gene networks, are amenable to perturbations that are controlled by the experimenter. We developed an optimal design algorithm that calculates optimal observation times in conjunction with optimal experimental perturbations in order to maximize the amount of information gained from longitudinal data derived from such experiments. We applied the algorithm to a validated model of a synthetic Brome Mosaic Virus (BMV) gene network and found that optimizing experimental perturbations may substantially decrease uncertainty in estimating BMV model parameters.
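
    The idea of choosing observation times to maximize information can be illustrated with a D-optimality criterion: pick the times whose Fisher information matrix has maximal determinant. The model, parameter values, and candidate times below are hypothetical stand-ins, not the BMV network model.

    ```python
    import numpy as np
    from itertools import combinations

    # For a toy model y(t) = a * exp(-b t), choose the k observation times
    # (from a candidate set) that maximize det(J^T J), where J holds the
    # sensitivities of y to the parameters a and b (D-optimal design).
    def d_optimal_times(candidates, k, a=1.0, b=0.5):
        def logdet_info(times):
            t = np.asarray(times, dtype=float)
            J = np.column_stack([np.exp(-b * t),            # dy/da
                                 -a * t * np.exp(-b * t)])  # dy/db
            sign, logdet = np.linalg.slogdet(J.T @ J)
            return logdet if sign > 0 else -np.inf
        return max(combinations(candidates, k), key=logdet_info)
    ```

    For this exponential-decay example the determinant reduces to a² e^(-2b(t1+t2)) (t1 - t2)², so with b = 0.5 the best pair of integer times in 0..8 is (0, 2): one early sample pins down the amplitude, one near t = 1/b pins down the decay rate.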

  2. Optimizing Trial Designs for Targeted Therapies

    PubMed Central

    Beckman, Robert A.; Burman, Carl-Fredrik; König, Franz; Stallard, Nigel; Posch, Martin

    2016-01-01

    An important objective in the development of targeted therapies is to identify the populations where the treatment under consideration has positive benefit risk balance. We consider pivotal clinical trials, where the efficacy of a treatment is tested in an overall population and/or in a pre-specified subpopulation. Based on a decision theoretic framework we derive optimized trial designs by maximizing utility functions. Features to be optimized include the sample size and the population in which the trial is performed (the full population or the targeted subgroup only) as well as the underlying multiple test procedure. The approach accounts for prior knowledge of the efficacy of the drug in the considered populations using a two dimensional prior distribution. The considered utility functions account for the costs of the clinical trial as well as the expected benefit when demonstrating efficacy in the different subpopulations. We model utility functions from a sponsor’s as well as from a public health perspective, reflecting actual civil interests. Examples of optimized trial designs obtained by numerical optimization are presented for both perspectives. PMID:27684573

  3. Optimal serial dilutions designs for drug discovery experiments.

    PubMed

    Donev, Alexander N; Tobias, Randall D

    2011-05-01

    Dose-response studies are an essential part of the drug discovery process. They are typically carried out on a large number of chemical compounds using serial dilution experimental designs. This paper proposes a method of selecting the key parameters of these designs (maximum dose, dilution factor, number of concentrations and number of replicated observations for each concentration) depending on the stage of the drug discovery process where the study takes place. This is achieved by employing and extending results from optimal design theory. Population D- and D(S)-optimality are defined and used to evaluate the precision of estimating the potency of the tested compounds. The proposed methodology is easy to use and creates opportunities to reduce the cost of the experiments without compromising the quality of the data obtained in them.
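
    The design parameters the paper optimizes (maximum dose, dilution factor, number of concentrations) fully determine the concentration grid of a serial dilution experiment; a minimal helper makes this concrete (the function name is ours, not the paper's).

    ```python
    # A serial dilution design: starting from the maximum dose, each successive
    # concentration is the previous one divided by the dilution factor.
    def serial_dilution(max_dose, dilution_factor, n_conc):
        return [max_dose / dilution_factor ** i for i in range(n_conc)]
    ```

    For example, a maximum dose of 100 with a dilution factor of 2 and four concentrations gives the grid 100, 50, 25, 12.5; the optimal-design results in the paper guide how to pick these three numbers (plus the replication per concentration) at each stage of discovery.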

  4. Optimization of the National Ignition Facility primary shield design

    SciTech Connect

    Annese, C.E.; Watkins, E.F.; Greenspan, E.; Miller, W.F.; Latkowski, J.; Lee, J.D.; Soran, P.; Tobin, M.L.

    1993-10-01

    Minimum-cost design concepts for the primary shield of the National Ignition Facility (NIF), a laser fusion experimental facility, are searched for with the help of the optimization code SWAN. The computational method developed for this search involves incorporating the time dependence of the delayed photon field within effective delayed photon production cross sections. This method enables one to address the time-dependent problem using relatively simple, time-independent transport calculations, thus significantly simplifying the design process. A novel approach was used to identify the optimal combination of constituents that minimizes the shield cost; it involves the generation, with SWAN, of effectiveness functions for replacing materials on an equal-cost basis. The minimum-cost shield design concept was found to consist of a mixture of polyethylene and low-cost, low-activation materials such as SiC, with boron added near the shield boundaries.

  5. Multiobjective optimization in integrated photonics design.

    PubMed

    Gagnon, Denis; Dumont, Joey; Dubé, Louis J

    2013-07-01

    We propose the use of the parallel tabu search algorithm (PTS) to solve combinatorial inverse design problems in integrated photonics. To assess the potential of this algorithm, we consider the problem of beam shaping using a two-dimensional arrangement of dielectric scatterers. The performance of PTS is compared to one of the most widely used optimization algorithms in photonics design, the genetic algorithm (GA). We find that PTS can produce comparable or better solutions than the GA, while requiring less computation time and fewer adjustable parameters. For the coherent beam shaping problem as a case study, we demonstrate how PTS can tackle multiobjective optimization problems and represent a robust and efficient alternative to GA.
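
    Tabu search itself is simple to state: take the best admissible neighbor, forbid recently used moves for a fixed tenure, and allow a tabu move only when it beats the best solution seen so far (the aspiration criterion). The toy sketch below applies it to a binary one-max problem rather than the paper's scatterer-placement problem, and all names are ours.

    ```python
    from collections import deque
    import random

    # Toy tabu search over binary configurations with a flip-one-bit
    # neighborhood, a fixed-tenure tabu list, and an aspiration criterion.
    def tabu_search(score, n_bits, n_iter=200, tenure=5, seed=0):
        rnd = random.Random(seed)
        x = [rnd.randint(0, 1) for _ in range(n_bits)]
        best, best_score = x[:], score(x)
        tabu = deque(maxlen=tenure)                    # recently flipped bit indices
        for _ in range(n_iter):
            cands = []
            for j in range(n_bits):
                y = x[:]
                y[j] ^= 1
                s = score(y)
                if j not in tabu or s > best_score:    # aspiration: tabu but better
                    cands.append((s, j, y))
            if not cands:
                continue
            s, j, x = max(cands, key=lambda t: t[0])   # best admissible move
            tabu.append(j)
            if s > best_score:
                best, best_score = x[:], s
        return best, best_score
    ```

    Because the best non-improving move is still accepted while recent moves stay tabu, the search can walk out of local optima instead of stalling there, which is the property the paper exploits for combinatorial photonics layouts.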

  6. Initial data sampling in design optimization

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Terry H.

    2011-06-01

    Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
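
    A minimal random Latin hypercube sampler, of the plain kind whose space-filling limitations the abstract cautions about, can be written as:

    ```python
    import numpy as np

    # Stratify each dimension into n equal bins, draw one point per bin, and
    # permute the bin order independently per dimension so that every
    # one-dimensional stratum contains exactly one sample.
    def latin_hypercube(n, d, rng):
        u = rng.uniform(size=(n, d))                              # offset within each bin
        perms = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (perms + u) / n                                    # samples in [0, 1)^d
    ```

    Each of the n bins per dimension receives exactly one sample; maximin, correlation-reducing, and other optimized variants then improve on this baseline, as the paper discusses.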

  7. Optimized design for an electrothermal microactuator

    NASA Astrophysics Data System (ADS)

    Cǎlimǎnescu, Ioan; Stan, Liviu-Constantin; Popa, Viorica

    2015-02-01

    In micromechanical structures, electrothermal actuators are known to be capable of providing larger force and reasonable tip deflection compared with electrostatic ones. Many studies have been devoted to the analysis of flexure actuators. One of the most popular electrothermal actuators is the `U-shaped' actuator. The device is composed of two suspended beams with variable cross sections joined at the free end, which constrains the tip to move in an arcing motion when current is passed through the actuator. The goal of this research is to determine via FEA the best-fitted geometry of the microactuator (the optimization input parameters) in order to drive some of the output parameters, such as thermal strain or total deformation, to their maximum values. The software used to generate the CAD geometry was SolidWorks 2010, and all FEA analysis was conducted with ANSYS 13. The optimized model has smaller values of the geometric input parameters, that is, a more compact geometry. The maximum temperature reached a smaller value for the optimized model, while the calculated heat flux is 13% larger, as are the Joule heat (26%), total deformation (1.2%), and thermal strain (8%). Simply by optimizing the design, the microactuator became more compact and more efficient.

  8. Design Optimization of Marine Reduction Gears.

    DTIC Science & Technology

    1983-09-01

    Approved by: Thesis Advisor; Second Reader; Chairman, Department of Mechanical Engineering; Dean of Science and Engineering ...unconstrained problems. 1. Direct Methods. Direct methods are popular constrained optimization algorithms. One well known direct method is the method of...various popular tooth forms, and Appendix A contains a descriptive figure of gear tooth design variables. However, the following equations are a good

  9. Optimal Design of Compact Spur Gear Reductions

    DTIC Science & Technology

    1992-09-01

    Lundberg and Palmgren (1952) developed a theory for the life and capacity of ball and roller bearings. This life model is... bearings (Lundberg and Palmgren, 1952). Lundberg and Palmgren determined that the scatter in the life of a bearing can be modeled with a two-parameter...optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in

  10. Computational Methods for Design, Control and Optimization

    DTIC Science & Technology

    2007-10-01

    "scenario" that applies to channel flows (Poiseuille flow, Couette flow) and pipe flows. Over the past 75 years many complex "transition theories" have... Simulation of Turbulent Flows, Springer Verlag, 2005. Additional Publications Supported by this Grant: 1. J. Borggaard and T. Iliescu, Approximate Deconvolution...rigorous analysis of design algorithms that combine numerical simulation codes, approximate sensitivity calculations, and optimization codes. The fundamental

  11. Database Design and Management in Engineering Optimization.

    DTIC Science & Technology

    1988-02-01

    Sreekanta Murthy, T., Shyy, Y.-K. and Arora, J. S., MIDAS: Management of Information for... educational and research purposes. ...an education in the particular field of expertise. The types of information to be retained and presented depend on the user of the system... Though the design of MIDAS is directly influenced by the current structural optimization applications, it possesses

  12. Optimal design of a tidal turbine

    NASA Astrophysics Data System (ADS)

    Kueny, J. L.; Lalande, T.; Herou, J. J.; Terme, L.

    2012-11-01

    An optimal design procedure has been applied to improve the design of an open-center tidal turbine. Specific software developed in C++ makes it possible to generate geometry adapted to the particular constraints imposed on this machine. Automatic scripts based on the AUTOGRID, IGG, FINE/TURBO, and CFView software of the NUMECA CFD suite are used to evaluate all candidate geometries. This package is coupled with the optimization software EASY, which is based on an evolutionary strategy complemented by an artificial neural network. A new technique is proposed to guarantee the robustness of the mesh over the whole range of the design parameters. An important improvement of the initial geometry has been obtained. To limit the total CPU time required by this optimization process, the geometry of the tidal turbine has been considered axisymmetric, with a uniform upstream velocity. A more complete model (12 M nodes) has been built in order to analyze the effects related to the sea-bed boundary layer, the proximity of the sea surface, the presence of a large triangular basement supporting the turbine, and a possible incidence of the upstream velocity.

  13. Design and optimization of a HTS insert for solenoid magnets

    NASA Astrophysics Data System (ADS)

    Tomassetti, Giordano; de Marzi, Gianluca; Muzzi, Luigi; Celentano, Giuseppe; della Corte, Antonio

    2016-12-01

    With the availability of High-Temperature Superconducting (HTS) prototype cables based on high-performance REBCO Coated Conductor (CC) tapes, new designs can now be made for large-bore high-field inserts in superconducting solenoids, thus extending the magnet operating point to higher magnetic fields. In this work, as an alternative to the standard trial-and-error design process, an optimization procedure for the design of an HTS grading section is proposed, including parametric electromagnetic and structural analyses using the ANSYS software coupled with a numerically efficient optimization algorithm. This HTS grading section is designed to be inserted into a 12 T large-bore Low-Temperature Superconducting (LTS) solenoid (diameter about 1 m) to increase the field up to a maximum value of at least 17 T. The optimization variables are the number of turns and layers and the circle-in-square jacket inner diameter; the objective is to minimize the total conductor length needed to achieve a peak field of at least 17 T while guaranteeing structural integrity and satisfying manufacturing constraints. By means of the optimization, an optimal total conductor length of 360 m was found, achieving 17.2 T with an operating current of 22.4 kA and a coil comprising 18 × 12 turns, about 20% shorter than the best initial candidate design. The optimal HTS insert has a bore compatible with manufacturing constraints (inner bore radius larger than 30 cm). A scaled HTS insert with reduced conductor length, to be tested for validation purposes in an advanced experimental facility currently under construction, is also mentioned.

  14. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  15. A Nonlinear Optimal Control Design using Narrowband Perturbation Feedback for Magnetostrictive Actuators

    DTIC Science & Technology

    2010-07-01

    A Nonlinear Optimal Control Design using Narrowband Perturbation Feedback for Magnetostrictive Actuators. William S. Oates, Rick Zrostlik, Scott... Abstract: Nonlinear optimal and narrowband feedback control designs are developed and experimentally implemented on a magnetostrictive Terfenol-D...utilizing narrowband feedback. A narrowband filter is implemented by treating the nonlinear and hysteretic magnetostrictive constitutive behavior as

  16. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. We discuss a solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2); the solution consists of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution

  17. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  18. A Framework to Design and Optimize Chemical Flooding Processes

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  19. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    SciTech Connect

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2004-11-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.

  20. Topology optimization design of a space mirror

    NASA Astrophysics Data System (ADS)

    Liu, Jiazhen; Jiang, Bo

    2015-11-01

    As key components of the optical system of a space optical remote sensor, space mirrors have a surface accuracy whose direct impact on the imaging quality of the remote sensor cannot be ignored. In the future, large-diameter mirrors will become an important trend in the development of space optical technology. However, a sharp increase in mirror diameter causes deformation of the mirror and increases the thermal deformation caused by temperature variations. A reasonable lightweight structure is therefore required to ensure that the optical performance of the system meets the requirements. As a new type of lightweight approach, topology optimization is an important direction of current space optical remote sensing research. The lightweight design of a rectangular mirror was studied using the variable density method of topology optimization. The surface figure accuracy of the mirror assembly was obtained under different conditions: the PV value was less than λ/10 and the RMS value less than λ/50 (λ = 632.8 nm). The results show that the mirror assembly can achieve sufficiently high static rigidity, dynamic stiffness, and thermal stability, and is sufficiently resistant to external environmental interference. Key words: topology optimization, space mirror, lightweight, space optical remote sensor

  1. Design search and optimization in aerospace engineering.

    PubMed

    Keane, A J; Scanlan, J P

    2007-10-15

    In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams.
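
The surrogate-based step described above can be sketched for a single objective: fit a kriging (Gaussian-process) model to a few expensive evaluations, then sample where the probability of improvement is highest. The toy objective, kernel length-scale, and sample points below are invented; this is not the paper's multi-objective formulation:

```python
import numpy as np
from math import erf, sqrt

def kernel(a, b, length=0.3):
    # Squared-exponential covariance between 1-D input arrays.
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length**2))

def krig(x_train, y_train, x_query, nugget=1e-8):
    # Simple kriging predictor: posterior mean and standard deviation.
    K = kernel(x_train, x_train) + nugget * np.eye(len(x_train))
    Ks = kernel(x_query, x_train)
    mu = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def prob_improvement(mu, sigma, f_best):
    # PI(x) = Phi((f_best - mu)/sigma) for a minimisation problem.
    z = (f_best - mu) / sigma
    return np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])

# Toy cheap function standing in for an expensive CFD drag evaluation.
f = lambda x: np.sin(6 * x) + x
x_train = np.array([0.0, 0.35, 0.7, 1.0])
y_train = f(x_train)
x_query = np.linspace(0, 1, 101)
mu, sigma = krig(x_train, y_train, x_query)
pi = prob_improvement(mu, sigma, y_train.min())
print("next sample at x =", x_query[np.argmax(pi)])
```

The surrogate replaces most CFD calls: only the chosen point is evaluated expensively, the model is refit, and the loop repeats.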

  2. Optimally designing games for behavioural research.

    PubMed

    Rafferty, Anna N; Zaharia, Matei; Griffiths, Thomas L

    2014-07-08

    Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision.
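
The Markov decision process machinery used to model players is standard; a minimal value-iteration sketch on a toy chain-shaped game is shown below. The states, rewards, and transition probabilities are invented for illustration:

```python
import numpy as np

# Toy 4-state game: action 1 advances the player more reliably than action 0;
# state 3 is an absorbing "solved" state. All numbers are illustrative.
n_states, n_actions, gamma = 4, 2, 0.9
P = np.zeros((n_actions, n_states, n_states))
P[0] = [[0.9, 0.1, 0.0, 0.0], [0.0, 0.9, 0.1, 0.0],
        [0.0, 0.0, 0.9, 0.1], [0.0, 0.0, 0.0, 1.0]]
P[1] = [[0.2, 0.8, 0.0, 0.0], [0.0, 0.2, 0.8, 0.0],
        [0.0, 0.0, 0.2, 0.8], [0.0, 0.0, 0.0, 1.0]]
R = np.array([0.0, 0.0, 0.0, 1.0])

# Value iteration: V(s) <- max_a [ R(s) + gamma * sum_s' P(s'|s,a) V(s') ].
V = np.zeros(n_states)
for _ in range(500):
    Q = R[None, :] + gamma * np.einsum("asx,x->as", P, V)
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)
print("predicted actions per state:", policy)
```

In the paper's setting, the predicted action sequences under different cognitive-parameter values are what make a game design more or less informative.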

  3. Optimal interferometer designs for optical coherence tomography.

    PubMed

    Rollins, A M; Izatt, J A

    1999-11-01

    We introduce a family of power-conserving fiber-optic interferometer designs for low-coherence reflectometry that use optical circulators, unbalanced couplers, and (or) balanced heterodyne detection. Simple design equations for optimization of the signal-to-noise ratio of the interferometers are expressed in terms of relevant signal and noise sources and measurable system parameters. We use the equations to evaluate the expected performance of the new configurations compared with that of the standard Michelson interferometer that is commonly used in optical coherence tomography (OCT) systems. The analysis indicates that improved sensitivity is expected for all the new interferometer designs, compared with the sensitivity of the standard OCT interferometer, under high-speed imaging conditions.

  5. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear: we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design need to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in silico analyses on families of gene regulatory cascade models to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  7. PLS-optimal: a stepwise D-optimal design based on latent variables.

    PubMed

    Brandmaier, Stefan; Sahlin, Ullrika; Tetko, Igor V; Öberg, Tomas

    2012-04-23

    Several applications, such as risk assessment within REACH or drug discovery, require reliable methods for the design of experiments and efficient testing strategies. Keeping the number of experiments as low as possible is important from both a financial and an ethical point of view, as exhaustive testing of compounds requires significant financial resources and animal lives. With a large initial set of compounds, experimental design techniques can be used to select a representative subset for testing. Once measured, these compounds can be used to develop quantitative structure-activity relationship models to predict properties of the remaining compounds. This reduces the required resources and time. D-Optimal design is frequently used to select an optimal set of compounds by analyzing data variance. We developed a new sequential approach to apply a D-Optimal design to latent variables derived from a partial least squares (PLS) model instead of principal components. The stepwise procedure selects a new set of molecules to be measured after each previous measurement cycle. We show that application of the D-Optimal selection generates models with a significantly improved performance on four different data sets with end points relevant for REACH. Compared to those derived from principal components, PLS models derived from the selection on latent variables had a lower root-mean-square error and a higher Q2 and R2. This improvement is statistically significant, especially for the small number of compounds selected.
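
The selection step can be sketched as a greedy D-optimal search that maximizes the determinant of the information matrix over the chosen rows. The random score matrix below stands in for fitted PLS latent variables, and the paper's stepwise procedure additionally refits the PLS model between measurement cycles:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=(50, 3))   # stand-in for PLS latent-variable scores

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedily pick k rows maximizing det(S'S), a D-optimality criterion."""
    chosen = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            S = X[chosen + [i]]
            # Small ridge keeps the determinant defined before k >= n_features.
            det = np.linalg.det(S.T @ S + ridge * np.eye(X.shape[1]))
            if det > best_det:
                best, best_det = i, det
        chosen.append(best)
    return chosen

subset = greedy_d_optimal(scores, k=8)
print("selected compounds:", subset)
```

The selected indices are the compounds to measure next; the remaining rows are left for the QSAR model to predict.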

  8. Multidisciplinary design optimization for sonic boom mitigation

    NASA Astrophysics Data System (ADS)

    Ozcer, Isik A.

    product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal, while keeping the drag penalty under feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and the half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, and non-linear span-wise twist and camber distribution. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and the wings are optimized to demonstrate sonic boom mitigation for a real aircraft. The final results predict a 42% reduction in bow shock strength, 17% reduction in peak Δp, 22% reduction in pressure impulse, 10% reduction in footprint size, 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.
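
The final step, quadratic response modeling over a DoE matrix, can be sketched as a least-squares fit. The 3-level design, the placeholder "measurement" function, and its coefficients below are invented stand-ins for the CFD results:

```python
import itertools
import numpy as np

# 3-level full factorial in two coded design variables (9 runs), which is
# sufficient to identify a full quadratic response surface.
X = np.array(list(itertools.product([-1, 0, 1], repeat=2)), dtype=float)

def quad_features(x1, x2):
    # Full quadratic model: intercept, linear, interaction, and square terms.
    return [1.0, x1, x2, x1 * x2, x1**2, x2**2]

def measure(x1, x2):
    # Placeholder for a sonic-boom metric returned by a CFD run.
    return 4.0 - 1.2 * x1 + 0.8 * x2 + 0.5 * x1 * x2 + 2.0 * x1**2 + 1.0 * x2**2

A = np.array([quad_features(*p) for p in X])
y = np.array([measure(*p) for p in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```

With the surrogate coefficients in hand, the optimizer searches the cheap polynomial instead of rerunning the flow solver at every candidate design.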

  9. An Optimal Pulse System Design by Multichannel Sensors Fusion.

    PubMed

    Wang, Dimin; Zhang, David; Lu, Guangming

    2016-03-01

    Pulse diagnosis, recognized as an important branch of traditional Chinese medicine (TCM), has a long history in health diagnosis. Certain features in the pulse are known to be related to physiological status and have been identified as biomarkers. In recent years, electronic equipment has been designed to obtain the valuable information contained in the pulse. A single-point pulse acquisition platform has the benefit of low cost and flexibility, but is time-consuming to operate and not standardized in pulse location. A pulse system with a single-type sensor is easy to implement, but is limited in extracting sufficient pulse information. This paper proposes a novel system with an optimal design that is tailored to pulse diagnosis. We combine a pressure sensor with a photoelectric sensor array to form a multichannel sensor-fusion structure. Then, the optimal pulse signal processing methods and sensor fusion strategy are introduced for feature extraction. Finally, the developed optimal pulse system and methods are tested on a pulse database acquired from healthy subjects and from patients known to have diabetes. The experimental results indicate that the classification accuracy is increased significantly under the optimal design, and demonstrate that the developed pulse system with multichannel sensor fusion is more effective than previous pulse acquisition platforms.

  10. A factorial design for optimizing a flow injection analysis system.

    PubMed

    Luna, J R; Ovalles, J F; León, A; Buchheister, M

    2000-05-01

    The use of a factorial design for the response exploration of a flow injection (FI) system is described and illustrated by the FI spectrophotometric determination of paraquat. The response variable (absorbance) is explored as a function of two factors, flow rate and reaction-coil length. The study proved useful for detecting and estimating interactions among the factors that may affect the optimal conditions for maximal response in the optimization of the FI system, which is not possible with a univariate design. In addition, this study showed that factorial experiments enable economy of experimentation and yield results of high precision, because the whole data set is used in calculating the effects.
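
In a 2x2 factorial, the interaction is estimated from the same four runs as the main effects, which is the economy the abstract refers to. The absorbance readings below are invented for illustration:

```python
import numpy as np

# 2x2 factorial: coded levels for flow rate (F) and reaction-coil length (L).
F = np.array([-1, 1, -1, 1])
L = np.array([-1, -1, 1, 1])
# Hypothetical absorbance readings for the four runs (illustrative values).
A = np.array([0.210, 0.250, 0.300, 0.420])

# Each effect is a contrast using all four observations.
effect_F = A[F > 0].mean() - A[F < 0].mean()
effect_L = A[L > 0].mean() - A[L < 0].mean()
interaction = A[F * L > 0].mean() - A[F * L < 0].mean()
print(f"flow-rate effect={effect_F:.3f}, coil effect={effect_L:.3f}, "
      f"F x L interaction={interaction:.3f}")
```

A nonzero interaction means the best flow rate depends on the coil length, which a one-factor-at-a-time (univariate) search cannot reveal.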

  11. ODIN: Optimal design integration system. [reusable launch vehicle design

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.

    1975-01-01

    The report provides a summary of the Optimal Design Integration (ODIN) System as it exists at Langley Research Center. A discussion of the ODIN System, the executive program and the data base concepts are presented. Two examples illustrate the capabilities of the system which have been exploited. Appended to the report are a summary of abstracts for the ODIN library programs and a description of the use of the executive program in linking the library programs.

  12. Database Design for Structural Analysis and Design Optimization.

    DTIC Science & Technology

    1984-10-01

    C.C. Wu 9. PERFORMING ORGANIZATION NAME AND ADDRESS 10. PROGRAM ELEMENT, PROJECT , TASK AREA & WORK UNIT NUMBERS Applied-Optimal Design Laboratory...relational algebraic operations such as PROJECT , JOIN, and SELECT can be used to form new relations. Figure 2.1 shows a typical relational model of data...data set contains the definition of mathematical 3-D surfaces of up to third order to which lines and grids may be projected . The surfaces are defined in

  13. A method for nonlinear optimization with discrete design variables

    NASA Technical Reports Server (NTRS)

    Olsen, Gregory R.; Vanderplaats, Garret N.

    1987-01-01

    A numerical method is presented for the solution of nonlinear discrete optimization problems. The applicability of discrete optimization to engineering design is discussed, and several standard structural optimization problems are solved using discrete design variables. The method uses approximation techniques to create subproblems suitable for linear mixed-integer programming methods. The method employs existing software for continuous optimization and integer programming.

  14. Design, optimization, and control of tensegrity structures

    NASA Astrophysics Data System (ADS)

    Masic, Milenko

    The contributions of this dissertation may be divided into four categories. The first category involves developing a systematic form-finding method for general and symmetric tensegrity structures. As an extension of the available results, different shape constraints are incorporated in the problem. Methods for treatment of these constraints are considered and proposed. A systematic formulation of the form-finding problem for symmetric tensegrity structures is introduced, and it uses the symmetry to reduce both the number of equations and the number of variables in the problem. The equilibrium analysis of modular tensegrities exploits their peculiar symmetry. The tensegrity similarity transformation completes the contributions in the area of enabling tools for tensegrity form-finding. The second group of contributions develops the methods for optimal mass-to-stiffness-ratio design of tensegrity structures. This technique represents the state-of-the-art for the static design of tensegrity structures. It is an extension of the results available for the topology optimization of truss structures. Besides guaranteeing that the final design satisfies the tensegrity paradigm, the problem constrains the structure from different modes of failure, which makes it very general. The open-loop control of the shape of modular tensegrities is the third contribution of the dissertation. This analytical result offers a closed form solution for the control of the reconfiguration of modular structures. Applications range from the deployment and stowing of large-scale space structures to the locomotion-inducing control for biologically inspired structures. The control algorithm is applicable regardless of the size of the structures, and it represents a very general result for a large class of tensegrities. Controlled deployments of large-scale tensegrity plates and tensegrity towers are shown as examples that demonstrate the full potential of this reconfiguration strategy. The last

  15. Handling Qualities Optimization for Rotorcraft Conceptual Design

    NASA Technical Reports Server (NTRS)

    Lawrence, Ben; Theodore, Colin R.; Berger, Tom

    2016-01-01

    Over the past decade, NASA, under a succession of rotary-wing programs has been moving towards coupling multiple discipline analyses in a rigorous consistent manner to evaluate rotorcraft conceptual designs. Handling qualities is one of the component analyses to be included in a future NASA Multidisciplinary Analysis and Optimization framework for conceptual design of VTOL aircraft. Similarly, the future vision for the capability of the Concept Design and Assessment Technology Area (CD&A-TA) of the U.S Army Aviation Development Directorate also includes a handling qualities component. SIMPLI-FLYD is a tool jointly developed by NASA and the U.S. Army to perform modeling and analysis for the assessment of flight dynamics and control aspects of the handling qualities of rotorcraft conceptual designs. An exploration of handling qualities analysis has been carried out using SIMPLI-FLYD in illustrative scenarios of a tiltrotor in forward flight and single-main rotor helicopter at hover. Using SIMPLI-FLYD and the conceptual design tool NDARC integrated into a single process, the effects of variations of design parameters such as tail or rotor size were evaluated in the form of margins to fixed- and rotary-wing handling qualities metrics as well as the vehicle empty weight. The handling qualities design margins are shown to vary across the flight envelope due to both changing flight dynamic and control characteristics and changing handling qualities specification requirements. The current SIMPLI-FLYD capability and future developments are discussed in the context of an overall rotorcraft conceptual design process.

  16. Inter occasion variability in individual optimal design.

    PubMed

    Kristoffersson, Anders N; Friberg, Lena E; Nyberg, Joakim

    2015-12-01

    Inter occasion variability (IOV) is of importance to consider in the development of a design where individual pharmacokinetic or pharmacodynamic parameters are of interest. IOV may adversely affect the precision of maximum a posteriori (MAP) estimated individual parameters, yet the influence of inclusion of IOV in optimal design for estimation of individual parameters has not been investigated. In this work two methods of including IOV in the maximum a posteriori Fisher information matrix (FIM_MAP) are evaluated: (i) MAP_occ, where the IOV is included as a fixed-effect deviation per occasion and individual, and (ii) POP_occ, where the IOV is included as an occasion random effect. Sparse sampling schedules were designed for two test models and compared to a scenario where IOV is ignored, either by omitting known IOV (Omit) or by mimicking a situation where unknown IOV has inflated the IIV (Inflate). Accounting for IOV in the FIM_MAP markedly affected the designs compared to ignoring IOV and, as evaluated by stochastic simulation and estimation, resulted in superior precision of the individual parameters. In addition, MAP_occ and POP_occ accurately predicted precision and shrinkage. For the investigated designs, the MAP_occ method was on average slightly superior to POP_occ and was less computationally intensive.

  17. Optimal design of a hybridization scheme with a fuel cell using genetic optimization

    NASA Astrophysics Data System (ADS)

    Rodriguez, Marco A.

    The fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without any pollution of the environment. However, efficient power generation is a strictly stationary process that cannot operate in a dynamic environment. Consequently, a fuel cell becomes practical only within a specially designed hybridization scheme capable of power storage and power management functions. The resultant technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to the multidimensionality, nonlinearity, discontinuity, and constraints of the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density but are not intended for pulsating power draw applications. They work better in steady-state operation and thus are often hybridized. In a hybrid system, the fuel cell provides power during steady-state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30W PEM fuel cell, and this study also proposes the optimum design of a 30W PEM fuel cell. The PEM fuel cell model and the hybridization switching rules are postulated.

  18. Multi-Disciplinary Design Optimization Using WAVE

    NASA Technical Reports Server (NTRS)

    Irwin, Keith

    2000-01-01

    develop an associative control structure (framework) in the UG WAVE environment enabling multi-disciplinary design of turbine propulsion systems. The capabilities of WAVE were evaluated to assess its use as a rapid optimization and productivity tool. This project also identified future WAVE product enhancements that will make the tool still more beneficial for product development.

  19. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and of decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
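
The core tradeoff, that fewer but better-separated code levels decode more robustly, can be sketched with a greedy max-min placement in one dimension. The measured-lightness range and level count below are hypothetical, and the paper's actual method works in CIE Lab space with characterized device noise:

```python
import numpy as np

def greedy_levels(candidates, k):
    """Greedy max-min placement: pick k levels far apart for robust decoding."""
    chosen = [float(candidates[0]), float(candidates[-1])]  # endpoints first
    while len(chosen) < k:
        # Add the candidate whose nearest chosen level is farthest away.
        dists = [min(abs(c - x) for x in chosen) for c in candidates]
        chosen.append(float(candidates[int(np.argmax(dists))]))
    return sorted(chosen)

# Hypothetical measured-lightness range achievable by a printer.
candidates = np.linspace(20, 95, 151)
levels = greedy_levels(candidates, k=6)
print("patch code levels:", levels)
min_gap = min(b - a for a, b in zip(levels, levels[1:]))
print("worst-case separation:", round(min_gap, 2))
```

The worst-case separation, compared against the measurement noise, bounds how reliably a noisy reading maps back to the intended code level.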

  20. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality, but a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized so that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, although more testing is needed and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; because the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.

  1. Human Factors Experimental Design and Analysis Reference

    DTIC Science & Technology

    2007-07-01

    and R2Adj – PRESS Statistic – Mallows C(p) A linear regression model that includes all predictors investigated may not be the best model in terms of...as the Adjusted Coefficient of Determination, R2Adj, the PRESS statistic, and Mallows C(p) value. Human Factors Experimental Design and Analysis...equations with highest R2 using R2Adj, PRESS, and Mallows C(p) • Evaluation – Cumbersome as number of X’s increase 10 X’s = (210-1) = 1,023 Regression

  2. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rely on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed.

  3. Comparison of Optimal Design Methods in Inverse Problems.

    PubMed

    Banks, H T; Holm, Kathleen; Kappel, Franz

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate these ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29].
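
The D- and E-criteria being compared can be illustrated on the Verhulst-Pearl logistic model using finite-difference sensitivities. The parameter values, noise level, and the two candidate sampling schedules below are invented, and the paper's Prohorov-metric framework and SE-criterion are not reproduced here:

```python
import numpy as np

# Verhulst-Pearl logistic solution x(t) = K / (1 + (K/x0 - 1) * exp(-r t)).
def x(t, theta):
    K, r, x0 = theta
    return K / (1 + (K / x0 - 1) * np.exp(-r * t))

def fim(times, theta, sigma=1.0, h=1e-6):
    # Fisher information for i.i.d. Gaussian observation error, with the
    # sensitivities dx/dtheta_j approximated by central finite differences.
    M = np.zeros((3, 3))
    for t in times:
        g = np.zeros(3)
        for j in range(3):
            tp = np.array(theta); tp[j] += h
            tm = np.array(theta); tm[j] -= h
            g[j] = (x(t, tp) - x(t, tm)) / (2 * h)
        M += np.outer(g, g) / sigma**2
    return M

theta = np.array([17.5, 0.7, 0.1])  # carrying capacity, growth rate, initial size
schedules = {"early": np.linspace(0.1, 5, 8),    # samples bunched in the transient
             "spread": np.linspace(0.1, 25, 8)}  # samples covering the plateau too
dets = {}
for name, times in schedules.items():
    M = fim(times, theta)
    dets[name] = np.linalg.det(M)
    print(f"{name}: det(FIM) = {dets[name]:.3g}, "
          f"min eigenvalue = {np.linalg.eigvalsh(M).min():.3g}")
```

D-optimality maximizes det(FIM) and E-optimality maximizes the smallest eigenvalue; here the spread schedule wins on the D-criterion because only it observes the plateau that informs the carrying capacity.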

  4. Optimal design of a touch trigger probe

    NASA Astrophysics Data System (ADS)

    Li, Rui-Jun; Xiang, Meng; Fan, Kuang-Chao; Zhou, Hao; Feng, Jian

    2015-02-01

    A tungsten stylus with a ruby ball tip was screwed into a floating plate, which was supported by four leaf springs. The displacement of the tip caused by the contact force in 3D could be transferred into the tilt or vertical displacement of a plane mirror mounted on the floating plate. A quadrant photo detector (QPD) based two dimensional angle sensor was used to detect the tilt or the vertical displacement of the plane mirror. The structural parameters of the probe are optimized for equal sensitivity and equal stiffness in a displacement range of +/-5 μm, and a restricted horizontal size of less than 40 mm. Simulation results indicated that the stiffness was less than 0.6 mN/μm and equal in 3D. Experimental results indicated that the probe could be used to achieve a resolution of 1 nm.

  5. CFD based draft tube hydraulic design optimization

    NASA Astrophysics Data System (ADS)

    McNabb, J.; Devals, C.; Kyriacou, S. A.; Murry, N.; Mullins, B. F.

    2014-03-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, which leaves the runner in kinetic form, needs to be recovered by the draft tube as static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20, of which a subset is allowed to vary during the optimization process), which are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn govern the portion of the available kinetic energy that will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis, using a

  6. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    and have limited preliminary data from several pilot experiments. Cell growth and DNA sequence data indicate that we have identified a cell clone that exhibits several suitable characteristics, although further study is required to identify a more optimal cell clone. Conclusions The experimental approach is based on a quantum biological model of basis-dependent selection describing a novel mechanism of adaptive mutation. This project is currently inactive due to lack of funding. However, consistent with the objective of early reports, we describe a proposed study that has not produced publishable results, but is worthy of report because of the hypothesis, experimental design, and protocols. We outline the project’s rationale and experimental design, with its strengths and weaknesses, to stimulate discussion and analysis, and lay the foundation for future studies in this field. PMID:25491410

  7. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale, highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer-specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  8. Application of numerical optimization to the design of advanced supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Johnson, R. R.; Hicks, R. M.

    1979-01-01

    An application of numerical optimization to the design of advanced airfoils for transonic aircraft showed that low-drag sections can be developed for a given design Mach number without an accompanying drag increase at lower Mach numbers. This is achieved by imposing a constraint on the drag coefficient at an off-design Mach number while minimizing the drag coefficient at the design Mach number. This multiple design-point numerical optimization has been implemented with the use of airfoil shape functions which permit a wide range of attainable profiles during the optimization process. Analytical data for the starting airfoil shape, a single design-point optimized shape, and a double design-point optimized shape are presented. Experimental data obtained in the NASA Ames two- by two-foot wind tunnel are also presented and discussed.

  9. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  10. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
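    The 3³ full factorial layout above is easy to enumerate directly. A minimal sketch (the factor levels are those quoted in the abstract; the variable names are illustrative):

```python
from itertools import product

# Factor levels from the study: water/cementitious ratio, cementitious
# materials content (kg/m^3), and fine/total aggregate ratio.
w_cm = [0.38, 0.43, 0.48]
content = [350, 375, 400]
fine_ratio = [0.35, 0.40, 0.45]

# A 3^3 full factorial: every combination of the three factor levels.
design = list(product(w_cm, content, fine_ratio))
print(len(design))  # 27 mixtures; with 3 replicates each, 81 specimens
```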

  11. Optimization of Regression Models of Experimental Data Using Confirmation Points

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance are used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
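    As a sketch of how such a metric could be computed for one candidate model, assuming an ordinary least-squares fit (the function below is illustrative, not the author's code; the PRESS residuals use the leave-one-out identity e_i / (1 - h_ii), with h_ii the hat-matrix diagonal):

```python
import numpy as np

def search_metric(X_fit, y_fit, X_conf, y_conf):
    """Candidate-model search metric: the larger of two standard deviations."""
    # Fit the regression coefficients on the fitting subset only
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    resid = y_fit - X_fit @ beta
    # PRESS residuals via the leave-one-out identity e_i / (1 - h_ii)
    hat_diag = np.diag(X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T)
    s_press = np.std(resid / (1.0 - hat_diag), ddof=1)
    # Response residuals at the withheld confirmation points
    s_conf = np.std(y_conf - X_conf @ beta, ddof=1)
    # The greater value becomes the search metric for this model
    return max(s_press, s_conf)
```

The math term combination with the smallest metric value would be preferred; taking the larger of the two standard deviations guards against models that fit the data points well but predict the confirmation points poorly.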

  12. Effect and interaction study of acetamiprid photodegradation using experimental design.

    PubMed

    Tassalit, Djilali; Chekir, Nadia; Benhabiles, Ouassila; Mouzaoui, Oussama; Mahidine, Sarah; Merzouk, Nachida Kasbadji; Bentahar, Fatiha; Khalil, Abbas

    2016-10-01

    A designed-experiment methodology, implemented with the MODDE 6.0 software, was used to study acetamiprid photodegradation as a function of the operating parameters: the initial concentration of acetamiprid, the concentration and type of catalyst used, and the initial pH of the medium. The results showed the importance of the pollutant concentration effect on the acetamiprid degradation rate. On the other hand, the amount and type of catalyst used have a considerable influence on the elimination kinetics of this pollutant. The degradation of acetamiprid as an environmental pesticide pollutant via UV irradiation in the presence of titanium dioxide was assessed and optimized using response surface methodology with a D-optimal design. The acetamiprid degradation ratio was found to be sensitive to all of the studied factors. The maximum discoloration under the optimum operating conditions was determined to be 99% after 300 min of UV irradiation.

  13. Performance enhancement of a pump impeller using optimal design method

    NASA Astrophysics Data System (ADS)

    Jeon, Seok-Yun; Kim, Chul-Kyu; Lee, Sang-Moon; Yoon, Joon-Yong; Jang, Choon-Man

    2017-04-01

    This paper presents the performance evaluation of a regenerative pump to increase its efficiency using an optimal design method. Two design parameters, which define the shape of the pump impeller, are introduced and analyzed. Pump performance is evaluated by numerical simulation and design of experiments (DOE). To analyze the three-dimensional flow field in the pump, the general analysis code CFX is used in the present work. A shear stress transport turbulence model is employed to estimate the eddy viscosity. An experimental apparatus with an open-loop facility was set up for measuring the pump performance. Pump performance, efficiency and pressure, obtained from numerical simulation is validated by comparison with the results of experiments. Through the shape optimization of the pump impeller at the operating flow condition, the pump efficiency is successfully increased by 3 percent compared to the reference pump. It is noted that the pressure increase of the optimum pump is mainly caused by the higher momentum force generated inside the blade passage due to the optimal blade shape. Comparisons of internal flow in the reference and optimum pumps are also investigated and discussed in detail.

  14. Optimal design of a thermally stable composite optical bench

    NASA Technical Reports Server (NTRS)

    Gray, C. E., Jr.

    1985-01-01

    The Lidar Atmospheric Sensing Experiment will be performed aboard an ER-2 aircraft; the lidar system used will be mounted on a lightweight, thermally stable graphite/epoxy optical bench whose design is presently subjected to analytical study and experimental validation. Attention is given to analytical methods for the selection of such expected laminate properties as the thermal expansion coefficient, the apparent in-plane moduli, and ultimate strength. For a symmetric laminate in which one of the lamina angles remains variable, an optimal lamina angle is selected to produce a design laminate with a near-zero coefficient of thermal expansion. Finite elements are used to model the structural concept of the design, with a view to the optical bench's thermal structural response as well as the determination of the degree of success in meeting the experiment's alignment tolerances.

  15. Optimizing Monitoring Designs under Alternative Objectives

    DOE PAGES

    Gastelum, Jason A.; Porter, Ellen A.; ...

    2014-12-31

    This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage; the second, more conservative objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes into account uncertainty by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.
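    A minimal sketch of the simulated annealing loop described above (the function and its inputs are illustrative assumptions, not the NRAP implementation; the objective here is the expected time to first detection across realizations, and the conservative variant would replace the mean with a maximum):

```python
import math
import random

def optimize_design(candidate_wells, n_sensors, detection_time, realizations,
                    n_iter=2000, t0=5.0, cooling=0.995, seed=0):
    """Simulated-annealing search over monitoring designs (sets of wells).

    `detection_time(well, realization)` is a user-supplied model giving the
    time at which a well would detect CO2 leakage in one simulated realization.
    """
    rng = random.Random(seed)

    def objective(design):
        # Expected (mean) time to first detection across all realizations
        first = (min(detection_time(w, r) for w in design) for r in realizations)
        return sum(first) / len(realizations)

    design = rng.sample(list(candidate_wells), n_sensors)
    cur_obj = objective(design)
    best, best_obj, temp = list(design), cur_obj, t0
    for _ in range(n_iter):
        # Mutate the incumbent design: swap one well for an unused candidate
        trial = list(design)
        unused = [w for w in candidate_wells if w not in trial]
        trial[rng.randrange(n_sensors)] = rng.choice(unused)
        trial_obj = objective(trial)
        # Accept improvements always; accept worse moves with probability
        # exp(-delta/temp) so the search can escape local minima
        if trial_obj <= cur_obj or rng.random() < math.exp((cur_obj - trial_obj) / temp):
            design, cur_obj = trial, trial_obj
            if cur_obj < best_obj:
                best, best_obj = list(design), cur_obj
        temp *= cooling
    return best, best_obj
```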

  16. Optimal Ground Source Heat Pump System Design

    SciTech Connect

    Ozbek, Metin; Yavuzturk, Cy; Pinder, George

    2015-04-01

    Despite the facts that GSHPs first gained popularity as early as the 1940s and that they can achieve 30 to 60 percent energy savings and carbon emission reductions relative to conventional HVAC systems, the use of geothermal energy in the U.S. has been less than 1 percent of total energy consumption. The key barriers preventing this technically mature technology from reaching its full commercial potential have been its high installation cost and limited consumer knowledge of, and trust in, GSHP systems to deliver the technology cost-effectively in the marketplace. Led by ENVIRON, with support from the University of Hartford and the University of Vermont, the team developed and tested a software-based decision-making tool ('OptGSHP') for the least-cost design of ground-source heat pump ('GSHP') systems. OptGSHP combines state-of-the-art optimization algorithms with GSHP-specific HVAC and groundwater flow and heat transport simulation. The particular strength of OptGSHP is in integrating heat transport due to groundwater flow into the design, an effect for which most GSHP designs take no credit and are therefore overdesigned.

  17. Structural Optimization of a Force Balance Using a Computational Experiment Design

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2002-01-01

    This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, providing a systematic foundation for advancements in structural design.

  18. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.

  19. Space tourism optimized reusable spaceplane design

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.; Lindley, Charles A.

    1997-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg) or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown.

  20. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10^8 in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G^2) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J_2, and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step above standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  1. Three Program Architecture for Design Optimization

    NASA Technical Reports Server (NTRS)

    Miura, Hirokazu; Olson, Lawrence E. (Technical Monitor)

    1998-01-01

    In this presentation, I would like to review a historical perspective on the program architectures used to build design optimization capabilities based on mathematical programming and other numerical search techniques. It is rather straightforward to classify the program architectures into the three categories shown above. However, the relative importance of each of the three approaches has not been static; instead, it has changed dynamically as the capabilities of available computational resources increase. For example, we once considered that the direct coupling architecture would never be used for practical problems, but the availability of computer systems such as multi-processors has changed that assessment. In this presentation, I would like to review the roles of the three architectures from historical as well as current and future perspectives. There may also be some possibility for the emergence of hybrid architectures. I hope to provide some seeds for active discussion of where we are heading in this very dynamic environment for high-speed computing and communication.

  2. Design and global optimization of high-efficiency thermophotovoltaic systems.

    PubMed

    Bermel, Peter; Ghebrebrhan, Michael; Chan, Walker; Yeng, Yi Xiang; Araghchini, Mohammad; Hamam, Rafif; Marton, Christopher H; Jensen, Klavs F; Soljačić, Marin; Joannopoulos, John D; Johnson, Steven G; Celanovic, Ivan

    2010-09-13

    Despite their great promise, small experimental thermophotovoltaic (TPV) systems at 1000 K generally exhibit extremely low power conversion efficiencies (approximately 1%), due to heat losses such as thermal emission of undesirable mid-wavelength infrared radiation. Photonic crystals (PhC) have the potential to strongly suppress such losses. However, PhC-based designs present a set of non-convex optimization problems requiring efficient objective function evaluation and global optimization algorithms. Both are applied to two example systems: improved micro-TPV generators and solar thermal TPV systems. Micro-TPV reactors experience up to a 27-fold increase in their efficiency and power output; solar thermal TPV systems see an even greater 45-fold increase in their efficiency (exceeding the Shockley-Queisser limit for a single-junction photovoltaic cell).

  3. Design and optimization of membrane-type acoustic metamaterials

    NASA Astrophysics Data System (ADS)

    Blevins, Matthew Grant

    One of the most common problems in noise control is the attenuation of low frequency noise. Typical solutions require barriers with high density and/or thickness. Membrane-type acoustic metamaterials are a novel type of engineered material capable of high low-frequency transmission loss despite their small thickness and light weight. These materials are ideally suited to applications with strict size and weight limitations such as aircraft, automobiles, and buildings. The transmission loss profile can be manipulated by changing the micro-level substructure, stacking multiple unit cells, or by creating multi-celled arrays. To date, analysis has focused primarily on experimental studies in plane-wave tubes and numerical modeling using finite element methods. These methods are inefficient when used for applications that require iterative changes to the structure of the material. To facilitate design and optimization of membrane-type acoustic metamaterials, computationally efficient dynamic models based on the impedance-mobility approach are proposed. Models of a single unit cell in a waveguide and in a baffle, a double layer of unit cells in a waveguide, and an array of unit cells in a baffle are studied. The accuracy of the models and the validity of assumptions used are verified using a finite element method. The remarkable computational efficiency of the impedance-mobility models compared to finite element methods enables implementation in design tools based on a graphical user interface and in optimization schemes. Genetic algorithms are used to optimize the unit cell design for a variety of noise reduction goals, including maximizing transmission loss for broadband, narrow-band, and tonal noise sources. The tools for design and optimization created in this work will enable rapid implementation of membrane-type acoustic metamaterials to solve real-world noise control problems.
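    The genetic algorithm step described above can be sketched generically as follows (an illustrative elitist GA skeleton, not the author's implementation; the four callables stand in for the unit-cell parameterization and the transmission-loss objective, which in the actual work comes from the impedance-mobility model):

```python
import random

def genetic_optimize(random_design, mutate, crossover, fitness,
                     pop_size=30, n_gen=50, seed=0):
    """Generic elitist genetic algorithm: maximize `fitness` over designs."""
    rng = random.Random(seed)
    pop = [random_design(rng) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # keep the fitter half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            # Recombine two elite parents, then perturb the child
            a, b = rng.sample(elite, 2)
            children.append(mutate(crossover(a, b, rng), rng))
        pop = elite + children
    return max(pop, key=fitness)
```

For a membrane unit cell, a design might encode mass position and magnitude, and `fitness` would be the (negated) deviation from a target transmission-loss profile.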

  4. Experimental Optimization of a Free-to-Rotate Wing for Small UAS

    NASA Technical Reports Server (NTRS)

    Logan, Michael J.; DeLoach, Richard; Copeland, Tiwana; Vo, Steven

    2014-01-01

    This paper discusses an experimental investigation conducted to optimize a free-to-rotate wing for use on a small unmanned aircraft system (UAS). Although free-to-rotate wings have been used for decades on various small UAS and small manned aircraft, little is known about how to optimize these unusual wings for a specific application. The paper discusses some of the design rationale of the basic wing. In addition, three main parameters were selected for "optimization": wing camber, wing pivot location, and wing center of gravity (c.g.) location. A small apparatus was constructed to enable some simple experimental analysis of these parameters. A design-of-experiments series of tests was first conducted to discern which of the main optimization parameters were most likely to have the greatest impact on the outputs of interest, namely, some measure of "stability", some measure of the lift being generated at the neutral position, and how quickly the wing "recovers" from an upset. A second set of tests was conducted to develop a response-surface numerical representation of these outputs as functions of the three primary inputs. The response-surface numerical representations are then used to develop an "optimum" within the trade space investigated. The results of the optimization are then tested experimentally to validate the predictions.
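    A minimal sketch of fitting a second-order response surface of the kind described above, assuming three inputs (e.g., camber, pivot location, and c.g. location) and an ordinary least-squares fit (the function names are illustrative, not the authors' code):

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(X):
    # Full second-order model: intercept, linear, interaction, and square terms
    n = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(n), 2)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    # Least-squares fit of the quadratic model coefficients
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return beta

def predict(beta, X):
    return quadratic_features(np.atleast_2d(X)) @ beta
```

An "optimum" within the tested trade space can then be located by evaluating the fitted surface on a dense grid of the three inputs and selecting the best predicted response.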

  5. Technological issues and experimental design of gene association studies.

    PubMed

    Distefano, Johanna K; Taverna, Darin M

    2011-01-01

    Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.

  6. Optimal screening designs for biomedical technology

    SciTech Connect

    Torney, D.C.; Bruno, W.J.; Knill, E.

    1997-10-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Screening a large number of different types of molecules to isolate a few with desirable properties is essential in biomedical technology. For example, trying to find a particular gene in the human genome can be akin to looking for a needle in a haystack. Fortunately, testing of mixtures, or pools, of molecules allows the desirable ones to be identified using a number of experiments proportional only to the logarithm of the total number of types of molecules. We show how to capitalize upon this potential by using optimized pooling schemes, or designs. We propose efficient non-adaptive pooling designs, such as "random sets" designs and modified "row and column" designs. Our results have been applied in the pooling and unique-sequence screening of clone libraries used in the Human Genome Project and in the mapping of human chromosome 16. This required the use of liquid-transferring robots and manifolds for the largest clone libraries. Finally, we developed an efficient technique for finding the posterior probability that each molecule has the desirable property, given the pool assay results. This technique works well in practice, even if there are substantial rates of errors in the pool assay data. Both our methods and our results are relevant to a broad spectrum of research in modern biology.
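    The flavor of a non-adaptive random-pools design can be sketched as follows (an illustration of the general idea, not the specific constructions proposed in the report; decoding here simply keeps every clone all of whose pools tested positive, with no error modeling):

```python
import random

def random_pool_design(n_items, n_pools, pools_per_item, seed=0):
    # Assign each clone to a fixed number of randomly chosen pools
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_pools), pools_per_item))
            for _ in range(n_items)]

def decode(design, positive_pools):
    # A clone can be positive only if every pool containing it tested positive
    return [i for i, pools in enumerate(design)
            if all(p in positive_pools for p in pools)]
```

With, say, 1000 clones distributed over 30 pools (5 pools per clone), a few positive clones are typically recovered from just 30 assays, far fewer than one assay per clone, consistent with the logarithmic scaling the abstract describes.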

  7. Chip Design Process Optimization Based on Design Quality Assessment

    NASA Astrophysics Data System (ADS)

    Häusler, Stefan; Blaschke, Jana; Sebeke, Christian; Rosenstiel, Wolfgang; Hahn, Axel

    2010-06-01

    Nowadays, the managing of product development projects is increasingly challenging. Especially the IC design of ASICs with both analog and digital components (mixed-signal design) is becoming more and more complex, while the time-to-market window narrows at the same time. Still, high quality standards must be fulfilled. Projects and their status are becoming less transparent due to this complexity. This makes the planning and execution of projects rather difficult. Therefore, there is a need for efficient project control. A main challenge is the objective evaluation of the current development status. Are all requirements successfully verified? Are all intermediate goals achieved? Companies often develop special solutions that are not reusable in other projects. This makes the quality measurement process itself less efficient and produces too much overhead. The method proposed in this paper is a contribution to solve these issues. It is applied at a German design house for analog mixed-signal IC design. This paper presents the results of a case study and introduces an optimized project scheduling on the basis of quality assessment results.

  8. Application of numerical optimization to rotor aerodynamic design

    NASA Technical Reports Server (NTRS)

    Pleasants, W. A., III; Wiggins, T. J.

    1984-01-01

    Based on initial results obtained from the performance optimization code, a number of observations can be made regarding the utility of optimization codes in supporting design of rotors for improved performance. (1) The primary objective of improving the productivity and responsiveness of current design methods can be met. (2) The use of optimization allows the designer to consider a wider range of design variables in a greatly compressed time period. (3) Optimization requires the user to carefully define his problem to avoid unproductive use of computer resources. (4) Optimization will increase the burden on the analyst to validate designs and to improve the accuracy of analysis methods. (5) Direct calculation of finite difference derivatives by the optimizer was not prohibitive for this application but was expensive. Approximate analysis in some form would be considered to improve program response time. (6) Program development is not complete and will continue to evolve to integrate new analysis methods, design problems, and alternate optimizer options.

  9. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    PubMed

    Otero-Muras, Irene; Banga, Julio R

    2017-04-12

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
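    The core notion of a Pareto optimal set can be sketched with a generic non-dominated filter (an illustration of the concept only, not the mixed-integer dynamic optimization formulation used in the paper; all metrics are assumed to be minimized):

```python
def pareto_front(designs, metrics):
    """Return the non-dominated designs.

    `metrics(d)` maps a design to a tuple of objective values, each to be
    minimized. A design is dominated if some other design is at least as
    good in every objective and strictly better in at least one.
    """
    scored = [(d, metrics(d)) for d in designs]
    front = []
    for d, m in scored:
        dominated = any(
            all(o <= v for o, v in zip(other, m))
            and any(o < v for o, v in zip(other, m))
            for _, other in scored)
        if not dominated:
            front.append(d)
    return front
```

Applied to a set of candidate circuits with, e.g., (production cost, response time) as the metrics tuple, this returns the set of best trade-offs from which a designer picks one implementation.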

  10. Experimental implementation of an adiabatic quantum optimization algorithm

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias; van Dam, Wim; Hogg, Tad; Breyta, Greg; Chuang, Isaac

    2003-03-01

    A novel quantum algorithm using adiabatic evolution was recently presented by Ed Farhi [1] and Tad Hogg [2]. This algorithm represents a remarkable discovery because it offers new insights into the usefulness of quantum resources. An experimental demonstration of an adiabatic algorithm has remained beyond reach because it requires an experimentally accessible Hamiltonian which encodes the problem and which must also be smoothly varied over time. We present tools to overcome these difficulties by discretizing the algorithm and extending average Hamiltonian techniques [3]. We used these techniques in the first experimental demonstration of an adiabatic optimization algorithm: solving an instance of the MAXCUT problem using three qubits and nuclear magnetic resonance techniques. We show that there exists an optimal run-time of the algorithm which can be predicted using a previously developed decoherence model. [1] E. Farhi et al., quant-ph/0001106 (2000) [2] T. Hogg, PRA, 61, 052311 (2000) [3] W. Rhim, A. Pines, J. Waugh, PRL, 24,218 (1970)

  11. Optimization and experimental validation of electrostatic adhesive geometry

    NASA Astrophysics Data System (ADS)

    Ruffatto, D.; Shah, J.; Spenko, M.

    This paper introduces a method to optimize the electrode geometry of electrostatic adhesives for robotic gripping, attachment, and manipulation applications. Electrostatic adhesion is achieved by applying a high voltage potential, on the order of kV, to a set of electrodes, which generates an electric field. The electric field polarizes the substrate material and creates an adhesion force. Previous attempts at creating electrostatic adhesives have shown them to be effective, but researchers have made no effort to optimize the electrode configuration and geometry. We have shown that by optimizing the geometry of the electrode configuration, the electric field strength, and therefore the adhesion force, is enhanced. To accomplish this, COMSOL Multiphysics was utilized to evaluate the average electric field generated by a given electrode geometry. Several electrode patterns were evaluated, including parallel conductors, concentric circles, Hilbert curves (a fractal geometry), and spirals. The arrangement of the electrodes in concentric circles with varying electrode widths proved to be the most effective. The most effective sizing was to use the smallest gap spacing allowable coupled with a variable electrode width. These results were experimentally validated on several different surfaces including drywall, wood, tile, glass, and steel. A new manufacturing process allowing for the fabrication of thin, conformal electrostatic adhesive pads was utilized. By combining the optimized electrode geometry with the new fabrication process we are able to demonstrate a marked improvement of up to 500% in shear pressure when compared to previously published values.

  12. Space tourism optimized reusable spaceplane design

    SciTech Connect

    Penn, J.P.; Lindley, C.A.

    1997-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg) or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown. © 1997 American Institute of Physics.

  13. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  14. Optimal trajectories for flexible-link manipulator slewing using recursive quadratic programming: Experimental verification

    SciTech Connect

    Parker, G.G.; Eisler, G.R.; Feddema, J.T.

    1994-09-01

    Procedures for trajectory planning and control of flexible link robots are becoming increasingly important to satisfy performance requirements of hazardous waste removal efforts. It has been shown that utilizing link flexibility in designing open loop joint commands can result in improved performance as opposed to damping vibration throughout a trajectory. The efficient use of link compliance is exploited in this work. Specifically, experimental verification of minimum time, straight line tracking using a two-link planar flexible robot is presented. A numerical optimization process, using an experimentally verified modal model, is used for obtaining minimum time joint torque and angle histories. The optimal joint states are used as commands to the proportional-derivative servo actuated joints. These commands are precompensated for the nonnegligible joint servo actuator dynamics. Using the precompensated joint commands, the optimal joint angles are tracked with such fidelity that the tip tracking error is less than 2.5 cm.

  15. A Framework for Designing Optimal Spacecraft Formations

    DTIC Science & Technology

    2002-09-01

    Reference Frame … Solving Optimal Control Problems … spacecraft state. Depending on the model, there may be additional variables in the state, but there will be a minimum of these six.

  16. Design Time Optimization for Hardware Watermarking Protection of HDL Designs

    PubMed Central

    Castillo, E.; Morales, D. P.; García, A.; Parrilla, L.; Todorovich, E.; Meyer-Baese, U.

    2015-01-01

    HDL-level design offers important advantages for the application of watermarking to IP cores, but its complexity also requires tools automating these watermarking algorithms. A new tool for signature distribution through combinational logic is proposed in this work. IPP@HDL, a previously proposed high-level watermarking technique, has been employed for evaluating the tool. IPP@HDL relies on spreading the bits of a digital signature at the HDL design level using combinational logic included within the original system. The development of this new tool for the signature distribution has not only extended and eased the applicability of this IPP technique, but it has also improved the signature hosting process itself. Three algorithms were studied in order to develop this automated tool. The selection of a cost function determines the best hosting solutions in terms of area and performance penalties on the IP core to protect. A 1D-DWT core and MD5 and SHA1 digital signatures were used in order to illustrate the benefits of the new tool and its optimization related to the extraction logic resources. Among the proposed algorithms, the alternative based on simulated annealing reduces the additional resources while maintaining an acceptable computation time and also saving designer effort and time. PMID:25861681

  17. Design of Optimal Cyclers Using Solar Sails

    DTIC Science & Technology

    2002-12-01

    …necessary (but not sufficient) conditions for optimality in these cases. Moreover, the optimal control solution is the one where H is minimized … H = −1 for all time. The first- and second-order necessary (but not sufficient) conditions for optimality using the Hamiltonian are written as … the optimization and the initial conditions, the path of the sail could be propagated by means of a numeric ordinary differential equation solver on the non…

  18. Interactive computer program for optimal designs of longitudinal cohort studies.

    PubMed

    Tekle, Fetene B; Tan, Frans E S; Berger, Martijn P F

    2009-05-01

    Many large-scale longitudinal cohort studies have been carried out or are ongoing in different fields of science. Such studies need careful planning to obtain the desired quality of results with the available resources. In the past, a number of studies have examined optimal designs for longitudinal studies. However, no computer program was yet available to help researchers plan their longitudinal cohort design in an optimal way. A new interactive computer program for the optimization of designs of longitudinal cohort studies is therefore presented. The computer program helps users to identify the optimal cohort design with an optimal number of repeated measurements per subject and an optimal allocation of time points within a given study period. Further, users can compute the loss in relative efficiency of any other alternative design compared to the optimal one. The computer program is described and illustrated using a practical example.

  19. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
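    The selection principle itself, scoring each candidate experiment by the Shannon entropy of its predicted outcome distribution and running the highest-scoring one, can be sketched as follows. The candidate names and outcome distributions are hypothetical, and this brute-force scan deliberately omits the nested-sampling machinery the abstract describes:

    ```python
    import math

    def shannon_entropy(probs):
        """Entropy (in nats) of a discrete outcome distribution."""
        return -sum(p * math.log(p) for p in probs if p > 0)

    def most_informative(experiments):
        """Pick the candidate experiment whose predicted outcome
        distribution has maximum entropy, i.e. whose result is least
        certain and hence most informative to observe."""
        return max(experiments, key=lambda e: shannon_entropy(experiments[e]))

    # Hypothetical predicted outcome distributions for three settings
    candidates = {
        "low_dose":  [0.9, 0.1],          # outcome nearly certain: little to learn
        "mid_dose":  [0.5, 0.5],          # maximally uncertain: most informative
        "high_dose": [0.8, 0.15, 0.05],   # skewed three-outcome prediction
    }
    print(most_informative(candidates))  # → mid_dose
    ```

    A full design search would evaluate this criterion over a continuous, high-dimensional experiment space, which is where the nested entropy sampling strategy replaces the exhaustive scan.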

  20. Design and optimization of bilayered tablet of Hydrochlorothiazide using the Quality-by-Design approach

    PubMed Central

    Dholariya, Yatin N; Bansod, Yogesh B; Vora, Rahul M; Mittal, Sandeep S; Shirsat, Ajinath Eknath; Bhingare, Chandrashekhar L

    2014-01-01

    Aim: The aim of the present study is to develop an optimized bilayered tablet using Hydrochlorothiazide (HCTZ) as a model drug candidate and a quality by design (QbD) approach. Introduction and Method: The bilayered tablet gives biphasic drug release through a loading dose, prepared using the superdisintegrant croscarmellose sodium, and a maintenance dose using several viscosity grades of hydrophilic polymers. The fundamental principle of QbD is to demonstrate understanding and control of pharmaceutical processes so as to deliver high quality pharmaceutical products with wide opportunities for continuous improvement. Risk assessment was carried out, and subsequently a 2² factorial design in duplicate was selected for the design of experiments (DOE) to evaluate the interactions and effects of the design factors on critical quality attributes. The design space was obtained by applying DOE and multivariate analysis, so as to ensure the desired disintegration time (DT) and drug release are achieved. Bilayered tablets were evaluated for hardness, thickness, friability, drug content uniformity, and in vitro drug dissolution. Result: The optimized formulation obtained from the design space exhibits a DT of around 70 s, while DR T95% (time required to release 95% of the drug) was about 720 min. Kinetic studies of formulations revealed that erosion is the predominant mechanism for drug release. Conclusion: From the obtained results it was concluded that the independent variables have a significant effect on the dependent responses, which can be deduced from half-normal plots, Pareto charts, and surface response graphs. The predicted values matched well with the experimental values, and the result demonstrates the feasibility of the design model in the development and optimization of the HCTZ bilayered tablet. PMID:25006554

  1. Optimizing the design of geophysical experiments: Is it worthwhile?

    NASA Astrophysics Data System (ADS)

    Curtis, Andrew; Maurer, Hansruedi

    Determining the structure, composition, and state of the Earth's subsurface from measured data is the principal task of many geophysical experiments and surveys. Standard procedures involve the recording of appropriate data sets followed by the application of data analysis techniques to extract the desired information. While the importance of new tools for the analysis stage of an experiment is well recognized, much less attention seems to be paid to improving the data acquisition. A measure of the effort allocated to data analysis research relative to that devoted to data acquisition research is presented in Figure 1. Since 1955 there have been more than 10,000 publications on inversion methods alone, but in the same period only 100 papers on experimental design have appeared in journals. Considering that the acquisition component of an experiment defines what information will be contained in the data, and that no amount of data analysis can compensate for the lack of such information, we suggest that greater effort be made to improve survey planning techniques. Furthermore, given that logistical and financial constraints are often stringent and that relationships between geophysical data and model parameters describing the Earth's subsurface are generally complicated, optimizing the design of an experiment may be quite challenging. Here we review experimental design procedures that optimize the benefit of a field survey, such that maximum information about the target structures is obtained at minimum cost. We also announce a new Web site and e-mail group set up as a forum for communication on survey design research and application.

  2. Multidisciplinary design optimization - An emerging new engineering discipline

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1993-01-01

    A definition of multidisciplinary design optimization (MDO) is introduced, and the functionality and relationship of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.

  3. Optimal color design of psychological counseling room by design of experiments and response surface methodology.

    PubMed

    Liu, Wenjuan; Ji, Jianlin; Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impressions. A total of 75 patients participated, including 42 for Experiment 1 and 33 for Experiment 2. In Experiment 1, 27 representative color samples were designed, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the 'central point', and three color attributes were optimized to maximize the patients' satisfaction. The experimental results show that the proposed method can obtain the optimal solution for color design of a counseling room.
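    The response-surface step, fitting a second-order model to designed-experiment ratings and solving for the stationary point, can be illustrated with a one-factor sketch. The lightness levels and satisfaction ratings below are invented for illustration and are not the study's data:

    ```python
    import numpy as np

    # Hypothetical DOE data: lightness levels (L*) and mean satisfaction ratings
    levels  = np.array([55.0, 65.0, 75.0, 85.0, 95.0])
    ratings = np.array([3.1, 4.0, 4.6, 4.2, 3.4])

    # Second-order response surface in one factor: rating ≈ a*L² + b*L + c
    a, b, c = np.polyfit(levels, ratings, deg=2)

    # Stationary point of the fitted quadratic (a < 0, so it is a maximum)
    optimum = -b / (2 * a)
    print(round(optimum, 1))  # ≈ 76.3, the fitted optimum lightness
    ```

    The real study optimizes three color attributes simultaneously, so the fitted surface is quadratic in several factors and the stationary point is found by solving the corresponding linear system rather than a single vertex formula.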

  4. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.

  5. Optimal Design of a Center Support Quadruple Mass Gyroscope (CSQMG).

    PubMed

    Zhang, Tian; Zhou, Bin; Yin, Peng; Chen, Zhiyong; Zhang, Rong

    2016-04-28

    This paper reports a more complete description of the design process of the Center Support Quadruple Mass Gyroscope (CSQMG), a gyro expected to provide breakthrough performance for flat structures. The operation of the CSQMG is based on four lumped masses in a circumferential symmetric distribution, oscillating in anti-phase motion, and providing differential signal extraction. With its 4-fold symmetrical axes pattern, the CSQMG achieves a similar operation mode to Hemispherical Resonant Gyroscopes (HRGs). Compared to the conventional flat design, four Y-shaped coupling beams are used in this new pattern in order to adjust mode distribution and enhance the synchronization mechanism of operation modes. For the purpose of obtaining the optimal design of the CSQMG, an applicable optimization flow is developed with a comprehensive derivation of the operation mode coordination, the pseudo mode inhibition, and the lumped mass twisting motion elimination. The experimental characterization of the CSQMG was performed at room temperature, and the center operation frequency is 6.8 kHz after tuning. Experiments show an Allan variance stability of 0.12°/h (@100 s) and a white noise level of about 0.72°/h/√Hz, which means that the CSQMG possesses great potential to achieve navigation grade performance.

  6. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities.

  7. RTM And VARTM Design, Optimization, And Control With SLIC

    DTIC Science & Technology

    2003-07-02

    UD-CCM, 2 July 2003. RTM and VARTM Design, Optimization, and Control with SLIC. Kuang-Ting Hsiao, UD-CCM. … Simulation-based Liquid Injection Control (SLIC): artificial-intelligence-optimized design for RTM/VARTM sensors.

  8. Active cooling design for scramjet engines using optimization methods

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Martin, Carl J.; Lucas, Stephen H.

    1988-01-01

    A methodology for using optimization in designing metallic cooling jackets for scramjet engines is presented. The optimal design minimizes the required coolant flow rate subject to temperature, mechanical-stress, and thermal-fatigue-life constraints on the cooling-jacket panels, and Mach-number and pressure constraints on the coolant exiting the panel. The analytical basis for the methodology is presented, and results for the optimal design of panels are shown to demonstrate its utility.

  10. A new optimal sliding mode controller design using scalar sign function.

    PubMed

    Singla, Mithun; Shieh, Leang-San; Song, Gangbing; Xie, Linbo; Zhang, Yongpeng

    2014-03-01

    This paper presents a new optimal sliding mode controller using the scalar sign function method. A smooth, continuous-time scalar sign function is used to replace the discontinuous switching function in the design of a sliding mode controller. The proposed sliding mode controller is designed using an optimal Linear Quadratic Regulator (LQR) approach. The sliding surface of the system is designed using stable eigenvectors and the scalar sign function. Controller simulations are compared with those of another existing optimal sliding mode controller. To test the effectiveness of the proposed controller, it is implemented on an aluminum beam with a piezoceramic sensor and actuator for vibration control. This paper includes the control design and stability analysis of the new optimal sliding mode controller, followed by simulation and experimental results. The simulation and experimental results show that the proposed approach is very effective.
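    The key idea, replacing the discontinuous switching term with a smooth scalar function, can be illustrated with a common tanh-based boundary-layer approximation; the paper's exact scalar sign function may differ from this sketch:

    ```python
    import math

    def smooth_sign(s, epsilon=0.1):
        """Smooth, continuous replacement for the discontinuous sign(s)
        used in classical sliding mode control. epsilon sets the width of
        the boundary layer; this is a generic tanh approximation, not
        necessarily the paper's scalar sign function."""
        return math.tanh(s / epsilon)

    # Far from the sliding surface it behaves like sign(s) ...
    print(round(smooth_sign(1.0), 3), round(smooth_sign(-1.0), 3))
    # ... but near s = 0 it is continuous, which avoids control chattering
    print(round(smooth_sign(0.01), 3))
    ```

    In a full controller, this smooth term multiplies the switching gain in the control law, trading a small boundary-layer tracking error for chatter-free actuation.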

  11. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to reduce the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  12. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  13. Globally optimal trial design for local decision making.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2009-02-01

    Value of information methods allow decision makers to identify efficient trial designs following the principle of maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is expected value of sample information only from research commissioned within a jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated by identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of the trial sample across jurisdictions; (iii) avoiding market failure associated with free-rider effects, sub-optimal spreading of fixed costs, and heterogeneity of trial information with multiple trials.
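    The accounting behind this comparison can be sketched with illustrative numbers. The per-jurisdiction expected values of sample information (EVSI) and costs below are hypothetical, chosen only so the totals echo the abstract's $0.72M and $1.14M figures:

    ```python
    # Hypothetical per-jurisdiction EVSI and trial costs, in $M
    evsi        = {"US": 1.20, "UK": 0.45, "AUS": 0.30}
    local_cost  = {"US": 0.70, "UK": 0.30, "AUS": 0.23}
    shared_cost = 0.81   # one global trial spreads the fixed costs

    # Locally optimal: each jurisdiction commissions (or skips) its own trial
    local_eng = sum(max(evsi[j] - local_cost[j], 0.0) for j in evsi)

    # Globally optimal: information is non-rival, so every jurisdiction
    # benefits from a single trial whose fixed cost is shared
    global_eng = sum(evsi.values()) - shared_cost

    print(round(local_eng, 2), round(global_eng, 2))  # 0.72 1.14
    ```

    The gap between the two totals is exactly the value the paper attributes to non-rival information and shared fixed costs; the real method also optimizes how the trial sample is allocated across jurisdictions.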

  14. Maximizing the efficiency of a flexible propulsor using experimental optimization

    NASA Astrophysics Data System (ADS)

    Quinn, Daniel; Lauder, George; Smits, Alexander

    2014-11-01

    Experimental gradient-based optimization is used to maximize the propulsive efficiency of a heaving and pitching flexible panel. Optimum and near-optimum conditions are studied via direct force measurements and Particle Image Velocimetry (PIV). The net thrust and power are found to scale predictably with the frequency and amplitude of the leading edge, but the efficiency shows a complex multimodal response. Optimum pitch and heave motions are found to produce nearly twice the efficiencies of optimum heave-only motions. Efficiency is globally optimized when (1) the Strouhal number is within an optimal range that varies weakly with amplitude and boundary conditions; (2) the panel is actuated at a resonant frequency of the fluid-propulsor system; (3) heave amplitude is tuned such that trailing edge amplitude is maximized while flow along the body remains attached; and (4) the maximum pitch angle and phase lag are chosen so that the effective angle of attack is minimized. This work was supported by the Office of Naval Research under MURI Grant Number N00014-08-1-0642 (Program Director Dr. Bob Brizzolara), and the National Science Foundation under Grant DBI 1062052 (PI Lisa Fauci) and Grant EFRI-0938043 (PI George Lauder).

  15. Exploring experimental fitness landscapes for chemical synthesis and property optimization.

    PubMed

    Tibbetts, Katharine Moore; Feng, Xiao-Jiang; Rabitz, Herschel

    2017-02-08

    Optimization is a central goal in the chemical sciences, encompassing diverse objectives including synthesis yield, catalytic activity of a material, and binding efficiency of a molecule to a target protein. Considering the enormous size of chemical space and the expected large numbers of experiments necessary to search through it in any particular application, optimization in chemistry is surprisingly efficient. This good fortune has recently been explained by analysis of the fitness landscape, i.e., the functional relationship between a target objective J (e.g., percent yield, catalytic activity) and a suitable set of variables (e.g., resources such as reactant concentrations and processing conditions). Mathematical analysis has demonstrated that, upon satisfaction of reasonable physical assumptions, the fitness landscape contains no local sub-optimal "traps" that preclude identification of the globally best value of J, in a development called the "OptiChem" theorem. One of the key assumptions behind the theorem is that sufficient resources are available to achieve the posed optimization goal. This work assesses the validity of this assumption underlying the OptiChem theorem through examination of experimental data from the recent literature. In order to explore fitness landscapes in high dimensions where the landscape cannot be visualized, a high dimensional model representation (HDMR) of experimental data is used to construct a model landscape amenable to topology assessment via gradient algorithm search. This method is shown to correctly capture the trap-free topology of a four-dimensional landscape where the objective is to optimize the composition of a solid state material (subject to an elemental mole-fraction constraint) for catalytic activity towards the oxygen evolution reaction. Analysis of a six-dimensional landscape for the objective of maximizing the photoluminescence of rare-earth solid state materials subject to two elemental mole

  16. (BRI) Direct and Inverse Design Optimization of Magnetic Alloys with Minimized Use of Rare Earth Elements

    DTIC Science & Technology

    2016-02-02

    AFRL-AFOSR-VA-TR-2016-0091. Report on "(BRI) Direct and Inverse Design Optimization of Magnetic Alloys with Minimized Use of Rare Earth Elements", covering the period 2012 – 31/10/2015. The collaborating team in Science and Eng., Raleigh, NC (Profs. Justin Schwartz and Carl C. Koch) performed all manufacturing and experimental measurements.

  17. Bayesian experimental design of a multichannel interferometer for Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hirsch, M.; Kornejew, P.

    2008-10-01

    Bayesian experimental design (BED) is a framework for the optimization of diagnostics based on probability theory. In this work it is applied to the design of a multichannel interferometer at the Wendelstein 7-X stellarator experiment. BED makes it possible to compare diverse designs quantitatively, which is shown for beam-line designs resulting from different plasma configurations. The applicability of the method is discussed with respect to its computational effort.

  18. Minimax D-Optimal Designs for Item Response Theory Models.

    ERIC Educational Resources Information Center

    Berger, Martijn P. F.; King, C. Y. Joy; Wong, Weng Kee

    2000-01-01

    Proposed minimax designs for item response theory (IRT) models to overcome the problem of local optimality. Compared minimax designs to sequentially constructed designs for the two-parameter logistic model. Results show that minimax designs can be nearly as efficient as sequentially constructed designs. (Author/SLD)
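
The item information function underlying D-optimality for the two-parameter logistic model is standard; a minimal sketch follows (minimax designs guard the worst case of this quantity over an assumed ability range rather than relying on one "local" ability value):

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a two-parameter logistic item with
    discrimination a and difficulty b, at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Information peaks at theta = b; a minimax design maximizes the minimum
# of this quantity over the plausible range of theta.
peak = info_2pl(0.0, a=1.0, b=0.0)
```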

  19. Optimal design for nonlinear estimation of the hemodynamic response function.

    PubMed

    Maus, Bärbel; van Breukelen, Gerard J P; Goebel, Rainer; Berger, Martijn P F

    2012-06-01

    Subject-specific hemodynamic response functions (HRFs) have been recommended to capture variation in the form of the hemodynamic response between subjects (Aguirre et al., [1998]: Neuroimage 8:360-369). The purpose of this article is to find optimal designs for estimation of subject-specific parameters for the double gamma HRF. As the double gamma function is a nonlinear function of its parameters, optimal design theory for nonlinear models is employed in this article. The double gamma function is linearized by a Taylor approximation and the maximin criterion is used to handle dependency of the D-optimal design on the expansion point of the Taylor approximation. A realistic range of double gamma HRF parameters is used for the expansion point of the Taylor approximation. Furthermore, a genetic algorithm (GA) (Kao et al., [2009]: Neuroimage 44:849-856) is applied to find locally optimal designs for the different expansion points and the maximin design chosen from the locally optimal designs is compared to maximin designs obtained by m-sequences, blocked designs, designs with constant interstimulus interval (ISI) and random event-related designs. The maximin design obtained by the GA is most efficient. Random event-related designs chosen from several generated designs and m-sequences have a high efficiency, while blocked designs and designs with a constant ISI have a low efficiency compared to the maximin GA design.
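
The locally D-optimal and maximin ideas can be illustrated on a deliberately simple one-parameter model (an exponential-decay stand-in for the double gamma HRF; the candidate designs and parameter range below are made up):

```python
import numpy as np

# Stand-in nonlinear model (assumption): eta(t; theta) = exp(-theta * t),
# linearized around theta exactly as the double gamma HRF would be.
def fisher_info(times, theta):
    # Taylor linearization: sensitivity f(t) = d eta / d theta at theta
    f = -times * np.exp(-theta * times)
    return float(f @ f)            # one parameter, so the information is scalar

def maximin_score(times, thetas):
    # Maximin criterion: worst-case information over the plausible range
    return min(fisher_info(times, th) for th in thetas)

thetas = np.linspace(0.5, 2.0, 16)     # assumed plausible parameter range
designA = np.array([0.5, 1.0, 1.5])    # candidate sampling-time designs
designB = np.array([0.1, 0.2, 0.3])
best = max([designA, designB], key=lambda d: maximin_score(d, thetas))
```

The maximin design is the candidate whose worst-case information over the parameter range is largest, which removes the dependence on a single expansion point.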

  20. Comparison of the experimental aerodynamic characteristics of theoretically and experimentally designed supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1974-01-01

    A lifting airfoil theoretically designed for shockless supercritical flow utilizing a complex hodograph method has been evaluated in the Langley 8-foot transonic pressure tunnel at design and off-design conditions. The experimental results are presented and compared with those of an experimentally designed supercritical airfoil which were obtained in the same tunnel.

  1. Multidisciplinary design optimization: An emerging new engineering discipline

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1993-01-01

    This paper defines Multidisciplinary Design Optimization (MDO) as a new field of research endeavor and as an aid in the design of engineering systems. It examines the MDO conceptual components in relation to each other and defines their functions.

  2. INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN

    EPA Science Inventory

    The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...

  3. A design optimization process for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Fox, George; Duquette, William H.

    1990-01-01

    The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.

  4. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  5. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, is used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and
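
The augmented Lagrangian idea behind such a matrix-free optimizer can be sketched on a toy equality-constrained problem (an illustrative stand-in, not the thesis's aerostructural problem). The inner solve uses only gradient evaluations, never an assembled Hessian or constraint Jacobian matrix:

```python
import numpy as np

# Toy problem (assumption): min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0,
# whose solution is x = [0.5, 0.5].
def f_grad(x):  return 2.0 * x                 # gradient of the objective
def c(x):       return x[0] + x[1] - 1.0       # equality constraint value
def c_grad(x):  return np.ones(2)              # constraint gradient

def aug_lagrangian(x0, rho=10.0, outer=20, inner=200, step=0.01):
    x, lam = np.array(x0, dtype=float), 0.0
    for _ in range(outer):
        # Inner solve of the augmented Lagrangian by plain gradient
        # descent: only gradient information is used (matrix-free).
        for _ in range(inner):
            g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
            x -= step * g
        lam += rho * c(x)                      # first-order multiplier update
    return x

x = aug_lagrangian([0.0, 0.0])
```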

  7. Antenna Design Using the Efficient Global Optimization (EGO) Algorithm

    DTIC Science & Technology

    2011-05-20

    Excerpts from the report: small antennas in a parasitic super-directive array configuration; a comparison of the driven super-directive gain achievable with these arrays; a discussion of antenna design optimization using EGO, where the first design is a parasitic super-directive array for which EGO is compared with a classic approach. Section 4 (Results and Discussion) presents design optimizations for parasitic super-directive arrays and for wideband antenna design.
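
EGO builds a kriging surrogate of the objective and evaluates the design with the largest expected improvement (EI). A sketch of the standard EI formula for a Gaussian surrogate prediction (standard EGO machinery; this is not code from the report):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization, given a Gaussian surrogate prediction
    N(mu, sigma^2) at a candidate design and the best objective value
    f_best observed so far."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf

ei = expected_improvement(mu=0.0, sigma=1.0, f_best=0.0)
```

EI balances exploitation (low predicted mean) against exploration (high surrogate uncertainty), which is why EGO needs relatively few full-wave antenna simulations.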

  8. Fatigue design of a cellular phone folder using regression model-based multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Kim, Young Gyun; Lee, Jongsoo

    2016-08-01

    In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.
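
The non-dominated sorting at the core of NSGA-II can be sketched as follows (toy data; both objectives minimized, e.g. a thickness index and an inverse endurance index — the numbers are illustrative, not the paper's):

```python
def pareto_front(points):
    """Return the non-dominated subset for two objectives, both minimized.

    Uses a weak-dominance check; exact duplicate points would exclude
    each other, which is acceptable for this sketch."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (thickness index, inverse endurance) pairs
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 2)]
front = pareto_front(candidates)
```

Each point on the returned front is a trade-off: no other candidate is at least as good in both objectives.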

  9. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  11. Electro-Fenton oxidation of coking wastewater: optimization using the combination of central composite design and convex optimization method.

    PubMed

    Zhang, Bo; Sun, Jiwei; Wang, Qin; Fan, Niansi; Ni, Jialing; Li, Weicheng; Gao, Yingxin; Li, Yu-You; Xu, Changyou

    2017-01-12

    The electro-Fenton treatment of coking wastewater was evaluated experimentally in a batch electrochemical reactor. Based on central composite design coupled with response surface methodology, a regression quadratic equation was developed to model the total organic carbon (TOC) removal efficiency. This model was further proved to accurately predict the optimization of process variables by means of analysis of variance. With the aid of the convex optimization method, which is a global optimization method, the optimal parameters were determined as current density of 30.9 mA/cm(2), Fe(2+) concentration of 0.35 mg/L, and pH of 4.05. Under the optimized conditions, the corresponding TOC removal efficiency was up to 73.8%. The maximum TOC removal efficiency achieved can be further confirmed by the results of gas chromatography-mass spectrum analysis.
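
The response-surface step can be illustrated in miniature: fit a quadratic model to designed-experiment data by least squares, then locate its stationary point. The single factor and the data below are made up (the paper's model has three factors: current density, Fe(2+) concentration and pH):

```python
import numpy as np

# Hypothetical single-factor response data (e.g. TOC removal % vs. pH)
x = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([55.0, 68.0, 74.0, 71.0, 60.0])

# Least-squares fit of the quadratic response surface y = b0 + b1*x + b2*x^2
X = np.column_stack([np.ones_like(x), x, x * x])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# For a concave quadratic (b2 < 0) the global optimum is the stationary
# point; this is the one-dimensional analogue of the convex optimization
# step applied to the fitted model.
x_opt = -b1 / (2.0 * b2)
```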

  12. Use of experimental design to optimize a triple-potential waveform to develop a method for the determination of streptomycin and dihydrostreptomycin in pharmaceutical veterinary dosage forms by HPLC-PAD.

    PubMed

    Martínez-Mejía, Mónica J; Rath, Susanne

    2015-02-01

    An HPLC-PAD method using a gold working electrode and a triple-potential waveform was developed for the simultaneous determination of streptomycin and dihydrostreptomycin in veterinary drugs. Glucose was used as the internal standard, and the triple-potential waveform was optimized using a factorial and a central composite design. The optimum potentials were as follows: amperometric detection, E1=-0.15V; cleaning potential, E2=+0.85V; and reactivation of the electrode surface, E3=-0.65V. For the separation of the aminoglycosides and the internal standard of glucose, a CarboPac™ PA1 anion exchange column was used together with a mobile phase consisting of a 0.070 mol L(-1) sodium hydroxide solution in the isocratic elution mode with a flow rate of 0.8 mL min(-1). The method was validated and applied to the determination of streptomycin and dihydrostreptomycin in veterinary formulations (injection, suspension and ointment) without any previous sample pretreatment, except for the ointments, for which a liquid-liquid extraction was required before HPLC-PAD analysis. The method showed adequate selectivity, with an accuracy of 98-107% and a precision of less than 3.9%.

  13. Integration of Physical Design and Sequential Optimization

    DTIC Science & Technology

    2006-03-06

    synchronous digital circuits. A sufficient condition for the solution to the optimal clock scheduling problem in the face of process variations is given. The report's subgradient optimization for the Lagrangian dual reads: (1) k ← 0; (2) x, y ← argmin_{x,y} L(x, y, k); (3) while the KKT conditions are not satisfied: (4) k ← max(0, k + γ·g(x, y)); (5) x, y ← argmin_{x,y} L(x, y, k). A model for the problem associated with the clock distribution tree in a digital synchronous circuit, and a sufficient condition for its optimal solution, are given.
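
That dual subgradient iteration can be made runnable on a hypothetical toy problem (min x² subject to g(x) = 1 − x ≤ 0, whose optimum is x = 1 with multiplier k = 2; the report's actual objective is clock scheduling):

```python
# Projected subgradient ascent on the Lagrangian dual, following the
# numbered steps in the report's pseudocode.
def dual_subgradient(gamma=0.5, iters=200):
    k = 0.0                                  # 1: k <- 0
    for _ in range(iters):                   # 3: until KKT conditions hold
        x = k / 2.0                          # 2,5: x <- argmin_x x^2 + k*(1 - x)
        k = max(0.0, k + gamma * (1.0 - x))  # 4: k <- max(0, k + gamma*g(x))
    return x, k

x, k = dual_subgradient()
```

The max(0, ·) projection keeps the multiplier feasible for the inequality constraint, exactly as in step (4).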

  14. Optimal fractional order PID design via Tabu Search based algorithm.

    PubMed

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All FOPID parameters are computed from random initial conditions using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method.
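
A minimal tabu search skeleton of the kind such TSA-based tuning builds on might look like this (the one-dimensional cost is a made-up stand-in for the FOPID tuning objective; all names are illustrative):

```python
def tabu_search(cost, start, neighbors, iters=50, tabu_len=10):
    """Minimal tabu search: keep a short-term memory of visited points
    and always move to the best non-tabu neighbor (no aspiration rule)."""
    best = current = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible move
        tabu.append(current)
        tabu = tabu[-tabu_len:]               # bounded tabu list
        if cost(current) < cost(best):
            best = current
    return best

# Toy one-dimensional objective standing in for the FOPID tuning cost
cost = lambda x: (x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
x_best = tabu_search(cost, start=0, neighbors=neighbors)
```

The tabu list lets the search escape recently visited points instead of cycling, which is the feature that distinguishes it from plain hill climbing.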

  15. Topology and boundary shape optimization as an integrated design tool

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin Philip; Rodrigues, Helder Carrico

    1990-01-01

    The optimal topology of a two dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via common FEM mesh generator and CAD type input-output facilities.

  16. Design and Optimization of Composite Gyroscope Momentum Wheel Rings

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.

  17. Design optimization of an opposed piston brake caliper

    NASA Astrophysics Data System (ADS)

    Sergent, Nicolas; Tirovic, Marko; Voveris, Jeronimas

    2014-11-01

    Successful brake caliper designs must be light and stiff, preventing excessive deformation and extended brake pedal travel. These conflicting requirements are difficult to optimize owing to complex caliper geometry, loading and interaction of individual brake components (pads, disc and caliper). The article studies a fixed, four-pot (piston) caliper, and describes in detail the computer-based topology optimization methodology applied to obtain two optimized designs. At first sight, relatively different designs (named 'Z' and 'W') were obtained by minor changes to the designable volume and boundary conditions. However, on closer inspection, the same main bridge design features could be recognized. Both designs offered considerable reduction of caliper mass, by 19% and 28%, respectively. Further finite element analyses conducted on one of the optimized designs (Z caliper) showed which individual bridge features and their combinations are the most important in maintaining caliper stiffness.

  18. Optimization of Forming Processes in Microstructure Sensitive Design

    NASA Astrophysics Data System (ADS)

    Garmestani, H.; Li, D. S.

    2004-06-01

    Optimization of the forming processes from initial microstructures of raw materials to desired microstructures of final products is an important topic in materials design. The processing path model proposed in this study gives an explicit mathematical description of how the microstructure evolves during thermomechanical processing. Based on a conservation principle in the orientation space (originally proposed by Bunge), this methodology is independent of the underlying deformation mechanisms. The evolution of the texture coefficients is modeled using a texture evolution matrix calculated from the experimental results. For the same material using the same processing method, the texture evolution matrix is the same; it does not change with the initial texture. This processing path model provides functions of processing paths and streamlines.

  19. An uncertain multidisciplinary design optimization method using interval convex models

    NASA Astrophysics Data System (ADS)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
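
The interval-number step that turns one uncertain objective into two deterministic ones can be sketched as follows (a minimal sampling-based sketch; the function name and the use of interval midpoint and radius as the two deterministic objectives are assumptions for illustration):

```python
def interval_objectives(f, x, lo, hi, samples=101):
    """Replace the uncertain objective f(x, p), with p bounded in [lo, hi],
    by two deterministic objectives: the interval midpoint and radius
    (both to be minimized). Sampling over p is a simplification of a
    proper interval analysis."""
    vals = [f(x, lo + (hi - lo) * i / (samples - 1)) for i in range(samples)]
    m, M = min(vals), max(vals)
    return (m + M) / 2.0, (M - m) / 2.0

# Toy uncertain objective: f(x, p) = x * p with p in [1, 3]
mid, rad = interval_objectives(lambda x, p: x * p, 2.0, 1.0, 3.0)
```

Minimizing the midpoint targets nominal performance while minimizing the radius targets robustness, which is what makes the transformed problem multi-objective.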

  20. A study of commuter airplane design optimization

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Wyatt, R. D.; Griswold, D. A.; Hammer, J. L.

    1977-01-01

    Problems of commuter airplane configuration design were studied to effect a minimization of direct operating costs. Factors considered were the minimization of fuselage drag, methods of wing design, and the estimated drag of an airplane submerged in a propeller slipstream; all design criteria were studied under a set of fixed performance, mission, and stability constraints. Configuration design data were assembled for application by a computerized design methodology program similar to the NASA-Ames General Aviation Synthesis Program.

  1. Formulation optimization of long-acting depot injection of aripiprazole by using D-optimal mixture design.

    PubMed

    Nahata, Tushar; Saini, T R

    2009-01-01

    Non-adherence to medication is a major cause of poor outcomes in the therapy of schizophrenia. An in situ implantable preparation of aripiprazole, an atypical antipsychotic drug, was developed with the aim of improving patient compliance and offering an effective antipsychotic drug therapy. A D-optimal mixture design was employed to design and optimize a long-acting depot injection of aripiprazole using polylactide-co-glycolide (PLGA) 50:50, 75:25, 85:15, and cholesterol as release rate-retarding materials. The desirability technique was used for the optimization of the formulation. The predicted optimized formulation was experimentally validated, and it was found that the developed formulation releases the drug over a 14-day period. The optimized formulation showed that the cholesterol-containing formulation exhibits a better drug release profile. The pharmacokinetic studies confirmed that the developed cholesterol-based depot formulation was capable of releasing the drug for more than 14 days. The implant formulation was sterilized by gamma radiation and by the ethylene oxide sterilization method. The D-optimal mixture design proved to be an efficient technique for formulation optimization.

  2. Design optimization of a magnetorheological brake in powered knee orthosis

    NASA Astrophysics Data System (ADS)

    Ma, Hao; Liao, Wei-Hsin

    2015-04-01

    Magneto-rheological (MR) fluids have been utilized in devices such as orthoses and prostheses to generate controllable braking torque. In this paper, a flat-shaped rotary MR brake is designed for a powered knee orthosis to provide adjustable resistance. A multiple-disk structure with an interior inner coil is adopted in the MR brake configuration. In order to increase the maximal magnetic flux, a novel internal structure design with a smooth transition surface is proposed. Based on this design, a parameterized model of the MR brake is built for geometrical optimization. Multiple factors are considered in the optimization objective: braking torque, weight and, particularly, average power consumption. The optimization is then performed with Finite Element Analysis (FEA), and the optimal design is selected from the Pareto-optimal set considering the trade-offs among the design objectives.

  3. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to an input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved the error parameter estimates and their accuracies for an input of fixed duration. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed-base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  4. Optimal shielding design for minimum materials cost or mass

    SciTech Connect

    Woolley, Robert D.

    2015-12-02

    The mathematical underpinnings of cost optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
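
The Euler-Lagrange/Pontryagin construction described in the abstract can be sketched for a one-dimensional shield (an assumed illustrative setup, not the report's notation: monoenergetic flux I(x), material control u(x), attenuation coefficient μ(u), cost density c(u)):

```latex
% State dynamics and cost functional:
\frac{dI}{dx} = -\mu(u)\, I, \qquad
J = \int_0^L c(u(x))\, dx \;\to\; \min, \qquad I(L) \le I_{\max}
% Hamiltonian and costate (Euler-Lagrange) equation:
H(I, p, u) = c(u) - p\, \mu(u)\, I, \qquad
\frac{dp}{dx} = -\frac{\partial H}{\partial I} = p\, \mu(u)
% Pontryagin's principle selects the shielding material pointwise:
u^*(x) = \arg\min_{u}\; \bigl[\, c(u) - p(x)\, \mu(u)\, I(x) \bigr]
```

The pointwise minimization over u is what makes the design conditions local, which is why an iterative sweep over the state and costate equations (as the suggested heuristic algorithm does) can search for an optimal material deployment.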

  5. Gearbox design for uncertain load requirements using active robust optimization

    NASA Astrophysics Data System (ADS)

    Salomon, Shaul; Avigad, Gideon; Purshouse, Robin C.; Fleming, Peter J.

    2016-04-01

    Design and optimization of gear transmissions have been intensively studied, but surprisingly the robustness of the resulting optimal design to uncertain loads has never been considered. Active Robust (AR) optimization is a methodology to design products that attain robustness to uncertain or changing environmental conditions through adaptation. In this study the AR methodology is utilized to optimize the number of transmissions, as well as their gearing ratios, for an uncertain load demand. The problem is formulated as a bi-objective optimization problem where the objectives are to satisfy the load demand in the most energy efficient manner and to minimize production cost. The results show that this approach can find a set of robust designs, revealing a trade-off between energy efficiency and production cost. This can serve as a useful decision-making tool for the gearbox design process, as well as for other applications.

  7. Formulation optimization of propranolol hydrochloride microcapsules employing central composite design.

    PubMed

    Shivakumar, H N; Patel, R; Desai, B G

    2008-01-01

    A central composite design was employed to produce microcapsules of propranolol hydrochloride by o/o emulsion solvent evaporation technique using a mixture of cellulose acetate butyrate as coat material and span-80 as an emulsifier. The effect of formulation variables namely levels of cellulose acetate butyrate (X(1)) and percentage of Span-80 (X(2)) on encapsulation efficiency (Y(1)), drug release at the end of 1.5 h (Y(2)), 4 h (Y(3)), 8 h (Y(4)), 14 h (Y(5)), and 24 h (Y(6)) were evaluated using the F test. Mathematical models containing only the significant terms were generated for each response parameter using multiple linear regression analysis and analysis of variance. Both the formulation variables exerted a significant influence (P <0.05) on Y(1) whereas the cellulose acetate butyrate level emerged as the lone factor which significantly influenced the other response parameters. Numerical optimization using desirability approach was employed to develop an optimized formulation by setting constraints on the dependent and independent variables. The experimental values of Y(1), Y(2), Y(3), Y(4), Y(5), and Y(6) for the optimized formulation was found to be 92.86+/-1.56% w/w, 29.58+/-1.22%, 48.56+/-2.56%, 60.85+/-2.35%, 76.23+/-3.16% and 95.12+/-2.41%, respectively which were in close agreement with those predicted by the mathematical models. The drug release from microcapsules followed first order kinetics and was characterized by Higuchi diffusion model. The optimized microcapsule formulation developed was found to comply with the USP drug release test-1 for extended release propranolol hydrochloride capsules.

  8. Optimizing spacecraft design - optimization engine development: progress and plans

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim

    2003-01-01

    At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.

  9. Optimal experiment design for identification of large space structures

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.; Hadaegh, F. Y.; Meldrum, D. R.

    1988-01-01

    The optimal experiment design for on-orbit identification of modal frequency and damping parameters in large flexible space structures is discussed. The main result is a separation principle for D-optimal design which states that under certain conditions the sensor placement problem is decoupled from the input design problem. This decoupling effect significantly simplifies the overall optimal experiment design determination for large MIMO structural systems with many unknown modal parameters. The error from using the uncoupled design is estimated in terms of the inherent damping of the structure. A numerical example is given, demonstrating the usefulness of the simplified criteria in determining optimal designs for on-orbit Space Station identification experiments.
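
    The D-optimality criterion underlying this kind of experiment design maximizes the determinant of the Fisher information matrix over candidate measurement configurations. A minimal sketch for a two-parameter regression model (an illustrative stand-in, not the paper's structural model):

```python
def info_matrix(xs):
    """Fisher information M = sum_i f(x_i) f(x_i)^T for the regression
    functions f(x) = (x, x^2) of a toy model y = a*x + b*x^2."""
    m11 = sum(x ** 2 for x in xs)
    m12 = sum(x ** 3 for x in xs)
    m22 = sum(x ** 4 for x in xs)
    return ((m11, m12), (m12, m22))

def d_criterion(M):
    """D-optimality: determinant of the 2x2 information matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# spreading measurements beats clustering them, in the D-optimal sense
spread = [-1.0, -1.0, 1.0, 1.0]
clustered = [0.4, 0.5, 0.5, 0.6]
```

    The separation principle of the paper amounts to the information matrix becoming (approximately) block-diagonal in the sensor-placement and input-design variables, so each block's determinant can be maximized independently.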

  10. Optimal design of multi-conditions for axial flow pump

    NASA Astrophysics Data System (ADS)

    Shi, L. J.; Tang, F. P.; Liu, C.; Xie, R. S.; Zhang, W. P.

    2016-11-01

    Passage components of a pump device develop adverse flow states when an axial-flow pump runs off its design condition. Combined with model tests of an axial-flow pump, this paper uses numerical simulation and numerical optimization techniques, varying the geometric design parameters of the impeller, to perform a multi-condition optimal design of the axial-flow pump, in order to improve efficiency at non-design conditions, broaden the high-efficiency region, and reduce operating cost. The results show that the efficiency curve of the optimized design is significantly wider than that of the initial design. The efficiency at the low-flow operating point increased by about 2.6%, at the design point by about 0.5%, and at the high-flow point by the largest amount, about 7.4%. The change in head is small, so all operating points can meet the operational requirements. This will greatly reduce operating costs and shorten the optimal design period. The study adopted CFD simulation as the principal analysis, combined with experimental study, in place of experience-based manual optimization, which demonstrates the reliability and efficiency of multi-condition optimal design of axial-flow pump devices.

  11. Design and experimental tests of a novel neutron spin analyzer for wide angle spin echo spectrometers

    SciTech Connect

    Fouquet, Peter; Farago, Bela; Andersen, Ken H.; Bentley, Phillip M.; Pastrello, Gilles; Sutton, Iain; Thaveron, Eric; Thomas, Frederic; Moskvin, Evgeny; Pappas, Catherine

    2009-09-15

    This paper describes the design and experimental tests of a novel neutron spin analyzer optimized for wide angle spin echo spectrometers. The new design is based on nonremanent magnetic supermirrors, which are magnetized by vertical magnetic fields created by NdFeB high field permanent magnets. The solution presented here gives stable performance at moderate cost, in contrast to designs invoking remanent supermirrors. In the experimental part of this paper we demonstrate that the new design performs well in terms of polarization and transmission, and that high quality neutron spin echo spectra can be measured.

  12. Synthetic Gene Design Using Codon Optimization On-Line (COOL).

    PubMed

    Yu, Kai; Ang, Kok Siong; Lee, Dong-Yup

    2017-01-01

    Codon optimization has been widely used for designing native or synthetic genes to enhance their expression in heterologous host organisms. We recently developed Codon Optimization On-Line (COOL), a web-based tool that provides multi-objective codon optimization functionality for synthetic gene design. COOL provides a simple and flexible interface for customizing codon optimization based on several design parameters such as individual codon usage, codon pairing, and codon adaptation index. User-defined sequences can also be compared against the COOL-optimized ones to show the extent to which the user's sequences can be evaluated and further improved. The utility of COOL is demonstrated via a case study in which the codon-optimized sequence of an invertase enzyme is generated for enhanced expression in E. coli.
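
    The codon adaptation index (CAI) mentioned above is the geometric mean of each codon's relative-adaptiveness weight. A minimal sketch with toy weights; real weight tables are computed from a reference set of highly expressed genes of the host (e.g. E. coli):

```python
import math

# Illustrative relative-adaptiveness weights w (codon frequency divided by the
# frequency of the best synonymous codon); values here are invented for the demo.
W = {"GCG": 1.00, "GCC": 0.70, "GCA": 0.45, "GCU": 0.35,   # Ala codons (toy values)
     "AAA": 1.00, "AAG": 0.25}                             # Lys codons (toy values)

def cai(codons, w=W):
    """Codon adaptation index: geometric mean of per-codon weights."""
    logs = [math.log(w[c]) for c in codons]
    return math.exp(sum(logs) / len(logs))

high = cai(["GCG", "AAA"])   # all host-preferred codons
low = cai(["GCU", "AAG"])    # rare codons lower the index
```

    Codon optimization then amounts to choosing, position by position, synonymous codons that raise CAI, possibly traded off against other objectives such as codon-pair scores.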

  13. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs. rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients.
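
    As an illustration of choosing optimal sample times, the sketch below performs a brute-force D-optimal search for two sampling times of a one-compartment model C(t) = (Dose/V)·exp(-K·t). The parameter values and time grid are hypothetical; dedicated tools such as PopED or PFIM handle full population designs.

```python
import math
from itertools import combinations

DOSE, V, K = 100.0, 10.0, 0.2          # hypothetical one-compartment parameters

def sensitivities(t):
    """Gradient of C(t) = (DOSE/V) * exp(-K*t) with respect to (V, K)."""
    e = math.exp(-K * t)
    return (-DOSE / V ** 2 * e,        # dC/dV
            -DOSE / V * t * e)         # dC/dK

def d_criterion(times):
    """Determinant of the 2x2 Fisher information matrix sum_t g(t) g(t)^T."""
    m11 = m12 = m22 = 0.0
    for t in times:
        gv, gk = sensitivities(t)
        m11 += gv * gv
        m12 += gv * gk
        m22 += gk * gk
    return m11 * m22 - m12 * m12

grid = [0.5 * i for i in range(1, 49)]                 # candidate times, 0.5..24 h
best = max(combinations(grid, 2), key=d_criterion)     # exhaustive two-point search
```

    For this two-parameter model the search picks one sample as early as the grid allows and a second one 1/K hours later, which matches the analytical D-optimal pair.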

  14. Optimal design of a composite structure

    NASA Technical Reports Server (NTRS)

    Graesser, D. L.; Zabinsky, Z. B.; Tuttle, M. E.; Kim, G. I.

    1993-01-01

    This paper presents a design methodology for a laminated composite stiffened panel, subjected to multiple in-plane loads and bending moments. Design variables include the skin and stiffener ply orientation angles and stiffener geometry variables. Optimum designs are sought which minimize structural weight and satisfy mechanical performance requirements. Two types of mechanical performance requirements are placed on the panel: maximum strain and minimum strength. Minimum weight designs are presented which document that the choice of mechanical performance requirements causes changes in the optimum design. The effects of lay-up constraints which limit the ply angles to user-specified values, such as symmetric or quasi-isotropic laminates, are also investigated.

  15. Trajectory optimization software for planetary mission design

    NASA Technical Reports Server (NTRS)

    D'Amario, Louis A.

    1989-01-01

    The development history and characteristics of the interactive trajectory-optimization programs MOSES (D'Amario et al., 1981) and PLATO (D'Amario et al., 1982) are briefly reviewed, with an emphasis on their application to the Galileo mission. The requirements imposed by a mission involving flybys of several planetary satellites or planets are discussed; the formulation of the parameter-optimization problem is outlined; and particular attention is given to the use of multiconic methods to model the gravitational attraction of Jupiter in MOSES. Diagrams and tables of numerical data are included.

  16. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs.
The representativity of the experiment

  17. Multidisciplinary design optimization of mechatronic vehicles with active suspensions

    NASA Astrophysics Data System (ADS)

    He, Yuping; McPhee, John

    2005-05-01

    A multidisciplinary optimization method is applied to the design of mechatronic vehicles with active suspensions. The method is implemented in a GA-A'GEM-MATLAB simulation environment: the linear mechanical vehicle model is designed in a multibody dynamics software package, A'GEM; the controllers and estimators are constructed using the linear quadratic Gaussian (LQG) method and the Kalman filter algorithm in MATLAB; and the combined mechanical and control model is then optimized simultaneously using a genetic algorithm (GA). The design variables include passive parameters and control parameters. In the numerical optimizations, both random and deterministic road inputs and both perfect measurement of full state variables and estimated limited state variables are considered. Optimization results show that active suspension systems based on the multidisciplinary optimization method have better overall performance than those derived using conventional design methods with the LQG algorithm.

  18. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  19. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  20. Numerical design optimization of an EMAT for A0 Lamb wave generation in steel plates

    NASA Astrophysics Data System (ADS)

    Seher, Matthias; Huthwaite, Peter; Lowe, Mike; Nagy, Peter; Cawley, Peter

    2014-02-01

    An electromagnetic acoustic transducer (EMAT) for A0 Lamb wave generation on steel plates is developed to operate at 0.50 MHz-mm. A key objective of the development is to maximize the excitation and reception of the A0 mode, while minimizing those of the S0 mode. The chosen EMAT design consists of an induction coil and a permanent magnet. A finite element (FE) model of the EMAT is developed, coupling the electromagnetic and elastodynamic phenomena. An optimization process using a genetic algorithm is implemented, employing the magnet diameter and liftoff distance from the plate as design parameters and using the FE model to calculate the fitness. The optimal design suggested by the optimization process is physically implemented and the experimental measurements are compared to the FE simulation results. In a further step, the variations of the design parameters are studied numerically and the proposed EMAT design exhibits a robust behavior to small changes of the design parameters.
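
    A real-coded genetic algorithm over two continuous design parameters, as used above, can be sketched as follows. The fitness function here is a made-up smooth surrogate standing in for the expensive coupled electromagnetic/elastodynamic FE evaluation, and the bounds, population size, and operators are illustrative only.

```python
import random

random.seed(0)

def fitness(diameter, liftoff):
    """Stand-in for the FE-computed A0-vs-S0 mode selectivity; the real
    objective requires the coupled electromagnetic/elastodynamic model."""
    return -(diameter - 12.0) ** 2 - 4.0 * (liftoff - 1.5) ** 2

def ga(pop_size=30, gens=60, bounds=((5.0, 25.0), (0.1, 5.0))):
    pop = [[random.uniform(*b) for b in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        survivors = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 + random.gauss(0.0, 0.1)     # blend crossover + mutation
                     for x, y in zip(a, b)]
            child = [min(max(v, lo), hi) for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(*ind))

best = ga()   # converges near the surrogate optimum (12.0, 1.5)
```

    In the paper each fitness evaluation is an FE solve, so population size and generation count are chosen far more conservatively than in this toy run.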

  1. Demonstrating the benefits of template-based design-technology co-optimization

    NASA Astrophysics Data System (ADS)

    Liebmann, Lars; Hibbeler, Jason; Hieter, Nathaniel; Pileggi, Larry; Jhaveri, Tejas; Moe, Matthew; Rovner, Vyacheslav

    2010-03-01

    The concept of template-based design-technology co-optimization as a means of curbing escalating design complexity and increasing technology qualification risk is described. Data is presented highlighting the design efficacy of this proposal in terms of power, performance, and area benefits, quantifying the specific contributions of complex logic gates in this design optimization. Experimental results from 32nm technology node bulk CMOS wafers are presented to quantify the variability and design-margin reductions as well as yield and manufacturability improvements achievable with the proposed template-based design-technology co-optimization technique. The paper closes with data showing the predictable composability of individual templates, demonstrating a fundamental requirement of this proposal.

  2. Optimizing Your K-5 Engineering Design Challenge

    ERIC Educational Resources Information Center

    Coppola, Matthew Perkins; Merz, Alice H.

    2017-01-01

    Today, elementary school teachers continue to revisit old lessons and seek out new ones, especially in engineering. Optimization is the process by which an existing product or procedure is revised and refined. Drawn from the authors' experiences working directly with students in grades K-5 and their teachers and preservice teachers, the…

  3. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  4. Origami Optimization: Role of Symmetry in Accelerating Design

    NASA Astrophysics Data System (ADS)

    Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Durstock, Michael; Reich, Gregory; Joo, James; Vaia, Richard

    Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. Design optimization tools have recently been developed to predict optimal fold patterns with mechanics-based metrics, such as the maximal energy storage, auxetic response and actuation. Origami actuator design problems possess inherent symmetries associated with the grid, mechanical boundary conditions and the objective function, which are often exploited to reduce the design space and computational cost of optimization. However, enforcing symmetry eliminates the prediction of potentially better performing asymmetric designs, which are more likely to exist given the discrete nature of fold line optimization. To better understand this effect, actuator design problems with different combinations of rotation and reflection symmetries were optimized while varying the number of folds allowed in the final design. In each case, the optimal origami patterns transitioned between symmetric and asymmetric solutions depending on the number of folds available for the design, with fewer symmetries present as more fold lines were allowed. This study investigates the interplay of symmetry and discrete vs continuous optimization in origami actuators and provides insight into how the symmetries of the reference grid regulate the performance landscape. This work was supported by the Air Force Office of Scientific Research.

  5. Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adapted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.

  6. Optimal Design of Aortic Leaflet Prosthesis

    NASA Technical Reports Server (NTRS)

    Ghista, Dhanjoo N.; Reul, Helmut; Ray, Gautam; Chandran, K. B.

    1978-01-01

    The design criteria for an optimum prosthetic-aortic leaflet valve are a smooth washout in the valve cusps, minimal leaflet stress, minimal transmembrane pressure for the valve to open, an adequate lifetime (for a given blood-compatible leaflet material's fatigue data). A rigorous design analysis is presented to obtain the prosthetic tri-leaflet aortic valve leaflet's optimum design parameters. Four alternative optimum leaflet geometries are obtained to satisfy the criteria of a smooth washout and minimal leaflet stress. The leaflet thicknesses of these four optimum designs are determined by satisfying the two remaining design criteria for minimal transmembrane opening pressure and adequate fatigue lifetime, which are formulated in terms of the elastic and fatigue properties of the selected leaflet material - Avcothane-51 (of the Avco-Everett Co. of Massachusetts). Prosthetic valves are fabricated on the basis of the optimum analysis and the resulting detailed engineering drawings of the designs are also presented in the paper.

  7. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  8. Total energy control system autopilot design with constrained parameter optimization

    NASA Technical Reports Server (NTRS)

    Ly, Uy-Loi; Voth, Christopher

    1990-01-01

    A description is given of the application of a multivariable control design method (SANDY) based on constrained parameter optimization to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the direct synthesis of a multiloop AFCS inner-loop feedback control system based on total energy control system (TECS) principles. The design procedure offers a structured approach for the determination of a set of stabilizing controller design gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The approach can be extended to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by proper formulation of the design objectives and constraints. Satisfactory designs are usually obtained in few iterations. Performance characteristics of the optimized TECS design have been improved, particularly in the areas of closed-loop damping and control activity in the presence of turbulence.

  9. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is called sampling design, and it has traditionally been addressed with a view to model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
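
    The sampling-oriented modularity index of the paper is a WDN-specific variant; for reference, the classical Newman modularity it builds on can be computed directly. A minimal stdlib sketch on a toy graph (the edge list and partition are illustrative):

```python
def modularity(edges, community):
    """Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    for an undirected, unweighted edge list."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = sum(1.0 for u, v in edges if community[u] == community[v]) / m
    for u in deg:                              # subtract the expected-edges term
        for v in deg:
            if community[u] == community[v]:
                q -= deg[u] * deg[v] / (4.0 * m * m)
    return q

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})
```

    Splitting the toy graph at the bridge gives a clearly positive Q, while lumping all nodes into one community gives Q = 0; the paper weights pipes by task-specific criteria before maximizing an analogous index.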

  10. Optimal adaptive sequential designs for crossover bioequivalence studies.

    PubMed

    Xu, Jialin; Audet, Charles; DiLiberti, Charles E; Hauck, Walter W; Montague, Timothy H; Parr, Alan F; Potvin, Diane; Schuirmann, Donald J

    2016-01-01

    In prior works, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70-143% within each of two ranges of intra-subject coefficient of variation (10-30% and 30-55%). These designs also introduce a futility decision for stopping the study after the first stage if there is sufficiently low likelihood of meeting bioequivalence criteria if the second stage were completed, as well as an upper limit on total study size. The optimized designs exhibited substantially improved performance characteristics over our previous adaptive sequential designs. Even though the optimized designs avoided undue inflation of type I error and maintained power at ≥ 80%, their average sample sizes were similar to or less than those of conventional single stage designs.

  11. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Gurdal, Z.

    1992-01-01

    An optimization system based on general-purpose finite element code CSM Testbed and optimization program ADS is described. The system can be used to obtain minimum-mass designs of composite shell structures with complex stiffening arrangements. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a preliminary design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells, and ring and longitudinal stringer stiffened shells are also studied. Trends in the design of geodesically stiffened shells are identified. Features that enhance the capabilities and efficiency of the design system are described.

  12. Design optimization of system level adaptive optical performance

    NASA Astrophysics Data System (ADS)

    Michels, Gregory J.; Genberg, Victor L.; Doyle, Keith B.; Bisson, Gary R.

    2005-09-01

    By linking predictive methods from multiple engineering disciplines, engineers are able to compute more meaningful predictions of a product's performance. By coupling mechanical and optical predictive techniques mechanical design can be performed to optimize optical performance. This paper demonstrates how mechanical design optimization using system level optical performance can be used in the development of the design of a high precision adaptive optical telescope. While mechanical design parameters are treated as the design variables, the objective function is taken to be the adaptively corrected optical imaging performance of an orbiting two-mirror telescope.

  13. Field Quality Optimization in a Common Coil Magnet Design

    SciTech Connect

    Gupta, R.; Ramberger, S.

    1999-09-01

    This paper presents the results of initial field quality optimization of body and end harmonics in a 'common coil magnet design'. It is shown that a good field quality, as required in accelerator magnets, can be obtained by distributing conductor blocks in such a way that they simulate an elliptical coil geometry. This strategy assures that the amount of conductor used in this block design is similar to that used in a conventional cosine theta design. An optimized yoke that keeps all harmonics small over the entire range of operation using a single power supply is also presented. The field harmonics are primarily optimized with the computer program ROXIE.
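
    The field harmonics being optimized can be illustrated by expanding the complex 2D field of line currents in multipoles, B(z) ~ sum_n c_n (z/r_ref)^(n-1). The coil positions and currents below are illustrative only; a production magnet design uses ROXIE with the full conductor blocks and iron yoke.

```python
def harmonics(lines, r_ref=1.0, n_max=6):
    """Multipole coefficients c_n of 2D line currents. Each line at complex
    position z0 with current I contributes a field ~ I/(z - z0), which expands
    (for |z| < min |z0|) as -sum_n I*(r_ref/z0)^n / r_ref * (z/r_ref)^(n-1)."""
    return [sum(-I * (r_ref / z0) ** n / r_ref for z0, I in lines)
            for n in range(1, n_max + 1)]

# four lines with dipole symmetry: +I on the right, -I (return) on the left
lines = [(2 + 1j, 1.0), (2 - 1j, 1.0), (-2 + 1j, -1.0), (-2 - 1j, -1.0)]
c = harmonics(lines)
```

    With this symmetry all even harmonics (quadrupole, octupole, ...) cancel, leaving a dominant dipole term; the optimization in the paper then arranges the blocks so the remaining allowed harmonics are small.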

  14. Design Optimization of an Axial Fan Blade Through Multi-Objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Choi, Jae-Ho; Husain, Afzal; Kim, Kwang-Yong

    2010-06-01

    This paper presents design optimization of an axial fan blade with a hybrid multi-objective evolutionary algorithm (hybrid MOEA). Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model are discretized by finite volume approximations and solved on hexahedral grids for the flow analyses. The numerical results were validated against experimental data for the axial and tangential velocities. Six design variables related to the blade lean angle and blade profile are selected, and Latin hypercube sampling of design of experiments is used to generate design points within the selected design space. Two objective functions, namely total efficiency and torque, are employed, and the multi-objective optimization is carried out to enhance total efficiency and to reduce the torque. The flow analyses are performed numerically at the design points to obtain values of the objective functions. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) with an ε-constraint strategy for local search, coupled with a surrogate model, is used for multi-objective optimization. The Pareto-optimal solutions are presented, and trade-off analysis is performed between the two competing objectives in view of the design and flow constraints. It is observed that total efficiency is enhanced and torque is decreased as compared to the reference design by the process of multi-objective optimization. The Pareto-optimal solutions are analyzed to understand the mechanism of the improvement in total efficiency and the reduction in torque.

  15. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  16. Formulation of a generalized experimental model for a manually driven flywheel motor and its optimization.

    PubMed

    Modak, J P; Bapat, A R

    1994-04-01

    A manually driven brick-making machine has recently been developed without the benefit of any design data. The machine consists of three main units: a pedal-driven flywheel motor; the transmission between the flywheel shaft and the input shaft of the process machine; and the process unit, consisting of auger, cone and die. The machine was essentially developed on the basis of general mechanical design experience and intuition. In spite of this, it proved to be functional and economically viable. However, it was felt essential to develop it scientifically. This paper reports on the full development of the pedal-driven flywheel motor. As this is a human-machine system it is highly unlikely that a logic-based model can be established. Therefore an experimental method is adopted to evolve a generalized experimental model, which is further optimized to satisfy several objective functions.

  17. Optimal design of one-dimensional photonic crystal filters using minimax optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Mohamed, Ahmed S A; Maghrabi, Mahmoud M T; Rafat, Nadia H

    2015-02-20

    In this paper, we introduce a simulation-driven optimization approach for achieving the optimal design of electromagnetic wave (EMW) filters consisting of one-dimensional (1D) multilayer photonic crystal (PC) structures. The PC layers' thicknesses and/or material types are considered as designable parameters. The optimal design problem is formulated as a minimax optimization problem that is solved entirely with readily available software tools. The proposed approach allows for the consideration of problems of higher dimension than usually treated before, and it can proceed from poor initial design points. The validity, flexibility, and efficiency of the proposed approach are demonstrated by applying it to obtain the optimal design of two practical examples. The first is a (SiC/Ag/SiO2)^N wide bandpass optical filter operating in the visible range. The second is an (Ag/SiO2)^N EMW low-pass spectral filter, working in the infrared range, which is used for enhancing the efficiency of thermophotovoltaic systems. The approach shows a good ability to converge to the optimal solution, for different design specifications, regardless of the starting design point. This ensures that the approach is robust and general enough to be applied to the optimal design of all promising applications of 1D photonic crystals.
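
The minimax formulation can be illustrated with a toy sketch: minimize, over the designable parameters, the worst-case deviation of a modeled response from the target specification. The `response` and `target` functions below are deliberately artificial stand-ins (a real 1D PC filter would use a transfer-matrix computation), and the crude random search is only a placeholder for the software tools the paper employs.

```python
import random

def minimax_objective(design, freqs, response, target):
    """Worst-case deviation of the modeled response from the target spec."""
    return max(abs(response(design, f) - target(f)) for f in freqs)

def random_search_minimax(freqs, response, target, bounds, iters=4000, seed=3):
    """Crude global search over the design box for the minimax optimum."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(iters):
        d = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = minimax_objective(d, freqs, response, target)
        if v < best_val:
            best, best_val = d, v
    return best, best_val
```

The key property the abstract emphasizes — insensitivity to a bad starting point — comes from the global nature of the outer search; any global solver could replace the random search here.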

  18. Spreadsheet Design: An Optimal Checklist for Accountants

    ERIC Educational Resources Information Center

    Barnes, Jeffrey N.; Tufte, David; Christensen, David

    2009-01-01

    Just as good grammar, punctuation, style, and content organization are important to well-written documents, basic fundamentals of spreadsheet design are essential to clear communication. In fact, the very principles of good writing should be integrated into spreadsheet workpaper design and organization. The unique contributions of this paper are…

  19. Computational Model Optimization for Enzyme Design Applications

    DTIC Science & Technology

    2007-11-02

    naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.

  20. Optimal design of spatial distribution networks

    NASA Astrophysics Data System (ADS)

    Gastner, Michael T.; Newman, M. E. J.

    2006-07-01

    We consider the problem of constructing facilities such as hospitals, airports, or malls in a country with a nonuniform population density, such that the average distance from a person’s home to the nearest facility is minimized. We review some previous approximate treatments of this problem, which indicate that the optimal distribution of facilities should have a density that increases with population density, but does so more slowly than linearly, as the two-thirds power. We confirm this result numerically for the particular case of the United States with recent population data using two independent methods, one a straightforward regression analysis, the other based on density-dependent map projections. We also consider strategies for linking the facilities to form a spatial network, such as a network of flights between airports, so that the combined cost of maintaining and traveling on the network is minimized. We show specific examples of such optimal networks for the case of the United States.
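
The two-thirds-power result suggests a simple allocation rule: distribute facilities across regions in proportion to (population density)^(2/3) times area. A minimal sketch, with hypothetical region data:

```python
def facility_allocation(pop_density, area, n_facilities):
    """Facility share per region, proportional to density^(2/3) * area.

    Implements the sublinear scaling found in the paper: an 8x denser
    region gets only 4x (= 8^(2/3)) the facilities per unit area.
    """
    weights = [rho ** (2.0 / 3.0) * a for rho, a in zip(pop_density, area)]
    total = sum(weights)
    return [n_facilities * w / total for w in weights]
```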

  1. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the hydrofoil efficiency, which is measured as a finite-time-averaged approximation of the infinite-time-averaged value of an ergodic and stationary process. Because the optimization algorithm takes into account the uncertainty of this finite-time-averaged approximation of the statistic of interest, the total computational time of the optimization is significantly reduced. Results from the two different approaches are compared.

  2. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model so as to allow conclusive comparisons of model predictions with experimental output in model assessment. Classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that adversely impacts these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of the two distributions. A simulated annealing algorithm is used to find optimal values of the input variables by minimizing or maximizing the expected cross entropy. The data measured after testing with the optimum input values are used to update the distribution of the experimental output using Bayes' theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides an effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
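
The simulated-annealing step of the methodology can be sketched generically. The snippet below minimizes an arbitrary scalar cost over one bounded input variable; in the paper the cost would be the expected cross entropy between the model-prediction and experimental-output distributions, which is not reproduced here.

```python
import math
import random

def simulated_annealing(cost, x0, bounds, n_steps=5000, step=0.1, t0=1.0, seed=1):
    """Minimize a scalar utility (e.g. expected cross entropy) over one input."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    for k in range(n_steps):
        t = t0 * (1.0 - k / n_steps) + 1e-9          # linear cooling schedule
        cand = min(max(x + rng.gauss(0.0, step), bounds[0]), bounds[1])
        cc = cost(cand)
        # accept improvements always; accept uphill moves with Boltzmann probability
        if cc < c or rng.random() < math.exp((c - cc) / t):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x, c
    return best_x, best_c
```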

  3. Optimization of variable density multilayer insulation for cryogenic application and experimental validation

    NASA Astrophysics Data System (ADS)

    Wang, B.; Huang, Y. H.; Li, P.; Sun, P. J.; Chen, Z. C.; Wu, J. Y.

    2016-12-01

    Cryogenic propellant storage on orbit is a crucial part of future space exploration, and efficient, reliable thermal insulation is one of the dominant technologies for long-duration missions. This paper presents a theoretical and experimental investigation of the thermal performance of variable density multilayer insulation (VDMLI) with different configurations and spacers. A practical method for optimizing the configuration of VDMLI is proposed: the internal temperature profiles are predicted iteratively and the thermal resistance is maximized based on a basic layer-by-layer model. A cryogen boil-off calorimeter system was designed and fabricated to measure the temperature profile and effective heat transfer coefficient of the VDMLI samples over a wide range of temperature (77-353 K). The experimental data confirm that the optimized sample has the minimum effective heat transfer coefficient in the control group, as predicted. The results indicate that the insulation performance of MLI can be improved by 45.5% after replacing the regular uniform configuration with the optimized variable density configuration. For the same optimized configuration, the performance was further improved by 54% by changing the spacer material from non-woven fiber cloth to Dacron net. It was also found that the effective heat transfer coefficient becomes much less sensitive to MLI thickness once the thickness exceeds 30 mm for the on-orbit thermal environment.
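
For intuition about why adding floating layers cuts the heat leak, a radiation-only idealization is useful: between two gray walls of equal emissivity, each additional floating shield divides the radiative flux by one more factor. This neglects conduction through spacers and the variable layer density the paper actually optimizes, so it is a teaching sketch, not the layer-by-layer model used in the study.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def shield_stack_flux(t_hot, t_cold, n_shields, emissivity=1.0):
    """Radiative heat flux through n floating shields between two walls.

    Idealized: all surfaces share one emissivity and spacer conduction is
    neglected, so each added shield divides the flux by one more factor.
    """
    e_eff = emissivity / (2.0 - emissivity)   # exchange factor, equal gray plates
    q0 = SIGMA * e_eff * (t_hot ** 4 - t_cold ** 4)
    return q0 / (n_shields + 1)
```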

  4. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    ERIC Educational Resources Information Center

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  5. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  6. Teaching Experimental Design to Elementary School Pupils in Greece

    ERIC Educational Resources Information Center

    Karampelas, Konstantinos

    2016-01-01

    This research is a study about the possibility to promote experimental design skills to elementary school pupils. Experimental design and the experiment process are foundational elements in current approaches to Science Teaching, as they provide learners with profound understanding about knowledge construction and science inquiry. The research was…

  7. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  8. Optimizing experimental procedures for quantitative evaluation of crop plant performance in high throughput phenotyping systems

    PubMed Central

    Junker, Astrid; Muraya, Moses M.; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Klukas, Christian; Melchinger, Albrecht E.; Meyer, Rhonda C.; Riewe, David; Altmann, Thomas

    2015-01-01

    Detailed and standardized protocols for plant cultivation in environmentally controlled conditions are an essential prerequisite to conduct reproducible experiments with precisely defined treatments. Setting up appropriate and well defined experimental procedures is thus crucial for the generation of solid evidence and indispensable for successful plant research. Non-invasive and high throughput (HT) phenotyping technologies offer the opportunity to monitor and quantify performance dynamics of several hundreds of plants at a time. Compared to small scale plant cultivations, HT systems have much higher demands, from a conceptual and a logistic point of view, on experimental design, as well as the actual plant cultivation conditions, and the image analysis and statistical methods for data evaluation. Furthermore, cultivation conditions need to be designed that elicit plant performance characteristics corresponding to those under natural conditions. This manuscript describes critical steps in the optimization of procedures for HT plant phenotyping systems. Starting with the model plant Arabidopsis, HT-compatible methods were tested, and optimized with regard to growth substrate, soil coverage, watering regime, experimental design (considering environmental inhomogeneities) in automated plant cultivation and imaging systems. As revealed by metabolite profiling, plant movement did not affect the plants' physiological status. Based on these results, procedures for maize HT cultivation and monitoring were established. Variation of maize vegetative growth in the HT phenotyping system did match well with that observed in the field. The presented results outline important issues to be considered in the design of HT phenotyping experiments for model and crop plants. It thereby provides guidelines for the setup of HT experimental procedures, which are required for the generation of reliable and reproducible data of phenotypic variation for a broad range of applications. 

  9. Optimal design of a piezoelectric transducer for exciting guided wave ultrasound in rails

    NASA Astrophysics Data System (ADS)

    Ramatlo, Dineo A.; Wilke, Daniel N.; Loveday, Philip W.

    2017-02-01

    An existing Ultrasonic Broken Rail Detection System installed in South Africa on a heavy duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer to strongly excite a guided wave mode with energy concentrated in the web (web mode) of a rail is required. A previous study demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element - 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin hypercube sampled design of experiments that required a total of 500 SAFE-3D analyses in the design space was employed in a response surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in optimal transducer design are noteworthy as more than 500 analyses were performed. The optimal design was then manufactured and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
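
The response-surface step of this workflow — fit an interpolant to a modest number of expensive analyses, then optimize on the interpolant — can be sketched with a Gaussian radial basis function surface in one variable. The sampled responses here are hypothetical stand-ins for SAFE-3D results, and a dense grid scan would substitute for the Nelder-Mead search used in the paper.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_surface(xs, ys, eps=1.0):
    """Fit a Gaussian RBF interpolant through sampled (x, y) responses."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))
```

Minimizing the fitted surface, e.g. `min((s(0.01 * i), 0.01 * i) for i in range(201))`, then plays the role the Nelder-Mead algorithm plays on the paper's response surface.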

  10. A study of commuter airplane design optimization

    NASA Technical Reports Server (NTRS)

    Keppel, B. V.; Eysink, H.; Hammer, J.; Hawley, K.; Meredith, P.; Roskam, J.

    1978-01-01

    The usability of the general aviation synthesis program (GASP) was enhanced by the development of separate computer subroutines which can be added as a package to this assembly of computerized design methods, or used as a separate subroutine program to compute the dynamic longitudinal and lateral-directional stability characteristics of a given airplane. Currently available analysis methods were evaluated to ascertain those most appropriate for the design functions which the GASP computerized design program performs. Methods for providing proper constraint and/or analysis functions for GASP were developed, as well as the appropriate subroutines.

  11. Sequential ensemble-based optimal design for parameter estimation

    NASA Astrophysics Data System (ADS)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
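
For Gaussian distributions, the information metrics named above have closed forms; a sketch for the univariate case (the paper works with ensemble statistics in higher dimensions):

```python
import math

def gaussian_kl(mu0, var0, mu1, var1):
    """Relative entropy D( N(mu0,var0) || N(mu1,var1) ) for scalars."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0
                  + math.log(var1 / var0))

def entropy_difference(var_prior, var_post):
    """Shannon entropy reduction when a Gaussian variance shrinks."""
    return 0.5 * math.log(var_prior / var_post)
```

A sampling design that maximizes such a metric is one whose measurements, in expectation, shrink the posterior the most relative to the prior.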

  12. An algorithm for optimal structural design with frequency constraints

    NASA Technical Reports Server (NTRS)

    Kiusalaas, J.; Shaw, R. C. J.

    1978-01-01

    The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies, and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) only a small number of design iterations is needed to reach an optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness relationships may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm works for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.

  13. An interactive system for aircraft design and optimization

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan M.

    1992-01-01

    A system for aircraft design utilizing a unique analysis architecture, graphical interface, and suite of numerical optimization methods is described in this paper. The non-procedural architecture provides extensibility and efficiency not possible with conventional programming techniques. The interface for analysis and optimization, developed for use with this method, is described and its application to example problems is discussed.

  14. Optimizing Measurement Designs with Budget Constraints: The Variable Cost Case.

    ERIC Educational Resources Information Center

    Marcoulides, George A.

    1997-01-01

    Presents a procedure for determining the optimal number of conditions to use in multifaceted measurement designs when resource constraints are imposed. The procedure is illustrated for the case in which the costs per condition vary within the same facet. (Author)
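
A standard way to make such a procedure concrete is the square-root allocation rule from budget-constrained optimal design: minimizing the total error variance sum(var_i / n_i) subject to sum(c_i * n_i) = B gives n_i proportional to sqrt(var_i / c_i). The sketch below applies that textbook rule; it is not necessarily the article's exact procedure, and the variance components and costs are hypothetical.

```python
import math

def optimal_conditions(var_components, costs, budget):
    """Conditions per facet minimizing error variance under a cost budget.

    Square-root rule: n_i proportional to sqrt(var_i / cost_i), scaled so
    that total spending sum(n_i * cost_i) meets the budget.
    """
    raw = [math.sqrt(v / c) for v, c in zip(var_components, costs)]
    spend = sum(r * c for r, c in zip(raw, costs))
    scale = budget / spend
    return [max(1, round(scale * r)) for r in raw]
```

Note how the rule favors facets whose conditions are cheap or whose variance components are large, which is the intuition behind varying costs within a facet.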

  15. Optimal design against collapse after buckling. [of beams

    NASA Technical Reports Server (NTRS)

    Masur, E. F.

    1976-01-01

    After buckling, statically indeterminate trusses, beams, and other strictly symmetric structures may collapse under loads which reach limiting magnitudes. Optimal design is discussed for prescribed values of these collapse loads.

  16. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small-animal experimental models are frequently used, since many questions about the biological and biochemical effects of ionizing radiation remain unsolved. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV Linac. The rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. Comparisons between Pencil Beam and Monte Carlo techniques were used to evaluate the accuracy of the dose calculated with a commercial planning system. Challenges in this murine model are discussed.

  18. Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems

    PubMed Central

    Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.

    2016-01-01

    Purpose: The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques, making it possible to design patient‐specific sequences online. Theory and Methods: The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem, and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results: Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences on the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion: The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. PMID:26800383

  19. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity, and the central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  20. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background: Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that the data provide to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models, and one would like to perform a new experiment which would discriminate between the competing hypotheses. Results: We developed a novel method to perform optimal experiment design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-nearest-neighbor estimate of the Jensen–Shannon divergence between the multivariate predictive densities of competing models. Conclusions: We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data are scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
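
The design criterion rests on the Jensen–Shannon divergence between predictive densities. The paper estimates it from samples with a k-nearest-neighbor estimator; the discrete (histogram) form below shows the quantity being estimated in its simplest self-contained form.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric, bounded by log(2), and zero only when p == q, which makes
    it a natural score for how well an experiment separates two models.
    """
    m = [0.5 * (pi + qi) for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

An experiment whose two competing models yield predictive distributions with a large JS divergence is, under this criterion, the most informative one to run next.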