Sample records for experimental design optimization

  1. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The quality of the optimal design is compared with that of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  2. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, are difficult to handle with existing experimental procedures, for two reasons. First, the existing procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment; for the experimental scenarios of concern, however, a sound model is often unavailable before an experiment. Second, those scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data-collection cycle, and the existing procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new procedure is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, performing function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is

  3. D-Optimal Experimental Design for Contaminant Source Identification

    NASA Astrophysics Data System (ADS)

    Sai Baba, A. K.; Alexanderian, A.

    2016-12-01

    Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be expressed mathematically as an inverse problem with a linear observation operator, or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters (in our case, the sparsity of the sensors) to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental design involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient, each involving the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational cost of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
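As a deliberately tiny illustration of the D-optimality criterion discussed in this record: for a linear-Gaussian inverse problem, one standard surrogate is to select sensors greedily so as to maximize log det(I + AᵀA) over the chosen sensor rows. The sketch below is not the authors' randomized-matrix method; the candidate rows, dimensions, and sensor budget are invented for the example.

```python
import math
import random

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    n = len(m)
    a = [row[:] for row in m]
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def fisher(rows, k):
    """M = I + sum of outer products of the chosen sensor rows."""
    m = [[float(i == j) for j in range(k)] for i in range(k)]
    for r in rows:
        for i in range(k):
            for j in range(k):
                m[i][j] += r[i] * r[j]
    return m

def greedy_d_optimal(candidates, k, budget):
    """Greedily add the sensor row that most increases det(I + A^T A)."""
    chosen = []
    for _ in range(budget):
        best = max((r for r in candidates if r not in chosen),
                   key=lambda r: det(fisher(chosen + [r], k)))
        chosen.append(best)
    return chosen

random.seed(0)
k = 3                                   # parameter dimension (invented)
cands = [[random.gauss(0, 1) for _ in range(k)] for _ in range(20)]
sel = greedy_d_optimal(cands, k, budget=5)
logdet = math.log(det(fisher(sel, k)))  # the D-optimality score achieved
```

For realistic PDE-constrained problems the determinant of I + AᵀA is far too large to form directly, which is exactly the bottleneck the record's randomized estimators target; this sketch only conveys the selection logic.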

  4. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  5. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but an aspect gaining importance is the acquisition of suitable datasets, which can be addressed through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental ...). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines linearized inverse theory (to evaluate the quality of a given design via the objective function) with stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs fitting several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations has the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we may be interested in maximizing the information we get about the reservoir layer; in that case, we show how the combination of two different objective functions considerably improves its resolution.
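A minimal sketch of the multi-objective survey-design idea in this record: evolve a population of candidate receiver layouts, keep the Pareto front under two competing objectives, and mutate survivors. The objectives below (a crude "resolution shortfall" based on receiver spread, and receiver count as cost) are invented stand-ins for the paper's CSEM objective functions, which require forward modelling.

```python
import random

random.seed(1)
N = 12  # candidate receiver sites along a survey line (invented)

def objectives(design):
    """Toy objectives: f1 = squared resolution shortfall (lower is better),
    f2 = survey cost (number of active receivers)."""
    active = [i for i, b in enumerate(design) if b]
    if not active:
        return (1e9, 0)
    spread = max(active) - min(active) + 1   # crude coverage proxy
    return ((N - spread) ** 2, len(active))

def dominates(a, b):
    """Pareto dominance: no worse in both objectives, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    scored = [(d, objectives(d)) for d in pop]
    return [d for d, s in scored
            if not any(dominates(t, s) for _, t in scored)]

def mutate(design):
    child = design[:]
    i = random.randrange(N)
    child[i] = 1 - child[i]               # toggle one receiver site
    return child

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(30)]
for _ in range(50):                        # keep the front, refill by mutation
    front = pareto_front(pop)
    pop = front + [mutate(random.choice(front)) for _ in range(30 - len(front))]

front = pareto_front(pop)                  # trade-off curve: coverage vs. cost
```

The survivors trace the trade-off the record describes: instead of one "best" survey, the algorithm returns a family of designs, from cheap-but-coarse to expensive-but-well-resolving.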

  6. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system designed to suppress any unwanted structural vibration, and it can be designed with minimal knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency-matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  7. Optimizing Associative Experimental Design for Protein Crystallization Screening

    PubMed Central

    Dinç, Imren; Pusey, Marc L.; Aygün, Ramazan S.

    2016-01-01

    The goal of protein crystallization screening is to determine the main factors of importance for crystallizing the protein under investigation. One major issue in determining these factors is that screening is often expanded to many hundreds or thousands of conditions to maximize coverage of the combinatorial chemical space, and thereby the chances of a successful (crystalline) outcome. In this paper, we propose an experimental design method called "Associative Experimental Design (AED)" together with an optimization method that includes eliminating prohibited combinations and prioritizing reagents based on AED analysis of results from protein crystallization experiments. AED generates candidate cocktails based on these initial screening results, which are analyzed to determine the screening factors in chemical space most likely to lead to higher-scoring outcomes (crystals). We have tested AED on three proteins derived from the hyperthermophile Thermococcus thioreducens, and we applied the optimization method to these proteins. Our AED method generated novel cocktails (count provided in parentheses) leading to crystals for three proteins as follows: Nucleoside diphosphate kinase (4), HAD superfamily hydrolase (2), Nucleoside kinase (1). After obtaining these promising results, we tested our optimization method on four different proteins. The AED method with optimization yielded 4, 3, and 20 crystalline conditions for holo Human Transferrin, archaeal exosome protein, and Nucleoside diphosphate kinase, respectively. PMID:26955046

  8. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of a multivariate experimental-design strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximization of the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data for the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) in the drug's anaesthetic effect with respect to the starting reference liposomal formulation, demonstrating its genuinely better therapeutic effectiveness.

  9. Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.

    PubMed

    Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan

    2013-01-01

    In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.

  10. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer agreement for the configuration designed by the full-potential code.

  11. Improving Navy Recruiting with the New Planned Resource Optimization Model With Experimental Design (PROM-WED)

    DTIC Science & Technology

    2017-03-01

    Thesis by Allison R. Hogarth, March 2017. ... has historically used a non-linear optimization model, the Planned Resource Optimization (PRO) model, to help inform decisions on the allocation of ...

  12. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    PubMed

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-10-16

    A D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components (pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w)) was investigated with respect to the melting-point properties of the lipstick formulation. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be described as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factor levels, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax was the most influential factor on this response (melting point), owing to its heat endurance. The quadratic polynomial model fit the experimental data sufficiently well.
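The "quadratic function of the main components" in this record is, in mixture-design terms, a Scheffé polynomial: linear terms in the component fractions plus pairwise blending terms, with no intercept because fractions sum to one. Below is a hedged sketch, reduced to three components with invented coefficients, of simulating runs from such a model and recovering it by least squares; it is not the authors' design, data, or five-component formulation.

```python
import random

random.seed(2)

def scheffe_row(x):
    """Scheffé quadratic terms for a 3-component mixture (x sums to 1):
    x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept in mixture models)."""
    x1, x2, x3 = x
    return [x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting, for normal equations."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                for c in range(i, n + 1):
                    m[r][c] -= f * m[i][c]
    return [m[i][n] / m[i][i] for i in range(n)]

TRUE = [46.0, 52.0, 60.0, -8.0, 5.0, -3.0]   # hypothetical coefficients (degC)

def melting_point(x, beta):
    return sum(t * b for t, b in zip(scheffe_row(x), beta))

# simulated runs: random blends summing to 1, response = truth + noise
blends = []
for _ in range(20):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    blends.append([v / s for v in w])
y = [melting_point(x, TRUE) + random.gauss(0, 0.1) for x in blends]

# least-squares fit of the quadratic mixture model via normal equations
X = [scheffe_row(x) for x in blends]
k = 6
XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
beta = solve(XtX, Xty)

pred = melting_point([0.25, 0.45, 0.30], beta)   # predict an untried blend
```

A D-optimal mixture design, as used in the record, would replace the random blends with a subset of candidate blends chosen to maximize det(XᵀX), tightening the coefficient estimates for the same number of runs.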

  13. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
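The design idea in this record (choose experimental parameters that maximize the temperature derivatives with respect to the unknown properties, which in turn minimizes the confidence intervals of the estimates) can be sketched with a toy lumped-capacitance model standing in for the composite conduction analysis; the parameter values and candidate heating times below are invented.

```python
import math

def temperature(t, h, C, q=100.0):
    """Lumped-capacitance stand-in for the 1-D conduction model:
    constant heat input q into a body with loss coefficient h, capacity C."""
    return (q / h) * (1.0 - math.exp(-h * t / C))

def sensitivities(times, h, C, eps=1e-6):
    """Finite-difference sensitivity coefficients dT/dh and dT/dC."""
    rows = []
    for t in times:
        dh = (temperature(t, h * (1 + eps), C) - temperature(t, h, C)) / (h * eps)
        dC = (temperature(t, h, C * (1 + eps)) - temperature(t, h, C)) / (C * eps)
        rows.append((dh, dC))
    return rows

def d_criterion(t_end, h=2.0, C=50.0, n=20):
    """det of the 2x2 Gram matrix of sensitivities sampled on (0, t_end];
    larger det means jointly better-determined property estimates."""
    times = [t_end * (i + 1) / n for i in range(n)]
    s = sensitivities(times, h, C)
    a = sum(r[0] * r[0] for r in s)
    b = sum(r[0] * r[1] for r in s)
    c = sum(r[1] * r[1] for r in s)
    return a * c - b * b

candidates = [5.0, 10.0, 25.0, 50.0, 100.0, 200.0]  # candidate heating times
best_t = max(candidates, key=d_criterion)
```

Too short an experiment leaves both sensitivities small; too long a one puts most samples at steady state, where the capacity sensitivity vanishes. The criterion peaks in between, which is the qualitative effect the record exploits when optimizing heating time, sensor location, and experiment duration.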

  14. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.

  15. Issues and recent advances in optimal experimental design for site investigation (Invited)

    NASA Astrophysics Data System (ADS)

    Nowak, W.

    2013-12-01

    This presentation provides an overview of issues and recent advances in model-based experimental design for site exploration. The issues and advances addressed are (1) how to provide an adequate envelope for prior uncertainty, (2) how to define the information needs in a task-oriented manner, (3) how to measure the expected impact of a data set that is not yet available but only planned to be collected, and (4) how best to perform the optimization of the data collection plan. Among other shortcomings of the state of the art, it is identified that there is a lack of demonstrator studies in which exploration schemes based on expert judgment are compared to exploration schemes obtained by optimal experimental design. Such studies will be necessary to address the often-voiced concern that experimental design is an academic exercise with little improvement potential over the well-trained gut feeling of field experts. When addressing this concern, a specific focus has to be given to uncertainty in model structure, parameterizations and parameter values, and to the related surprises that data often bring about in field studies, but never in studies based on synthetic data. The background of this concern is that, initially, conceptual uncertainty may be so large that surprises are the rule rather than the exception. In such situations, field experts have a large body of experience in handling the surprises, and expert judgment may be good enough compared to meticulous optimization based on a model that is about to be falsified by the incoming data. In order to meet surprises and adapt to them, there needs to be a sufficient representation of conceptual uncertainty within the models used. Also, it is useless to optimize an entire design under this initial range of uncertainty. Thus, the goal setting of the optimization should include the objective of reducing conceptual uncertainty.
A possible way out is to upgrade experimental design theory towards real-time interaction

  16. Near-optimal experimental design for model selection in systems biology.

    PubMed

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M

    2013-10-15

    Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
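The near-optimality guarantee this record alludes to can be illustrated with a toy surrogate: treat each candidate readout/time point as distinguishing some set of model pairs, and greedily maximize the number of pairs distinguished under a budget. For a monotone submodular objective of this kind, greedy selection is guaranteed at least (1 − 1/e) of the optimal value. The experiment names and covered pairs below are invented; this is not the NearOED toolbox.

```python
# Each candidate experiment distinguishes some pairs of competing models
# (model indices 0-3). All sets here are invented for illustration.
experiments = {
    "readout_A_t1": {(0, 1), (0, 2)},
    "readout_A_t2": {(1, 2), (1, 3)},
    "readout_B_t1": {(0, 3), (2, 3)},
    "readout_B_t2": {(0, 1), (1, 3), (2, 3)},
}

def gain(chosen, cand):
    """Marginal number of newly distinguished model pairs."""
    covered = set().union(*[experiments[e] for e in chosen]) if chosen else set()
    return len(covered | experiments[cand]) - len(covered)

def greedy(budget):
    """Greedy maximization of a monotone submodular coverage objective;
    achieves at least a (1 - 1/e) fraction of the optimal coverage."""
    chosen = []
    for _ in range(budget):
        chosen.append(max((e for e in experiments if e not in chosen),
                          key=lambda e: gain(chosen, e)))
    return chosen

plan = greedy(2)   # pick the two most informative experiments
```

With these sets, the greedy plan first takes `readout_B_t2` (three pairs), then whichever remaining experiment adds a new pair, ending with four of the six model pairs distinguishable, here actually the optimum for a budget of two.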

  17. Optimizing Experimental Designs: Finding Hidden Treasure.

    USDA-ARS?s Scientific Manuscript database

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  18. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.

  19. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
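A minimal sketch of the prediction-deviation idea, with a straight-line model and random search standing in for the record's dynamical models and optimization: among all parameter settings that still fit the observed data acceptably, how far apart can predictions at an unobserved point be? The data, parameter bounds, and tolerance rule below are invented.

```python
import random

random.seed(3)
# toy observations (stand-in for the paper's dynamical-system data)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.2, 1.9, 3.2]

def sse(a, b):
    """Sum of squared errors of the line y = a*x + b on the data."""
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

def sample():
    return (random.uniform(-2, 2), random.uniform(-2, 2))

best = min((sample() for _ in range(20000)), key=lambda p: sse(*p))
tol = 1.5 * sse(*best) + 0.05   # "acceptable fit" band around the best loss

# prediction deviation at an unobserved x*: spread of predictions across
# all acceptably fitting models found by random search
x_star = 5.0
good = [best] + [p for p in (sample() for _ in range(20000)) if sse(*p) <= tol]
preds = [a * x_star + b for a, b in good]
deviation = max(preds) - min(preds)
```

A small deviation means the data already pin down the prediction at x*; a large one flags exactly the kind of unconstrained extrapolation the record proposes to shrink by choosing the next experiment.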

  20. End-point controller design for an experimental two-link flexible manipulator using convex optimization

    NASA Technical Reports Server (NTRS)

    Oakley, Celia M.; Barratt, Craig H.

    1990-01-01

    Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.

  1. Optimal experimental design in an epidermal growth factor receptor signalling and down-regulation model.

    PubMed

    Casey, F P; Baird, D; Feng, Q; Gutenkunst, R N; Waterfall, J J; Myers, C R; Brown, K S; Cerione, R A; Sethna, J P

    2007-05-01

    We apply the methods of optimal experimental design to a differential equation model for epidermal growth factor receptor signalling, trafficking and down-regulation. The model incorporates the role of a recently discovered protein complex made up of the E3 ubiquitin ligase, Cbl, the guanine exchange factor (GEF), Cool-1 (beta-Pix) and the Rho family G protein Cdc42. The complex has been suggested to be important in disrupting receptor down-regulation. We demonstrate that the model interactions can accurately reproduce the experimental observations, that they can be used to make predictions with accompanying uncertainties, and that we can apply ideas of optimal experimental design to suggest new experiments that reduce the uncertainty on unmeasurable components of the system.

  2. Optimal experimental design for parameter estimation of a cell signaling model.

    PubMed

    Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias

    2009-11-01

    Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.
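The "minimize the predicted uncertainty of parameter estimates" step in this record can be sketched for a toy exponential-decay readout: score each candidate measurement time by the trace of the inverse Fisher information (an A-optimality criterion; the paper's PIP3 model, software, and criterion details differ) and pick the best next time point. All numbers below are invented.

```python
import math

def sens(t, A, k):
    """Analytic sensitivities of y = A*exp(-k*t): (dy/dA, dy/dk)."""
    e = math.exp(-k * t)
    return (e, -A * t * e)

def param_variance(times, A=2.0, k=0.5, sigma=0.1):
    """Trace of the inverse 2x2 Fisher information (A-optimality):
    summed variance of the two parameter estimates."""
    a = b = c = 0.0
    for t in times:
        sA, sk = sens(t, A, k)
        a += sA * sA
        b += sA * sk
        c += sk * sk
    det = a * c - b * b
    if det <= 1e-12:
        return float("inf")
    return sigma ** 2 * (a + c) / det   # trace of sigma^2 * M^{-1}

measured = [0.5, 1.0]                   # times already sampled (invented)
candidates = [0.1, 2.0, 4.0, 8.0]       # possible next measurement times
next_t = min(candidates, key=lambda t: param_variance(measured + [t]))
```

Iterating this choose-measure-refit loop is the cycle the record describes; in their assay two such cycles shrank the mean variance of the estimates more than sixty-fold.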

  3. Optimal design of disc-type magneto-rheological brake for mid-sized motorcycle: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-08-01

    In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, total mass of the MR brake, and cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rate. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, mechanical performances regarding the response time, braking torque and cruising temperature are experimentally evaluated.
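
The braking-torque calculation described above integrates the Herschel-Bulkley shear stress over the disc faces. A minimal numerical sketch of that integral; the disc dimensions, gap, and field-on fluid constants below are hypothetical placeholders, not the paper's optimized values.

```python
import math

def braking_torque(r_in, r_out, omega, gap, tau_y, K, n, faces=2, steps=2000):
    # T = faces * 2*pi * integral_{r_in}^{r_out} r^2 * tau(r) dr, with the
    # Herschel-Bulkley stress tau = tau_y + K * gamma_dot^n and the shear
    # rate gamma_dot = r*omega/gap across the fluid gap (midpoint rule).
    dr = (r_out - r_in) / steps
    total = 0.0
    for i in range(steps):
        r = r_in + (i + 0.5) * dr
        tau = tau_y + K * (r * omega / gap) ** n
        total += r * r * tau * dr
    return faces * 2 * math.pi * total

# hypothetical geometry (m), speed (rad/s), and fluid constants
T = braking_torque(r_in=0.05, r_out=0.12, omega=50.0, gap=1e-3,
                   tau_y=30e3, K=0.5, n=0.8)
```

For n = 1 the model reduces to a Bingham fluid with a closed-form torque, which is a convenient sanity check on the quadrature.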

  4. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FLO67 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database of off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free stream Mach number, M(sub infinity), of 2.55 as well as the design Mach number, M(sub infinity)=2.4. Data over a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Data at transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were gathered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.

  5. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors, x₁ and x₂: a formulation factor (the amount of magnesium stearate) and a processing factor (mixing time), respectively. Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g with a 2.76 min mixing time for a 100 tablet batch, and 1.54 g with 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g with 7.99 min for a 100 tablet batch, and 1.54 g with 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence for obtaining optimum formulations, offering a systematic and reliable approach to experimental design.

  6. Application of iterative robust model-based optimal experimental design for the calibration of biocatalytic models.

    PubMed

    Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar

    2017-09-01

The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers. Biotechnol. Prog., 33:1278-1293, 2017.
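
The comparison drawn above, linearized (Fisher-information) confidence regions versus likelihood-based ones, can be reproduced on a toy one-parameter nonlinear model y = exp(θx). The data, noise level, and parameter grid below are invented for illustration and do not come from the paper.

```python
import math

# invented noisy observations of y = exp(1.2 * x), with known sigma = 0.03
eps = [0.03, -0.02, 0.04, -0.01, 0.02, -0.03, 0.01, 0.02]
data = [(0.25 * (i + 1), math.exp(1.2 * 0.25 * (i + 1)) + e)
        for i, e in enumerate(eps)]
sigma = 0.03

def sse(theta):
    # sum of squared errors for the nonlinear model y = exp(theta * x)
    return sum((y - math.exp(theta * x)) ** 2 for x, y in data)

grid = [1.0 + 0.0001 * i for i in range(4001)]      # theta in [1.0, 1.4]
theta_hat = min(grid, key=sse)

# Wald interval from the Fisher information (linearization): symmetric
info = sum((x * math.exp(theta_hat * x)) ** 2 for x, _ in data) / sigma ** 2
se = 1 / math.sqrt(info)
wald = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)

# Likelihood-ratio interval: {theta : SSE(theta) <= SSE_min + 3.84 * sigma^2}
cut = sse(theta_hat) + 3.84 * sigma ** 2
inside = [t for t in grid if sse(t) <= cut]
likelihood = (min(inside), max(inside))
```

The Wald interval is symmetric by construction, while the likelihood interval follows the curvature of the objective; for strongly nonlinear models the two can diverge, which is the deviation the authors report.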

  7. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  8. Optimal rates for phylogenetic inference and experimental design in the era of genome-scale datasets.

    PubMed

    Dornburg, Alex; Su, Zhuo; Townsend, Jeffrey P

    2018-06-25

With the rise of genome-scale datasets there has been a call for increased data scrutiny and careful selection of loci appropriate for attempting the resolution of a phylogenetic problem. Such loci are desired to maximize phylogenetic information content while minimizing the risk of homoplasy. Theory posits the existence of characters that evolve at such an optimum rate, and efforts to determine optimal rates of inference have been a cornerstone of phylogenetic experimental design for over two decades. However, both theoretical and empirical investigations of optimal rates have varied dramatically in their conclusions, ranging from no relationship to a tight relationship between the rate of change and phylogenetic utility. Here we synthesize these apparently contradictory views, demonstrating both empirical and theoretical conditions under which each is correct. We find that optimal rates of characters, not genes, are generally robust to most experimental design decisions. Moreover, consideration of site rate heterogeneity within a given locus is critical to accurate predictions of utility. Factors such as taxon sampling or the targeted number of characters providing support for a topology are additionally critical to the predictions of phylogenetic utility based on the rate of character change. Further, optimality of rates and predictions of phylogenetic utility are not equivalent, demonstrating the need for further development of a comprehensive theory of phylogenetic experimental design.

  9. Time-oriented experimental design method to optimize hydrophilic matrix formulations with gelation kinetics and drug release profiles.

    PubMed

    Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon

    2011-04-04

    A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.
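
A simplex-lattice mixture design, used in the record above and in record 5, takes component proportions from {0, 1/m, ..., 1} that sum to one. A generic generator is short; the {6, 2} lattice shown happens to produce 21 candidate blends, matching the run count reported above, though the paper's actual component constraints are not reproduced here.

```python
from itertools import combinations_with_replacement

def simplex_lattice(q, m):
    # {q, m} simplex-lattice design: all q-component mixtures whose
    # proportions are multiples of 1/m and sum to 1.
    points = set()
    for combo in combinations_with_replacement(range(q), m):
        p = [0] * q
        for c in combo:
            p[c] += 1
        points.add(tuple(x / m for x in p))
    return sorted(points)

design = simplex_lattice(6, 2)   # C(6+2-1, 2) = 21 candidate mixtures
```

The design size is C(q+m-1, m), so lattice order m trades run count against the ability to fit higher-order blending terms.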

  10. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  11. Factorial experimental design intended for the optimization of the alumina purification conditions

    NASA Astrophysics Data System (ADS)

    Brahmi, Mounaouer; Ba, Mohamedou; Hidri, Yassine; Hassen, Abdennaceur

    2018-04-01

The objective of this study was to determine, using the experimental design methodology, the optimal conditions for the removal of some impurities associated with alumina. Three alumina qualities of different origins were investigated under the same conditions. Full-factorial designs applied to the samples of different alumina qualities were used to follow the removal rate of sodium oxide, and a factorial experimental design was developed to describe the elimination of the sodium oxide associated with the alumina. Chemical analysis by XRF prior to treatment of the samples gave a preliminary picture of the prevailing impurities and showed that sodium oxide constituted the largest share among them. After applying the experimental design, analysis of the effects of the different factors and their interactions showed that, to obtain a better result, the quantity of alumina investigated should be reduced and the stirring time increased for the first two samples, whereas the quantity of alumina should be increased for the third sample. To extend and improve this research, all existing impurities should be taken into account, since the levels of some impurities increased after the treatment.
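
For two-level factors, the effect analysis behind a full-factorial study like this reduces to differences of averages: each main effect is the mean response at the factor's high level minus the mean at its low level. A small sketch with invented removal-rate responses (the factor names and coefficients are hypothetical, not the study's data):

```python
from itertools import product

def main_effects(results):
    # results: dict mapping (+1/-1, ...) coded factor-level tuples to a
    # response; returns mean(high) - mean(low) for each factor.
    k = len(next(iter(results)))
    effects = []
    for f in range(k):
        hi = [y for lv, y in results.items() if lv[f] == +1]
        lo = [y for lv, y in results.items() if lv[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# invented removal rates (%) for a 2^3 design over
# (alumina amount, stirring time, temperature)
runs = {lv: 50 - 4 * lv[0] + 6 * lv[1] + 1 * lv[2]
        for lv in product((-1, 1), repeat=3)}
print(main_effects(runs))  # -> [-8.0, 12.0, 2.0]
```

Because the design is balanced, each effect isolates one factor: the other terms average out exactly, which is why the recovered effects are twice the coefficients used to generate the data.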

  12. Robust experimental design for optimizing the microbial inhibitor test for penicillin detection in milk.

    PubMed

    Nagel, O G; Molina, M P; Basílico, J C; Zapata, M L; Althaus, R L

    2009-06-01

The aim was to use experimental design techniques and a multiple logistic regression model to optimize a microbiological inhibition test with dichotomous response for the detection of penicillin G in milk. A 2³ × 2² robust experimental design with two replications was used. The effects of three control factors (V: culture medium volume, S: spore concentration of Geobacillus stearothermophilus, I: indicator concentration), two noise factors (Dt: diffusion time, Ip: incubation period) and their interactions were studied. The V, S, Dt and Ip factors and the V × S, V × Ip and S × Ip interactions showed significant effects. The use of a 100 μl culture medium volume, 2 × 10⁵ spores ml⁻¹, a 60 min diffusion time and a 3 h incubation period is recommended. Under these conditions, the penicillin detection limit was 3.9 μg l⁻¹, similar to the maximum residue limit (MRL). Of the two noise factors studied, the incubation period can be controlled by means of the culture medium volume and spore concentration. We were able to optimize bioassays with dichotomous response using an experimental design and a logistic regression model for the detection of residues at the level of the MRL, aiding in the avoidance of health problems in the consumer.
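
The multiple logistic regression behind such a dichotomous-response optimization can be sketched with a plain gradient-ascent fit. The coded factor levels and detection outcomes below are invented stand-ins for two of the study's factors, not its data:

```python
import math

def fit_logistic(X, y, lr=0.5, iters=3000):
    # Maximum-likelihood fit of P(y=1|x) = 1/(1 + exp(-(w0 + w.x)))
    # by batch gradient ascent on the log-likelihood.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = yi - 1 / (1 + math.exp(-z))
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

# coded levels for (culture-medium volume, spore concentration), 2 reps each,
# with invented detection outcomes (1 = inhibition detected)
X = [(-1, -1), (-1, -1), (-1, 1), (-1, 1),
     (1, -1), (1, -1), (1, 1), (1, 1)]
y = [1, 1, 1, 0, 0, 1, 0, 0]
w = fit_logistic(X, y)
```

With detection rarer at the high levels in this toy data, both fitted slopes come out negative; in the real assay the fitted coefficients quantify how each control factor shifts the detection probability at a given residue concentration.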

  13. Fast Synthesis of Gibbsite Nanoplates and Process Optimization using Box-Behnken Experimental Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xin; Zhang, Xianwen; Graham, Trent R.

Developing the ability to synthesize compositionally and morphologically well-defined gibbsite particles at the nanoscale with high yield is an ongoing need that has not yet achieved the level of rational design. Here we report optimization of a clean inorganic synthesis route based on statistical experimental design examining the influence of Al(OH)3 gel precursor concentration, pH, and aging time at temperature. At 80 °C, the optimum synthesis conditions of gel concentration at 0.5 M, pH at 9.2, and time at 72 h maximized the reaction yield up to ~87%. The resulting gibbsite product is composed of highly uniform euhedral hexagonal nanoplates within a basal plane diameter range of 200-400 nm. The independent roles of key system variables in the growth mechanism are considered. On the basis of these optimized experimental conditions, the synthesis procedure, which is both cost-effective and environmentally friendly, has the potential for mass production scale-up of high quality gibbsite material for various fundamental research and industrial applications.
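
A Box-Behnken design for k three-level factors combines a two-level factorial over each pair of factors (with the remaining factors held at the center) plus replicated center runs; for the three factors studied here (gel concentration, pH, aging time) that gives 12 edge runs plus centers. A generic generator sketch in coded units:

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    # Box-Behnken design: for each pair of factors, the 2^2 factorial at
    # coded levels +/-1 with all other factors at 0, plus center replicates.
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(center_points)]
    return runs
```

No run sits at a corner of the factor cube, which is useful when extreme combinations (e.g. high concentration together with long aging) are impractical or risky.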

  14. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

...magnitude reduction in estimator error required to make solving the exact optimal design problem tractable. Instead of using a naive ... for designing a sequence of experiments uses suboptimal approaches: batch design that has no feedback, or greedy (myopic) design that optimally... Equation 1 is difficult to solve directly, but can be expressed in an equivalent form using the principle of dynamic programming.

  15. A new multiresponse optimization approach in combination with a D-Optimal experimental design for the determination of biogenic amines in fish by HPLC-FLD.

    PubMed

    Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A

    2016-11-16

A new strategy to approach multiresponse optimization in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. Firstly, the extraction from a solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design leads to the optimization of the ten experimental variables, reducing the experimental effort needed by a factor of 67 while guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. Then, the validated models are used to predict the response (peak area) of the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows, to a certain extent, the high and different effect of the pretreatment on the responses. Then the percentiles are calculated from the peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L⁻¹ for cadaverine or 497 μg L⁻¹ for histamine in solvent, and 0.07 mg kg⁻¹ and 14.81 mg kg⁻¹ in fish (probability of false positive equal to 0.05), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
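
A D-optimal design selects, from a candidate set, the runs that maximize det(XᵀX) for the chosen model. A toy greedy selection sketch (ridge-stabilized so the criterion is defined while the design is still smaller than the model); the two-factor model and candidate grid are illustrative, not the paper's ten-factor design:

```python
from itertools import product

def det(M):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[piv][c]) < 1e-12:
            return 0.0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

def info_det(rows, eps=1e-6):
    # det(X'X + eps*I); the small ridge keeps early, under-determined
    # design steps comparable instead of all evaluating to zero.
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) + (eps if i == j else 0)
            for j in range(p)] for i in range(p)]
    return det(xtx)

def greedy_d_optimal(candidates, n_runs):
    # greedily add the candidate run that most increases det(X'X)
    design = []
    for _ in range(n_runs):
        design.append(max(candidates, key=lambda c: info_det(design + [c])))
    return design

# model columns: intercept, x1, x2, x1*x2 over a 3-level candidate grid
cands = [[1, a, b, a * b] for a, b in product((-1, 0, 1), repeat=2)]
picked = greedy_d_optimal(cands, 6)
```

Production tools use exchange algorithms rather than pure greedy growth, but the criterion, maximizing the information determinant over a candidate list, is the same.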

  16. Optimal experimental design for assessment of enzyme kinetics in a drug discovery screening environment.

    PubMed

    Sjögren, Erik; Nyberg, Joakim; Magnusson, Mats O; Lennernäs, Hans; Hooker, Andrew; Bredberg, Ulf

    2011-05-01

A penalized expectation of determinant (ED)-optimal design with a discrete parameter distribution was used to find an optimal experimental design for assessment of enzyme kinetics in a screening environment. A data set of enzyme kinetic data (V(max) and K(m)) was collected from previously reported studies, and every V(max)/K(m) pair (n = 76) was taken to represent a unique drug compound. The design was restricted to 15 samples, an incubation time of up to 40 min, and starting concentrations (C(0)) for the incubation between 0.01 and 100 μM. The optimization was performed by finding the sample times and C(0) returning the lowest uncertainty (S.E.) of the model parameter estimates. Individual optimal designs, one general optimal design, and one pragmatic optimal design (OD) suitable for laboratory practice were obtained. In addition, a standard design (STD-D), representing a commonly applied approach for metabolic stability investigations, was constructed. Simulations were performed for OD and STD-D by using the Michaelis-Menten (MM) equation, and enzyme kinetic parameters were estimated with both MM and a monoexponential decay. OD generated a better result (relative standard error) for 99% of the compounds and an equal or better result [root mean square error (RMSE)] for 78% of the compounds in estimation of metabolic intrinsic clearance. Furthermore, high-quality estimates (RMSE < 30%) of both V(max) and K(m) could be obtained for a considerable number (26%) of the investigated compounds by using the suggested OD. The results presented in this study demonstrate that the output could generally be improved compared with that obtained from the standard approaches used today.
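
The gain from optimizing sampling times can be illustrated on the Michaelis-Menten model itself. The sketch below simulates substrate depletion, builds a finite-difference Fisher information for (Vmax, Km), and compares a spread-out sampling schedule against an early-only one; all kinetic values are invented, and the paper's penalized ED criterion over a compound library is not reproduced.

```python
def simulate(S0, vmax, km, times, dt=0.001):
    # Euler integration of Michaelis-Menten depletion dS/dt = -Vmax*S/(Km+S),
    # returning the substrate concentration at each requested time.
    out, S, t = [], S0, 0.0
    for target in times:
        while t < target:
            S -= dt * vmax * S / (km + S)
            t += dt
        out.append(S)
    return out

def fim_det(times, S0=10.0, vmax=1.0, km=2.0, h=1e-4):
    # det of the 2x2 Fisher information, from finite-difference sensitivities
    base = simulate(S0, vmax, km, times)
    dv = [(a - b) / h for a, b in zip(simulate(S0, vmax + h, km, times), base)]
    dk = [(a - b) / h for a, b in zip(simulate(S0, vmax, km + h, times), base)]
    fvv = sum(x * x for x in dv)
    fkk = sum(x * x for x in dk)
    fvk = sum(x * y for x, y in zip(dv, dk))
    return fvv * fkk - fvk * fvk

spread = fim_det([3.0, 10.0, 14.0])   # samples into the low-substrate tail
early = fim_det([1.0, 2.0, 3.0])      # samples only in the zero-order phase
```

Early samples, where S >> Km, make the Vmax and Km sensitivities nearly collinear (the FIM is close to singular); including the depletion tail separates them, which is why sampling design matters so much for this model.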

  17. Doehlert experimental design applied to optimization of light emitting textile structures

    NASA Astrophysics Data System (ADS)

    Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.

    2016-07-01

A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) for the treatment of dermatologic diseases such as Actinic Keratosis (AK). A successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into the textile in order to emit the injected light laterally. Previous studies confirmed that the light power can be locally controlled by modifying the radius of POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm2) while guaranteeing the best possible light delivery homogeneity over the LEF; these two goals are often contradictory. Fifteen experiments were carried out with a Doehlert experimental design involving response surface methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal setup of the warp yarn tensions.

  18. Optimization Method of a Low Cost, High Performance Ceramic Proppant by Orthogonal Experimental Design

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Tian, Y. M.; Wang, K. Y.; Li, G.; Zou, X. W.; Chai, Y. S.

    2017-09-01

This study focused on an optimization method for a ceramic proppant material with both low cost and high performance that met the requirements of the Chinese Petroleum and Gas Industry Standard (SY/T 5108-2006). The orthogonal experimental design of L9(3⁴) was employed to study the significance sequence of three factors: the weight ratio of white clay to bauxite, the dolomite content, and the sintering temperature. For the crush resistance, both the range analysis and the variance analysis indicated that the optimal experimental conditions were a white clay to bauxite weight ratio of 3/7, a dolomite content of 3 wt.%, and a temperature of 1350°C. For the bulk density, the most important factor was the sintering temperature, followed by the dolomite content, and then the ratio of white clay to bauxite.
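
An L9(3⁴) orthogonal array assigns up to four three-level factors in nine runs, and range analysis then ranks factors by the spread of their level means. A sketch using the standard L9 array with a synthetic response (the response formula is invented, not the study's crush-resistance data):

```python
# standard L9(3^4) orthogonal array, levels coded 1..3
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

def range_analysis(array, y):
    # For each factor (column), average the response at each level;
    # the range R = max(level mean) - min(level mean) ranks importance.
    ranges = []
    for col in range(len(array[0])):
        means = []
        for level in (1, 2, 3):
            vals = [yi for row, yi in zip(array, y) if row[col] == level]
            means.append(sum(vals) / len(vals))
        ranges.append(max(means) - min(means))
    return ranges

# synthetic response dominated by factor 2, with a minor factor-1 effect
y = [2 * row[1] + 0.5 * row[0] for row in L9]
ranges = range_analysis(L9, y)
```

Because each pair of columns contains every level combination exactly once, the level means of an inactive factor are identical, so its range drops to zero; the dominant factor stands out directly.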

  19. Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.

    PubMed

    Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan

    2014-01-01

The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets using an experimental mixture design for the treatment of Helicobacter pylori, enabled by prolonged gastric residence time and a controlled plasma level. Ten different formulations were generated based on different molecular weights of hypromellose (HPMC K100, K4M, K15M) by using a simplex lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas generating agents. Tablets were prepared by the wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th hour (CDR 8th) were statistically analyzed to obtain the optimized formulation (OF). The optimized formulation, which gave a floating lag time lower than 15 s and a total floating time of more than 10 h, was analyzed and compared with the target for CDR 8th (80%). A good agreement was shown between predicted and actual values of CDR 8th, with a variation lower than 1%. The activity of clarithromycin contained in the optimized formulation against H. pylori was quantified using a well diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and the clarithromycin activity.

  20. Application of D-optimal experimental design method to optimize the formulation of O/W cosmetic emulsions.

    PubMed

    Djuris, J; Vasiljevic, D; Jokic, S; Ibric, S

    2014-02-01

This study investigates the application of D-optimal mixture experimental design in the optimization of O/W cosmetic emulsions. Cetearyl glucoside was used as a natural, biodegradable non-ionic emulsifier at a relatively low concentration (1%), and a mixture of co-emulsifiers (stearic acid, cetyl alcohol, stearyl alcohol and glyceryl stearate) was used to stabilize the formulations. To determine the optimal composition of the co-emulsifier mixture, a D-optimal mixture experimental design was used. Prepared emulsions were characterized by rheological measurements, a centrifugation test, and specific conductivity and pH value measurements. All prepared samples appeared as white and homogeneous creams, except for one homogeneous and viscous lotion co-stabilized by stearic acid alone. Centrifugation testing revealed some phase separation only in the case of the sample co-stabilized using glyceryl stearate alone. The obtained pH values indicated that all samples showed mildly acidic values acceptable for cosmetic preparations. Specific conductivity values are attributed to multiple-phase O/W emulsions with high percentages of fixed water. Results of the rheological measurements showed that the investigated samples exhibited non-Newtonian thixotropic behaviour. To determine the influence of each of the co-emulsifiers on emulsion properties, the obtained results were evaluated by means of statistical analysis (ANOVA test). On the basis of a comparison of statistical parameters for each of the studied responses, the mixture reduced quadratic model was selected over the linear model, implying that interactions between co-emulsifiers play a significant role in the overall influence of co-emulsifiers on emulsion properties. Glyceryl stearate was found to be the dominant co-emulsifier affecting emulsion properties. Interactions between glyceryl stearate and the other co-emulsifiers were also found to significantly influence emulsion properties. These findings are especially important...

  1. Optimal design and experimental analyses of a new micro-vibration control payload-platform

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen

    2016-07-01

This paper presents a new payload-platform, for precision devices, which possesses the capability of isolating complex space micro-vibration in the low-frequency range below 5 Hz. The novel payload-platform equipped with smart material actuators is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic circuit nonlinearity. Then, the dynamic model of the driving element is established using the Lagrange method, and the performance of the designed payload-platform is further discussed through the combination of the controlled auto-regressive moving average (CARMA) model with a modified generalized prediction control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform has impressive potential for micro-vibration isolation.

  2. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  3. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  4. Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.

    PubMed

    Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P

    2017-03-01

We present an integrated framework for the online optimal experimental re-design applied to parallel nonlinear dynamic processes that aims to precisely estimate the parameter set of macro kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model to new strains, mutants, or products. In biosciences, this is especially important as model identification is a long and laborious process which is continuing to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully-automated liquid handling robots; one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running using the information generated by periodical parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by a 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards a more efficient computer aided bioprocess development. Biotechnol. Bioeng. 2017;114: 610-619. © 2016 Wiley Periodicals, Inc.

  5. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    PubMed

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are designed, which should prove fruitful to both endeavors. This change simply consists of using real-time neuroimaging to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps of ASAP. Using synthetic data, we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience, as well as some remaining challenges.

  6. Optimal experimental design for placement of boreholes

    NASA Astrophysics Data System (ADS)

    Padalkina, Kateryna; Bücker, H. Martin; Seidler, Ralf; Rath, Volker; Marquart, Gabriele; Niederau, Jan; Herty, Michael

    2014-05-01

    Drilling for deep resources is an expensive endeavor, and finding the optimal drilling location for boreholes is among the most challenging questions. We contribute to this discussion with a simulation-based assessment of possible future borehole locations, studying the problem of placing a new borehole in a given geothermal reservoir as a numerical optimization problem. In a geothermal reservoir, the temporal and spatial distribution of temperature and hydraulic pressure may be simulated using the coupled differential equations for heat transport and for mass and momentum conservation in Darcy flow. Within this model, the permeability and thermal conductivity depend on the geological layers present in the subsurface model of the reservoir. In general, those values involve some uncertainty, making it difficult to predict the actual heat source in the ground. Within optimal experimental design, the question is at which location, and to which depth, to drill the borehole in order to estimate conductivity and permeability with minimal uncertainty. We introduce a measure of this uncertainty based on simulations of the coupled differential equations, namely the Fisher information matrix of the temperature data obtained through the simulations; we assume that temperature data are available along the full borehole. The optimal borehole location is determined by minimizing this measure of the uncertainty in the unknown permeability and conductivity parameters. We present the theoretical framework as well as numerical results for several 2D subsurface models including up to six geological layers, and we study the effect of unknown layers on the introduced measure. Finally, to obtain a more realistic estimate of optimal borehole locations, we couple the optimization to a cost model for deep drilling.
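The Fisher-information criterion described in this record can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the sensitivity matrices below are random stand-ins for the temperature sensitivities that a reservoir simulation would actually provide.

```python
import numpy as np

def fisher_information(J, sigma=1.0):
    """FIM under additive Gaussian noise: F = J^T J / sigma^2, where each row
    of J holds the sensitivities of one temperature reading with respect to
    the unknowns (here: permeability and thermal conductivity)."""
    return J.T @ J / sigma**2

def d_criterion(J):
    """D-optimality: a larger det(F) means a smaller confidence ellipsoid
    for the estimated parameters."""
    return np.linalg.det(fisher_information(J))

# Synthetic sensitivities for three candidate borehole locations,
# 50 temperature readings each (stand-ins for simulator output).
rng = np.random.default_rng(0)
candidates = {name: rng.normal(size=(50, 2)) for name in ("A", "B", "C")}

# Pick the candidate location whose data would pin down the parameters best.
best = max(candidates, key=lambda name: d_criterion(candidates[name]))
print(best, d_criterion(candidates[best]))
```

The paper minimizes an uncertainty measure; maximizing det(F) over candidate designs is the equivalent D-optimal formulation.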

  7. Multi-objective optimization design and experimental investigation of centrifugal fan performance

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Songling; Hu, Chenxing; Zhang, Qian

    2013-11-01

    Current studies of fan performance optimization mainly focus on two aspects: improving the blade profile, and considering the influence of a single impeller structural parameter on fan performance. There are few studies on the combined effect of key parameters such as blade number, blade exit stagger angle, and impeller outlet width. The G4-73 backward centrifugal fan widely used in power plants is selected as the research object. Based on orthogonal design and a BP neural network, a model for predicting the centrifugal fan performance parameters is established; the maximum relative errors of the total pressure and efficiency are 0.974% and 0.333%, respectively. Multi-objective optimization of the total pressure and efficiency of the fan is conducted with a genetic algorithm, and the optimum combination of impeller structural parameters is proposed: a blade number of 14, a blade exit stagger angle of 43.9°, and an impeller outlet width of 21 cm. Experiments on fan performance and noise are conducted before and after installation of the new impeller. The experimental results show that with the new impeller the total pressure of the fan increases significantly over the whole flow rate range, the fan efficiency is improved when the relative flow is above 75%, and the high-efficiency area is broadened; additionally, at 65%-100% relative flow the fan noise is reduced. Under the design operating condition, the total pressure and efficiency of the fan are improved by 6.91% and 0.5%, respectively. This research sheds light on the combined effect of impeller structural parameters on fan performance and shows that a new impeller can be designed to meet engineering demands such as energy saving, noise reduction, or remedying insufficient air pressure in power plants.

  8. Application of mixture experimental design in the formulation and optimization of matrix tablets containing carbomer and hydroxy-propylmethylcellulose.

    PubMed

    Petrovic, Aleksandra; Cvetkovic, Nebojsa; Ibric, Svetlana; Trajkovic, Svetlana; Djuric, Zorica; Popadic, Dragica; Popovic, Radmila

    2009-12-01

    Using mixture experimental design, the effect of a carbomer (Carbopol® 971P NF) and hydroxypropylmethylcellulose (Methocel® K100M or Methocel® K4M) combination on the release profile and on the mechanism of drug liberation from a matrix tablet was investigated. A numerical optimization procedure was also applied to establish and obtain a formulation with the desired drug release. The amount of TP released, the release rate, and the release mechanism varied with the carbomer ratio in the total matrix and the HPMC viscosity. Increasing carbomer fractions led to a decrease in drug release. Anomalous diffusion was found in all matrices containing carbomer, while Case-II transport was predominant for tablets based on HPMC only. The predicted and obtained profiles for the optimized formulations showed similarity. These results indicate that the simplex lattice mixture experimental design and numerical optimization procedure can be applied during development to obtain a sustained-release matrix formulation with a desired release profile.
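A simplex lattice mixture design like the one used here can be enumerated directly: the runs are all mixtures whose component proportions are multiples of 1/m and sum to 1. This is a generic sketch of the design itself, not of the paper's formulations.

```python
from fractions import Fraction
from itertools import product

def simplex_lattice(q, m):
    """{q, m} simplex-lattice design: all q-component mixtures whose
    proportions are multiples of 1/m and sum to 1 (in coded units)."""
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            points.append(tuple(Fraction(c, m) for c in combo))
    return points

# e.g. three mixture components (carbomer, HPMC, filler -- hypothetical roles)
# at proportions 0, 1/2, 1:
design = simplex_lattice(3, 2)
print(design)
```

The {3, 2} lattice gives the six classic runs: the three pure components plus the three binary 50:50 blends.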

  9. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect their properties; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density and strength, and how under- and over-firing affect ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay-based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.

  10. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing edge flap position to achieve a design lift coefficient for a three-element airfoil.

  11. Optimization of the intravenous glucose tolerance test in T2DM patients using optimal experimental design.

    PubMed

    Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O

    2009-06-01

    Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities of a more informative and/or less laborious study design of the insulin-modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin-modified IVGTT were evaluated: (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, and (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia, and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement of an optimal design over the basic design. The results showed that the design of the insulin-modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design, and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement, followed by insulin dose. The results further showed that it was possible to reduce the total sampling time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples per subject; the best design had a predicted average CV of the parameter estimates of 19.5%. We conclude that improvements can be made to the design of the insulin-modified IVGTT, and that the most important design factor was the placement of sample times, followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.

  12. Separation of 20 coumarin derivatives using the capillary electrophoresis method optimized by a series of Doehlert experimental designs.

    PubMed

    Woźniakiewicz, Michał; Gładysz, Marta; Nowak, Paweł M; Kędzior, Justyna; Kościelniak, Paweł

    2017-05-15

    The aim of this study was to develop the first CE-based method enabling separation of 20 structurally similar coumarin derivatives. To facilitate method optimization, a series of three consecutive Doehlert experimental designs with response surface methodology was employed, using the number of peaks and the adjusted time of analysis as the selected responses. Initially, three variables were examined: buffer pH, ionic strength, and temperature (Doehlert design No. 1). The optimal conditions provided only partial separation; on that account, several buffer additives were examined in the next step: organic cosolvents and cyclodextrin (Doehlert design No. 2). The optimal cyclodextrin type was also selected experimentally. The most promising results were obtained for buffers fortified with methanol, acetonitrile, and heptakis(2,3,6-tri-O-methyl)-β-cyclodextrin. Since these additives may affect the acid-base equilibrium and ionization state of the analytes, the third Doehlert design (No. 3) was used to reconcile the concentrations of these additives with the optimal pH. Ultimately, total separation of all 20 compounds was achieved using a borate buffer at basic pH 9.5 in the presence of 10 mM cyclodextrin, 9% (v/v) acetonitrile, and 36% (v/v) methanol. The identity of all compounds was confirmed using an in-lab built UV-VIS spectral library. The developed method succeeded in identifying coumarin derivatives in three real samples. It demonstrates the huge resolving power of CE assisted by the addition of cyclodextrins and organic cosolvents. Our optimization approach, based on the three Doehlert designs, seems promising for future applications of this technique. Copyright © 2017 Elsevier B.V. All rights reserved.
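For reference, the coded levels of a two-factor Doehlert design (the building block of the designs used in this record) are a centre point plus the six vertices of a regular hexagon. The factor names, centres, and step sizes in this sketch are invented for illustration; only the coded levels are standard.

```python
import numpy as np

# Standard coded levels of a two-factor Doehlert design:
# centre point + hexagon vertices.
doehlert_2f = np.array([
    [ 0.0,  0.0],
    [ 1.0,  0.0],
    [ 0.5,  0.866],
    [-1.0,  0.0],
    [-0.5, -0.866],
    [ 0.5, -0.866],
    [-0.5,  0.866],
])

def decode(design, centers, steps):
    """Map coded levels to real factor settings."""
    return centers + design * steps

# Hypothetical ranges: buffer pH centred at 8.0 (step 1.5),
# temperature centred at 25 C (step 10).
print(decode(doehlert_2f, np.array([8.0, 25.0]), np.array([1.5, 10.0])))
```

Note the design's characteristic property: factor one takes five distinct levels while factor two takes three, which is why Doehlert designs let an experimenter spend more runs on the factor expected to matter most.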

  13. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application.

    PubMed

    Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar

    2014-01-01

    Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows a poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the drug's poor bioavailability. A response surface methodology was applied for the optimization of the transfersomes, using a Box-Behnken experimental design. Phospholipon® 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as the barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, an entrapment efficiency of 91.00%±4.90%, and a transdermal flux of 6.5±1.1 μg/cm²/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of the amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes and an ethanolic phosphate-buffered saline. A differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy showed enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride

  15. A novel experimental design method to optimize hydrophilic matrix formulations with drug release profiles and mechanical properties.

    PubMed

    Choi, Du Hyung; Lim, Jun Yeul; Shin, Sangmun; Choi, Won Jun; Jeong, Seong Hoon; Lee, Sangkil

    2014-10-01

    To investigate the effects of hydrophilic polymers on the matrix system, an experimental design method was developed that integrates response surface methodology and time series modeling. Moreover, the relationships among the polymers in the matrix system were studied through the evaluation of physical properties including water uptake, mass loss, diffusion, and gelling index. A mixture simplex lattice design was proposed considering eight input control factors: polyethylene glycol 6000 (x1), polyethylene oxide (PEO) N-10 (x2), PEO 301 (x3), PEO coagulant (x4), PEO 303 (x5), hydroxypropyl methylcellulose (HPMC) 100SR (x6), HPMC 4000SR (x7), and HPMC 10^5 SR (x8). With the modeling, optimal formulations were obtained for each of the four types of targets. The optimal formulations showed that four factors (x1, x2, x3, and x8) were significant, while the other four input factors (x4, x5, x6, and x7) were not, based on the drug release profiles. Moreover, the optimization results were analyzed with estimated values, target values, absolute biases, and relative biases based on observed times for the drug release rates with four different targets. The results showed that the optimal solutions and target values had consistent patterns with small biases. On the basis of the physical properties of the optimal solutions, the type and ratio of hydrophilic polymer and the relationships between polymers significantly influenced the physical properties of the system and the drug release. This experimental design method is very useful in formulating a matrix system with optimal drug release, and it can distinctly confirm the relationships between excipients and their effects on the system through extensive and intensive evaluation. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  16. Framework for the rapid optimization of soluble protein expression in Escherichia coli combining microscale experiments and statistical experimental design.

    PubMed

    Islam, R S; Tisi, D; Levy, M S; Lye, G J

    2007-01-01

    A major bottleneck in drug discovery is the production of soluble human recombinant protein in sufficient quantities for analysis. This problem is compounded by the complex relationship between protein yield and the large number of variables which affect it. Here, we describe a generic framework for the rapid identification and optimization of factors affecting soluble protein yield in microwell plate fermentations, as a prelude to the predictive and reliable scale-up of optimized culture conditions. Recombinant expression of firefly luciferase in Escherichia coli was used as a model system. Two rounds of statistical design of experiments (DoE) were employed to first screen (D-optimal design) and then optimize (central composite face design) the yield of soluble protein. Biological variables from the initial screening experiments included medium type and growth and induction conditions. To provide insight into the impact of the engineering environment on cell growth and expression, plate geometry, shaking speed, and liquid fill volume were included as factors, since these strongly influence oxygen transfer into the wells. Compared to standard reference conditions, both the screening and optimization designs gave up to 3-fold increases in the soluble protein yield, i.e., a 9-fold increase overall. In general, the highest protein yields were obtained when cells were induced at a relatively low biomass concentration and then allowed to grow slowly up to a high final biomass concentration, >8 g/L. Analysis of the model results showed 6 of the original 10 variables to be important at the screening stage and 3 after optimization. The latter included the microwell plate shaking speeds pre- and post-induction, indicating the importance of oxygen transfer into the microwells and identifying this as a critical parameter for subsequent scale translation studies.
The optimization process, also known as response surface methodology (RSM), predicted there to be a
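A central composite face (CCF) design like the one used in the optimization round above can be generated directly. This sketch produces the coded design for the three factors retained after screening; centre-point replicates, which a real design would add for error estimation, are omitted.

```python
import itertools
import numpy as np

def ccf_design(k):
    """Face-centred central composite design in coded units:
    full 2^k factorial (+/-1 corners) + 2k axial points on the cube
    faces + a single centre point."""
    factorial = list(itertools.product([-1, 1], repeat=k))
    axial = []
    for i in range(k):
        for sign in (-1, 1):
            point = [0] * k
            point[i] = sign
            axial.append(tuple(point))
    return np.array(factorial + axial + [(0,) * k], dtype=float)

design = ccf_design(3)
print(len(design))  # 8 factorial + 6 axial + 1 centre = 15 runs
```

Because the axial points sit on the cube faces rather than outside them, every factor stays within its +/-1 range, which suits constrained settings such as fixed microwell fill volumes.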

  17. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology, and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at these points is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to find the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables: one qualitative variable and two quantitative variables. The modeling and optimization method performs well in improving the duct's aerodynamic performance, and it can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.

  18. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through reliability-based design optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on the probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using this analysis and implemented, along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT), in fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and with deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions, but even a cost-effective deterministic design becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by a simulation technique that

  19. Optimal design of isotope labeling experiments.

    PubMed

    Yang, Hong; Mandy, Dominic E; Libourel, Igor G L

    2014-01-01

    Stable isotope labeling experiments (ILE) constitute a powerful methodology for estimating metabolic fluxes. An optimal label design for such an experiment is necessary to maximize the precision with which fluxes can be determined. But often, precision gained in the determination of one flux comes at the expense of the precision of other fluxes, and an appropriate label design therefore foremost depends on the question the investigator wants to address. One could liken ILE to shadows that metabolism casts on products: optimal label design is the placement of the lamp, creating clear shadows for some parts of metabolism and obscuring others. An optimal isotope label design is influenced by: (1) the network structure; (2) the true flux values; (3) the available label measurements; and (4) the commercially available substrates. The first two aspects are dictated by nature and constrain any optimal design; the second two are suitable design parameters. To create an optimal label design, an explicit optimization criterion needs to be formulated. This is usually a property of the flux covariance matrix, which can be augmented by weighting the label substrate cost. An optimal design is found by using such a criterion as the objective function for an optimizer. This chapter uses a simple elementary metabolite units (EMU) representation of the TCA cycle to illustrate the process of designing isotope-labeled substrates.

  20. Experimental design, modeling and optimization of polyplex formation between DNA oligonucleotides and branched polyethylenimine.

    PubMed

    Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana

    2015-09-28

    The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from the gel electrophoresis assays. The central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as concentrations of dsDNA and B-PEI as well as the initial pH of solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic scale, a molecular dynamics simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between macromolecules, stabilizing the polyplex structure.
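The response-surface step in this record, fitting a quadratic regression model to design data before optimizing it, can be sketched with ordinary least squares. The data below are synthetic (a toy response peaking near the centre of the design region), not the paper's binding-efficiency measurements.

```python
import numpy as np

def quad_features(X):
    """Design matrix for a full quadratic model in two coded factors:
    intercept, linear, interaction, and pure quadratic terms."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Synthetic "experiments": a response near 90 that falls off quadratically.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
y = 90 - 5 * X[:, 0]**2 - 3 * X[:, 1]**2 + rng.normal(scale=0.1, size=30)

# Fit the multivariate regression model by least squares.
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
print(beta)  # ~[90, 0, 0, 0, -5, -3]
```

Once the coefficients are in hand, a simplex or gradient method (as in the paper) can be run on the fitted polynomial instead of on costly new experiments.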

  1. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess in question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design, and central composite design. These design methods are briefly introduced, and their application to different bioengineering processes is then analyzed.
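The first method in that list, the full factorial design, simply enumerates every combination of every factor level. The factors and levels in this sketch are hypothetical bioprocess settings chosen for illustration.

```python
import itertools

# Hypothetical factors and levels for a small bioprocess study.
factors = {
    "temperature_C": [30, 37],
    "pH": [6.0, 7.0, 8.0],
    "agitation_rpm": [150, 250],
}

# Full factorial design: one run per combination of levels.
runs = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]
print(len(runs))  # 2 * 3 * 2 = 12 runs
```

The run count grows multiplicatively with factors and levels, which is exactly why the fractional factorial and Plackett-Burman designs mentioned above exist: they screen many factors with far fewer runs at the cost of confounding some interactions.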

  2. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  3. PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies.

    PubMed

    Aoki, Yasunori; Sundqvist, Monika; Hooker, Andrew C; Gennemark, Peter

    2016-04-01

    Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of drug discovery in vivo studies. Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goals of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher information matrix. The software was implemented in C++ using the Qt framework to ensure responsive user-software interaction through a rich graphical user interface while achieving the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. The software design goals were simplicity, speed, and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was, on average over 14 test problems, 30 times faster in PopED lite than in an existing optimal design software tool. PopED lite is now used in real drug discovery projects, and a few of these case studies are presented in this paper. PopED lite is designed to be simple, fast, and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss the proposed design, test another design, etc.). Intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Optimal design of high temperature metalized thin-film polymer capacitors: A combined numerical and experimental method

    NASA Astrophysics Data System (ADS)

    Wang, Zhuo; Li, Qi; Trinh, Wei; Lu, Qianli; Cho, Heejin; Wang, Qing; Chen, Lei

    2017-07-01

    The objective of this paper is to design and optimize the high temperature metalized thin-film polymer capacitor by a combined computational and experimental method. A finite-element based thermal model is developed to incorporate Joule heating and anisotropic heat conduction arising from anisotropic geometric structures of the capacitor. The anisotropic thermal conductivity and temperature dependent electrical conductivity required by the thermal model are measured from the experiments. The polymer represented by thermally crosslinking benzocyclobutene (BCB) in the presence of boron nitride nanosheets (BNNSs) is selected for high temperature capacitor design based on the results of highest internal temperature (HIT) and the time to achieve thermal equilibrium. The c-BCB/BNNS-based capacitor aiming at the operating temperature of 250 °C is geometrically optimized with respect to its shape and volume. A "safe line" plot is also presented to reveal the influence of the cooling strength on the capacitor geometry design.

  5. Identification of vehicle suspension parameters by design optimization

    NASA Astrophysics Data System (ADS)

    Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.

    2014-05-01

    The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results in the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performances, can be implemented for further optimization.
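
The rank-summation scalarization described above can be illustrated with a toy sketch (the candidate solutions and objective values below are hypothetical, not the suspension data): each candidate is ranked separately on every objective, and the sum of its ranks becomes a single scalar fitness.

```python
def rank_sum_fitness(population_objs):
    """population_objs[i][j] = value of objective j for candidate i (lower is better).
    Returns one scalar per candidate: the sum of its per-objective ranks."""
    n = len(population_objs)
    m = len(population_objs[0])
    fitness = [0] * n
    for j in range(m):
        # Rank candidates on objective j (rank 0 = best)
        order = sorted(range(n), key=lambda i: population_objs[i][j])
        for rank, i in enumerate(order):
            fitness[i] += rank
    return fitness

# Three hypothetical candidates evaluated on two conflicting objectives:
# candidate 1 is mediocre on both but best overall by summed rank.
objs = [[1.0, 9.0], [2.0, 2.0], [9.0, 3.0]]
print(rank_sum_fitness(objs))  # → [2, 1, 3]
```

A single-objective optimizer such as CMA-ES can then minimize this scalar fitness directly, which is the reformulation the abstract describes.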

  6. Optimal experimental designs for fMRI when the model matrix is uncertain.

    PubMed

    Kao, Ming-Hung; Zhou, Lin

    2017-07-15

    This study concerns optimal designs for functional magnetic resonance imaging (fMRI) experiments when the model matrix of the statistical model depends on both the selected stimulus sequence (fMRI design), and the subject's uncertain feedback (e.g. answer) to each mental stimulus (e.g. question) presented to her/him. While practically important, this design issue is challenging. This is mainly because the information matrix cannot be fully determined at the design stage, making it difficult to evaluate the quality of the selected designs. To tackle this challenging issue, we propose an easy-to-use optimality criterion for evaluating the quality of designs, and an efficient approach for obtaining designs optimizing this criterion. Compared with a previously proposed method, our approach requires much less computing time to achieve designs with high statistical efficiencies. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905

  8. Achieving optimal SERS through enhanced experimental design.

    PubMed

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.

  9. Optimization of Perovskite Gas Sensor Performance: Characterization, Measurement and Experimental Design.

    PubMed

    Bertocci, Francesco; Fort, Ada; Vignoli, Valerio; Mugnaini, Marco; Berni, Rossella

    2017-06-10

    Eight different types of nanostructured perovskites based on YCoO3 with different chemical compositions are prepared as gas sensor materials, and they are studied with two target gases NO2 and CO. Moreover, a statistical approach is adopted to optimize their performance. The innovative contribution is carried out through a split-plot design planning and modeling, also involving random effects, for studying Metal Oxide Semiconductors (MOX) sensors in a robust design context. The statistical results prove the validity of the proposed approach; in fact, for each material type, the variation of the electrical resistance achieves a satisfactory optimized value conditional to the working temperature and by controlling for the gas concentration variability. Just to mention some results, the sensing material YCo0.9Pd0.1O3 (Mt1) achieved excellent solutions during the optimization procedure. In particular, Mt1 resulted in being useful and feasible for the detection of both gases, with optimal response equal to +10.23% and working temperature at 312 °C for CO (284 ppm, from design) and response equal to -14.17% at 185 °C for NO2 (16 ppm, from design). Analogously, for NO2 (16 ppm, from design), the material type YCo0.9O2.85+1%Pd (Mt8) allows for optimizing the response value at -15.39% with a working temperature at 181.0 °C, whereas for YCo0.95Pd0.05O3 (Mt3), the best response value is achieved at -15.40% with the temperature equal to 204 °C.

  10. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments. Particular issues are whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and how robust the parameter estimates for the model are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  11. Vitamin B12 production from crude glycerol by Propionibacterium freudenreichii ssp. shermanii: optimization of medium composition through statistical experimental designs.

    PubMed

    Kośmider, Alicja; Białas, Wojciech; Kubiak, Piotr; Drożdżyńska, Agnieszka; Czaczyk, Katarzyna

    2012-02-01

    A two-step statistical experimental design was employed to optimize the medium for vitamin B12 production from crude glycerol by Propionibacterium freudenreichii ssp. shermanii. In the first step, using a Plackett-Burman design, five of 13 tested medium components (calcium pantothenate, NaH2PO4·2H2O, casein hydrolysate, glycerol and FeSO4·7H2O) were identified as factors having significant influence on vitamin production. In the second step, a central composite design was used to optimize levels of medium components selected in the first step. Valid statistical models describing the influence of significant factors on vitamin B12 production were established for each optimization phase. The optimized medium provided a 93% increase in final vitamin concentration compared to the original medium. Copyright © 2011 Elsevier Ltd. All rights reserved.
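
A Plackett-Burman screening matrix like the one used in the first step can be built from a cyclic generator. The sketch below constructs the classic 12-run design, which screens up to 11 two-level factors (each row is one experiment, +1/-1 the high/low factor settings); it is a generic construction, not the paper's specific factor assignment:

```python
def plackett_burman12():
    """12-run Plackett-Burman screening design for up to 11 two-level factors,
    built from the classic cyclic generator (Plackett & Burman, 1946)."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]  # 11 cyclic shifts
    rows.append([-1] * 11)                           # closing all-minus run
    return rows

design = plackett_burman12()
for row in design:
    print(" ".join(f"{v:+d}" for v in row))
```

Every column is balanced (six highs, six lows) and orthogonal to every other column, which is what lets main effects of 11 factors be screened in only 12 runs instead of 2^11.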

  12. Optimization of Perovskite Gas Sensor Performance: Characterization, Measurement and Experimental Design

    PubMed Central

    Bertocci, Francesco; Fort, Ada; Vignoli, Valerio; Mugnaini, Marco; Berni, Rossella

    2017-01-01

    Eight different types of nanostructured perovskites based on YCoO3 with different chemical compositions are prepared as gas sensor materials, and they are studied with two target gases NO2 and CO. Moreover, a statistical approach is adopted to optimize their performance. The innovative contribution is carried out through a split-plot design planning and modeling, also involving random effects, for studying Metal Oxide Semiconductors (MOX) sensors in a robust design context. The statistical results prove the validity of the proposed approach; in fact, for each material type, the variation of the electrical resistance achieves a satisfactory optimized value conditional to the working temperature and by controlling for the gas concentration variability. Just to mention some results, the sensing material YCo0.9Pd0.1O3 (Mt1) achieved excellent solutions during the optimization procedure. In particular, Mt1 resulted in being useful and feasible for the detection of both gases, with optimal response equal to +10.23% and working temperature at 312 °C for CO (284 ppm, from design) and response equal to −14.17% at 185 °C for NO2 (16 ppm, from design). Analogously, for NO2 (16 ppm, from design), the material type YCo0.9O2.85+1%Pd (Mt8) allows for optimizing the response value at −15.39% with a working temperature at 181.0 °C, whereas for YCo0.95Pd0.05O3 (Mt3), the best response value is achieved at −15.40% with the temperature equal to 204 °C. PMID:28604587

  13. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages in terms of reduced experimental effort and of increased quality of information that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].

  14. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables and methods for predicting the model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
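
The polynomial-surrogate idea above can be sketched in one variable: fit a quadratic response surface to a few noisy samples of an unknown response, then optimize the cheap surrogate instead of the expensive simulation. The data and model below are made up for illustration, far simpler than the authors' multi-variable surrogates.

```python
import random

def solve(A, b):
    """Solve a small linear system by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def quadratic_surrogate(xs, ys):
    """Least-squares fit of y ≈ c0 + c1*x + c2*x^2 via the normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    return solve(A, b)

# Noisy samples of a hypothetical response with a minimum near x = 2
random.seed(1)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [(x - 2.0) ** 2 + 1.0 + random.gauss(0, 0.05) for x in xs]
c0, c1, c2 = quadratic_surrogate(xs, ys)
x_opt = -c1 / (2 * c2)  # stationary point of the fitted surrogate
print(round(x_opt, 2))
```

Because the surrogate is global and smooth, it also filters sample noise, one of the advantages the abstract lists.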

  15. A Tutorial on Adaptive Design Optimization

    PubMed Central

    Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.

    2013-01-01

    Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275

  16. Optimization of microwave-assisted extraction of analgesic and anti-inflammatory drugs from human plasma and urine using response surface experimental designs.

    PubMed

    Fernández, Purificación; Fernández, Ana M; Bermejo, Ana M; Lorenzo, Rosa A; Carro, Antonia M

    2013-04-01

    The performance of a microwave-assisted extraction and HPLC with photodiode array detection method for the determination of six analgesic and anti-inflammatory drugs from plasma and urine is described, optimized, and validated. Several parameters affecting the extraction technique were optimized using experimental designs. A four-factor (temperature, phosphate buffer pH 4.0 volume, extraction solvent volume, and time) hybrid experimental design was used for extraction optimization in plasma, and a three-factor (temperature, extraction solvent volume, and time) Doehlert design was chosen for extraction optimization in urine. The use of desirability functions revealed the optimal extraction conditions as follows: 67°C, 4 mL phosphate buffer pH 4.0, 12 mL of ethyl acetate and 9 min for plasma, and the same volume of buffer and ethyl acetate, 115°C and 4 min for urine. Limits of detection ranged from 4 to 45 ng/mL in plasma and from 8 to 85 ng/mL in urine. The reproducibility evaluated at two concentration levels was less than 6.5% for both specimens. The recoveries were from 89 to 99% for plasma and from 83 to 99% for urine. The proposed method was successfully applied to plasma and urine samples obtained from analgesic users. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
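
The desirability-function step mentioned above can be sketched as follows: each response is mapped to a score in [0, 1], and the overall desirability is the geometric mean of the individual scores (the Derringer-Suich approach). The target ranges and response values below are illustrative assumptions, not the paper's data.

```python
def desirability_max(y, low, high):
    """Larger-is-better desirability: 0 at or below `low`, 1 at or above `high`,
    linear in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def overall(desirabilities):
    """Derringer-Suich overall desirability: geometric mean of the individual d_i.
    Any single unacceptable response (d_i = 0) zeroes the whole score."""
    prod = 1.0
    for d in desirabilities:
        prod *= d
    return prod ** (1.0 / len(desirabilities))

# Hypothetical responses: recovery (%) to maximize, and a detection limit to
# minimize (handled by maximizing its negative).
d_recovery = desirability_max(95.0, 80.0, 100.0)
d_lod = desirability_max(-10.0, -50.0, -4.0)
print(round(overall([d_recovery, d_lod]), 3))  # → 0.808
```

Optimizing this single score over the design factors yields one compromise setting for all responses at once, which is how the optimal extraction conditions were selected.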

  17. Optimal Objective-Based Experimental Design for Uncertain Dynamical Gene Networks with Experimental Error.

    PubMed

    Mohsenizadeh, Daniel N; Dehghannasiri, Roozbeh; Dougherty, Edward R

    2018-01-01

    In systems biology, network models are often used to study interactions among cellular components, a salient aim being to develop drugs and therapeutic mechanisms to change the dynamical behavior of the network to avoid undesirable phenotypes. Owing to limited knowledge, model uncertainty is commonplace and network dynamics can be updated in different ways, thereby giving multiple dynamic trajectories, that is, dynamics uncertainty. In this manuscript, we propose an experimental design method that can effectively reduce the dynamics uncertainty and improve performance in an interaction-based network. Both dynamics uncertainty and experimental error are quantified with respect to the modeling objective, herein, therapeutic intervention. The aim of experimental design is to select among a set of candidate experiments the experiment whose outcome, when applied to the network model, maximally reduces the dynamics uncertainty pertinent to the intervention objective.

  18. Three-dimensional shape optimization of a cemented hip stem and experimental validations.

    PubMed

    Higa, Masaru; Tanino, Hiromasa; Nishimura, Ikuya; Mitamura, Yoshinori; Matsuno, Takeo; Ito, Hiroshi

    2015-03-01

    This study proposes a novel optimized stem geometry with low stress values in the cement, using a finite element (FE) analysis combined with an optimization procedure and experimental measurements of cement stress in vitro. We first optimized an existing stem geometry using a three-dimensional FE analysis combined with a shape optimization technique. One of the most important factors in cemented stem design is to reduce stress in the cement. Hence, in the optimization study, we minimized the largest tensile principal stress in the cement mantle under a physiological loading condition by changing the stem geometry. As the next step, the optimized stem and the existing stem were manufactured to validate the usefulness of the numerical models and the results of the optimization in vitro. In the experimental study, strain gauges were embedded in the cement mantle to measure the strain in the cement mantle adjacent to the stems. The overall trend of the experimental study was in good agreement with the results of the numerical study, and we were able to reduce the largest stress by more than 50% in both shape optimization and strain gauge measurements. Thus, we could validate the usefulness of the numerical models and the results of the optimization using the experimental models. The optimization employed in this study is a useful approach for developing new stem designs.

  19. Molecular identification of potential denitrifying bacteria and use of D-optimal mixture experimental design for the optimization of denitrification process.

    PubMed

    Ben Taheur, Fadia; Fdhila, Kais; Elabed, Hamouda; Bouguerra, Amel; Kouidhi, Bochra; Bakhrouf, Amina; Chaieb, Kamel

    2016-04-01

    Three bacterial strains (TE1, TD3 and FB2) were isolated from date palm (degla), pistachio and barley. The presence of nitrate reductase (narG) and nitrite reductase (nirS and nirK) genes in the selected strains was detected by PCR technique. Molecular identification based on the 16S rDNA sequencing method was applied to identify positive strains. In addition, a D-optimal mixture experimental design was used to determine the optimal formulation of probiotic bacteria for the denitrification process. Strains harboring denitrification genes were identified as: TE1, Agrococcus sp LN828197; TD3, Cronobacter sakazakii LN828198 and FB2, Pediococcus pentosaceus LN828199. PCR results revealed that all strains carried the nirS gene. However, only C. sakazakii LN828198 and Agrococcus sp LN828197 harbored the nirK and the narG genes, respectively. Moreover, the studied bacteria were able to form biofilm on abiotic surfaces to different degrees. Process optimization showed that the most significant reduction of nitrate was 100% with 14.98% of COD consumption and 5.57 mg/l nitrite accumulation. Meanwhile, the response values were optimized and showed that the optimal combination was 78.79% of C. sakazakii LN828198 (curve value), 21.21% of P. pentosaceus LN828199 (curve value) and absence (0%) of Agrococcus sp LN828197 (curve value). Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Formulation and optimization by experimental design of eco-friendly emulsions based on d-limonene.

    PubMed

    Pérez-Mosqueda, Luis M; Trujillo-Cayado, Luis A; Carrillo, Francisco; Ramírez, Pablo; Muñoz, José

    2015-04-01

    d-Limonene is a naturally occurring solvent that can replace more polluting chemicals in agrochemical formulations. In the present work, a comprehensive study of the influence of the dispersed phase mass fraction, ϕ, and of the surfactant/oil ratio, R, on the emulsion stability and droplet size distribution of d-limonene-in-water emulsions stabilized by a non-ionic triblock copolymer surfactant has been carried out. A full factorial 3² experimental design was conducted in order to optimize the emulsion formulation. The independent variables ϕ and R were studied in the ranges 10-50 wt% and 0.02-0.1, respectively. The emulsions studied were mainly destabilized by both creaming and Ostwald ripening. Therefore, initial droplet size and an overall destabilization parameter, the so-called turbiscan stability index, were used as dependent variables. The optimal formulation, comprising minimum droplet size and maximum stability, was achieved at ϕ=50 wt% and R=0.062. Furthermore, the surface response methodology allowed us to obtain the formulation yielding sub-micron emulsions by using a single-step rotor/stator homogenizer process instead of the more commonly used two-step emulsification methods. In addition, the optimal formulation was further improved against Ostwald ripening by adding silicone oil to the dispersed phase. The combination of these experimental findings allowed us to gain a deeper insight into the stability of these emulsions, which can be applied to the rational development of new formulations with potential application in agrochemical formulations. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
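
A stripped-down sketch of the core loop described above: the utility is the closed-form KL divergence (an information-theoretic distance, as in the paper's expected cross entropy) between two Gaussian response distributions, and simulated annealing searches a scalar design input for the most discriminating experiment. The two response models are hypothetical stand-ins, not the bolted-joint or rotor-hub models.

```python
import math
import random

def kl_gauss(m1, s1, m2, s2):
    """KL divergence between N(m1, s1^2) and N(m2, s2^2), closed form."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

# Hypothetical model prediction vs. expected experimental output as functions
# of a scalar design input x; the most informative x maximizes their divergence.
def model_a(x):
    return 2.0 * x

def model_b(x):
    return x ** 1.5

def utility(x):
    return kl_gauss(model_a(x), 1.0, model_b(x), 1.0)

def anneal(lo=0.0, hi=10.0, steps=2000, t0=1.0):
    """Simulated annealing maximizing utility(x) over [lo, hi]."""
    random.seed(0)
    x = (lo + hi) / 2
    best_x, best_u = x, utility(x)
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9       # linear cooling schedule
        cand = min(hi, max(lo, x + random.gauss(0, 0.5)))
        du = utility(cand) - utility(x)
        if du > 0 or random.random() < math.exp(du / temp):
            x = cand
            if utility(x) > best_u:
                best_x, best_u = x, utility(x)
    return best_x

print(round(anneal(), 2))
```

In the paper the loop is adaptive: after running the chosen experiment, the observed data update the output distribution via Bayes' theorem before the next design is selected.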

  2. Optimization of glibenclamide tablet composition through the combined use of differential scanning calorimetry and D-optimal mixture experimental design.

    PubMed

    Mura, P; Furlanetto, S; Cirri, M; Maestrelli, F; Marras, A M; Pinzauti, S

    2005-02-07

    A systematic analysis of the influence of different proportions of excipients on the stability of a solid dosage form was carried out. In particular, a D-optimal mixture experimental design was applied for the evaluation of glibenclamide compatibility in tablet formulations, consisting of four classic excipients (natrosol as binding agent, stearic acid as lubricant, sorbitol as diluent and cross-linked polyvinylpyrrolidone as disintegrant). The goal was to find the mixture component proportions which correspond to the optimal drug melting parameters, i.e. its maximum stability, using differential scanning calorimetry (DSC) to quickly obtain information about possible interactions among the formulation components. The absolute value of the difference between the melting peak temperature of the pure drug endotherm and that in each analysed mixture, together with the absolute value of the difference between the enthalpy of the pure glibenclamide melting peak and that of its melting peak in the different analysed mixtures, were chosen as indexes of the degree of drug-excipient interaction.

  3. Hybrid computational and experimental approach for the study and optimization of mechanical components

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1998-05-01

    Increased demands on the performance and efficiency of mechanical components impose challenges on their engineering design and optimization, especially when new and more demanding applications must be developed in relatively short periods of time while satisfying design objectives, as well as cost and manufacturability. In addition, reliability and durability must be taken into consideration. As a consequence, effective quantitative methodologies, computational and experimental, should be applied in the study and optimization of mechanical components. Computational investigations enable parametric studies and the determination of critical engineering design conditions, while experimental investigations, especially those using optical techniques, provide qualitative and quantitative information on the actual response of the structure of interest to the applied load and boundary conditions. We discuss a hybrid experimental and computational approach for investigation and optimization of mechanical components. The approach is based on analytical, computational, and experimental solution methodologies in the form of computational models, noninvasive optical techniques, and fringe prediction analysis tools. Practical application of the hybrid approach is illustrated with representative examples that demonstrate the viability of the approach as an effective engineering tool for analysis and optimization.

  4. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    PubMed

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (kH) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of kH of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and dependent variable (the GC⁻¹ peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates kH with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate performance improvements of our design with an example measuring the kH for naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the linear phase ratio to GC⁻¹ region (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open source software optDesignSlopeInt, an R package on CRAN. Copyright © 2016 Elsevier B.V. All rights reserved.
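
The slope-to-intercept computation at the heart of the PRV method can be sketched with ordinary least squares. The vial data below are synthetic and exactly linear, chosen purely for illustration; real inverse peak areas carry measurement noise, which is what makes the choice of phase ratios matter.

```python
def linfit(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical PRV data: gas/liquid volume ratios and inverse GC peak areas,
# generated from 1/A = 0.001*beta + 0.0025
betas = [0.5, 1.0, 2.0, 4.0, 8.0]
inv_area = [0.0030, 0.0035, 0.0045, 0.0065, 0.0105]
slope, intercept = linfit(betas, inv_area)
k_H = slope / intercept  # Henry's law constant from the slope/intercept ratio
print(round(k_H, 3))  # → 0.4
```

Because kH is a ratio of two correlated regression estimates, its variance depends strongly on where the phase ratios are placed, which is the error the paper's optimal designs minimize.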

  5. Optimization of critical quality attributes in continuous twin-screw wet granulation via design space validated with pilot scale experimental data.

    PubMed

    Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu

    2017-06-15

    In this study, the influence of key process variables (screw speed, throughput and liquid to solid (L/S) ratio) of a continuous twin screw wet granulation (TSWG) process was investigated using a central composite face-centered (CCF) experimental design method. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume-average diameter, yield, relative width, flowability) and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties and tablet properties are influenced by changing the screw speed, throughput and L/S ratio. The TSWG process was optimized to produce granules with a specified volume-average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built based on a volume-average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments successfully confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
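
The Monte Carlo failure-probability analysis used to carve out the design space can be sketched as follows. The regression coefficients, factor settings and noise level below are invented for illustration, not the paper's fitted model: a candidate operating point belongs to the design space only if its simulated probability of violating the diameter specification is acceptably low.

```python
import random

def predicted_diameter(ls_ratio, speed, b0=40.0, b1=450.0, b2=0.05):
    """Hypothetical regression for volume-average granule diameter (μm)
    as a function of L/S ratio and screw speed (rpm)."""
    return b0 + b1 * ls_ratio + b2 * speed

def failure_probability(ls_ratio, speed, n=20000, noise=15.0):
    """Monte Carlo estimate of P(diameter outside the 90-200 μm spec),
    propagating residual model uncertainty as Gaussian noise."""
    random.seed(0)
    fails = 0
    for _ in range(n):
        d = predicted_diameter(ls_ratio, speed) + random.gauss(0, noise)
        if not (90.0 <= d <= 200.0):
            fails += 1
    return fails / n

# A centre-point setting sits well inside the spec; an extreme L/S ratio does not.
print(failure_probability(0.20, 400), failure_probability(0.35, 400))
```

Sweeping such settings over a grid and thresholding the failure probability traces out the design-space boundary that the validation experiments then test.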

  6. Optimization and modeling of laccase production by Trametes versicolor in a bioreactor using statistical experimental design.

    PubMed

    Tavares, A P M; Coelho, M A Z; Agapito, M S M; Coutinho, J A P; Xavier, A M R B

    2006-09-01

    Experimental design and response surface methodologies were applied to optimize laccase production by Trametes versicolor in a bioreactor. The effects of three factors, initial glucose concentration (0 and 9 g/L), agitation (100 and 180 rpm), and pH (3.0 and 5.0), were evaluated to identify the significant effects and their interactions on laccase production. The pH of the medium was found to be the most important factor, followed by initial glucose concentration and the interaction of the two. Agitation did not seem to play an important role in laccase production, nor did the agitation × medium pH and agitation × initial glucose concentration interactions. Response surface analysis showed that an initial glucose concentration of 11 g/L and a pH controlled at 5.2 were the optimal conditions for laccase production by T. versicolor. Under these conditions, the predicted laccase activity was >10,000 U/L, in good agreement with the laccase activity obtained experimentally (11,403 U/L). In addition, a mathematical model of the bioprocess was developed. It provides a good description of the experimental profile observed and is capable of predicting biomass growth from secondary process variables.

  7. Experimental design approach to the process parameter optimization for laser welding of martensitic stainless steels in a constrained overlap configuration

    NASA Astrophysics Data System (ADS)

    Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.

    2011-02-01

    This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics. These were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were minimization of the weld width and maximization of the weld penetration depth, resistance length, and shearing force. Laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power can be reduced to 800-840 W and the welding speed increased to 4.75-5.37 m/min to obtain stronger and better welds.

  8. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  9. Optimal Experimental Design of Borehole Locations for Bayesian Inference of Past Ice Sheet Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Davis, A. D.; Huan, X.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    Borehole data are essential for calibrating ice sheet models. However, field expeditions for acquiring borehole data are often time-consuming, expensive, and dangerous. It is thus essential to plan the best sampling locations, maximizing the value of the data while minimizing costs and risks. We present an uncertainty quantification (UQ) workflow based on a rigorous probabilistic framework to achieve these objectives. First, we employ an optimal experimental design (OED) procedure to compute borehole locations that yield the highest expected information gain. We take into account practical considerations of location accessibility (e.g., proximity to research sites, terrain, and ice velocity may affect the feasibility of drilling) and robustness (e.g., real-time constraints such as weather may force researchers to drill at sub-optimal locations near those originally planned) by incorporating a penalty reflecting accessibility as well as sensitivity to deviations from the optimal locations. Next, we extract vertical temperature profiles from these boreholes and formulate a Bayesian inverse problem to reconstruct past surface temperatures. Using a model of temperature advection/diffusion, the top boundary condition (corresponding to surface temperatures) is calibrated via efficient Markov chain Monte Carlo (MCMC). The overall procedure can then be iterated to choose new optimal borehole locations for the next expeditions. Through this work, we demonstrate powerful UQ methods for designing experiments, calibrating models, making predictions, and assessing sensitivity, all performed in an uncertain environment. We develop a theoretical framework as well as practical software within an intuitive workflow, and illustrate their usefulness for combining data and models in environmental and climate research.
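    The expected-information-gain criterion behind such OED procedures can be illustrated with a toy linear-Gaussian model, for which the gain has a closed form. This is only a hedged sketch of the criterion, not the authors' ice-sheet workflow, and the candidate "locations" are hypothetical numbers:

    ```python
    import math

    # Toy optimal experimental design by expected information gain (EIG):
    # for y = theta * x + eps, eps ~ N(0, sn^2), prior theta ~ N(0, sp^2),
    #   EIG(x) = 0.5 * log(1 + sp^2 * x^2 / sn^2),
    # so the most informative design point is the one of highest sensitivity.
    sp, sn = 1.0, 0.5          # assumed prior and noise standard deviations

    def eig(x):
        return 0.5 * math.log(1.0 + (sp * x) ** 2 / sn ** 2)

    candidates = [0.2, 1.0, 2.5]   # hypothetical candidate design points
    best = max(candidates, key=eig)
    ```

    In the record above the model is a temperature advection/diffusion PDE rather than a line, so the EIG has no closed form and must be estimated numerically, with accessibility penalties added to the criterion.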

  10. Using Central Composite Experimental Design to Optimize the Degradation of Tylosin from Aqueous Solution by Photo-Fenton Reaction

    PubMed Central

    Sarrai, Abd Elaziz; Hanini, Salah; Merzouk, Nachida Kasbadji; Tassalit, Djilali; Szabó, Tibor; Hernádi, Klára; Nagy, László

    2016-01-01

    The feasibility of applying the photo-Fenton process to the treatment of an aqueous solution contaminated by the antibiotic tylosin was evaluated. Response surface methodology (RSM) based on a central composite design (CCD) was used to evaluate and optimize the effects of hydrogen peroxide concentration, ferrous ion concentration, and initial pH as independent variables on total organic carbon (TOC) removal as the response function. The interaction effects and optimal parameters were obtained using the MODDE software. The significance of the independent variables and their interactions was tested by means of analysis of variance (ANOVA) at a 95% confidence level. The results show that ferrous ion concentration and pH were the main parameters affecting TOC removal, while peroxide concentration had only a slight effect on the reaction. The optimum operating conditions to achieve maximum TOC removal were determined. The model prediction for maximum TOC removal was compared to the experimental result at the optimal operating conditions. Good agreement between the model prediction and the experimental results confirms the soundness of the developed model. PMID:28773551
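    The RSM step common to studies like this one, fitting a quadratic model to designed-experiment data and solving for its stationary point, can be sketched as follows (synthetic two-factor data with an assumed optimum, not the MODDE analysis or the study's three factors):

    ```python
    import numpy as np

    # Fit a quadratic response surface
    #   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    # by least squares, then solve grad y = 0 for the stationary point.
    rng = np.random.default_rng(1)
    X = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
    true = lambda x1, x2: 80 - 5 * (x1 - 0.3) ** 2 - 8 * (x2 + 0.2) ** 2
    y = true(X[:, 0], X[:, 1]) + rng.normal(0, 0.05, len(X))  # noisy "TOC removal"

    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    b = np.linalg.lstsq(A, y, rcond=None)[0]
    # Stationary point: solve [[2*b11, b12], [b12, 2*b22]] x = -[b1, b2]
    H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    xs = np.linalg.solve(H, -b[1:3])   # recovers roughly (0.3, -0.2)
    ```

    Negative pure-quadratic coefficients confirm the stationary point is a maximum, which is the check ANOVA-validated RSM optimizations rely on before reporting optimum conditions.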

  11. Optimization of poorly compactable drug tablets manufactured by direct compression using the mixture experimental design.

    PubMed

    Martinello, Tiago; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Taqueda, Maria Elena Santos; Consiglieri, Vladi O

    2006-09-28

    The poor flowability and compressibility characteristics of paracetamol are well known. As a result, paracetamol tablets are produced almost exclusively by wet granulation, a disadvantageous method compared to direct compression. The development of a new tablet formulation is still based on a large number of experiments and often relies merely on the experience of the analyst. The purpose of this study was to apply design of experiments (DOE) methodology to the development and optimization of tablet formulations containing high amounts of paracetamol (more than 70%) and manufactured by direct compression. Nineteen formulations, screened by DOE methodology, were produced with different proportions of Microcel 102, Kollydon VA 64, Flowlac, Kollydon CL 30, PEG 4000, Aerosil, and magnesium stearate. Tablet properties, except friability, were in accordance with USP 28th ed. requirements. These results were used to generate plots for optimization, mainly for friability. The physicochemical data for the optimized formulation were very close to those from the regression analysis, demonstrating that mixture design is a valuable tool for the research and development of new formulations.

  12. Experimental Design for Parameter Estimation of Gene Regulatory Networks

    PubMed Central

    Timmer, Jens

    2012-01-01

    Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723

  13. Experimental designs for a Benign Paroxysmal Positional Vertigo model

    PubMed Central

    2013-01-01

    Background The pathology of Benign Paroxysmal Positional Vertigo (BPPV) is detected by a clinician through maneuvers consisting of a series of consecutive head turns that trigger the symptoms of vertigo in the patient. A statistical model based on a new maneuver has been developed in order to calculate the volume of endolymph displaced after the maneuver. Methods A simplification of the Navier-Stokes problem from fluid dynamics was used to construct the model. In addition, the cubic splines commonly used in the kinematic control of robots were used to obtain an appropriate description of the different maneuvers. Experimental designs were then computed to obtain an optimal estimate of the model. Results D-optimal and c-optimal designs of experiments have been calculated. These experiments consist of a series of specific head turns of duration Δt and angle α to be performed by the clinician on the patient. The experimental designs obtained indicate the duration and angle of the maneuver to be performed, as well as the corresponding proportion of replicates. Thus, in the D-optimal design for 100 experiments, the maneuver consisting of a positive 30° pitch from the upright position, followed by a positive 30° roll, both with a duration of one and a half seconds, is repeated 47 times. The maneuver with 60°/6° pitch/roll during half a second is then repeated 16 times, and the maneuver with 90°/90° pitch/roll during half a second is repeated 37 times. Other designs with significant differences are computed and compared. Conclusions A biomechanical model was derived to provide a quantitative basis for the detection of BPPV. The robustness study for the D-optimal design, with respect to the choice of the nominal values of the parameters, shows high efficiencies for small variations and provides a guide to the researcher. Furthermore, c-optimal designs give valuable assistance in checking how efficient the D-optimal design is for the

  14. Experimental broadband absorption enhancement in silicon nanohole structures with optimized complex unit cells.

    PubMed

    Lin, Chenxi; Martínez, Luis Javier; Povinelli, Michelle L

    2013-09-09

    We design silicon membranes with nanohole structures having optimized complex unit cells that maximize broadband absorption. We fabricate the optimized design and measure the optical absorption. We demonstrate experimental broadband absorption approximately 3.5 times that of an equally thick thin film.

  15. Optimizing laboratory animal stress paradigms: The H-H* experimental design.

    PubMed

    McCarty, Richard

    2017-01-01

    Major advances in behavioral neuroscience have been facilitated by the development of consistent and highly reproducible experimental paradigms that have been widely adopted. In contrast, many different experimental approaches have been employed to expose laboratory mice and rats to acute versus chronic intermittent stress. An argument is advanced in this review that more consistent approaches to the design of chronic intermittent stress experiments would provide greater reproducibility of results across laboratories and greater reliability relating to various neural, endocrine, immune, genetic, and behavioral adaptations. As an example, the H-H* experimental design incorporates control, homotypic (H), and heterotypic (H*) groups and allows for comparisons across groups, where each animal is exposed to the same stressor, but that stressor has vastly different biological and behavioral effects depending upon each animal's prior stress history. Implementation of the H-H* experimental paradigm makes possible a delineation of transcriptional changes and neural, endocrine, and immune pathways that are activated in precisely defined stressor contexts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
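    The Fisher-information-based criteria compared in this record can be illustrated on the simplest case, a straight-line model, where the classical D-optimal design places observations at the endpoints of the interval (a generic sketch, not the paper's Prohorov-metric framework):

    ```python
    import numpy as np

    # D- and E-optimality for the model y = a + b*t on t in [0, 1]:
    # the (unit-noise) Fisher information matrix is sum_i f(t_i) f(t_i)^T
    # with f(t) = (1, t). Compare a uniform design against the classical
    # D-optimal design that splits the budget between the endpoints.
    def fim(times):
        F = np.array([[1.0, t] for t in times])
        return F.T @ F

    uniform = np.linspace(0, 1, 6)
    endpoints = [0, 0, 0, 1, 1, 1]
    d_unif, d_end = (np.linalg.det(fim(x)) for x in (uniform, endpoints))      # D: determinant
    e_unif, e_end = (np.linalg.eigvalsh(fim(x))[0] for x in (uniform, endpoints))  # E: min eigenvalue
    ```

    Both criteria favor the endpoint design here; the paper's point is that such criteria can all be treated uniformly as optimization over sampling distributions.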

  17. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis that is easily applicable to high-order non-linear systems modelled in state-space form and implementable in simulators with a time-discrete approach. The procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the construction of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to the optimal experimental design of an ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the heterotrophic biomass concentration XBH.

  18. Experimental design and response surface modelling for optimization of vat dye from water by nano zero valent iron (NZVI).

    PubMed

    Arabi, Simin; Sohrabi, Mahmoud Reza

    2013-01-01

    In this study, NZVI particles were prepared and studied for the removal of Vat Green 1 dye from aqueous solution. A four-factor central composite design (CCD) combined with response surface modeling (RSM) was employed to evaluate the combined effects of the variables and to maximize dye removal by the prepared NZVI, based on 30 experimental runs in a batch study. Four independent variables, viz. NZVI dose (0.1-0.9 g/L), pH (1.5-9.5), contact time (20-100 s), and initial dye concentration (10-50 mg/L), were transformed into coded values, and a quadratic model was built to predict the responses. The significance of the independent variables and their interactions was tested by analysis of variance (ANOVA). Adequacy of the model was tested by the correlation between experimental and predicted values of the response and by enumeration of prediction errors. The ANOVA results indicated that the proposed model can be used to navigate the design space. Optimization of the variables for maximum adsorption of the dye by NZVI particles was performed using the quadratic model. The predicted maximum adsorption efficiency (96.97%) under the optimum process conditions (NZVI dose 0.5 g/L, pH 4, contact time 60 s, and initial dye concentration 30 mg/L) was very close to the experimental value (96.16%) determined in a batch experiment. In the optimization, the R2 and adjusted R2 correlation coefficients for the model were evaluated as 0.95 and 0.90, respectively.

  19. Efficient experimental design for uncertainty reduction in gene regulatory networks.

    PubMed

    Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R

    2015-01-01

    An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.
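    The MOCU quantity central to this method can be illustrated with a toy uncertainty class of two equally likely networks and three interventions (the costs are illustrative assumptions, not the paper's gene-network model):

    ```python
    # Toy mean objective cost of uncertainty (MOCU): the expected extra cost
    # of applying the single robust intervention instead of the intervention
    # that would be optimal if the true network were known.
    costs = {            # costs[intervention] = (cost under net 1, cost under net 2)
        "A": (1.0, 4.0),
        "B": (3.0, 1.5),
        "C": (2.0, 2.0),
    }
    p = (0.5, 0.5)       # prior probabilities of the two candidate networks

    expected = {a: p[0] * c[0] + p[1] * c[1] for a, c in costs.items()}
    robust = min(expected, key=expected.get)                  # best on average
    best_per_net = [min(c[i] for c in costs.values()) for i in (0, 1)]
    mocu = expected[robust] - (p[0] * best_per_net[0] + p[1] * best_per_net[1])
    ```

    The experimental design problem in the record is then to pick the experiment whose outcome is expected to shrink this quantity the most, which is what the network-reduction scheme approximates cheaply.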

  20. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  1. Experimental Optimization of a Free-to-Rotate Wing for Small UAS

    NASA Technical Reports Server (NTRS)

    Logan, Michael J.; DeLoach, Richard; Copeland, Tiwana; Vo, Steven

    2014-01-01

    This paper discusses an experimental investigation conducted to optimize a free-to-rotate wing for use on a small unmanned aircraft system (UAS). Although free-to-rotate wings have been used for decades on various small UAS and small manned aircraft, little is known about how to optimize these unusual wings for a specific application. The paper discusses some of the design rationale of the basic wing. In addition, three main parameters were selected for "optimization": wing camber, wing pivot location, and wing center-of-gravity (c.g.) location. A small apparatus was constructed to enable simple experimental analysis of these parameters. A design-of-experiments series of tests was first conducted to discern which of the main optimization parameters were most likely to have the greatest impact on the outputs of interest, namely, some measure of "stability", some measure of the lift generated at the neutral position, and how quickly the wing "recovers" from an upset. A second set of tests was conducted to develop a response-surface numerical representation of these outputs as functions of the three primary inputs. The response-surface numerical representations were then used to develop an "optimum" within the trade space investigated. The results of the optimization were then tested experimentally to validate the predictions.

  2. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design

  3. A new optimal sliding mode controller design using scalar sign function.

    PubMed

    Singla, Mithun; Shieh, Leang-San; Song, Gangbing; Xie, Linbo; Zhang, Yongpeng

    2014-03-01

    This paper presents a new optimal sliding mode controller using the scalar sign function method. A smooth, continuous-time scalar sign function is used to replace the discontinuous switching function in the design of a sliding mode controller. The proposed sliding mode controller is designed using an optimal Linear Quadratic Regulator (LQR) approach. The sliding surface of the system is designed using stable eigenvectors and the scalar sign function. Controller simulations are compared with another existing optimal sliding mode controller. To test the effectiveness of the proposed controller, the controller is implemented on an aluminum beam with piezoceramic sensor and actuator for vibration control. This paper includes the control design and stability analysis of the new optimal sliding mode controller, followed by simulation and experimental results. The simulation and experimental results show that the proposed approach is very effective. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Actinobacteria consortium as an efficient biotechnological tool for mixed polluted soil reclamation: Experimental factorial design for bioremediation process optimization.

    PubMed

    Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra

    2018-01-15

    The objective of the present work was to establish the optimal biological and physicochemical parameters for removing lindane and Cr(VI) simultaneously, at high and/or low pollutant concentrations, from soil by an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. The final aim was to treat real soils from the northwest of Argentina contaminated with Cr(VI) and/or lindane, employing the optimal biological and physicochemical conditions. After determining the optimal inoculum concentration (2 g kg-1), an experimental design model with four factors (temperature, moisture, and initial concentrations of Cr(VI) and lindane) was employed for predicting the system behavior during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature differed by scenario: for low initial concentrations of both pollutants, it was 25°C; for low initial Cr(VI) and high initial lindane concentrations, it was 30°C; and for high initial Cr(VI) concentrations, it was 35°C. To confirm the model adequacy and the validity of the optimization procedure, experiments were performed on six real contaminated soil samples. The defined actinobacteria consortium reduced the contaminant concentrations in five of the six samples at laboratory scale, employing the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Bioremediation of chlorpyrifos contaminated soil by two phase bioslurry reactor: Processes evaluation and optimization by Taguchi's design of experimental (DOE) methodology.

    PubMed

    Pant, Apourv; Rai, J P N

    2018-04-15

    A two-phase bioreactor was designed, constructed, and developed to evaluate chlorpyrifos remediation. Six biotic and abiotic factors (substrate loading rate, slurry-phase pH, slurry-phase dissolved oxygen (DO), soil:water ratio, temperature, and soil microflora load) were evaluated by a design of experiments (DOE) methodology employing Taguchi's orthogonal array (OA). The six selected factors were considered at two levels in an L-8 array (2^7, 15 experiments) in the experimental design. The optimum operating conditions obtained from this methodology enhanced chlorpyrifos degradation from 283.86 µg/g to 955.364 µg/g, an overall enhancement of 70.34%. In the present study, with the help of a few well-defined experimental parameters, a mathematical model was constructed to understand the complex bioremediation process and to optimize the parameters with good accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
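    The L8(2^7) orthogonal array used in Taguchi designs like this one can be constructed directly; the sketch below codes the two levels as 0/1 and builds the seven columns as XOR combinations of three binary generators, which guarantees pairwise balance:

    ```python
    from itertools import product

    # Construct the L8(2^7) orthogonal array: the 8 runs are the settings of
    # three binary generators, and the 7 columns are the dot products (mod 2)
    # of each run with every nonzero generator combination.
    gens = list(product([0, 1], repeat=3))                   # 8 runs
    cols = [c for c in product([0, 1], repeat=3) if any(c)]  # 7 columns
    L8 = [[sum(g * b for g, b in zip(run, col)) % 2 for col in cols]
          for run in gens]
    ```

    In any two columns of the resulting array, each of the four level combinations appears exactly twice, which is the orthogonality property that lets main effects be estimated independently from only 8 runs.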

  6. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination using the least number of pumping and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment, and on the Bayesian model averaging (BMA) framework. A max-min program is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic five-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed to reflect uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data, as well as of the uncertainty sources, on potential pumping and observation locations.

  7. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective of the gene regulatory network modeling, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one that leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method performs close to the optimal method at a lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515
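
The MOCU concept in this record can be illustrated with a toy uncertainty class: two candidate networks with prior probabilities and a small intervention-cost table (all numbers hypothetical, not from the paper). MOCU is the expected cost of the best robust intervention minus the expected cost one would pay if the true network were known for each case.

```python
# Toy MOCU computation (hypothetical numbers, not the paper's data).
# Uncertainty class: two candidate networks with prior probabilities.
priors = {"net1": 0.6, "net2": 0.4}

# cost[intervention][network]: cost incurred by applying an
# intervention when the given network is the true one.
cost = {
    "a": {"net1": 1.0, "net2": 4.0},
    "b": {"net1": 3.0, "net2": 2.0},
}

# Expected cost of each intervention over the uncertainty class.
expected = {
    act: sum(priors[m] * c[m] for m in priors)
    for act, c in cost.items()
}

# Robust intervention: minimizes expected cost across models.
robust_cost = min(expected.values())

# Expected cost if we always knew the true model (per-model optimum).
oracle_cost = sum(
    priors[m] * min(cost[act][m] for act in cost) for m in priors
)

# MOCU: expected cost increase caused by model uncertainty.
mocu = robust_cost - oracle_cost
print(mocu)
```

An experiment that fully resolves which network is true would drive the remaining MOCU to zero, which is why the method ranks candidate experiments by their expected remaining MOCU.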

  8. A More Rigorous Quasi-Experimental Alternative to the One-Group Pretest-Posttest Design.

    ERIC Educational Resources Information Center

    Johnson, Craig W.

    1986-01-01

    A simple quasi-experimental design is described which may have utility in a variety of applied and laboratory research settings where ordinarily the one-group pretest-posttest pre-experimental design might otherwise be the procedure of choice. The design approaches the internal validity of true experimental designs while optimizing external…

  9. Optimization of scaffold design for bone tissue engineering: A computational and experimental study.

    PubMed

    Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R

    2014-04-01

    In bone tissue engineering, the scaffold must not only allow the diffusion of cells, nutrients and oxygen but also provide adequate mechanical support. One way to ensure the scaffold has the right properties is to design it with computational tools and then build it to the resulting optimized design specifications using additive manufacturing. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering, and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of the porosity target, 40% for strut size and 55% for throat size in the building direction, and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of this kind of computational algorithm to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  10. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on an experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that a conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  11. Experimental design to optimize an Haemophilus influenzae type b conjugate vaccine made with hydrazide-derivatized tetanus toxoid.

    PubMed

    Laferriere, Craig; Ravenscroft, Neil; Wilson, Seanette; Combrink, Jill; Gordon, Lizelle; Petre, Jean

    2011-10-01

    The introduction of type b Haemophilus influenzae conjugate vaccines into routine vaccination schedules has significantly reduced the burden of this disease; however, widespread use in developing countries is constrained by vaccine costs, and there is a need for a simple and high-yielding manufacturing process. The vaccine is composed of purified capsular polysaccharide conjugated to an immunogenic carrier protein. To improve the yield and rate of the reductive amination conjugation reaction used to make this vaccine, some of the carboxyl groups of the carrier protein, tetanus toxoid, were modified to hydrazides, which are more reactive than the ε-amine of lysine. Other reaction parameters, including the ratio of the reactants, the size of the polysaccharide, the temperature and the salt concentration, were also investigated. Experimental design was used to minimize the number of experiments required to optimize all of these parameters and obtain a conjugate in high yield with the target characteristics. It was found that increasing the reactant ratio and decreasing the size of the polysaccharide increased the polysaccharide:protein mass ratio in the product. Temperature and salt concentration did not improve this ratio. These results are consistent with a diffusion-controlled rate-limiting step in the conjugation reaction. Excessive modification of tetanus toxoid with hydrazide was correlated with reduced yield and lower free polysaccharide. This was attributed to a greater tendency toward precipitation, possibly due to changes in the isoelectric point. Experimental design and multiple regression helped identify key parameters to control and thereby optimize this conjugation reaction.

  12. Optimization of primaquine diphosphate tablet formulation for controlled drug release using the mixture experimental design.

    PubMed

    Duque, Marcelo Dutra; Kreidel, Rogério Nepomuceno; Taqueda, Maria Elena Santos; Baby, André Rolim; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Consiglieri, Vladi Olga

    2013-01-01

    A tablet formulation based on a hydrophilic matrix with controlled drug release was developed, and the effect of polymer concentrations on the release of primaquine diphosphate was evaluated. To achieve this purpose, a 20-run, four-factor mixture design with multiple constraints on the proportions of the components was employed to obtain the tablet compositions. Drug release was determined by an in vitro dissolution study in phosphate buffer solution at pH 6.8. The fitted polynomial functions described the behavior of the mixture on simplex coordinate systems, allowing the effect of each factor (polymer) on the tablet characteristics to be studied. Based on response surface methodology, a tablet composition was optimized with the purpose of obtaining a primaquine diphosphate release close to zero-order kinetics. This formulation released 85.22% of the drug over 8 h, and its release kinetics, analyzed with the Korsmeyer-Peppas model (Adj-R² = 0.99295), confirmed that both diffusion and erosion contributed to the mechanism of drug release. The data from the optimized formulation were very close to the predictions from the statistical analysis, demonstrating that mixture experimental design can be used to optimize primaquine diphosphate dissolution from hydroxypropylmethyl cellulose and polyethylene glycol matrix tablets.
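
The Korsmeyer-Peppas fit used in this record reduces to a straight line in log-log space, ln(Mt/M∞) = ln k + n·ln t, and the fitted release exponent n indicates the mechanism (for tablets, n ≈ 0.45 suggests Fickian diffusion and intermediate values suggest coupled diffusion and erosion). A minimal sketch with synthetic release data generated from hypothetical k and n values:

```python
import math

# Synthetic release fractions generated from Mt/Minf = k * t**n with
# k = 0.25, n = 0.60 (hypothetical values, not the paper's data).
times = [0.5, 1.0, 2.0, 4.0, 6.0, 8.0]      # hours
frac = [0.25 * t ** 0.60 for t in times]    # fraction released

# Ordinary least squares on the log-transformed model:
#   ln(Mt/Minf) = ln k + n * ln t
X = [math.log(t) for t in times]
Y = [math.log(f) for f in frac]
mx = sum(X) / len(X)
my = sum(Y) / len(Y)
n = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / \
    sum((x - mx) ** 2 for x in X)
ln_k = my - n * mx
k = math.exp(ln_k)

print(n, k)  # slope is the release exponent, intercept gives k
```

Because the synthetic data follow the power law exactly, the regression recovers the generating n and k; with real dissolution data the same fit would also report a goodness-of-fit statistic such as the adjusted R².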

  13. Structural Optimization of a Force Balance Using a Computational Experiment Design

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2002-01-01

    This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, go undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and to minimize computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, providing a systematic foundation for advancements in structural design.

  14. Optimization of Xylanase Production from Penicillium sp.WX-Z1 by a Two-Step Statistical Strategy: Plackett-Burman and Box-Behnken Experimental Design

    PubMed Central

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp.WX-Z1 using Plackett-Burman design and Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp.WX-Z1 and subsequent use of the response surface methodology (RSM) was further optimized for xylanase production by Box-Behnken design. The important nutrient sources in the culture medium, identified by the initial screening method of Placket-Burman, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, the xylanase production under optimal condition reached 46.50 U/mL and an increase in xylanase activity of 1.34-fold was obtained compared with the original medium for fermentation carried out in a 30-L bioreactor. PMID:22949884
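
A Plackett-Burman screening matrix like the one used in this two-step strategy can be generated by cyclically shifting a standard generator row and appending a row of all low levels; main effects then come from simple column contrasts. The sketch below uses the standard 12-run generator and hypothetical xylanase activities (not the study's measurements).

```python
# Standard 12-run Plackett-Burman design (up to 11 two-level factors):
# cyclic shifts of the generator row plus a final all-minus row.
gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
rows = [gen[-i:] + gen[:-i] for i in range(11)] + [[-1] * 11]

# Balance and orthogonality checks: columns sum to zero and are
# pairwise orthogonal, so main effects are estimated independently.
for i in range(11):
    assert sum(r[i] for r in rows) == 0
    for j in range(i + 1, 11):
        assert sum(r[i] * r[j] for r in rows) == 0

# Hypothetical activities (U/mL) for the 12 runs -- placeholders.
y = [31, 44, 18, 39, 47, 42, 15, 20, 17, 35, 22, 12]

# Main effect of each factor: mean at +1 minus mean at -1
# (each column has six runs at each level).
effects = [
    sum(r[c] * yi for r, yi in zip(rows, y)) / 6 for c in range(11)
]

# The factors with the largest |effect| would go forward to the
# Box-Behnken / RSM optimization step.
screened = sorted(range(11), key=lambda c: -abs(effects[c]))[:5]
print(screened)
```

This mirrors the record's workflow: a cheap saturated screen first, then a response-surface design on only the surviving factors.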

  15. Optimization of Xylanase production from Penicillium sp.WX-Z1 by a two-step statistical strategy: Plackett-Burman and Box-Behnken experimental design.

    PubMed

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrient sources in a culture medium for the production of xylanase from Penicillium sp. WX-Z1 using Plackett-Burman and Box-Behnken designs. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp. WX-Z1, and xylanase production was then further optimized by response surface methodology (RSM) using a Box-Behnken design. The important nutrient sources in the culture medium, identified by the initial Plackett-Burman screening, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, xylanase production under the optimal conditions reached 46.50 U/mL, a 1.34-fold increase in xylanase activity compared with the original medium, for fermentation carried out in a 30-L bioreactor.

  16. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
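
The idea of selecting experiments that jointly constrain parameters can be sketched with a two-parameter decay model y(t) = A·exp(−k·t): each candidate measurement time contributes a parameter-sensitivity vector, and a D-optimal pair of times maximizes the determinant of the summed Fisher information matrix. The model, candidate times, and noise-free setting below are illustrative assumptions, not the paper's EGF-NGF pathway model.

```python
import math
from itertools import combinations

# Two-parameter model y(t) = A * exp(-k * t); nominal parameter values.
A, k = 1.0, 1.0

def sensitivities(t):
    """Partial derivatives of y(t) w.r.t. (A, k) at the nominal point."""
    e = math.exp(-k * t)
    return (e, -A * t * e)  # (dy/dA, dy/dk)

def fim_det(times):
    """Determinant of the 2x2 Fisher information summed over the times."""
    m11 = m12 = m22 = 0.0
    for t in times:
        sA, sk = sensitivities(t)
        m11 += sA * sA
        m12 += sA * sk
        m22 += sk * sk
    return m11 * m22 - m12 * m12

# D-optimal choice of two measurement times from a candidate set:
# the pair whose sensitivity vectors are large AND complementary.
candidates = [0.5, 1.0, 2.0, 3.0]
best = max(combinations(candidates, 2), key=fim_det)
print(best)
```

A single well-chosen time constrains only one direction in parameter space; the determinant criterion rewards pairs whose sensitivity vectors span both directions, which is the "complementary experiments" intuition in the record.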

  17. Optimal Experiment Design for Thermal Characterization of Functionally Graded Materials

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    The purpose of the project was to investigate methods to accurately verify that designed materials meet thermal specifications. The project involved heat transfer calculations and optimization studies; no laboratory experiments were performed. One part of the research involved the study of materials in which conduction heat transfer predominates. Results include techniques to choose among several experimental designs, and protocols for determining the optimum experimental conditions for the determination of thermal properties. Metal foam materials, in which both conduction and radiation heat transfer are present, were also studied. Results of this work include procedures to optimize the design of experiments to accurately measure both conductive and radiative thermal properties. Detailed results in the form of three journal papers have been appended to this report.

  18. Optimal Design of an Automotive Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Fagehi, Hassan; Attar, Alaa; Lee, Hosung

    2018-07-01

    The consumption of energy continues to increase at an exponential rate, especially in conventional automobiles. Approximately 40% of the fuel supplied to a vehicle is lost as waste heat exhausted to the environment. The desire to improve fuel efficiency by recovering the exhaust waste heat in automobiles has therefore become an important subject. A thermoelectric generator (TEG) has the potential to convert exhaust waste heat into electricity, thereby improving fuel economy. The remarkable amount of research being conducted on TEGs indicates that this technology will have a bright future in power generation. The current study discusses the optimal design of an automotive exhaust TEG. An experimental study was conducted to verify a model that uses the ideal (standard) equations along with effective material properties. The model is reasonably well verified by the experimental work, mainly owing to the use of the effective material properties. The thermoelectric module used in the experiment was then optimized using a developed optimal design theory (a dimensionless analysis technique).

  19. Optimal Design of an Automotive Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Fagehi, Hassan; Attar, Alaa; Lee, Hosung

    2018-04-01

    The consumption of energy continues to increase at an exponential rate, especially in conventional automobiles. Approximately 40% of the fuel supplied to a vehicle is lost as waste heat exhausted to the environment. The desire to improve fuel efficiency by recovering the exhaust waste heat in automobiles has therefore become an important subject. A thermoelectric generator (TEG) has the potential to convert exhaust waste heat into electricity, thereby improving fuel economy. The remarkable amount of research being conducted on TEGs indicates that this technology will have a bright future in power generation. The current study discusses the optimal design of an automotive exhaust TEG. An experimental study was conducted to verify a model that uses the ideal (standard) equations along with effective material properties. The model is reasonably well verified by the experimental work, mainly owing to the use of the effective material properties. The thermoelectric module used in the experiment was then optimized using a developed optimal design theory (a dimensionless analysis technique).

  20. Sequential experimental design based generalised ANOVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on an experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that a conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  1. IsoDesign: a software for optimizing the design of 13C-metabolic flux analysis experiments.

    PubMed

    Millard, Pierre; Sokol, Serguei; Letisse, Fabien; Portais, Jean-Charles

    2014-01-01

    The growing demand for 13C-metabolic flux analysis (13C-MFA) in the fields of metabolic engineering and systems biology is driving the need to rationalize expensive and time-consuming 13C-labeling experiments. Experimental design is a key step in improving both the number of fluxes that can be calculated from a set of isotopic data and the precision of the flux values. We present IsoDesign, a software tool that enables these parameters to be maximized by optimizing the isotopic composition of the label input. It can be applied to 13C-MFA investigations using a broad panel of analytical tools (MS, MS/MS, 1H NMR, 13C NMR, etc.) individually or in combination. It includes a visualization module to intuitively select the optimal label input depending on the biological question to be addressed. Applications of IsoDesign are described, with an example of the entire 13C-MFA workflow from the experimental design to the flux map, including important practical considerations. IsoDesign makes the experimental design of 13C-MFA experiments more accessible to a wider biological community. IsoDesign is distributed under an open source license at http://metasys.insa-toulouse.fr/software/isodes/. © 2013 Wiley Periodicals, Inc.

  2. Limit of detection of 15N by gas chromatography-atomic emission detection: Optimization using an experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deruaz, D.; Bannier, A.; Pionchon, C.

    1995-08-01

    This paper deals with the optimal conditions for the detection of 15N, determined using a four-factor experimental design, from [2-13C, 1,3-15N] caffeine measured with an atomic emission detector (AED) coupled to gas chromatography (GC). Owing to the capability of a photodiode array, the AED can simultaneously detect several elements using their specific emission lines within a wavelength range of 50 nm. Thus, the emissions of 15N and 14N are detected simultaneously at 420.17 nm and 421.46 nm, respectively. Four independent experimental factors were tested: (1) helium flow rate (plasma gas); (2) methane pressure (reactant gas); (3) oxygen pressure; (4) hydrogen pressure. It was shown that these four gases had a significant influence on the analytical response of 15N. The linearity of the detection was determined using 15N amounts ranging from 1.52 pg to 19 ng under the optimal conditions obtained from the experimental design. The limit of detection was studied using different methods: it was 1.9 pg/s according to the IUPAC (International Union of Pure and Applied Chemistry) method, 2.3 pg/s by the method proposed by Quimby and Sullivan, and 29 pg/s by that of Oppenheimer. For each determination, an internal standard, 1-isobutyl-3,7-dimethylxanthine, was used. The results clearly demonstrate that GC-AED is sensitive and selective enough to detect and measure 15N-labelled molecules after gas chromatographic separation.

  3. Optimization of cold-adapted lysozyme production from the psychrophilic yeast Debaryomyces hansenii using statistical experimental methods.

    PubMed

    Wang, Quanfu; Hou, Yanhua; Yan, Peisheng

    2012-06-01

    Statistical experimental designs were employed to optimize the culture conditions for cold-adapted lysozyme production by the psychrophilic yeast Debaryomyces hansenii. In the first step of optimization, using a Plackett-Burman design (PBD), peptone, glucose, temperature, and NaCl were identified as significant variables affecting lysozyme production; the medium was then further optimized using a four-factor central composite design (CCD) to understand the interactions among these variables and to determine their optimal levels. A quadratic model was developed and validated. Compared to the initial level (18.8 U/mL), the maximum lysozyme production observed (65.8 U/mL) represents an approximately 3.5-fold increase under the optimized conditions. This is the first time cold-adapted lysozyme production has been optimized using statistical experimental methods, and such improved production will facilitate the application of microbial lysozyme. Thus, D. hansenii lysozyme may be a good new resource for the industrial production of cold-adapted lysozymes. © 2012 Institute of Food Technologists®

  4. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τpulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (outputs), using Design of Experiment (DOE) methods: the Taguchi methodology and a response surface methodology (RSM). A design is first created using the MINITAB program, and the laser marking process is then performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  5. Design and experimental realization of an optimal scheme for teleportation of an n-qubit quantum state

    NASA Astrophysics Data System (ADS)

    Sisodia, Mitali; Shukla, Abhishek; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    An explicit scheme (quantum circuit) is designed for the teleportation of an n-qubit quantum state. It is established that the proposed scheme requires an optimal amount of quantum resources, whereas larger amounts of quantum resources have been used in a large number of recently reported teleportation schemes for quantum states that can be viewed as special cases of the general n-qubit state considered here. A trade-off between our knowledge about the quantum state to be teleported and the amount of quantum resources required for the task is observed. A proof-of-principle experimental realization of the proposed scheme (for a 2-qubit state) is also performed using the 5-qubit superconducting IBM quantum computer. The experimental results show that the state has been teleported with high fidelity. The relevance of the proposed teleportation scheme is also discussed in the context of controlled, bidirectional, and bidirectional controlled state teleportation.

  6. Experimental design data for the biosynthesis of citric acid using Central Composite Design method.

    PubMed

    Kola, Anand Kishore; Mekala, Mallaiah; Goli, Venkat Reddy

    2017-06-01

    In the present investigation, we report the statistical design and optimization of significant variables for the microbial production of citric acid from sucrose in the presence of the filamentous fungus A. niger NCIM 705. Various combinations of experiments were designed with the Central Composite Design (CCD) of Response Surface Methodology (RSM) for the production of citric acid as a function of six variables: initial sucrose concentration, initial medium pH, fermentation temperature, incubation time, stirrer rotational speed, and oxygen flow rate. From the experimental data, a statistical model for the process has been developed. The optimum conditions reported in the present article are an initial sucrose concentration of 163.6 g/L, initial medium pH of 5.26, stirrer rotational speed of 247.78 rpm, incubation time of 8.18 days, fermentation temperature of 30.06 °C and oxygen flow rate of 1.35 lpm. Under the optimum conditions the predicted maximum citric acid concentration is 86.42 g/L. Experimental validation carried out at the optimal values yielded 82.0 g/L of citric acid, showing good agreement between the model and the experimental data.
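
The CCD/RSM workflow in this record — design points, a fitted quadratic model, then a predicted optimum — can be sketched for two coded factors. The response surface and its coefficients below are synthetic illustrations, not the citric-acid data; with noise-free quadratic responses the least-squares fit recovers the surface, and hence the stationary point, exactly.

```python
import math

# Central composite design for two coded factors: 4 factorial points,
# 4 axial points at alpha = sqrt(2), and a center point.
a = math.sqrt(2.0)
design = [(-1, -1), (-1, 1), (1, -1), (1, 1),
          (-a, 0), (a, 0), (0, -a), (0, a), (0, 0)]

# Synthetic response from a known quadratic surface (hypothetical):
# y = 79.625 + x1 - x2 - x1^2 - 2*x2^2, maximum at (0.5, -0.25).
def true_surface(x1, x2):
    return 79.625 + x1 - x2 - x1 ** 2 - 2 * x2 ** 2

y = [true_surface(x1, x2) for x1, x2 in design]

# Full quadratic model matrix: [1, x1, x2, x1^2, x2^2, x1*x2].
X = [[1, x1, x2, x1 * x1, x2 * x2, x1 * x2] for x1, x2 in design]

def solve(M, v):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k2 in range(c, n + 1):
                M[r][k2] -= f * M[c][k2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Normal equations X'X b = X'y give the least-squares coefficients.
XtX = [[sum(r[i] * r[j] for r in X) for j in range(6)] for i in range(6)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(6)]
b0, b1, b2, b11, b22, b12 = solve(XtX, Xty)

# Stationary point of the fitted quadratic: solve grad y = 0.
det = 4 * b11 * b22 - b12 ** 2
x1s = (b12 * b2 - 2 * b22 * b1) / det
x2s = (b12 * b1 - 2 * b11 * b2) / det
print(x1s, x2s)
```

In a real study the coded optimum would be mapped back to natural units (sucrose concentration, pH, and so on) and confirmed by a validation run, exactly as this record describes.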

  7. Solar-Diesel Hybrid Power System Optimization and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Jacobus, Headley Stewart

    As of 2008, 1.46 billion people, or 22 percent of the world's population, were without electricity. Many of these people live in remote areas where decentralized generation is the only method of electrification. Most mini-grids are powered by diesel generators, but hybrid power systems are becoming a reliable way to incorporate renewable energy while also reducing total system cost. This thesis quantifies the measurable operational costs for an experimental hybrid power system in Sierra Leone. Two software programs, Hybrid2 and HOMER, are used during the system design and subsequent analysis. Experimental data from the installed system are used to validate the two programs and to quantify the savings created by each component within the hybrid system. This thesis bridges the gap between design optimization studies, which frequently lack subsequent validation, and experimental hybrid system performance studies.

  8. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  9. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405

  10. Optimization of the capillary zone electrophoresis method for Huperzine A determination using experimental design and artificial neural networks.

    PubMed

    Hameda, A Ben; Elosta, S; Havel, J

    2005-08-19

    Huperzine A, a natural product from Huperzia serrata, is an important compound used as a food supplement to treat Alzheimer's disease, and has also been proposed as a prospective prophylactic antidote against organophosphate poisoning. In this work, a simple and fast capillary electrophoresis (CE) procedure with UV detection (at 230 nm) for the determination of Huperzine A was developed and optimized. The CE determination of Huperzine A was optimized using a combination of experimental design (ED) and artificial neural networks (ANN). In the first stage of optimization, the experiments were done according to the appropriate ED. Evaluation of the data by ANN yielded the optimal values of several analytical parameters (peak area, peak height, and analysis time). The optimal conditions found were 50 mM acetate buffer at pH 4.6, a separation voltage of 10 kV, a hydrodynamic injection time of 10 s, and a temperature of 25 °C. The developed method shows good repeatability (relative standard deviation, R.S.D. = 0.9%) and has been applied to the determination of Huperzine A in various pharmaceutical products and in biological fluids. The limit of detection (LOD) was 0.226 ng/ml in aqueous media and 0.233 ng/ml in serum.

  11. "Real-time" disintegration analysis and D-optimal experimental design for the optimization of diclofenac sodium fast-dissolving films.

    PubMed

    El-Malah, Yasser; Nazzal, Sami

    2013-01-01

    The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a tertiary mixture of pullulan, polyvinylpyrrolidone and hypromellose. Disintegration studies were performed in real-time by probe spectroscopy to detect the onset of film disintegration. Tensile strength and elastic modulus of the films were measured by texture analysis. Disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-Optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium based on the optimized composition was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film was found to commence almost immediately, with 50% of the drug released within one minute. Tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.

  12. Optimality models in the age of experimental evolution and genomics.

    PubMed

    Bull, J J; Wang, I-N

    2010-09-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure--whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation--an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution.

  13. Optimization of photocatalytic degradation of methyl blue using silver ion doped titanium dioxide by combination of experimental design and response surface approach.

    PubMed

    Sahoo, C; Gupta, A K

    2012-05-15

    Photocatalytic degradation of methyl blue (MYB) was studied using Ag⁺-doped TiO₂ under UV irradiation in a batch reactor. Catalyst dose, initial concentration of dye, and pH of the reaction mixture were found to influence the degradation process most. The degradation was found to be effective over the ranges of catalyst dose (0.5-1.5 g/L), initial dye concentration (25-100 ppm), and reaction-mixture pH (5-9). Using the three-factor, three-level Box-Behnken design-of-experiments technique, 15 sets of experiments were designed covering the effective ranges of the influential parameters. The results of the experiments were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing the functional relationship between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values of the parameters were a dose of Ag⁺-doped TiO₂ of 0.99 g/L, an initial MYB concentration of 57.68 ppm, and a reaction-mixture pH of 7.76. Under the optimal conditions the predicted decolorization and mineralization rates of MYB were 95.97% and 80.33%, respectively. Regression analysis with R² values >0.99 showed the goodness of fit of the experimental results to the predicted values. Copyright © 2012 Elsevier B.V. All rights reserved.
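
A three-factor Box-Behnken design like the 15-run one above can be generated directly. The coded construction below is the standard one, and the decoding step maps coded levels onto the abstract's factor ranges (dose 0.5-1.5 g/L, dye 25-100 ppm, pH 5-9):

```python
from itertools import combinations, product

def box_behnken(k, center_runs=3):
    """Coded Box-Behnken design for k factors: each pair of factors takes
    all +/-1 combinations while the rest sit at 0, plus centre points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(tuple(run))
    runs += [(0,) * k] * center_runs
    return runs

def decode(run, lows, highs):
    """Map a coded run (-1/0/+1 per factor) to real factor settings."""
    return tuple(lo + (c + 1) / 2 * (hi - lo)
                 for c, lo, hi in zip(run, lows, highs))

design = box_behnken(3)                       # 12 edge runs + 3 centres = 15
real = [decode(r, (0.5, 25, 5), (1.5, 100, 9)) for r in design]
```

Fitting the quadratic RSM models then reduces to ordinary least squares on the 15 decoded runs and their measured responses.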

  14. Development, optimization, and in vitro characterization of dasatinib-loaded PEG functionalized chitosan capped gold nanoparticles using Box-Behnken experimental design.

    PubMed

    Adena, Sandeep Kumar Reddy; Upadhyay, Mansi; Vardhan, Harsh; Mishra, Brahmeshwar

    2018-03-01

    The purpose of this research study was to develop, optimize, and characterize dasatinib-loaded polyethylene glycol (PEG) stabilized chitosan-capped gold nanoparticles (DSB-PEG-Ch-GNPs). Gold (III) chloride hydrate was reduced with chitosan and the resulting nanoparticles were coated with thiol-terminated PEG and loaded with dasatinib (DSB). A Plackett-Burman design (PBD) followed by a Box-Behnken experimental design (BBD) was employed to optimize the process parameters. Polynomial equations, contour, and 3D response surface plots were generated to relate the factors and responses. The optimized DSB-PEG-Ch-GNPs were characterized by FTIR, XRD, HR-SEM, EDX, TEM, SAED, AFM, DLS, and ZP. The results of the optimized DSB-PEG-Ch-GNPs showed a particle size (PS) of 24.39 ± 1.82 nm, an apparent drug content (ADC) of 72.06 ± 0.86%, and a zeta potential (ZP) of -13.91 ± 1.21 mV. The observed responses and the predicted values of the optimized process were found to be close. The shape and surface morphology studies showed that the resulting DSB-PEG-Ch-GNPs were spherical and smooth. The stability and in vitro drug release studies confirmed that the optimized formulation was stable under different storage conditions and exhibited sustained drug release of up to 76% in 48 h, following the Korsmeyer-Peppas release kinetic model. A process for preparing gold nanoparticles using chitosan, anchoring PEG to the particle surface, and entrapping dasatinib in the chitosan-PEG surface corona was optimized.

  15. Computational Optimization of a Natural Laminar Flow Experimental Wing Glove

    NASA Technical Reports Server (NTRS)

    Hartshorn, Fletcher

    2012-01-01

    Computational optimization of a natural laminar flow experimental wing glove that is mounted on a business jet is presented and discussed. The process of designing a laminar flow wing glove starts with creating a two-dimensional optimized airfoil and then lofting it into a three-dimensional wing glove section. The airfoil design process does not consider three-dimensional flow effects such as cross flow due to wing sweep, or engine and body interference. Therefore, once an initial glove geometry is created from the airfoil, the three-dimensional wing glove has to be optimized to ensure that the desired extent of laminar flow is maintained over the entire glove. TRANAIR, a non-linear full potential solver with a coupled boundary layer code, was used as the main tool in the design and optimization process of the three-dimensional glove shape. The optimization process uses the Class-Shape-Transformation method to perturb the geometry, with geometric constraints that allow for a 2-in clearance from the main wing. The three-dimensional glove shape was optimized with the objective of having a spanwise uniform pressure distribution that matches the optimized two-dimensional pressure distribution as closely as possible. Results show that with the appropriate inputs, the optimizer is able to match the two-dimensional pressure distributions practically across the entire span of the wing glove. This gives the experiment a much higher probability of achieving a large extent of natural laminar flow in flight.

  16. Improving knowledge of garlic paste greening through the design of an experimental strategy.

    PubMed

    Aguilar, Miguel; Rincón, Francisco

    2007-12-12

    The furthering of scientific knowledge depends in part upon the reproducibility of experimental results. When experimental conditions are not set with sufficient precision, the resulting background noise often leads to poorly reproduced and even faulty experiments. An example of the catastrophic consequences of this background noise can be found in the design of strategies for the development of solutions aimed at preventing garlic paste greening, where reported results are contradictory. To avoid such consequences, this paper presents a two-step strategy based on the concept of experimental design. In the first step, the critical factors inherent to the problem are identified, using a resolution-III 2^(7-4) Plackett-Burman experimental design, from a list of seven apparent critical factors (ACF); subsequently, the critical factors thus identified are considered as the factors to be optimized (FO), and optimization is performed using a Box and Wilson experimental design to identify the stationary point of the system. Optimal conditions for preventing garlic greening are examined after analysis of the complex process of green-pigment development, which involves both chemical and enzymatic reactions and is strongly influenced by pH, with an overall pH optimum of 4.5. The critical step in the greening process is the synthesis of thiosulfinates (allicin) from cysteine sulfoxides (alliin). Cysteine inhibits the greening process at this critical stage; no greening precursors are formed in the presence of around 1% cysteine. However, the optimal conditions for greening prevention are very sensitive both to the type of garlic and to manufacturing conditions. This suggests that optimal solutions for garlic greening prevention should be sought on a case-by-case basis, using the strategy presented here.
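
The screening step above uses a resolution-III 2^(7-4) design: eight runs for seven two-level factors. A standard construction (the generators below are the textbook choice, not necessarily the paper's) starts from a full 2^3 factorial and aliases the remaining factors onto interactions:

```python
from itertools import product

# Full 2^3 factorial in factors A, B, C; the other four columns come from
# the generators D = AB, E = AC, F = BC, G = ABC (assumed, textbook choice).
runs = [(a, b, c, a * b, a * c, b * c, a * b * c)
        for a, b, c in product((-1, 1), repeat=3)]

# Orthogonality check: every column is balanced and every pair of columns
# is uncorrelated, which is what lets 8 runs screen 7 main effects.
columns = list(zip(*runs))
balanced = all(sum(col) == 0 for col in columns)
orthogonal = all(sum(x * y for x, y in zip(columns[i], columns[j])) == 0
                 for i in range(7) for j in range(i + 1, 7))
```

Main effects estimated from such a design are aliased with two-factor interactions (resolution III), which is why the paper follows screening with a separate response-surface optimization step.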

  17. Optimization and characterization of liposome formulation by mixture design.

    PubMed

    Maherani, Behnoush; Arab-tehrany, Elmira; Kheirolomoom, Azadeh; Reshetov, Vadzim; Stebe, Marie José; Linder, Michel

    2012-02-07

    This study presents the application of the mixture design technique to develop an optimal liposome formulation by varying the type and percentage of the lipids (DOPC, POPC and DPPC) in the liposome composition. Ten lipid mixtures were generated by the simplex-centroid design technique and liposomes were prepared by the extrusion method. Liposomes were characterized with respect to size, phase transition temperature, ζ-potential, lamellarity, fluidity and efficiency in loading calcein. The results were then applied to estimate the coefficients of the mixture design model and to find the optimal lipid composition with improved entrapment efficiency, size, transition temperature, fluidity and ζ-potential of liposomes. The optimized formulation was DOPC: 46%, POPC: 12% and DPPC: 42%. The optimal liposome formulation had an average diameter of 127.5 nm, a phase-transition temperature of 11.43 °C, a ζ-potential of -7.24 mV, a fluidity (1/P(TMA-DPH)) value of 2.87 and an encapsulation efficiency of 20.24%. The experimental results of characterization of the optimal liposome formulation were in good agreement with those predicted by the mixture design technique.
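
The simplex-centroid layout mentioned above can be written out explicitly for three components. The ten blends below are an assumed reconstruction (pure blends, binary blends, overall centroid, plus interior axial points); the paper's actual ten mixtures may differ:

```python
from fractions import Fraction as F

def simplex_centroid_3():
    """Augmented simplex-centroid design for a 3-component mixture
    (e.g. DOPC/POPC/DPPC fractions). Every blend sums to 1."""
    pure     = [(F(1), F(0), F(0)), (F(0), F(1), F(0)), (F(0), F(0), F(1))]
    binary   = [(F(1, 2), F(1, 2), F(0)), (F(1, 2), F(0), F(1, 2)),
                (F(0), F(1, 2), F(1, 2))]
    centroid = [(F(1, 3), F(1, 3), F(1, 3))]
    axial    = [(F(2, 3), F(1, 6), F(1, 6)), (F(1, 6), F(2, 3), F(1, 6)),
                (F(1, 6), F(1, 6), F(2, 3))]
    return pure + binary + centroid + axial

design = simplex_centroid_3()
```

Because the component fractions are constrained to sum to 1, the fitted model is a Scheffé-type mixture polynomial rather than an ordinary factorial regression.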

  18. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as the searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
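
The approach above can be sketched end-to-end on a toy problem: the shape is described by a few Fourier amplitude coefficients, a surrogate maps them to "modal frequencies" (a real application would call an FEM eigen-solver here), and simulated annealing searches the coefficients for a target spectrum. Everything below is an illustrative stand-in, not the paper's model:

```python
import math, random

random.seed(0)

# Surrogate model: three base frequencies perturbed by the Fourier
# amplitude coefficients describing the shape.
def modal_freqs(coeffs):
    base = [110.0, 220.0, 330.0]
    return [f * (1 + 0.1 * sum(c * math.cos((k + 1) * (n + 1))
                               for k, c in enumerate(coeffs)))
            for n, f in enumerate(base)]

def error(coeffs, targets):
    return sum((f - t) ** 2 for f, t in zip(modal_freqs(coeffs), targets))

def anneal(targets, n_coeffs=3, steps=20000, t0=50.0):
    x = [0.0] * n_coeffs
    e = error(x, targets)
    best, best_e = x[:], e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9          # linear cooling
        cand = x[:]
        cand[random.randrange(n_coeffs)] += random.gauss(0, 0.05)
        ce = error(cand, targets)
        # Accept improvements always, worsenings with Metropolis probability.
        if ce < e or random.random() < math.exp((e - ce) / t):
            x, e = cand, ce
            if e < best_e:
                best, best_e = x[:], e
    return best, best_e

targets = modal_freqs([0.2, -0.1, 0.05])    # a reachable target spectrum
coeffs, residual = anneal(targets)
```

Searching three amplitude coefficients instead of many local geometric parameters is exactly the dimensionality reduction the abstract argues for.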

  19. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  20. Application of optimal design methodologies in clinical pharmacology experiments.

    PubMed

    Ogungbenro, Kayode; Dokoumetzidis, Aristides; Aarons, Leon

    2009-01-01

    Pharmacokinetic and pharmacodynamic data are often analysed by mixed-effects modelling techniques (also known as population analysis), which have become a standard tool in the pharmaceutical industry for drug development. The last 10 years have witnessed considerable interest in the application of experimental design theories to population pharmacokinetic and pharmacodynamic experiments. Design of population pharmacokinetic experiments involves selection and a careful balance of a number of design factors. Optimal design theory uses prior information about the model and parameter estimates to optimize a function of the Fisher information matrix to obtain the best combination of the design factors. This paper provides a review of the different approaches that have been described in the literature for optimal design of population pharmacokinetic and pharmacodynamic experiments. It describes the options that are available and highlights some of the issues that could be of concern as regards practical application. It also discusses areas of application of optimal design theories in clinical pharmacology experiments. It is expected that as awareness of the benefits of this approach increases, more people will embrace it, ultimately leading to more efficient population pharmacokinetic and pharmacodynamic experiments and helping to reduce both cost and time during drug development. Copyright (c) 2008 John Wiley & Sons, Ltd.
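
The Fisher-information machinery described above can be sketched for a deliberately simple one-compartment elimination model C(t) = A·e^(-kt). The prior estimates and both candidate sampling schedules below are illustrative assumptions, not taken from the review:

```python
import math

# Prior parameter estimates for the illustrative model C(t) = A * exp(-k*t).
A, k = 10.0, 0.3

def sensitivities(t):
    # Partial derivatives of C(t) w.r.t. A and k at the prior estimates.
    return (math.exp(-k * t), -A * t * math.exp(-k * t))

def det_fim(times):
    # Determinant of J'J for the 2-parameter model: the D-optimality score.
    rows = [sensitivities(t) for t in times]
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    return s11 * s22 - s12 * s12

# Two candidate schedules with the same number of samples:
clustered = [0.5, 1.0, 1.5, 2.0]    # all samples early
spread    = [0.5, 2.0, 6.0, 12.0]   # samples spread over the decay
```

Comparing `det_fim(spread)` with `det_fim(clustered)` shows the spread schedule carries more information for the same sampling budget, which is the trade-off optimal design formalizes.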

  1. Application of experimental design for the optimization of artificial neural network-based water quality model: a case study of dissolved oxygen prediction.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-04-01

    This paper presents an application of experimental design for the optimization of an artificial neural network (ANN) for the prediction of dissolved oxygen (DO) content in the Danube River. The aim of this research was to obtain a more reliable ANN model that uses fewer monitoring records, by simultaneous optimization of the following model parameters: number of monitoring sites, number of historical monitoring data (expressed in years), and number of input water quality parameters used. A three-factor, three-level Box-Behnken experimental design was applied for simultaneous spatial, temporal, and input-variable optimization of the ANN model. The prediction of DO was performed using a feed-forward back-propagation neural network (BPNN), while the selection of the most important inputs was done off-model using a multi-filter approach that combines a chi-square ranking in the first step with a correlation-based elimination in the second step. The contour plots of absolute and relative error response surfaces were utilized to determine the optimal values of the design factors. From the contour plots, two BPNN models that cover the entire Danube flow through Serbia are proposed: an upstream model (BPNN-UP) that covers 8 monitoring sites upstream of Belgrade and uses 12 inputs measured over a 7-year period, and a downstream model (BPNN-DOWN) which covers 9 monitoring sites and uses 11 input parameters measured over a 6-year period. The main difference between the two models is that BPNN-UP utilizes inputs such as BOD, P, and PO₄³⁻, which is in accordance with the fact that this model covers the northern part of Serbia (Vojvodina Autonomous Province), which is well known for agricultural production and extensive use of fertilizers. Both models have shown very good agreement between measured and predicted DO (with R² ≥ 0.86) and demonstrated that they can effectively forecast DO content in the Danube River.

  2. Optimal Design of Passive Flow Control for a Boundary-Layer-Ingesting Offset Inlet Using Design-of-Experiments

    NASA Technical Reports Server (NTRS)

    Allan, Brian G.; Owens, Lewis R.; Lin, John C.

    2006-01-01

    This research will investigate the use of Design-of-Experiments (DOE) in the development of an optimal passive flow control vane design for a boundary-layer-ingesting (BLI) offset inlet in transonic flow. This inlet flow control is designed to minimize the engine fan-face distortion levels and first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. Numerical simulations of the BLI inlet are computed using the Reynolds-averaged Navier-Stokes (RANS) flow solver, OVERFLOW, developed at NASA. These simulations are used to generate the numerical experiments for the DOE response surface model. In this investigation, two DOE optimizations were performed using a D-Optimal Response Surface model. The first DOE optimization was performed using four design factors which were vane height and angles-of-attack for two groups of vanes. One group of vanes was placed at the bottom of the inlet and a second group symmetrically on the sides. The DOE design was performed for a BLI inlet with a free-stream Mach number of 0.85 and a Reynolds number of 2 million, based on the length of the fan-face diameter, matching an experimental wind tunnel BLI inlet test. The first DOE optimization required a fifth order model having 173 numerical simulation experiments and was able to reduce the DC60 baseline distortion from 64% down to 4.4%, while holding the pressure recovery constant. A second DOE optimization was performed holding the vanes heights at a constant value from the first DOE optimization with the two vane angles-of-attack as design factors. This DOE only required a second order model fit with 15 numerical simulation experiments and reduced DC60 to 3.5% with small decreases in the fourth and fifth harmonic amplitudes. The second optimal vane design was tested at the NASA Langley 0.3- Meter Transonic Cryogenic Tunnel in a BLI inlet experiment. 
The experimental results showed an 80% reduction of DPCPavg, the circumferential distortion level at the engine

  4. Multi-objective experimental design for ¹³C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    ¹³C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of ¹³C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks of Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as the labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-¹³C₂ glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear approach (S-criterion). Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-¹³C₂ glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-¹³C₂ glucose with 100% position-one labeled glutamine and the combination of 100% 1,2-¹³C₂ glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of ¹³C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible ¹³C-tracers, while the illustrated benefit of multi
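
The multi-objective trade-off described above boils down to a Pareto comparison between design quality and tracer cost. A minimal sketch (the quality scores and prices below are invented for illustration; in the paper the quality score would be a D-criterion value):

```python
# Each candidate tracer mixture gets a design-quality score (higher is
# better) and a cost per experiment; the interesting candidates are the
# non-dominated (Pareto-optimal) ones. All numbers are hypothetical.
candidates = {
    "100% U-13C glucose":         (0.40, 180.0),
    "80% 1,2-13C2 + 20% U-13C":   (0.85, 260.0),
    "100% 1,2-13C2 glucose":      (0.80, 300.0),
    "natural-abundance glucose":  (0.05, 20.0),
}

def dominates(a, b):
    """True if a is at least as good on both objectives (higher quality,
    lower cost) and strictly better on at least one of them."""
    (qa, ca), (qb, cb) = a, b
    return qa >= qb and ca <= cb and (qa > qb or ca < cb)

pareto = {name for name, score in candidates.items()
          if not any(dominates(other, score)
                     for o, other in candidates.items() if o != name)}
```

Only the non-dominated mixtures are worth presenting to the experimenter; every other candidate is beaten on both cost and information by something in `pareto`.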

  5. Application of Box-Behnken experimental design to optimize the extraction of insecticidal Cry1Ac from soil.

    PubMed

    Li, Yan-Liang; Fang, Zhi-Xiang; You, Jing

    2013-02-20

    A validated method for analyzing Cry proteins is a prerequisite for studying the fate and ecological effects of contaminants associated with genetically engineered Bacillus thuringiensis crops. The current study optimized a method for extracting Cry1Ac protein from soil using a response surface methodology with a three-level, three-factor Box-Behnken experimental design (BBD). The optimum extraction conditions were 21 °C and 630 rpm for 2 h. Regression analysis showed a good fit of the experimental data to the second-order polynomial model, with a coefficient of determination of 0.96. The method was sensitive and precise, with a method detection limit of 0.8 ng/g dry weight and a relative standard deviation of 7.3%. Finally, the established method was applied to analyzing Cry1Ac protein residues in field-collected soil samples. Trace amounts of Cry1Ac protein were detected in the soils where transgenic crops had been planted for 8 and 12 years.

  6. Optimal color design of psychological counseling room by design of experiments and response surface methodology.

    PubMed

    Liu, Wenjuan; Ji, Jianlin; Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design, and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impressions. A total of 75 patients participated, 42 in Experiment 1 and 33 in Experiment 2. In Experiment 1, 27 representative color samples were designed, and the color sample (L = 75, a = 0, b = -60) was the most preferred. In Experiment 2, this color sample was set as the 'central point', and three color attributes were optimized to maximize the patients' satisfaction. The experimental results show that the proposed method can produce an optimal color design for a counseling room.

  7. Optimal Color Design of Psychological Counseling Room by Design of Experiments and Response Surface Methodology

    PubMed Central

    Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients’ perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients’ impression. A total of 75 patients participated, including 42 for Experiment 1 and 33 for Experiment 2. 27 representative color samples were designed in Experiment 1, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the ‘central point’, and three color attributes were optimized to maximize the patients’ satisfaction. The experimental results show that the proposed method can get the optimal solution for color design of a counseling room. PMID:24594683

  8. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.
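
The retrospective designed-sampling idea above can be sketched with a greedy D-optimal subsample drawn from synthetic data. This is a deliberate simplification: real Big Data settings need far more scalable exchange algorithms, and the data and subset size here are made up:

```python
import random

random.seed(1)

# A "big" pool of already-collected two-feature records.
pool = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2000)]

def det_xtx(rows):
    # Determinant of the 2x2 information matrix X'X for the subset.
    s11 = sum(x1 * x1 for x1, _ in rows)
    s12 = sum(x1 * x2 for x1, x2 in rows)
    s22 = sum(x2 * x2 for _, x2 in rows)
    return s11 * s22 - s12 * s12

def greedy_d_optimal(pool, n):
    # Greedily add the record that most increases det(X'X), i.e. choose
    # the retrospective sample by a design criterion rather than uniformly.
    chosen, remaining = [], list(pool)
    for _ in range(n):
        best = max(remaining, key=lambda r: det_xtx(chosen + [r]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

designed = greedy_d_optimal(pool, 20)
uniform = random.sample(pool, 20)
```

The designed subset concentrates on informative (extreme, decorrelated) records, so the same 20-point analysis budget yields far more information than a uniform subsample.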

  9. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686

  10. designGG: an R-package and web tool for the optimal design of genetical genomics experiments.

    PubMed

    Li, Yang; Swertz, Morris A; Vera, Gonzalo; Fu, Jingyuan; Breitling, Rainer; Jansen, Ritsert C

    2009-06-18

    High-dimensional biomolecular profiling of genetically different individuals in one or more environmental conditions is an increasingly popular strategy for exploring the functioning of complex biological systems. The optimal design of such genetical genomics experiments in a cost-efficient and effective way is not trivial. This paper presents designGG, an R package for designing optimal genetical genomics experiments. A web implementation for designGG is available at http://gbic.biol.rug.nl/designGG. All software, including source code and documentation, is freely available. DesignGG allows users to intelligently select and allocate individuals to experimental units and conditions such as drug treatment. The user can maximize the power and resolution of detecting genetic, environmental and interaction effects in a genome-wide or local mode by giving more weight to genome regions of special interest, such as previously detected phenotypic quantitative trait loci. This will help to achieve high power and more accurate estimates of the effects of interesting factors, and thus yield a more reliable biological interpretation of data. DesignGG is applicable to linkage analysis of experimental crosses, e.g. recombinant inbred lines, as well as to association analysis of natural populations.

  11. Data-driven design optimization for composite material characterization

    Treesearch

    John G. Michopoulos; John C. Hermanson; Athanasios Iliopoulos; Samuel G. Lambrakos; Tomonari Furukawa

    2011-06-01

    The main goal of the present paper is to demonstrate the value of design optimization beyond its use for structural shape determination in the realm of the constitutive characterization of anisotropic material systems such as polymer matrix composites with or without damage. The approaches discussed are based on the availability of massive experimental data...

  12. Structural Optimization in automotive design

    NASA Technical Reports Server (NTRS)

    Bennett, J. A.; Botkin, M. E.

    1984-01-01

    Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.

  13. Optimizing indomethacin-loaded chitosan nanoparticle size, encapsulation, and release using Box-Behnken experimental design.

    PubMed

    Abul Kalam, Mohd; Khan, Abdul Arif; Khan, Shahanavaj; Almalik, Abdulaziz; Alshamsan, Aws

    2016-06-01

    Indomethacin chitosan nanoparticles (NPs) were developed by ionotropic gelation and optimized for chitosan concentration, tripolyphosphate (TPP) concentration, and stirring time using a 3-factor, 3-level Box-Behnken experimental design. The optimal chitosan (A) and TPP (B) concentrations were found to be 0.6 mg/mL and 0.4 mg/mL, respectively, with a stirring time (C) of 120 min, under the applied constraints of minimizing particle size (R1) and maximizing encapsulation efficiency (R2) and drug release (R3). Based on the 3D response surface plots obtained, factors A, B and C had a synergistic effect on R1, while factor A had a negative impact on R2 and R3. The interaction AB was negative on R1 and R2 but positive on R3. The interaction AC was synergistic on R1 and R3, while negative on R2. The interaction BC was positive on all responses. After 6 months of storage, NPs were found in the size range of 321-675 nm with zeta potentials of +25 to +32 mV. Encapsulation, drug release, and drug content were in the ranges of 56-79%, 48-73% and 98-99%, respectively. In vitro drug release data were fitted to different kinetic models, and the release pattern followed the Higuchi matrix type. Copyright © 2016 Elsevier B.V. All rights reserved.
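A 3-factor Box-Behnken design of the kind used in the record above places each pair of factors on a two-level factorial while the remaining factor sits at its mid level, plus center points. A minimal construction sketch (the center-point count and the real-unit mapping for factor A are illustrative assumptions, not values from the paper):

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=1):
    """Coded (-1, 0, +1) Box-Behnken design: for each pair of factors,
    run a two-level full factorial on that pair while holding every
    other factor at its mid level, then append center points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

# three factors as in the study: A = chitosan, B = TPP, C = stirring time
design = box_behnken(3)  # 12 edge runs + 1 center run

# illustrative (hypothetical) mapping of coded levels for factor A
chitosan_mg_per_ml = {-1: 0.3, 0: 0.6, 1: 0.9}
```

For 3 factors this yields 13 runs, markedly fewer than the 27 of a 3-level full factorial, which is why the design suits response-surface optimization.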

  14. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    PubMed

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

    Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
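The record above ranks experimental designs by the prediction performance of the resulting calibration models; optimal-design algorithms instead score candidate designs directly through a criterion on the model matrix. As a hedged illustration (not the paper's method), the D-criterion |X'X| for a two-factor quadratic model can be computed in plain Python, showing that removing a design point strictly lowers the criterion:

```python
def model_row(x1, x2):
    # full quadratic response-surface model:
    # intercept, linear, pure quadratic, and interaction terms
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def d_criterion(points):
    """D-criterion |X'X| of a candidate design for the quadratic model."""
    X = [model_row(x1, x2) for x1, x2 in points]
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)]
           for i in range(p)]
    return det(XtX)

three_level_ff = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]  # 9 runs
reduced = three_level_ff[:-1]                                      # one corner removed
d_full, d_reduced = d_criterion(three_level_ff), d_criterion(reduced)
```

Optimal-design software searches over candidate point sets to maximize such a criterion, which is how I- and D-optimal designs with fewer runs than a full factorial are obtained.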

  15. Application of an experimental design for the optimization and validation of a new HPLC method for the determination of vancomycin in an extemporaneous ophthalmic solution.

    PubMed

    Enrique, Montse; García-Montoya, Encarna; Miñarro, Montserrat; Orriols, Anna; Ticó, Joseph Ramon; Suñé-Negre, Joseph Maria; Pérez-Lozano, Pilar

    2008-10-01

    An experimental design has been used to develop and optimize a new high-performance liquid chromatographic (HPLC) method for the determination of vancomycin in an extemporaneous ophthalmic solution. After preliminary studies and a literature review, the optimized method was carried out on a second-generation C18 reverse-phase column (Luna 150 x 4.6 mm i.d., 5 microm particle size) using methanol as the organic phase, a less toxic solvent than the acetonitrile described in the wider literature. The experimental design consisted of a Plackett-Burman design in which six variables were studied (flow rate, mL/min; temperature, degrees C; mobile phase pH; % buffer solution; wavelength; and injection volume) to obtain the best suitability parameters (capacity factor k', tailing factor, resolution, and theoretical plates). After optimization of the chromatographic conditions and statistical treatment of the results, the final method uses a buffer solution of water-phosphoric acid (85%) (99.83:0.17, v/v) adjusted to pH 3.0 with triethylamine and mixed with methanol (87:13, v/v). The separation is achieved at a flow rate of 1.0 mL/min at 35 degrees C. The UV detector was operated at 280 nm. The validation study carried out demonstrates the viability of the method, with good selectivity, linearity, precision, accuracy, and sensitivity.
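A Plackett-Burman screening design like the one in the record above studies up to N-1 two-level factors in N runs. A sketch of the classic 8-run construction from the published generator row (the assignment of the paper's six variables to columns is an assumption for illustration):

```python
def plackett_burman_8():
    """Classic 8-run Plackett-Burman design for up to seven two-level
    factors: cyclic shifts of the published N=8 generator row, plus a
    final all-minus run."""
    gen = [+1, +1, +1, -1, +1, -1, -1]  # Plackett & Burman (1946) generator
    rows = [gen[-k:] + gen[:-k] for k in range(7)]
    rows.append([-1] * 7)
    return rows

# the seven columns could carry the six studied variables (flow rate,
# temperature, pH, % buffer, wavelength, injection volume) plus one dummy
design = plackett_burman_8()
```

The columns are balanced and mutually orthogonal, which is what lets main effects of many factors be screened with very few runs.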

  16. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problems.
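The Shannon entropy difference used in the record above measures how much a candidate measurement is expected to shrink parameter uncertainty. A toy scalar linear-Gaussian sketch of that idea (not the paper's EnKF machinery; the candidate locations and noise variances are made up):

```python
import math

def posterior_variance(prior_var, noise_var):
    # scalar Bayesian update: precisions (inverse variances) add
    return 1.0 / (1.0 / prior_var + 1.0 / noise_var)

def entropy_reduction(prior_var, noise_var):
    """Shannon entropy difference (in nats) between the prior and
    posterior Gaussians; larger means a more informative measurement."""
    return 0.5 * math.log(prior_var / posterior_variance(prior_var, noise_var))

prior_var = 4.0
# hypothetical candidate sampling locations and their noise variances
candidates = {"shallow": 1.0, "mid": 0.25, "deep": 2.5}
best = max(candidates, key=lambda c: entropy_reduction(prior_var, candidates[c]))
```

Choosing the candidate with the largest entropy reduction is the one-measurement analogue of the sequential optimal sampling design described in the abstract.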

  17. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.

  18. Experimental evaluation of HJB optimal controllers for the attitude dynamics of a multirotor aerial vehicle.

    PubMed

    Prado, Igor Afonso Acampora; Pereira, Mateus de Freitas Virgílio; de Castro, Davi Ferreira; Dos Santos, Davi Antônio; Balthazar, Jose Manoel

    2018-06-01

    The present paper is concerned with the design and experimental evaluation of optimal control laws for the nonlinear attitude dynamics of a multirotor aerial vehicle. Three design methods based on the Hamilton-Jacobi-Bellman equation are taken into account. The first is a linear control with a guarantee of stability for nonlinear systems. The second and third are nonlinear suboptimal control techniques, based on an optimal control design approach that takes into account the nonlinearities present in the vehicle dynamics. The stability proof of the closed-loop system is presented. The performance of the designed control system is evaluated via simulations and also via an experimental scheme using the Quanser 3-DOF Hover. The experiments show the effectiveness of the linear control method over the nonlinear strategy. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Development and experimental design of a novel controlled-release matrix tablet formulation for indapamide hemihydrate.

    PubMed

    Antovska, Packa; Ugarkovic, Sonja; Petruševski, Gjorgji; Stefanova, Bosilka; Manchevska, Blagica; Petkovska, Rumenka; Makreski, Petre

    2017-11-01

    This work concerns the development, experimental design and in vitro-in vivo correlation (IVIVC) of a controlled-release matrix formulation: development of a novel oral controlled delivery system for indapamide hemihydrate, optimization of the formulation by experimental design, and evaluation of IVIVC on a pilot-scale batch as confirmation of a well-established formulation. In vitro dissolution profiles of controlled-release tablets of indapamide hemihydrate from four different matrices were evaluated against the originator's product Natrilix (Servier) to direct further development and optimization of a hydroxyethylcellulose-based matrix controlled-release formulation. A central composite factorial design was applied to optimize the chosen controlled-release tablet formulation. Controlled-release tablets with appropriate physical and technological properties were obtained with matrix and binder concentrations varied in the ranges 20-40 w/w% and 1-3 w/w%, respectively. The experimental design defined the design space for the formulation and was a prerequisite for extracting a particular formulation to be transferred to pilot scale and subjected to IVIV correlation. The release model of the optimized formulation showed the best fit to zero-order kinetics, consistent with the Hixson-Crowell erosion-dependent mechanism of release. Level A correlation was obtained.

  20. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  1. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.

  2. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.

  3. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  4. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…
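The two-level factorial experiment described above is analyzed by estimating main effects: the mean response at a factor's high level minus its mean at the low level. A minimal sketch with made-up responses (the factor identities and intensity values are illustrative assumptions, not data from the paper):

```python
from itertools import product

def main_effects(runs, y):
    """Main effect of each factor in a two-level factorial design:
    mean response at the high level minus mean response at the low level."""
    effects = []
    for j in range(len(runs[0])):
        hi = [yi for run, yi in zip(runs, y) if run[j] == +1]
        lo = [yi for run, yi in zip(runs, y) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# 2^2 design; the two factors might be luminol and oxidant concentration
runs = list(product((-1, 1), repeat=2))   # (-1,-1), (-1,1), (1,-1), (1,1)
y = [2.0, 8.0, 3.0, 9.0]                  # made-up emission intensities
effects = main_effects(runs, y)           # second factor clearly dominates
```

In a teaching session, students would read the largest effect as the factor to push toward its favorable level in the next round of optimization.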

  5. Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.

    PubMed

    Jeschek, Markus; Gerngross, Daniel; Panke, Sven

    2016-03-31

    Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore, empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization, thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability of the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.

  6. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use

  7. Applied optimal shape design

    NASA Astrophysics Data System (ADS)

    Mohammadi, B.; Pironneau, O.

    2002-12-01

    This paper is a short survey of optimal shape design (OSD) for fluids. OSD is an interesting field both mathematically and for industrial applications. Existence, sensitivity, correct discretization are important theoretical issues. Practical implementation issues for airplane designs are critical too. The paper is also a summary of the material covered in our recent book, Applied Optimal Shape Design, Oxford University Press, 2001.

  8. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 covers the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  9. A Framework to Design and Optimize Chemical Flooding Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 covers the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  10. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2004-11-01

    The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 covers the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  11. Model Selection in Systems Biology Depends on Experimental Design

    PubMed Central

    Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.

    2014-01-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483

  12. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  13. Cr(VI) transport via a supported ionic liquid membrane containing CYPHOS IL101 as carrier: system analysis and optimization through experimental design strategies.

    PubMed

    Rodríguez de San Miguel, Eduardo; Vital, Xóchitl; de Gyves, Josefina

    2014-05-30

    Chromium(VI) transport through a supported liquid membrane (SLM) system containing the commercial ionic liquid CYPHOS IL101 as carrier was studied. A reducing stripping phase was used as a means to increase recovery and simultaneously transform Cr(VI) into a less toxic residue for disposal or reuse. General functions describing the time-dependent evolution of the metal fractions in the cell compartments were defined and used in data evaluation. An experimental design strategy, using factorial and central-composite design matrices, was applied to assess the influence of the extractant, NaOH and citrate concentrations in the different phases, while a desirability function scheme allowed the simultaneous optimization of depletion and recovery of the analyte. The mechanism of chromium permeation was analyzed and discussed to contribute to the understanding of the transfer process. The influence of metal concentration was evaluated as well. The presence of different interfering ions (Ca(2+), Al(3+), NO3(-), SO4(2-), and Cl(-)) at several Cr(VI):interfering-ion ratios was studied using a Plackett-Burman experimental design matrix. Under optimized conditions, 90% recovery was obtained from a feed solution containing 7 mg L(-1) of Cr(VI) in 0.01 mol dm(-3) HCl medium after 5 h of pertraction. Copyright © 2014 Elsevier B.V. All rights reserved.
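The desirability function scheme mentioned above combines several responses into one score to optimize jointly. A minimal sketch of the Derringer-Suich "larger is better" form applied to depletion and recovery (the threshold values and sample responses are hypothetical, chosen only to illustrate the mechanics):

```python
def d_larger_is_better(y, low, high, weight=1.0):
    """Derringer-Suich 'larger is better' desirability: 0 at or below
    `low`, 1 at or above `high`, a power-law ramp in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall_desirability(ds):
    # geometric mean; a single zero vetoes the whole candidate setting
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# hypothetical responses for one candidate condition (% of maximum)
d_depletion = d_larger_is_better(85.0, 50.0, 100.0)
d_recovery = d_larger_is_better(90.0, 50.0, 100.0)
D = overall_desirability([d_depletion, d_recovery])
```

Maximizing the overall desirability D over the design factors is what "synchronized optimization of depletion and recovery" amounts to in practice.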

  14. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions.

    PubMed

    Klasnja, Predrag; Hekler, Eric B; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A

    2015-12-01

    This article presents an experimental design, the microrandomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals' health behaviors. Microrandomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. The article describes the microrandomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Microrandomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Microrandomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions' effects, enabling creation of more effective JITAIs. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  15. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of a statistical design of experiments (DOE) approach as a general guideline for probe optimization, and more specifically focus on design optimization of label-free hydrolysis probes designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: distance between primer and mediator probe cleavage site; dimer stability of MP and target sequence (influenza B virus); and dimer stability of the mediator and universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% with changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in an optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental time.

  16. D-Optimal mixture experimental design for stealth biodegradable crosslinked docetaxel-loaded poly-ε-caprolactone nanoparticles manufactured by dispersion polymerization.

    PubMed

    Ogunwuyi, O; Adesina, S; Akala, E O

    2015-03-01

    We report here our efforts on the development of stealth biodegradable crosslinked poly-ε-caprolactone nanoparticles by free radical dispersion polymerization suitable for the delivery of bioactive agents. The uniqueness of the dispersion polymerization technique is that it is surfactant free, thereby obviating the problems known to be associated with the use of surfactants in the fabrication of nanoparticles for biomedical applications. Aided by a statistical software for experimental design and analysis, we used a D-optimal mixture statistical experimental design to generate thirty batches of nanoparticles prepared by varying the proportion of the components (poly-ε-caprolactone macromonomer, crosslinker, initiators and stabilizer) in an acetone/water system. Morphology of the nanoparticles was examined using scanning electron microscopy (SEM). Particle size and zeta potential were measured by dynamic light scattering (DLS). Scheffe polynomial models were generated to predict particle size (nm) and particle surface zeta potential (mV) as functions of the proportion of the components. Solutions were returned from simultaneous optimization of the response variables for component combinations to (a) minimize nanoparticle size (small nanoparticles are internalized into diseased organs easily, avoid reticuloendothelial clearance and lung filtration) and (b) maximize the negative zeta potential values, as it is known that, following injection into the blood stream, nanoparticles with a positive zeta potential pose a threat of causing transient embolism and rapid clearance compared to negatively charged particles. In vitro availability isotherms show that the nanoparticles sustained the release of docetaxel for 72 to 120 hours depending on the formulation. The data show that nanotechnology platforms for controlled delivery of bioactive agents can be developed based on the nanoparticles.

  17. Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.

    PubMed

    Flassig, R J; Sundmacher, K

    2012-12-01

    Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
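    The sigma-point idea referenced in this record can be illustrated with a minimal sketch. This is not the authors' implementation: the diagonal-covariance simplification, the function names, and the default scaling parameter are all assumptions made here for a compact example.

    ```python
    # Minimal sketch of sigma-point propagation of parameter uncertainty
    # through a nonlinear model response (diagonal covariance assumed).
    import math

    def sigma_points(mean, var, lam=1.0):
        """Generate the 2n+1 sigma points and weights for an n-dimensional
        Gaussian with mean `mean` and diagonal covariance `var`."""
        n = len(mean)
        pts = [list(mean)]
        w = [lam / (n + lam)]
        for i in range(n):
            step = math.sqrt((n + lam) * var[i])
            for sign in (+1.0, -1.0):
                p = list(mean)
                p[i] += sign * step
                pts.append(p)
                w.append(1.0 / (2.0 * (n + lam)))
        return pts, w

    def propagate(f, mean, var, lam=1.0):
        """Approximate the expected model response E[f(theta)] by pushing
        each sigma point through the (possibly nonlinear) response f."""
        pts, w = sigma_points(mean, var, lam)
        return sum(wi * f(p) for wi, p in zip(w, pts))
    ```

    Because the transform is exact for quadratics, `propagate(lambda p: p[0]**2, [1.0, 2.0], [0.1, 0.2])` returns the exact second moment 1.0 + 0.1 = 1.1, which a first-order linearization would miss.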

  18. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
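    The fractional factorial economy described above can be sketched with the standard L4 orthogonal array, which studies three two-level factors in only four runs; the linear response used in the usage note is a hypothetical classroom example, not data from the paper.

    ```python
    # Taguchi-style L4 orthogonal array: 3 two-level factors, 4 runs.
    # Columns are pairwise orthogonal, so main effects are estimable
    # from level averages (factor interactions are not resolved).
    L4 = [
        [-1, -1, -1],
        [-1, +1, +1],
        [+1, -1, +1],
        [+1, +1, -1],
    ]

    def main_effects(responses):
        """Main effect of each factor = mean response at level +1
        minus mean response at level -1."""
        effects = []
        for j in range(3):
            hi = [y for run, y in zip(L4, responses) if run[j] == +1]
            lo = [y for run, y in zip(L4, responses) if run[j] == -1]
            effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
        return effects
    ```

    For a noiseless linear response y = 10 + 2A + 3B - C, the four runs recover the effects [4.0, 6.0, -2.0] (each effect is twice the coefficient, since the levels span -1 to +1).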

  19. Fatigue design of a cellular phone folder using regression model-based multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Kim, Young Gyun; Lee, Jongsoo

    2016-08-01

    In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.
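    The two-objective Pareto selection at the heart of the NSGA-II step above reduces to a non-dominance test; this sketch (with made-up endurance/thickness pairs, not the paper's data) shows the dominance rule for maximizing fatigue endurance while minimizing thickness.

    ```python
    def pareto_front(designs):
        """Return the non-dominated (endurance, thickness) pairs, where
        endurance is maximized and thickness is minimized. A design a
        dominates b if it is at least as good in both objectives and
        differs from b (hence strictly better in at least one)."""
        def dominates(a, b):
            return a[0] >= b[0] and a[1] <= b[1] and a != b
        return [d for d in designs
                if not any(dominates(other, d) for other in designs)]
    ```

    NSGA-II layers fast non-dominated sorting and crowding-distance selection on top of this basic test to evolve a whole front rather than a single optimum.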

  20. Optimization of the NIF ignition point design hohlraum

    NASA Astrophysics Data System (ADS)

    Callahan, D. A.; Hinkel, D. E.; Berger, R. L.; Divol, L.; Dixit, S. N.; Edwards, M. J.; Haan, S. W.; Jones, O. S.; Lindl, J. D.; Meezan, N. B.; Michel, P. A.; Pollaine, S. M.; Suter, L. J.; Town, R. P. J.; Bradley, P. A.

    2008-05-01

    In preparation for the start of NIF ignition experiments, we have designed a portfolio of targets that span the temperature range that is consistent with initial NIF operations: 300 eV, 285 eV, and 270 eV. Because these targets are quite complicated, we have developed a plan for choosing the optimum hohlraum for the first ignition attempt that is based on this portfolio of designs coupled with early NIF experiments using 96 beams. These early experiments will measure the laser plasma instabilities of the candidate designs and will demonstrate our ability to tune symmetry in these designs. These experimental results, coupled with the theory and simulations that went into the designs, will allow us to choose the optimal hohlraum for the first NIF ignition attempt.

  1. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods that are suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  2. Design Optimization of Irregular Cellular Structure for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Song, Guo-Hua; Jing, Shi-Kai; Zhao, Fang-Lei; Wang, Ye-Dong; Xing, Hao; Zhou, Jing-Tao

    2017-09-01

    Irregular cellular structures have great potential in the light-weight design field. However, research on optimizing irregular cellular structures has not yet been reported due to the difficulties in their modeling technology. Based on variable-density topology optimization theory, an efficient method for optimizing the topology of irregular cellular structures fabricated through additive manufacturing processes is proposed. The proposed method utilizes tangent circles to automatically generate the main outline of the irregular cellular structure. The topological layout of each cell structure is optimized using the relative density information obtained from the proposed modified SIMP method. A mapping relationship between cell structure and relative-density element is built to determine the diameter of each cell structure. The results show that the irregular cellular structure can be optimized with the proposed method. The simulation and experimental results are similar for the irregular cellular structure, and indicate that the maximum deformation value obtained using the modified Solid Isotropic Material with Penalization (SIMP) approach is 5.4×10^-5 mm lower than that obtained using the standard SIMP approach under the same external load. The proposed research provides guidance for the design of other irregular cellular structures.

  3. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
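    The GA search over candidate well networks can be sketched as follows. The sensitivity matrix, fitness definition, and GA settings here are illustrative stand-ins (the paper's criterion is evaluated on a POD-reduced groundwater model, not a literal matrix), but the structure, a combinatorial search maximizing the sum of squared sensitivities and checked against exhaustive enumeration on a tiny instance, mirrors the small-scale test described above.

    ```python
    # Toy elitist GA selecting k observation wells that maximize the
    # sum of squared sensitivities (hypothetical sensitivity matrix S:
    # rows = candidate wells, columns = unknown pumping parameters).
    import itertools, random

    def fitness(subset, S):
        return sum(S[w][p] ** 2 for w in subset for p in range(len(S[0])))

    def ga_select(S, k, pop_size=20, generations=40, seed=1):
        rng = random.Random(seed)
        wells = range(len(S))
        pop = [frozenset(rng.sample(wells, k)) for _ in range(pop_size)]
        best = max(pop, key=lambda s: fitness(s, S))
        for _ in range(generations):
            nxt = [best]                      # elitism: keep the incumbent
            while len(nxt) < pop_size:
                a, b = rng.sample(pop, 2)     # binary tournament
                parent = max(a, b, key=lambda s: fitness(s, S))
                child = set(parent)
                if rng.random() < 0.5:        # mutation: swap one well
                    child.remove(rng.choice(sorted(child)))
                    child.add(rng.choice([w for w in wells if w not in child]))
                nxt.append(frozenset(child))
            pop = nxt
            best = max(pop, key=lambda s: fitness(s, S))
        return best, fitness(best, S)
    ```

    On a 6-well, choose-2 instance the search space has only 15 designs, so the GA result can be verified against exhaustive enumeration, echoing the identical-solution check reported in the abstract.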

  4. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial

    PubMed Central

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-01-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes. PMID:28763039

  5. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    PubMed

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

  6. A new efficient mixture screening design for optimization of media.

    PubMed

    Rispoli, Fred; Shah, Vishal

    2009-01-01

    Screening ingredients for the optimization of media is an important first step to reduce the many potential ingredients down to the vital few components. In this study, we propose a new method of screening for mixture experiments called the centroid screening design. Comparison of the proposed design with the Plackett-Burman, fractional factorial, simplex lattice, and modified mixture designs shows that the centroid screening design is the most efficient of all the designs in terms of the small number of experimental runs needed and its ability to detect high-order interactions among ingredients. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
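    For context, the classical simplex-centroid construction that centroid-based mixture designs build on can be generated in a few lines. This is a generic sketch of simplex-centroid points, not necessarily the authors' exact screening design.

    ```python
    # Simplex-centroid mixture design: for q ingredients, every nonempty
    # subset contributes one blend with equal shares of its members,
    # giving 2**q - 1 design points that all sum to 1.
    from itertools import combinations

    def simplex_centroid(q):
        points = []
        for r in range(1, q + 1):
            for subset in combinations(range(q), r):
                point = [0.0] * q
                for i in subset:
                    point[i] = 1.0 / r
                points.append(point)
        return points
    ```

    For q = 3 this yields the three pure blends, the three binary 50/50 blends, and the overall centroid (1/3, 1/3, 1/3), seven runs in total.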

  7. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  8. Ocean power technology design optimization

    DOE PAGES

    van Rij, Jennifer; Yu, Yi -Hsiang; Edwards, Kathleen; ...

    2017-07-18

    For this study, the National Renewable Energy Laboratory and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. Finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. The design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  9. Ocean power technology design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer; Yu, Yi -Hsiang; Edwards, Kathleen

    For this study, the National Renewable Energy Laboratory and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. Finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. The design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  10. Aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Murman, E. M.; Chapman, G. T.

    1983-01-01

    The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.

  11. Optimization of the combined ultrasonic assisted/adsorption method for the removal of malachite green by gold nanoparticles loaded on activated carbon: Experimental design

    NASA Astrophysics Data System (ADS)

    Roosta, M.; Ghaedi, M.; Shokri, N.; Daneshfar, A.; Sahraei, R.; Asghari, A.

    2014-01-01

    The present study applied experimental design optimization to the removal of malachite green (MG) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using different techniques such as FESEM, TEM, BET, and UV-vis measurements. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time on MG removal were studied using central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models, such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models, shows the suitability and applicability of the Langmuir model. The applicability of kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models was tested against the experimental data; the second-order equation and intraparticle diffusion models control the kinetics of the adsorption process. A small amount of the proposed adsorbent (0.015 g) achieves successful removal of MG (RE > 99%) in a short time (4.4 min) with high adsorption capacity (140-172 mg g-1).
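    The central composite design (CCD) named above has a simple standard construction; this sketch generates a rotatable CCD in coded units (the helper name and center-point count are illustrative, not from the paper).

    ```python
    # Rotatable central composite design in coded units:
    # 2**k factorial corners, 2*k axial ("star") points at +/- alpha,
    # plus replicated center points; alpha = (2**k)**0.25 for rotatability.
    import itertools

    def central_composite(k, n_center=1):
        alpha = (2 ** k) ** 0.25
        runs = [list(p) for p in itertools.product((-1.0, 1.0), repeat=k)]
        for i in range(k):
            for s in (-alpha, alpha):
                axial = [0.0] * k
                axial[i] = s
                runs.append(axial)
        runs += [[0.0] * k for _ in range(n_center)]
        return runs
    ```

    For k = 2 factors this gives 4 corners, 4 axial points at ±√2, and the center run, the nine-run layout that supports fitting a full second-order response surface model.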

  12. Optimization of the combined ultrasonic assisted/adsorption method for the removal of malachite green by gold nanoparticles loaded on activated carbon: experimental design.

    PubMed

    Roosta, M; Ghaedi, M; Shokri, N; Daneshfar, A; Sahraei, R; Asghari, A

    2014-01-24

    The present study applied experimental design optimization to the removal of malachite green (MG) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using different techniques such as FESEM, TEM, BET, and UV-vis measurements. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time on MG removal were studied using central composite design (CCD), and the optimum experimental conditions were found with a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models, such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models, shows the suitability and applicability of the Langmuir model. The applicability of kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models was tested against the experimental data; the second-order equation and intraparticle diffusion models control the kinetics of the adsorption process. A small amount of the proposed adsorbent (0.015 g) achieves successful removal of MG (RE>99%) in a short time (4.4 min) with high adsorption capacity (140-172 mg g(-1)). Copyright © 2013. Published by Elsevier B.V.

  13. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.

  14. Design optimization of beta- and photovoltaic conversion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wichner, R.; Blum, A.; Fischer-Colbrie, E.

    1976-01-08

    This report presents the theoretical and experimental results of an LLL Electronics Engineering research program aimed at optimizing the design and electronic-material parameters of beta- and photovoltaic p-n junction conversion devices. To meet this objective, a comprehensive computer code has been developed that can handle a broad range of practical conditions. The physical model upon which the code is based is described first. Then, an example is given of a set of optimization calculations along with the resulting optimized efficiencies for silicon (Si) and gallium-arsenide (GaAs) devices. The model we have developed, however, is not limited to these materials. It can handle any appropriate material, single or polycrystalline, provided energy absorption and electron-transport data are available. To check code validity, the performance of experimental silicon p-n junction devices (produced in-house) was measured under various light intensities and spectra as well as under tritium beta irradiation. The results of these tests were then compared with predicted results based on the known or best estimated device parameters. The comparison showed very good agreement between the calculated and the measured results.

  15. Design and optimization of an experimental bioregenerative life support system with higher plants and silkworms

    NASA Astrophysics Data System (ADS)

    Hu, Enzhu; Bartsev, Sergey I.; Zhao, Ming; Liu, Hong

    The conceptual scheme of an experimental bioregenerative life support system (BLSS) for planetary exploration was designed, consisting of four elements: human metabolism, higher plants, silkworms and waste treatment. Fifteen kinds of higher plants, such as wheat, rice, soybean, lettuce and mulberry, were selected as the regenerative component of the BLSS, providing the crew with air, water, and vegetable food. Silkworms, which provide animal nutrition for the crew, were fed mulberry leaves during the first three instars and lettuce leaves during the last two instars. The inedible biomass of the higher plants, human wastes and silkworm feces were composted into a soil-like substrate, which can be reused for higher plant cultivation. Salt, sugar and some household materials such as soap and shampoo would be provided from outside. To support the steady state of the BLSS, the same amount and elementary composition of dehydrated wastes were removed periodically. The balance of matter flows between BLSS components was described by a system of algebraic equations. The mass flows between the components were optimized in EXCEL spreadsheets using Solver. The numerical method used in this study was Newton's method.
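    The Newton iteration that Solver applies to such a balance system can be sketched in a self-contained form. The two "balance" equations in the usage note are hypothetical stand-ins (total mass and a product constraint), not the BLSS equations from the record; the point is the solver structure, with a forward-difference Jacobian as Solver-style tools use.

    ```python
    def newton2(f, x0, tol=1e-10, max_iter=50, h=1e-7):
        """Newton's method for a two-equation balance f(x) = [0, 0],
        using a forward-difference Jacobian and Cramer's rule for the
        2x2 linear solve J * delta = -f."""
        x = list(x0)
        for _ in range(max_iter):
            fx = f(x)
            if max(abs(v) for v in fx) < tol:
                break
            # forward-difference Jacobian
            J = [[0.0, 0.0], [0.0, 0.0]]
            for j in range(2):
                xp = list(x)
                xp[j] += h
                fp = f(xp)
                for i in range(2):
                    J[i][j] = (fp[i] - fx[i]) / h
            det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
            dx = (-fx[0] * J[1][1] + fx[1] * J[0][1]) / det
            dy = (-fx[1] * J[0][0] + fx[0] * J[1][0]) / det
            x[0] += dx
            x[1] += dy
        return x
    ```

    For example, solving x + y = 10 with x*y = 21 from the starting guess (8, 1) converges in a handful of iterations to a root with near-zero residuals.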

  16. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types. Both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.

  17. Pathway Design, Engineering, and Optimization.

    PubMed

    Garcia-Ruiz, Eva; HamediRad, Mohammad; Zhao, Huimin

    The microbial metabolic versatility found in nature has inspired scientists to create microorganisms capable of producing value-added compounds. Many endeavors have been made to transfer and/or combine pathways, and existing or even engineered enzymes with new functions, into tractable microorganisms to generate new metabolic routes for drug, biofuel, and specialty chemical production. However, the success of these pathways can be impeded by complications ranging from inherent pathway failure to cell perturbations. To overcome these shortcomings, a wide variety of strategies have been developed. This chapter reviews the computational algorithms and experimental tools used to design efficient metabolic routes and to construct and optimize biochemical pathways for producing chemicals of high interest.

  18. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided. Hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared with gradient-based methods.
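
    The selection/crossover/mutation loop described above can be illustrated on a toy discrete design space; the two variables, their levels, and the cost-like objective are invented for illustration:

```python
import random

random.seed(1)

# Toy discrete design space: number of engines and a material index --
# exactly the kind of integer choices that break gradient-based optimizers.
ENGINES = [1, 2, 3, 4, 5]
MATERIALS = [0, 1, 2]

def fitness(design):
    """Hypothetical cost-like objective (lower is better); optimum at (3, 1)."""
    n, m = design
    return (n - 3) ** 2 + (m - 1) ** 2

def crossover(a, b):
    return (a[0], b[1])                     # combine "genes" of two parents

def mutate(d):
    if random.random() < 0.5:
        return (random.choice(ENGINES), d[1])
    return (d[0], random.choice(MATERIALS))

pop = [(random.choice(ENGINES), random.choice(MATERIALS)) for _ in range(8)]
for _ in range(30):
    pop.sort(key=fitness)                   # rank designs by objective value
    parents = pop[:4]                       # selection: fittest half survives
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(4)]
    pop = parents + children                # next (elitist) generation

best = min(pop, key=fitness)
print(best)
```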

  19. Flat-plate photovoltaic array design optimization

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1980-01-01

    An analysis is presented which integrates the results of specific studies in the areas of photovoltaic structural design optimization, optimization of array series/parallel circuit design, thermal design optimization, and optimization of environmental protection features. The analysis is based on minimizing the total photovoltaic system life-cycle energy cost including repair and replacement of failed cells and modules. This approach is shown to be a useful technique for array optimization, particularly when time-dependent parameters such as array degradation and maintenance are involved.

  20. Optimal experimental design for improving the estimation of growth parameters of Lactobacillus viridescens from data under non-isothermal conditions.

    PubMed

    Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges

    2017-01-02

    In predictive microbiology, the model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and then secondary models are fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows reducing the experimental workload and costs, and improving model identifiability, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to evaluate, for each approach, the number of experimental data points, the time needed, and the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental data points of microbial growth). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental data points of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square root secondary model were used to describe the microbial growth, in which the parameters b and Tmin (±95% confidence interval) were estimated from the experimental data. The parameters obtained with the TSM approach were b=0.0290 (±0.0020) [1/(h^0.5 °C)] and Tmin=-1.33 (±1.26) [°C], with R2=0.986 and RMSE=0.581, and the parameters obtained with the OED approach were b=0.0316 (±0.0013) [1/(h^0.5 °C)] and Tmin=-0.24 (±0.55) [°C], with R2=0.990 and RMSE=0.436. The parameters obtained from the OED approach…
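
    Because the square-root secondary model, sqrt(mu_max) = b(T - Tmin), is linear in temperature, the parameters b and Tmin can be recovered by a straight-line fit. A minimal sketch on noise-free synthetic data (not the paper's measurements):

```python
import numpy as np

# Square-root secondary model: sqrt(mu_max) = b * (T - Tmin).
# Synthetic "true" parameters stand in for the paper's estimates.
b_true, Tmin_true = 0.03, -1.0
T = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 25.0])   # temperatures, °C
sqrt_mu = b_true * (T - Tmin_true)                 # noise-free for clarity

# The model is linear in T: sqrt(mu) = b*T - b*Tmin, so a first-degree
# polynomial fit recovers both parameters.
slope, intercept = np.polyfit(T, sqrt_mu, 1)
b_hat = slope
Tmin_hat = -intercept / slope
print(round(b_hat, 4), round(Tmin_hat, 2))   # -> 0.03 -1.0
```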

  1. Experimental validation of systematically designed acoustic hyperbolic metamaterial slab exhibiting negative refraction

    NASA Astrophysics Data System (ADS)

    Christiansen, Rasmus E.; Sigmund, Ole

    2016-09-01

    This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from the boundary layer effects—insight which can be utilized in the further design of the metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting the negative refractive behavior across a wide angular range provides lensing and collimating capabilities.

  2. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
A framework for performing reliability based design optimization under epistemic uncertainty is also developed.

  3. Optimization of 3D Field Design

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal ``arrays'' containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.

  4. Experimental design for evaluating WWTP data by linear mass balances.

    PubMed

    Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P

    2018-05-15

    A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables, based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relation between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Besides, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
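
    The identifiability notion used here (a key variable is computable from measured ones via the linear mass balances) comes down to a rank condition. A toy sketch with an invented two-balance network:

```python
import numpy as np

# Toy set of two linear mass balances over four flows x1..x4:
#   x1 - x2 - x3 = 0
#   x3 - x4 = 0
A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  0.0,  1.0, -1.0]])

measured = [0, 1]            # indices of measured flows (x1, x2)
unknown = [2, 3]             # key variables to identify (x3, x4)

# Unknowns are identifiable iff the submatrix of A on the unknown columns
# has full column rank: then A_u @ x_u = -A_m @ x_m has a unique solution.
A_u, A_m = A[:, unknown], A[:, measured]
identifiable = np.linalg.matrix_rank(A_u) == len(unknown)
print(identifiable)   # -> True: x3 = x1 - x2 and x4 = x3 follow from the balances

x_m = np.array([10.0, 4.0])                 # example measurements
x_u = np.linalg.solve(A_u, -A_m @ x_m)      # identified key variables
print(x_u)   # -> [6. 6.]
```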

  5. Experimental and Quasi-Experimental Design.

    ERIC Educational Resources Information Center

    Cottrell, Edward B.

    With an emphasis on the problems of control of extraneous variables and threats to internal and external validity, the arrangement or design of experiments is discussed. The purpose of experimentation in an educational institution, and the principles governing true experimentation (randomization, replication, and control) are presented, as are…

  6. Optimal design of a shear magnetorheological damper for turning vibration suppression

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, Y. L.

    2013-09-01

    The intelligent material, so-called magnetorheological (MR) fluid, is utilized to control turning vibration. According to the structure of a common lathe CA6140, a shear MR damper is conceived by designing its structure and magnetic circuit. The vibration suppression effect of the damper is proved with dynamic analysis and simulation. Further, the magnetic circuit of the damper is optimized with the ANSYS parametric design language (APDL). In the optimization course, the area of the magnetic circuit and the damping force are considered. After optimization, the damper’s structure and its efficiency of electrical energy consumption are improved. Additionally, a comparative study on damping forces acquired from the initial and optimal design is conducted. A prototype of the developed MR damper is fabricated and magnetic tests are performed to measure the magnetic flux intensities and the residual magnetism in four damping gaps. Then, the testing results are compared with the simulated results. Finally, the suppressing vibration experimental system is set up and cylindrical turning experiments are performed to investigate the working performance of the MR damper.

  7. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  8. Optimal Flow Control Design

    NASA Technical Reports Server (NTRS)

    Allan, Brian; Owens, Lewis

    2010-01-01

    In support of the Blended-Wing-Body aircraft concept, a new flow control hybrid vane/jet design has been developed for use in a boundary-layer-ingesting (BLI) offset inlet in transonic flows. This inlet flow control is designed to minimize the engine fan-face distortion levels and the first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. This concept represents a potentially enabling technology for quieter and more environmentally friendly transport aircraft. An optimum vane design was found by minimizing the engine fan-face distortion, DC60, and the first five Fourier harmonic half amplitudes, while maximizing the total pressure recovery. The optimal vane design was then used in a BLI inlet wind tunnel experiment at NASA Langley's 0.3-meter transonic cryogenic tunnel. The experimental results demonstrated an 80-percent decrease in DPCPavg, the average circumferential distortion level at the engine fan-face, at an inlet mass flow rate corresponding to the middle of the operational range at the cruise condition. Even though the vanes were designed at a single inlet mass flow rate, they performed very well over the entire inlet mass flow range tested in the wind tunnel experiment with the addition of a small amount of jet flow control. While the circumferential distortion was decreased, the radial distortion on the outer rings at the aerodynamic interface plane (AIP) increased. This was a result of the large boundary layer being redistributed from the bottom of the AIP in the baseline case to the outer edges of the AIP when using the vortex generator (VG) vane flow control. The hybrid approach leverages strengths of vane and jet flow control devices, increasing inlet performance over a broader operational range with significant reduction in mass flow requirements.
Minimal distortion level requirements

  9. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.

  10. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction: it tries to minimize the kriging variance over the whole design region, i.e., a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice we cannot really find the G-optimal design with the computer equipment available today. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
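
    For a simple trend model, the D-optimality criterion described above can be checked by brute force. This sketch recovers the classic three-point D-optimal design for a quadratic model on [-1, 1]:

```python
import numpy as np
from itertools import combinations

# Candidate design points on [-1, 1] and a quadratic trend model
# f(x) = (1, x, x^2); the information matrix is M = X'X.
candidates = np.linspace(-1.0, 1.0, 21)

def det_info(points):
    X = np.column_stack([np.ones(len(points)), points, np.square(points)])
    return float(np.linalg.det(X.T @ X))

# Brute-force D-optimal search for a 3-point design: maximize det(M).
best = max(combinations(candidates, 3), key=det_info)
print([round(float(x), 6) for x in sorted(best)])
```

    The search returns the endpoints plus the midpoint, the well-known D-optimal support for a quadratic on an interval.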

  11. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  12. Experimental design

    NASA Technical Reports Server (NTRS)

    Lim, H. S.

    1983-01-01

    The design of long life, low weight nickel cadmium cells is studied. The status of a program to optimize nickel electrodes for the best performance is discussed. The pore size of the plaque, the mechanical strength and active material loading are considered in depth.

  13. Optimization of permeability for quality improvement by using factorial design

    NASA Astrophysics Data System (ADS)

    Said, Rahaini Mohd; Miswan, Nor Hamizah; Juan, Ng Shu; Hussin, Nor Hafizah; Ahmad, Aminah; Kamal, Mohamad Ridzuan Mohamad

    2017-05-01

    Sand casting is used worldwide in the metal casting industry, and green sand is the most commonly used mould type in sand casting. Defects on the surface of cast products are one of the problems in the sand casting industry. Defects related to the composition of green sand include blowholes, pinholes, shrinkage and porosity. Our objective is to find the composition of green sand that minimizes the occurrence of defects. Sand specimens with different parameter settings (bentonite, green sand, coal dust and water) were designed and prepared to undergo a permeability test. A 2^4 factorial design experiment, with the four factors at different compositions, was run, for a total of 16 experimental runs. The necessary models based on the experimental design were developed. The model has a high coefficient of determination (R2=0.9841), and the predicted values fitted the actual experimental data well. Using the Design-Expert analysis software, we identified that bentonite and water form the main interaction effect in the experiments. The optimal settings for the green sand composition are 100 g silica sand, 21 g bentonite, 6.5 g water and 6 g coal dust. This composition gives a permeability number of 598.3 GP.
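
    The effect estimates behind such a 2^4 factorial analysis can be computed directly from the coded design matrix. The response values below are synthetic, chosen to contain a bentonite-water interaction like the one the authors report:

```python
import numpy as np
from itertools import product

# Full 2^4 factorial in coded units (-1/+1): 16 runs over the four factors.
runs = np.array(list(product([-1.0, 1.0], repeat=4)))
bent, sand, coal, water = runs.T

# Synthetic permeability responses containing a bentonite*water interaction
# (illustrative numbers, not the paper's measurements).
y = 500 + 30 * bent + 5 * sand + 10 * water + 20 * bent * water

# Main effect of a factor: mean response at +1 minus mean response at -1.
effects = {name: y[col == 1].mean() - y[col == -1].mean()
           for name, col in zip(["bentonite", "sand", "coal", "water"], runs.T)}

# A two-factor interaction effect uses the product column as its contrast.
bw = bent * water
effect_bw = y[bw == 1].mean() - y[bw == -1].mean()
print(effects["bentonite"], effects["coal"], effect_bw)   # -> 60.0 0.0 40.0
```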

  14. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816

  15. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for easy implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.

  16. Experimental design and optimization of leaching process for recovery of valuable chemical elements (U, La, V, Mo, Yb and Th) from low-grade uranium ore.

    PubMed

    Zakrzewska-Koltuniewicz, Grażyna; Herdzik-Koniecko, Irena; Cojocaru, Corneliu; Chajduk, Ewelina

    2014-06-30

    The paper deals with experimental design and optimization of the leaching process of uranium and associated metals from low-grade Polish ores. The chemical elements of interest for extraction from the ore were U, La, V, Mo, Yb and Th. Sulphuric acid was used as the leaching reagent. Based on the design of experiments, second-order regression models were constructed to approximate the leaching efficiency of the elements. Graphical illustrations using 3-D surface plots were employed in order to identify the main, quadratic and interaction effects of the factors. The multi-objective optimization method based on the desirability approach was applied in this study. The optimum conditions were determined as P=5 bar, T=120 °C and t=90 min. Under these optimal conditions, the overall extraction performance is 81.43% (for U), 64.24% (for La), 98.38% (for V), 43.69% (for Yb), 76.89% (for Mo) and 97.00% (for Th). Copyright © 2014 Elsevier B.V. All rights reserved.
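
    The desirability approach mentioned above combines the individual responses into one objective, typically as a geometric mean of per-response desirabilities. A sketch using the reported recoveries and an assumed linear larger-is-better desirability on 0-100% (the paper's actual desirability functions are not given in the abstract):

```python
import math

# Assumed "larger is better" linear desirability on 0-100 % recovery.
def desirability(y, lo=0.0, hi=100.0):
    return min(1.0, max(0.0, (y - lo) / (hi - lo)))

# Overall extraction performance reported at P=5 bar, T=120 °C, t=90 min:
recoveries = {"U": 81.43, "La": 64.24, "V": 98.38,
              "Yb": 43.69, "Mo": 76.89, "Th": 97.00}

d = [desirability(r) for r in recoveries.values()]
overall = math.prod(d) ** (1.0 / len(d))     # geometric-mean composite score
print(round(overall, 3))
```

    The optimizer would then search the (P, T, t) factor space for the settings maximizing this composite score.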

  17. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
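
    One way to make the entropy-based relevance measure concrete is the expected information gain of each candidate experiment: the entropy of the model-averaged outcome distribution minus the average within-model outcome entropy. The two rival models and their outcome predictions below are toy values, not taken from the paper:

```python
import numpy as np

# Toy setup: two rival models, two candidate experiments, binary outcomes.
# p_outcome[m, e] is model m's predicted outcome distribution for experiment e.
p_model = np.array([0.5, 0.5])               # current model probabilities
p_outcome = np.array([
    [[0.9, 0.1], [0.5, 0.5]],                # model A's predictions
    [[0.1, 0.9], [0.5, 0.5]],                # model B's predictions
])

def H(p):
    """Shannon entropy in bits."""
    p = np.asarray(p)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Expected information gain of each experiment = H(mixture) - mean H(model),
# a common entropy-based relevance measure for model discrimination.
gains = []
for e in range(p_outcome.shape[1]):
    mix = (p_model[:, None] * p_outcome[:, e, :]).sum(axis=0)
    cond = (p_model * np.array([H(p_outcome[m, e]) for m in range(2)])).sum()
    gains.append(H(mix) - cond)

best = int(np.argmax(gains))
print(best)   # -> 0: the experiment on which the models disagree
```

    Nested entropy sampling, as described in the abstract, searches a large parameterized design space for the maximizer of such a quantity instead of enumerating it.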

  18. Optimal design of a piezoelectric transducer for exciting guided wave ultrasound in rails

    NASA Astrophysics Data System (ADS)

    Ramatlo, Dineo A.; Wilke, Daniel N.; Loveday, Philip W.

    2017-02-01

    An existing Ultrasonic Broken Rail Detection System installed in South Africa on a heavy duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer to strongly excite a guided wave mode with energy concentrated in the web (web mode) of a rail is required. A previous study demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element - 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin hypercube sampled design of experiments that required a total of 500 SAFE-3D analyses in the design space was employed in a response surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in optimal transducer design are noteworthy as more than 500 analyses were performed. The optimal design was then manufactured and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
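
    The sample-fit-optimize pattern described above (expensive analyses, a radial-basis-function response surface, then optimization on the cheap surrogate) can be sketched in a dependency-free way. The objective is a stand-in for a SAFE-3D analysis, and a dense grid replaces the Nelder-Mead step for brevity:

```python
import numpy as np

# Stand-in for the expensive per-sample analysis (SAFE-3D in the paper).
def transmitted_energy(x):
    return -((x - 0.3) ** 2)             # hypothetical objective, peak at 0.3

# 1) Design of experiments: sample the (single) design variable.
X = np.linspace(0.0, 1.0, 8)
y = transmitted_energy(X)

# 2) Fit a Gaussian radial-basis-function response surface to the samples.
eps = 5.0                                # assumed shape parameter
Phi = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)   # small ridge for stability

def surrogate(x):
    return np.exp(-(eps * (x - X)) ** 2) @ w

# 3) Optimize on the cheap surrogate (the paper used Nelder-Mead; a dense
# grid keeps this sketch short).
grid = np.linspace(0.0, 1.0, 1001)
vals = np.array([surrogate(g) for g in grid])
x_opt = float(grid[vals.argmax()])
print(round(x_opt, 3))   # lands near the true optimum at 0.3
```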

  19. Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry

    DTIC Science & Technology

    2015-12-22

    AFRL-AFOSR-VA-TR-2016-0018. Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry. Principal Investigator: Warren B. Powell, Trustees of Princeton University, Department of Operations Research and… Dates covered (from - to): 01-07-2012 to 30-09-2015.

  20. Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions

    PubMed Central

    Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    Objective This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463

  1. A framework for accelerated phototrophic bioprocess development: integration of parallelized microscale cultivation, laboratory automation and Kriging-assisted experimental design.

    PubMed

    Morschett, Holger; Freier, Lars; Rohde, Jannis; Wiechert, Wolfgang; von Lieres, Eric; Oldiges, Marco

    2017-01-01

    Even though microalgae-derived biodiesel has regained interest within the last decade, industrial production is still challenging for economic reasons. Besides reactor design, as well as value chain and strain engineering, laborious and slow early-stage parameter optimization represents a major drawback. The present study introduces a framework for the accelerated development of phototrophic bioprocesses. A state-of-the-art micro-photobioreactor supported by a liquid-handling robot for automated medium preparation and product quantification was used. To take full advantage of the technology's experimental capacity, Kriging-assisted experimental design was integrated to enable highly efficient execution of screening applications. The resulting platform was used for medium optimization of a lipid production process using Chlorella vulgaris toward maximum volumetric productivity. Within only four experimental rounds, lipid production was increased approximately threefold to 212 ± 11 mg L⁻¹ d⁻¹. Besides nitrogen availability as a key parameter, magnesium, calcium and various trace elements were shown to be of crucial importance. Here, synergistic multi-parameter interactions as revealed by the experimental design introduced significant further optimization potential. The integration of parallelized microscale cultivation, laboratory automation and Kriging-assisted experimental design proved to be a fruitful tool for the accelerated development of phototrophic bioprocesses. By means of the proposed technology, the targeted optimization task was conducted in a very timely and material-efficient manner.
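    The Kriging-assisted loop described above can be sketched as Gaussian-process regression plus an expected-improvement criterion that picks the next experiment to run. The response function, kernel length scale, and number of rounds below are illustrative assumptions, not the study's actual model.

    ```python
    import numpy as np
    from math import erf, sqrt, pi

    def norm_pdf(z): return np.exp(-0.5 * z**2) / sqrt(2 * pi)
    def norm_cdf(z): return 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))

    def gp_posterior(X, y, Xs, length=0.2, noise=1e-6):
        """Kriging predictor: GP regression with an RBF kernel (zero prior mean)."""
        k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
        K = k(X, X) + noise * np.eye(len(X))
        Ks = k(X, Xs)
        mu = Ks.T @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum('ij,ji->i', Ks.T, np.linalg.solve(K, Ks))
        return mu, np.sqrt(np.clip(var, 1e-12, None))

    # Hypothetical response surface (in practice: measured productivity).
    f = lambda x: np.sin(6 * x) * x
    X = np.array([0.1, 0.5, 0.9]); y = f(X)       # initial experiments
    grid = np.linspace(0, 1, 201)

    for _ in range(4):                            # 4 sequential design rounds
        mu, sd = gp_posterior(X, y, grid)
        best = y.max()
        z = (mu - best) / sd
        ei = (mu - best) * norm_cdf(z) + sd * norm_pdf(z)  # expected improvement
        x_next = grid[np.argmax(ei)]              # next experiment to run
        X, y = np.append(X, x_next), np.append(y, f(x_next))

    print(f"best x after 4 rounds: {X[y.argmax()]:.3f}, f = {y.max():.3f}")
    ```

    The acquisition step trades off high predicted response (exploitation) against high predictive uncertainty (exploration), which is what lets such loops converge in few experimental rounds.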

  2. Wrinkle-free design of thin membrane structures using stress-based topology optimization

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-05-01

    Thin membrane structures can experience wrinkling, caused by local buckling deformation, when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and a linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.

  3. Heat Sink Design and Optimization

    DTIC Science & Technology

    2015-12-01

    Heat Sink Design and Optimization. Final report, December 2015; distribution is unlimited. Abstract: Heat sinks are devices that are used to enhance heat dissipation ...

  4. Formulation, optimization and characterization of cationic polymeric nanoparticles of mast cell stabilizing agent using the Box-Behnken experimental design.

    PubMed

    Gajra, Balaram; Patel, Ravi R; Dalwadi, Chintan

    2016-01-01

    The present research work was intended to develop and optimize sustained-release biodegradable chitosan nanoparticles (CSNPs) as a delivery vehicle for sodium cromoglicate (SCG) using the circumscribed Box-Behnken experimental design (BBD), and to evaluate their potential for oral permeability enhancement. The 3-factor, 3-level BBD was employed to investigate the combined influence of formulation variables on particle size and entrapment efficiency (%EE) of SCG-CSNPs prepared by the ionic gelation method. The generated polynomial equation was validated, and the desirability function was utilized for optimization. Optimized SCG-CSNPs were evaluated by physicochemical, morphological and in-vitro characterization, and their permeability enhancement potential was assessed by ex-vivo and cellular uptake studies using CLSM. SCG-CSNPs exhibited a particle size of 200.4 ± 4.06 nm and %EE of 62.68 ± 2.4% with unimodal size distribution and a cationic, spherical, smooth surface. Physicochemical and in-vitro characterization revealed the existence of SCG in amorphous form inside the CSNPs without interaction, and showed a sustained release profile. The ex-vivo and uptake studies confirmed the permeability enhancement potential of the CSNPs. The developed SCG-CSNPs can be considered a promising delivery strategy with respect to improved permeability and sustained drug release, demonstrating the importance of CSNPs as a potential oral delivery system for treatment of allergic rhinitis. Hence, further studies should be performed to establish the pharmacokinetic potential of the CSNPs.
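    For reference, a 3-factor, 3-level Box-Behnken design like the one used above can be generated directly from its definition (a 2² factorial on each pair of factors, the remaining factor held at its center level, plus replicated center points). The coded matrix below is a generic sketch, not the study's actual run table.

    ```python
    from itertools import combinations
    import numpy as np

    def box_behnken(k, center_runs=3):
        """Coded (-1, 0, +1) Box-Behnken design matrix for k factors:
        a 2^2 factorial on every pair of factors with the rest held at 0,
        plus replicated center points."""
        runs = []
        for i, j in combinations(range(k), 2):
            for a in (-1, 1):
                for b in (-1, 1):
                    row = [0] * k
                    row[i], row[j] = a, b
                    runs.append(row)
        runs += [[0] * k] * center_runs
        return np.array(runs)

    design = box_behnken(3)
    print(design.shape)   # (15, 3): 12 edge-midpoint runs + 3 center points
    ```

    Each coded level is then mapped to a physical factor range; the 15 runs support fitting a full quadratic (response surface) model in 3 factors.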

  5. Design and optimization of membrane-type acoustic metamaterials

    NASA Astrophysics Data System (ADS)

    Blevins, Matthew Grant

    One of the most common problems in noise control is the attenuation of low frequency noise. Typical solutions require barriers with high density and/or thickness. Membrane-type acoustic metamaterials are a novel type of engineered material capable of high low-frequency transmission loss despite their small thickness and light weight. These materials are ideally suited to applications with strict size and weight limitations such as aircraft, automobiles, and buildings. The transmission loss profile can be manipulated by changing the micro-level substructure, stacking multiple unit cells, or by creating multi-celled arrays. To date, analysis has focused primarily on experimental studies in plane-wave tubes and numerical modeling using finite element methods. These methods are inefficient when used for applications that require iterative changes to the structure of the material. To facilitate design and optimization of membrane-type acoustic metamaterials, computationally efficient dynamic models based on the impedance-mobility approach are proposed. Models of a single unit cell in a waveguide and in a baffle, a double layer of unit cells in a waveguide, and an array of unit cells in a baffle are studied. The accuracy of the models and the validity of assumptions used are verified using a finite element method. The remarkable computational efficiency of the impedance-mobility models compared to finite element methods enables implementation in design tools based on a graphical user interface and in optimization schemes. Genetic algorithms are used to optimize the unit cell design for a variety of noise reduction goals, including maximizing transmission loss for broadband, narrow-band, and tonal noise sources. The tools for design and optimization created in this work will enable rapid implementation of membrane-type acoustic metamaterials to solve real-world noise control problems.
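    A genetic-algorithm loop of the kind used above for unit-cell optimization can be sketched as follows. The two design variables and the stand-in fitness function are hypothetical placeholders for an impedance-mobility transmission-loss objective, which a real run would evaluate instead.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in objective: peaks at an assumed target design [0.3, 0.7]; a real
    # run would score candidate unit cells by their computed transmission loss.
    def fitness(pop):
        target = np.array([0.3, 0.7])
        return -np.sum((pop - target) ** 2, axis=1)

    pop = rng.random((40, 2))                      # 40 candidate designs in [0,1]^2
    for gen in range(60):
        f = fitness(pop)
        parents = pop[np.argsort(f)[-20:]]         # truncation selection (top half)
        pairs = rng.integers(0, 20, size=(40, 2))  # random parent pairings
        alpha = rng.random((40, 1))
        pop = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
        pop += rng.normal(0, 0.02, pop.shape)      # Gaussian mutation
        pop = np.clip(pop, 0, 1)

    best = pop[np.argmax(fitness(pop))]
    print(best)   # converges near the assumed target [0.3, 0.7]
    ```

    The appeal for metamaterial design is that the GA only needs objective evaluations, so any fast forward model (here, the impedance-mobility model) can be dropped in as the fitness function.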

  6. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
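    The classical double-loop Monte Carlo estimator that the paper improves upon can be illustrated on a toy linear-Gaussian model, where the expected information gain is known in closed form. The model, noise levels, and sample sizes below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def eig_dlmc(d, n_outer=2000, n_inner=2000, s_theta=1.0, s_eps=0.5):
        """Double-loop Monte Carlo estimate of expected information gain for
        the toy experiment y = d*theta + eps, theta ~ N(0, s_theta^2),
        eps ~ N(0, s_eps^2), with design parameter d."""
        theta = rng.normal(0, s_theta, n_outer)
        y = d * theta + rng.normal(0, s_eps, n_outer)
        loglik = (-0.5 * ((y - d * theta) / s_eps) ** 2
                  - np.log(s_eps * np.sqrt(2 * np.pi)))
        # Inner loop: estimate the evidence p(y) with fresh prior samples.
        theta_in = rng.normal(0, s_theta, n_inner)
        lik = np.exp(-0.5 * ((y[:, None] - d * theta_in[None, :]) / s_eps) ** 2) \
              / (s_eps * np.sqrt(2 * np.pi))
        log_evidence = np.log(lik.mean(axis=1))
        return np.mean(loglik - log_evidence)

    d = 1.0
    analytic = 0.5 * np.log(1 + d**2 * 1.0**2 / 0.5**2)  # Gaussian mutual information
    print(eig_dlmc(d), analytic)
    ```

    The n_outer × n_inner cost, and the underflow risk when the inner likelihoods are tiny, are exactly the drawbacks that the Laplace-based importance sampling in the paper addresses.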

  7. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design

    PubMed Central

    Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.

    2015-01-01

    Purpose Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281
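    D-optimality, as used above, chooses design runs to maximize det(X'X) for the assumed model. The minimal point-exchange sketch below uses a one-factor quadratic model (an assumed toy model, not the paper's mixture setting) where the optimizer is expected to concentrate runs at the known support points -1, 0, +1.

    ```python
    import numpy as np

    # Candidate design points on a grid; model matrix for a one-factor
    # quadratic model y = b0 + b1*x + b2*x^2 (illustrative toy model).
    x = np.linspace(-1, 1, 21)
    F = np.column_stack([np.ones_like(x), x, x**2])

    def d_optimal_exchange(F, n_runs=6, max_sweeps=20):
        """Coordinate-exchange search for a D-optimal exact design: for each
        run in turn, swap in whichever candidate point most increases
        det(X'X); stop when a full sweep makes no improvement."""
        idx = list(range(n_runs))                  # arbitrary starting design
        det = lambda ii: np.linalg.det(F[ii].T @ F[ii])
        best = det(idx)
        for _ in range(max_sweeps):
            improved = False
            for pos in range(n_runs):
                for j in range(len(F)):
                    trial = idx.copy()
                    trial[pos] = j
                    d = det(trial)
                    if d > best + 1e-9:
                        idx, best, improved = trial, d, True
            if not improved:
                break
        return np.sort(x[idx]), best

    support, det_val = d_optimal_exchange(F)
    print(support, det_val)   # runs concentrate at the support points -1, 0, +1
    ```

    Mixture designs add a sum-to-one constraint on the factors, but the exchange idea (swap candidate points to maximize the information determinant) is the same.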

  8. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design.

    PubMed

    Adesina, Simeon K; Wight, Scott A; Akala, Emmanuel O

    2014-11-01

    Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize cross-linked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the cross-linking agent and stabilizer indicate the important factors for minimizing particle size.

  9. Improving spacecraft design using a multidisciplinary design optimization methodology

    NASA Astrophysics Data System (ADS)

    Mosher, Todd Jon

    2000-10-01

    Spacecraft design has gone from maximizing performance under technology constraints to minimizing cost under performance constraints. This is characteristic of the "faster, better, cheaper" movement that has emerged within NASA. Currently spacecraft are "optimized" manually through a tool-assisted evaluation of a limited set of design alternatives. With this approach there is no guarantee that a systems-level focus will be taken, and "feasibility" rather than "optimality" is commonly all that is achieved. To improve spacecraft design in the "faster, better, cheaper" era, a new approach using multidisciplinary design optimization (MDO) is proposed. Using MDO methods brings structure to conceptual spacecraft design by casting a spacecraft design problem into an optimization framework. Then, through the construction of a model that captures design and cost, this approach facilitates a quicker and more straightforward option synthesis. The final step is to automatically search the design space. As computer processor speed continues to increase, enumeration of all combinations, while not elegant, is one method that is straightforward to perform. As an alternative to enumeration, genetic algorithms find solutions by evaluating far fewer candidate designs, albeit with some limitations. Both methods increase the likelihood of finding an optimal design, or at least the most promising area of the design space. This spacecraft design methodology using MDO is demonstrated on three examples. A retrospective test for validation is performed using the Near Earth Asteroid Rendezvous (NEAR) spacecraft design. For the second example, the premise that aerobraking was needed to minimize mission cost and was mission-enabling for the Mars Global Surveyor (MGS) mission is challenged. While one might expect no feasible design space for an MGS mission without aerobraking, a counterintuitive result is discovered. Several design options that do not use aerobraking are feasible and cost

  10. Computational design optimization for microfluidic magnetophoresis

    PubMed Central

    Plouffe, Brian D.; Lewis, Laura H.; Murthy, Shashi K.

    2011-01-01

    Current macro- and microfluidic approaches for the isolation of mammalian cells are limited in both efficiency and purity. In order to design a robust platform for the enumeration of a target cell population, high collection efficiencies are required. Additionally, the ability to isolate pure populations with minimal biological perturbation and efficient off-chip recovery will enable subcellular analyses of these cells for applications in personalized medicine. Here, a rational design approach for a simple and efficient device that isolates target cell populations via magnetic tagging is presented. In this work, two magnetophoretic microfluidic device designs are described, with optimized dimensions and operating conditions determined from a force balance equation that considers two dominant and opposing driving forces exerted on a magnetic-particle-tagged cell, namely, magnetic and viscous drag. Quantitative design criteria for an electromagnetic field displacement-based approach are presented, wherein target cells labeled with commercial magnetic microparticles flowing in a central sample stream are shifted laterally into a collection stream. Furthermore, the final device design is constrained to fit on a standard rectangular glass coverslip (60 (L) × 24 (W) × 0.15 (H) mm³) to accommodate small sample volumes and point-of-care design considerations. The anticipated performance of the device is examined via a parametric analysis of several key variables within the model. It is observed that minimal currents (<500 mA) are required to generate magnetic fields sufficient to separate cells from sample streams flowing at rates as high as 7 ml/h, comparable to the performance of state-of-the-art magnet-activated cell sorting systems currently used in clinical settings. Experimental validation of the presented model illustrates that a device designed according to the derived rational optimization can effectively isolate (∼100%) a magnetic-particle-tagged cell

  11. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differencing of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  12. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  13. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. © The Author 2014. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  14. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  15. Optimization of ultrasound-assisted dispersive solid-phase microextraction based on nanoparticles followed by spectrophotometry for the simultaneous determination of dyes using experimental design.

    PubMed

    Asfaram, Arash; Ghaedi, Mehrorang; Goudarzi, Alireza

    2016-09-01

    A simple, low-cost and ultrasensitive method has been described for the simultaneous preconcentration and determination of trace amounts of auramine-O (AO) and malachite green (MG) in aqueous media, based on accumulation on novel, low-toxicity nanomaterials by an ultrasound-assisted dispersive solid-phase micro-extraction (UA-DSPME) procedure combined with spectrophotometric detection. The Mn-doped ZnS nanoparticles loaded on activated carbon were characterized by field emission scanning electron microscopy (FE-SEM), particle size distribution, X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analyses, and subsequently were used as a green and efficient material for dye accumulation. The contributions of experimental variables such as ultrasonic time, ultrasonic temperature, adsorbent mass, vortex time, ionic strength, pH and elution volume were optimized through experimental design, and the preconcentrated analytes were efficiently eluted with acetone. A preliminary Plackett-Burman design was applied to select the most significant factors, and the main and interaction effects of the significant variables (ultrasonic time, adsorbent mass, elution volume and pH) were then quantified by a central composite design combined with response surface analysis; the optimum experimental conditions were set at pH 8.0, 1.2 mg of adsorbent, 150 μL of eluent and 3.7 min of sonication. Under optimized conditions, the average recoveries (five replicates) for the two dyes (spiked at 500.0 ng mL⁻¹) ranged from 92.80 to 97.70% with acceptable RSDs of less than 4.0% over a linear range of 3.0-5000.0 ng mL⁻¹ for AO and MG in water samples, with regression coefficients (R²) of 0.9975 and 0.9977, respectively. Acceptable limits of detection of 0.91 and 0.61 ng mL⁻¹ for AO and MG, respectively, together with high accuracy and repeatability, are advantages of the present method for accurate determination of these dyes at trace levels in complicated matrices.
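    A central composite design of the kind used above augments a two-level factorial cube with axial (star) and center points. The generator below is a generic coded-units sketch (with the rotatable choice of alpha assumed), not the study's actual design.

    ```python
    from itertools import product
    import numpy as np

    def central_composite(k, alpha=None, center_runs=2):
        """Coded central composite design: full 2^k factorial cube, 2k axial
        (star) points at +/- alpha, and replicated center points. alpha
        defaults to the rotatable choice (2^k)^(1/4)."""
        if alpha is None:
            alpha = (2 ** k) ** 0.25
        cube = np.array(list(product([-1, 1], repeat=k)), dtype=float)
        star = np.zeros((2 * k, k))
        for i in range(k):
            star[2 * i, i], star[2 * i + 1, i] = -alpha, alpha
        center = np.zeros((center_runs, k))
        return np.vstack([cube, star, center])

    D = central_composite(3)
    print(D.shape)   # (16, 3): 8 cube + 6 star + 2 center runs
    ```

    The axial points add the pure-quadratic information the cube lacks, so the combined runs support fitting the full second-order response surface model.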

  16. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the overall conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  17. Characterization and Optimization Design of the Polymer-Based Capacitive Micro-Arrayed Ultrasonic Transducer

    NASA Astrophysics Data System (ADS)

    Chiou, De-Yi; Chen, Mu-Yueh; Chang, Ming-Wei; Deng, Hsu-Cheng

    2007-11-01

    This study constructs an electromechanical finite element model of the polymer-based capacitive micro-arrayed ultrasonic transducer (P-CMUT). Electrostatic-structural coupled-field simulations are performed to investigate operational characteristics such as collapse voltage and resonant frequency. The numerical results are found to be in good agreement with experimental observations. The influence of each design parameter on the collapse voltage and resonant frequency is also presented. To resolve conflicting requirements across the different physical fields, an integrated design method is developed to optimize the geometric parameters of the P-CMUT. The optimization search routine, conducted using a genetic algorithm (GA), is connected with the commercial FEM software ANSYS to obtain the best design variables under multi-objective functions. The results show that the optimal parameter values satisfy the conflicting objectives, namely to minimize the collapse voltage while simultaneously maintaining a customized frequency. Overall, the present results indicate that the combined FEM/GA optimization scheme provides an efficient and versatile approach to the optimization design of the P-CMUT.

  18. Optimal pupil design for confocal microscopy

    NASA Astrophysics Data System (ADS)

    Patel, Yogesh G.; Rajadhyaksha, Milind; DiMarzio, Charles A.

    2010-02-01

    Confocal reflectance microscopy may enable screening and diagnosis of skin cancers noninvasively and in real time, as an adjunct to biopsy and pathology. Current instruments are large, complex, and expensive. A simpler confocal line-scanning microscope may accelerate the translation of confocal microscopy in clinical and surgical dermatology. A confocal reflectance microscope may use a beamsplitter, transmitting and detecting through the full pupil, or a divided pupil (theta configuration), with half used for transmission and half for detection. The divided pupil may offer better sectioning and contrast. We present a Fourier optics model and compare the on-axis irradiance of a confocal point-scanning microscope in both pupil configurations, optimizing the profile of a Gaussian beam in a circular or semicircular aperture. We repeat both calculations with a cylindrical lens, which focuses the source to a line. The variable parameter is the fill factor, h, the ratio of the 1/e² diameter of the Gaussian beam to the diameter of the full aperture. The optimal values of h for point scanning are 0.90 (full aperture) and 0.66 (half-aperture). For line scanning, the fill factors are 1.02 (full) and 0.52 (half). Additional parameters to consider are the optimal location of the point-source beam in the divided-pupil configuration, the optimal line width for the line source, and the width of the aperture in the divided-pupil configuration. Additional figures of merit are field of view and sectioning. Use of optimal designs is critical in comparing the experimental performance of the different configurations.
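    The full-aperture point-scanning optimum quoted above (h of about 0.90) can be checked with a simplified scalar model, assuming fixed total beam power and on-axis focal irradiance proportional to the squared pupil-integrated amplitude; these modeling assumptions are ours, not the paper's full Fourier optics treatment.

    ```python
    import numpy as np

    # Simplified model: circular aperture of radius 1, Gaussian amplitude
    # exp(-r^2/w^2), so h = w equals the ratio of the 1/e^2 intensity diameter
    # to the aperture diameter. For fixed input power, the on-axis focal
    # irradiance scales as |integral of amplitude over the pupil|^2 / power:
    #   I(h) ~ h^2 * (1 - exp(-1/h^2))^2.
    h = np.linspace(0.3, 2.0, 1701)
    I = h**2 * (1 - np.exp(-1.0 / h**2))**2
    h_opt = h[np.argmax(I)]
    print(f"optimal fill factor: {h_opt:.2f}")   # ~0.9, matching the full-pupil case
    ```

    Small h throws away power at the aperture edges, while large h truncates the beam; the trade-off peaks near h of 0.9 in this model.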

  19. Optimized structural designs for stretchable silicon integrated circuits.

    PubMed

    Kim, Dae-Hyeong; Liu, Zhuangjian; Kim, Yun-Soung; Wu, Jian; Song, Jizhou; Kim, Hoon-Sik; Huang, Yonggang; Hwang, Keh-Chih; Zhang, Yongwei; Rogers, John A

    2009-12-01

    Materials and design strategies for stretchable silicon integrated circuits that use non-coplanar mesh layouts and elastomeric substrates are presented. Detailed experimental and theoretical studies reveal many of the key underlying aspects of these systems. The results show, as an example, optimized mechanics and materials for circuits that exhibit maximum principal strains less than 0.2% even for applied strains of up to approximately 90%. Simple circuits, including complementary metal-oxide-semiconductor inverters and n-type metal-oxide-semiconductor differential amplifiers, validate these designs. The results suggest practical routes to high-performance electronics with linear elastic responses to large strain deformations, suitable for diverse applications that are not readily addressed with conventional wafer-based technologies.

  20. Design optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos

    1991-01-01

    The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through a homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module, and an optimization driver. In the second phase, the focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be specially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.

  1. Bioresorbable scaffolds for bone tissue engineering: optimal design, fabrication, mechanical testing and scale-size effects analysis.

    PubMed

    Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R

    2015-03-01

Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineered scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit-cells, so scale-size effects on the mechanical response under finite periodicity are investigated and compared with the predictions from the homogenization method, which assumes infinitely repeated unit cells in the limit. Results show that a limited number of unit-cells (3-5 repeated on a side) introduces some scale effects, but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations, due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R² > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  2. The Experimental Design Assistant.

    PubMed

    Percie du Sert, Nathalie; Bamsey, Ian; Bate, Simon T; Berdoy, Manuel; Clark, Robin A; Cuthill, Innes; Fry, Derek; Karp, Natasha A; Macleod, Malcolm; Moon, Lawrence; Stanford, S Clare; Lings, Brian

    2017-09-01

    Addressing the common problems that researchers encounter when designing and analysing animal experiments will improve the reliability of in vivo research. In this article, the Experimental Design Assistant (EDA) is introduced. The EDA is a web-based tool that guides the in vivo researcher through the experimental design and analysis process, providing automated feedback on the proposed design and generating a graphical summary that aids communication with colleagues, funders, regulatory authorities, and the wider scientific community. It will have an important role in addressing causes of irreproducibility.

  3. Design optimization for active twist rotor blades

    NASA Astrophysics Data System (ADS)

    Mok, Ji Won

This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation in the mesh generator, and the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed a significant impact on vibration reduction, the proposed optimization process demonstrated that the design could still be improved significantly. 
The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to

  4. Numerical and experimental analysis of a ducted propeller designed by a fully automated optimization process under open water condition

    NASA Astrophysics Data System (ADS)

    Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa

    2015-10-01

A fully automated optimization process is provided for the design of ducted propellers under open water conditions, including 3D geometry modeling, meshing, optimization algorithms and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open water tests were carried out and confirmed that the optimized ducted propeller improves hydrodynamic performance as predicted.

  5. Designing electronic properties of two-dimensional crystals through optimization of deformations

    NASA Astrophysics Data System (ADS)

    Jones, Gareth W.; Pereira, Vitor M.

    2014-09-01

One of the enticing features common to most of the two-dimensional (2D) electronic systems that, in the wake of (and in parallel with) graphene, are currently at the forefront of materials science research is the ability to easily introduce a combination of planar deformations and bending in the system. Since the electronic properties are ultimately determined by the details of atomic orbital overlap, such mechanical manipulations translate into modified (or, at least, perturbed) electronic properties. Here, we present a general-purpose optimization framework for tailoring physical properties of 2D electronic systems by manipulating the state of local strain, allowing a one-step route from their design to experimental implementation. A concrete example, chosen for its relevance in light of current experiments in graphene nanostructures, is the optimization of the experimental parameters that generate a prescribed spatial profile of pseudomagnetic fields (PMFs) in graphene. But the method is general enough to accommodate a multitude of possible experimental parameters and conditions whereby deformations can be imparted to the graphene lattice, and complies, by design, with graphene's elastic equilibrium and elastic compatibility constraints. As a result, it efficiently answers the inverse problem of determining the optimal values of a set of external or control parameters (such as substrate topography, sample shape, load distribution, etc.) that result in a graphene deformation whose associated PMF profile best matches a prescribed target. The ability to address this inverse problem in an expedited way is one key step for practical implementations of the concept of 2D systems with electronic properties strain-engineered to order. The general-purpose nature of this calculation strategy means that it can be easily applied to the optimization of other relevant physical quantities which directly depend on the local strain field, not just in graphene but in other 2D
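If the PMF response is (approximately) linear in the control parameters, the inverse problem described above reduces to least squares. The sketch below is illustrative only: it uses a random stand-in response basis rather than a real strain model, and the paper's actual method additionally enforces elastic equilibrium and compatibility constraints, which are omitted here.

```python
import numpy as np

# Hypothetical linear response model: each control parameter (e.g. a load or
# substrate-feature amplitude) contributes a known PMF "basis" profile.
rng = np.random.default_rng(0)
n_points, n_params = 200, 8
B = rng.normal(size=(n_points, n_params))   # basis profiles (stand-in data)
target = B @ rng.normal(size=n_params)      # a reachable target PMF profile

# Inverse problem: the parameter vector whose combined profile best matches
# the prescribed target, in the least-squares sense.
p_opt, *_ = np.linalg.lstsq(B, target, rcond=None)
residual = np.linalg.norm(B @ p_opt - target)
print(residual)   # ~0, since the target lies in the span of the basis
```

For a nonlinear strain-to-PMF map, the same structure survives, with the linear solve replaced by an iterative minimization of the profile mismatch.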

  6. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...
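The D-optimality criterion underlying such designs can be sketched numerically: for a low-order polynomial model in total dose along a fixed-ratio ray, a candidate design is scored by the determinant of its information matrix X'X. The dose placements and quadratic model below are hypothetical illustrations, not the EPA study's actual designs.

```python
import numpy as np

def d_criterion(design_points):
    """|X'X| for a quadratic model in total dose t along a fixed-ratio ray."""
    t = np.asarray(design_points, dtype=float)
    X = np.column_stack([np.ones_like(t), t, t ** 2])  # model matrix
    return np.linalg.det(X.T @ X)

# Candidate designs: equally spaced vs. end-loaded dose placements
equally_spaced = [0.0, 0.25, 0.5, 0.75, 1.0]
end_loaded = [0.0, 0.0, 0.5, 1.0, 1.0]

print(d_criterion(equally_spaced))   # 0.1708...
print(d_criterion(end_loaded))       # 0.25 -- the better (larger) score
```

Concentrating runs at the extremes and the midpoint scores higher here, which matches the classical D-optimal placement for a quadratic model on an interval.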

  7. Globally optimal trial design for local decision making.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2009-02-01

Value of information methods allow decision makers to identify efficient trial designs following a principle of maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is only expected value of sample information from research commissioned within a jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated by identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of the trial sample across jurisdictions; (iii) avoiding market failure associated with free-rider effects, sub-optimal spreading of fixed costs and heterogeneity of trial information with multiple trials. Copyright (c) 2008 John Wiley & Sons, Ltd.

  8. The Experimental Design Assistant

    PubMed Central

    Bamsey, Ian; Bate, Simon T.; Berdoy, Manuel; Clark, Robin A.; Cuthill, Innes; Fry, Derek; Karp, Natasha A.; Macleod, Malcolm; Moon, Lawrence; Stanford, S. Clare; Lings, Brian

    2017-01-01

    Addressing the common problems that researchers encounter when designing and analysing animal experiments will improve the reliability of in vivo research. In this article, the Experimental Design Assistant (EDA) is introduced. The EDA is a web-based tool that guides the in vivo researcher through the experimental design and analysis process, providing automated feedback on the proposed design and generating a graphical summary that aids communication with colleagues, funders, regulatory authorities, and the wider scientific community. It will have an important role in addressing causes of irreproducibility. PMID:28957312

  9. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    PubMed

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. The response surface methodology was employed to study the optimization of sample preparation using supercritical carbon dioxide for wedelolactone from E. alba (L.) Hassk. The optimized sample preparation involves the investigation of quantitative effects of sample preparation parameters viz. operating pressure, temperature, modifier concentration and time on yield of wedelolactone using Box-Behnken design. The wedelolactone content was determined using validated HPLC methodology. The experimental data were fitted to second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
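The response-surface workflow described above (fit a second-order polynomial to designed-experiment data, then solve for the stationary point) can be sketched as follows. Note that a Box-Behnken design requires at least three factors; for brevity this illustration uses a two-factor face-centred design with hypothetical coded settings and yields, not the study's four-factor data.

```python
import numpy as np

# Hypothetical coded settings and yields for two factors (e.g. pressure and
# temperature); the paper's real Box-Behnken design used four factors.
x1 = np.array([-1, 1, -1, 1, -1, 1, 0, 0, 0, 0, 0])
x2 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0, 0, 0])
y  = np.array([10.2, 12.1, 11.0, 14.8, 11.5, 13.9, 10.8, 13.2, 15.1, 15.3, 15.0])

# Second-order model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: solve grad = 0, i.e. [[2*b11, b12], [b12, 2*b22]] x = -[b1, b2]
A = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
x_opt = np.linalg.solve(A, -b[1:3])
print("optimum (coded units):", x_opt)
```

Checking that the quadratic-term matrix A is negative definite confirms the stationary point is a maximum, mirroring how the paper's optimum extraction conditions were read off the fitted surfaces.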

  10. Rigorous ILT optimization for advanced patterning and design-process co-optimization

    NASA Astrophysics Data System (ADS)

    Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming

    2018-03-01

Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development for new technology node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications; each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in the demand for predictive technology tools which can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies which have each individually been widely used for process tuning, mask correction and process-design co-optimization. These technologies are rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been extensively used for process development/tuning, lithography tool user setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions/sensitivities, source optimization and basic lithography design rule exploration. ILT has been extensively used in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications, including correction for photoresist top loss and resist scumming hotspots.

  11. Experimental design for the optimization of the derivatization reaction in determining chlorophenols and chloroanisoles by headspace-solid-phase microextraction-gas chromatography/mass spectrometry.

    PubMed

    Morales, Rocío; Sarabia, Luis A; Sánchez, M Sagrario; Ortiz, M Cruz

    2013-06-28

The paper shows some tools (their interpretation and usefulness) to optimize a derivatization reaction and to more easily interpret and visualize the effect that some experimental factors exert on several analytical responses of interest when these responses are in conflict. The entire proposed procedure has been applied in the optimization of equilibrium/extraction temperature and extraction time in the acetylation reaction of 2,4,6-trichlorophenol; 2,3,4,6-tetrachlorophenol, pentachlorophenol and 2,4,6-tribromophenol as internal standard (IS) in the presence of 2,4,6-trichloroanisole, 2,3,5,6-tetrachloroanisole, pentachloroanisole and 2,4,6-trichloroanisole-d5 as IS. The procedure relies on the second-order advantage of PARAFAC (parallel factor analysis), which allows the unequivocal identification and quantification, mandatory according to international regulations (in this paper the EU document SANCO/12495/2011), of the acetyl-chlorophenols and chloroanisoles that are determined by means of a HS-SPME-GC/MS automated device. The joint use of a PARAFAC decomposition and a Doehlert design provides the data to fit a response surface for each analyte. With the fitted surfaces, the overall desirability function and the Pareto-optimal front are used to describe the relation between the conditions of the derivatization reaction and the quantity extracted of each analyte. The visualization by using a parallel coordinates plot allows a deeper knowledge about the problem at hand as well as the wise selection of the conditions of the experimental factors for achieving specific goals about the responses. Under the optimal experimental conditions (45 °C and 25 min), the determination by means of an automated HS-SPME-GC/MS system is carried out. By using the regression line fitted between calculated and true concentrations, it has been checked that the procedure has neither proportional nor constant bias. 
The decision limits, CCα, for a false-positive probability α set to 0.05, vary between

  12. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

The main goal for research conducted with the support of this grant was to develop design oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design, airframe weight is estimated based on statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology on the airplanes in those weight databases. If any new structural technology is to be pursued or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design oriented finite element technology and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since responses to changes in geometry are essential in conceptual design of airplanes, as well as the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.

  13. Optimization of fruit punch using mixture design.

    PubMed

    Kumar, S Bharath; Ravi, R; Saraswathi, G

    2010-01-01

A highly acceptable dehydrated fruit punch was developed with selected fruits, namely lemon, orange, and mango, using a mixture design and optimization technique. The fruit juices were freeze dried, powdered, and used in the reconstitution studies. Fruit punches were prepared according to the experimental design combinations (10 in total) based on a mixture design and then subjected to sensory evaluation for acceptability. Response surfaces of sensory attributes were also generated as a function of the fruit juices. Analysis of the data revealed that the fruit punch prepared using 66% mango, 33% orange, and 1% lemon had highly desirable sensory scores for color (6.00), body (5.92), sweetness (5.68), and pleasantness (5.94). The aroma patterns of the individual fruit juices as well as their combinations were also analyzed by an electronic nose. The electronic nose could discriminate the aroma patterns of the individual juices as well as of the mixture-design combinations. The results provide information on the sensory quality of the best fruit punch formulations liked by the consumer panel based on lemon, orange, and mango.
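The candidate blends in a mixture design can be enumerated on a simplex lattice, since the component proportions must sum to one. The sketch below is a generic illustration: a {3, 3} lattice happens to yield ten blends, matching the ten design combinations mentioned above, but the study's actual design points are not reproduced here.

```python
from itertools import product

def simplex_lattice(n_components, m):
    """All blends on a {n, m} simplex lattice: proportions drawn from
    {0, 1/m, ..., 1} that sum to exactly 1."""
    levels = range(m + 1)
    return [tuple(k / m for k in combo)
            for combo in product(levels, repeat=n_components)
            if sum(combo) == m]

# Three juices (e.g. mango, orange, lemon) at proportions 0, 1/3, 2/3, 1
blends = simplex_lattice(3, 3)
print(len(blends), "candidate blends")   # 10 candidate blends
```

Sensory scores collected at these blends can then be fitted with a Scheffé-type polynomial to generate response surfaces over the mixture triangle.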

  14. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  15. Integrated topology and shape optimization in structural design

    NASA Technical Reports Server (NTRS)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two-dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  16. Computer-aided design and experimental investigation of a hydrodynamic device: the microwire electrode

    PubMed

    Fulian; Gooch; Fisher; Stevens; Compton

    2000-08-01

    The development and application of a new electrochemical device using a computer-aided design strategy is reported. This novel design is based on the flow of electrolyte solution past a microwire electrode situated centrally within a large duct. In the design stage, finite element simulations were employed to evaluate feasible working geometries and mass transport rates. The computer-optimized designs were then exploited to construct experimental devices. Steady-state voltammetric measurements were performed for a reversible one-electron-transfer reaction to establish the experimental relationship between electrolysis current and solution velocity. The experimental results are compared to those predicted numerically, and good agreement is found. The numerical studies are also used to establish an empirical relationship between the mass transport limited current and the volume flow rate, providing a simple and quantitative alternative for workers who would prefer to exploit this device without the need to develop the numerical aspects.

  17. Review of design optimization methods for turbomachinery aerodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and 'greener' but also developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding the current research trends and the future optimization of turbomachinery designs.

  18. Design, Modeling and Performance Optimization of a Novel Rotary Piezoelectric Motor

    NASA Technical Reports Server (NTRS)

    Duong, Khanh A.; Garcia, Ephrahim

    1997-01-01

This work has demonstrated a proof of concept for a torsional inchworm-type motor. The prototype motor has shown that piezoelectric stack actuators can be used for a rotary inchworm motor: the discrete linear motion of piezoelectric stacks can be converted into rotary stepping motion. The stacks, with their high force and displacement output, are suitable actuators for use in a piezoelectric motor. The designed motor is capable of delivering high torque and speed. Critical issues involving the design and operation of piezoelectric motors were studied. The tolerance between the contact shoes and the rotor proved to be very critical to the performance of the motor. Based on the prototype motor, a waveform optimization scheme was proposed and implemented to improve the performance of the motor. The motor was successfully modeled in MATLAB, and the model closely represents the behavior of the prototype motor. Using the motor model, the input waveforms were successfully optimized to improve the performance of the motor in terms of speed, torque, power and precision. These optimized waveforms drastically improved the speed of the motor at different frequencies and loading conditions experimentally. The optimized waveforms also increased the level of precision of the motor. The use of the optimized waveforms is a break from the traditional use of sinusoidal and square waves as the driving signals. This waveform optimization scheme can be applied to any inchworm motor to improve its performance. The prototype motor in this dissertation, as a proof of concept, was designed to be robust and large. Future motors can be designed much smaller and more efficient with lessons learned from the prototype motor.

  19. Optimization of headspace solid-phase microextraction by means of an experimental design for the determination of methyl tert.-butyl ether in water by gas chromatography-flame ionization detection.

    PubMed

    Dron, Julien; Garcia, Rosa; Millán, Esmeralda

    2002-07-19

A procedure for determination of methyl tert.-butyl ether (MTBE) in water by headspace solid-phase microextraction (HS-SPME) has been developed. The analysis was carried out by gas chromatography with flame ionization detection. The extraction procedure, using a 65-microm poly(dimethylsiloxane)-divinylbenzene SPME fiber, was optimized following an experimental design approach. A fractional factorial design for screening and a central composite design for optimizing the significant variables were applied. Extraction temperature and sodium chloride concentration were significant variables, and 20 degrees C and 300 g/l, respectively, were chosen for the best extraction response. Under these conditions, an extraction time of 5 min was sufficient to extract MTBE. The linear calibration range for MTBE was 5-500 microg/l and the detection limit 0.45 microg/l. The relative standard deviation, for seven replicates of 250 microg/l MTBE in water, was 6.3%.
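The screening-then-optimization sequence above relies on standard coded designs. As an illustration, a rotatable central composite design for two factors (e.g. extraction temperature and NaCl concentration) can be generated as below; the run counts and axial distance are generic defaults, not the paper's actual experimental plan.

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None, n_center=3):
    """Coded central composite design: 2^k factorial + 2k axial + centre runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable axial distance
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

# Two factors in coded units; decode to real settings before running extractions
design = central_composite(2)
print(design.shape)   # 4 factorial + 4 axial + 3 centre runs -> (11, 2)
```

The axial points at distance alpha beyond the factorial cube are what allow the quadratic curvature terms to be estimated after the factorial screening stage.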

  20. Ultrafiltration membrane reactors for enzymatic resolution of amino acids: design model and optimization.

    PubMed

    Bódalo, A; Gómez, J L.; Gómez, E; Bastida, J; Máximo, M F.; Montiel, M C.

    2001-03-08

In this paper the possibility of continuous resolution of DL-phenylalanine, catalyzed by L-aminoacylase in an ultrafiltration membrane reactor (UFMR), is presented. A simple design model, based on previous kinetic studies, has been demonstrated to be capable of describing the behavior of the experimental system. The model has been used to determine the optimal experimental conditions to carry out the asymmetrical hydrolysis of N-acetyl-DL-phenylalanine.

  1. Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems

    PubMed Central

    Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.

    2016-01-01

    Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques, making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences on the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383

  2. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...

  3. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and...
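
    The augmented Lagrangian algorithm underlying the matrix-free optimizer can be illustrated on a toy equality-constrained problem. The sketch below is not the thesis implementation; it uses plain gradient steps and a single constraint purely to show the outer multiplier update, and all numbers are illustrative:

    ```python
    def augmented_lagrangian(f_grad, c, c_grad, x, lam=0.0, rho=10.0,
                             outer=20, inner=200, step=0.01):
        """Minimal augmented-Lagrangian sketch for one equality constraint
        c(x) = 0. The inner loop minimizes f + lam*c + (rho/2)*c^2 by
        gradient descent; the outer loop applies the first-order
        multiplier update lam += rho * c(x). Only gradient products are
        needed, never an explicit constraint Jacobian -- the same
        'matrix-free' property the thesis exploits."""
        for _ in range(outer):
            for _ in range(inner):
                cv = c(x)
                g = [gf + (lam + rho * cv) * gc
                     for gf, gc in zip(f_grad(x), c_grad(x))]
                x = [xi - step * gi for xi, gi in zip(x, g)]
            lam += rho * c(x)  # first-order multiplier update
        return x

    # Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1  ->  x* = (0.5, 0.5)
    x = augmented_lagrangian(
        f_grad=lambda x: [2 * x[0], 2 * x[1]],
        c=lambda x: x[0] + x[1] - 1.0,
        c_grad=lambda x: [1.0, 1.0],
        x=[0.0, 0.0])
    assert abs(x[0] - 0.5) < 1e-3 and abs(x[1] - 0.5) < 1e-3
    ```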

  5. Application of Taguchi L32 orthogonal array design to optimize copper biosorption by using Sphagnum moss.

    PubMed

    Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil

    2014-09-01

    In this work, a Taguchi L32 experimental design was applied to optimize biosorption of Cu(2+) ions by an easily available biosorbent, Sphagnum moss. With this aim, batch biosorption tests were performed to realize the targeted experimental design with five factors (concentration, pH, biosorbent dosage, temperature and agitation time) at two levels. Optimal experimental conditions were determined from calculated signal-to-noise ratios. A "higher is better" approach was followed in calculating the signal-to-noise ratios, as the aim was to obtain high metal removal efficiencies. The impact ratios of the factors were determined by the model. Within the study, Cu(2+) biosorption efficiencies were also predicted by using the Taguchi method. Results showed that experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach. Furthermore, thermodynamic, isotherm and kinetic studies were performed to explain the biosorption mechanism. The calculated thermodynamic parameters were in good accordance with the results of the Taguchi model. Copyright © 2014 Elsevier Inc. All rights reserved.
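
    The "higher is better" (larger-the-better) signal-to-noise ratio used above has a standard closed form, S/N = -10 log10(mean(1/y_i^2)). A minimal sketch follows; the removal efficiencies are made up, not the paper's data:

    ```python
    import math

    def sn_larger_is_better(values):
        """Taguchi 'larger is better' signal-to-noise ratio in dB:
        S/N = -10 * log10(mean(1 / y_i^2))."""
        n = len(values)
        return -10.0 * math.log10(sum(1.0 / y**2 for y in values) / n)

    # Hypothetical Cu(2+) removal efficiencies (%) from replicate runs of
    # two L32 trials; the higher-efficiency trial gets the larger S/N.
    high = sn_larger_is_better([92.0, 95.0, 93.5])
    low = sn_larger_is_better([41.0, 38.5, 44.0])
    assert high > low
    ```

    Ranking trials (or factor levels) by this ratio is how the optimal level of each factor is picked in a Taguchi analysis.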

  6. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  7. Optimal Pulse Configuration Design for Heart Stimulation. A Theoretical, Numerical and Experimental Study.

    NASA Astrophysics Data System (ADS)

    Hardy, Neil; Dvir, Hila; Fenton, Flavio

    Existing pacemakers consider the rectangular pulse to be the optimal form of stimulation current. However, other waveforms could save energy while still stimulating the heart. We aim to find the optimal waveform for pacemaker use and to offer a theoretical explanation for its advantage. Since the pacemaker battery is a charge source, we evaluate stimulation current waveforms with respect to total charge delivery. In this talk we present theoretical analysis and numerical simulations of myocyte ion-channel currents acting as an additional source of charge that adds to the external stimulating charge. We find that as the action potential emerges, the external stimulating current can be reduced exponentially. We then performed experimental studies in rabbit and cat hearts and showed that truncated exponential pulses with less total charge can indeed induce activation in the heart. From the experiments, we present curves showing the savings in charge as a function of the exponential waveform, and we calculated that the longevity of the pacemaker battery would be ten times higher for the exponential current than for the rectangular waveform. Thanks to the Petit Undergraduate Research Scholars Program and NSF# 1413037.
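
    The charge argument is easy to make concrete: integrating a truncated decaying exponential over the pulse window gives less charge than a rectangular pulse of the same peak amplitude. The amplitudes and time constant below are illustrative, not the experimental values:

    ```python
    from math import exp

    def charge_rectangular(i0, duration):
        """Charge delivered by a rectangular pulse of amplitude i0."""
        return i0 * duration

    def charge_exponential(i0, duration, tau):
        """Charge of a truncated exponential pulse i(t) = i0*exp(-t/tau),
        integrated over [0, duration]: i0 * tau * (1 - exp(-duration/tau))."""
        return i0 * tau * (1.0 - exp(-duration / tau))

    # Hypothetical 2 ms pulse with a 0.5 ms decay constant (amplitude in
    # mA, time in ms, so charge is in microcoulombs).
    q_rect = charge_rectangular(1.0, 2.0)
    q_exp = charge_exponential(1.0, 2.0, 0.5)
    assert q_exp < q_rect   # the exponential delivers a fraction of the charge
    ```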

  8. [Optimization of vacuum belt drying process of Gardeniae Fructus in Reduning injection by Box-Behnken design-response surface methodology].

    PubMed

    Huang, Dao-sheng; Shi, Wei; Han, Lei; Sun, Ke; Chen, Guang-bo; Wu, Jian-xiong; Xu, Gui-hong; Bi, Yu-an; Wang, Zhen-zhong; Xiao, Wei

    2015-06-01

    To optimize the belt drying process conditions of Gardeniae Fructus extract from Reduning injection by Box-Behnken design-response surface methodology, on the basis of single factor experiments, a three-factor, three-level Box-Behnken experimental design was employed. With drying temperature, drying time, and feeding speed as independent variables and the content of geniposide as the dependent variable, the experimental data were fitted to a second-order polynomial equation, establishing the mathematical relationship between the content of geniposide and the respective variables. With the experimental data analyzed by Design-Expert 8.0.6, the optimal drying parameters were as follows: drying temperature 98.5 degrees C, drying time 89 min, and feeding speed 99.8 r x min(-1). Three verification experiments were performed under these conditions, and the measured average content of geniposide was 564.108 mg x g(-1), close to the model prediction of 563.307 mg x g(-1). According to the verification tests, the Gardeniae Fructus belt drying process is stable and feasible. Thus, single factor experiments combined with response surface methodology (RSM) can be used to optimize the drying technology of the Gardeniae Fructus extract from Reduning injection.
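
    The core of a Box-Behnken/RSM analysis is an ordinary least-squares fit of a second-order polynomial in the coded factors. The sketch below uses synthetic data (factor names echo the abstract, but the numbers are invented) to show the design-matrix construction:

    ```python
    import numpy as np

    # Hypothetical coded runs (x1 = temperature, x2 = time, x3 = feed speed)
    # with a synthetic quadratic response standing in for geniposide content.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(15, 3))
    y = 560 + 5*X[:, 0] - 3*X[:, 1] + 2*X[:, 2] - 4*X[:, 0]**2 + X[:, 0]*X[:, 1]

    def quadratic_design_matrix(X):
        """Columns: 1, x1..x3, x1^2..x3^2, x1x2, x1x3, x2x3."""
        cols = [np.ones(len(X))]
        cols += [X[:, i] for i in range(3)]
        cols += [X[:, i]**2 for i in range(3)]
        cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
        return np.column_stack(cols)

    A = quadratic_design_matrix(X)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    # With noise-free synthetic data the fit recovers the coefficients.
    assert abs(coef[0] - 560) < 1e-6 and abs(coef[1] - 5) < 1e-6
    ```

    The fitted surface is then optimized (here one would grid-search or differentiate the polynomial) to locate the predicted optimum, which is what Design-Expert reports.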

  9. Optimization of phase feeding of starter, grower, and finisher diets for male broilers by mixture experimental design: forty-eight-day production period.

    PubMed

    Roush, W B; Boykin, D; Branton, S L

    2004-08-01

    A mixture experiment, a variant of response surface methodology, was designed to determine the proportion of time to feed broiler starter (23% protein), grower (20% protein), and finisher (18% protein) diets to optimize production and processing variables over a total production time of 48 d. Mixture designs are useful for proportion problems in which the components of the experiment (i.e., the lengths of time the diets were fed) add up to unity (48 d). The experiment was conducted with day-old male Ross x Ross broiler chicks. Birds were placed 50 per pen in each of 60 pens. The experimental design was a 10-point augmented simplex-centroid (ASC) design with 6 replicates of each point. Each design point represented the portion(s) of the 48 d that each of the diets was fed. Formulation of the diets was based on NRC standards. At 49 d, each pen of birds was evaluated for production data including BW, feed conversion, and cost of feed consumed. Then, 6 birds were randomly selected from each pen for processing data. Processing variables included live weight, hot carcass weight, dressing percentage, fat pad percentage, and breast yield (pectoralis major and pectoralis minor weights). Production and processing data were fit to simplex regression models. Model terms determined not to be significant (P > 0.05) were removed. The models were found to be statistically adequate for analysis of the response surfaces. A compromise solution was calculated based on optimal constraints designated for the production and processing data. The results indicated that broilers fed a starter and finisher diet for 30 and 18 d, respectively, would meet the production and processing constraints. Trace plots showed that the production and processing variables were not very sensitive to the grower diet.
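
    A 10-point augmented simplex-centroid design for three components can be enumerated directly. The layout below follows one common convention (axial check blends two-thirds of the way toward each vertex); the paper's exact augmentation may differ:

    ```python
    from itertools import combinations

    def augmented_simplex_centroid(total=48.0):
        """Candidate feeding schedules (starter, grower, finisher) whose
        durations sum to `total` days: 3 vertices, 3 binary blends, the
        centroid, and 3 axial blends -- 10 points in all."""
        pts = []
        # Vertices: one diet fed for the whole period.
        for i in range(3):
            p = [0.0] * 3; p[i] = total; pts.append(tuple(p))
        # Binary blends: two diets, half the period each.
        for i, j in combinations(range(3), 2):
            p = [0.0] * 3; p[i] = p[j] = total / 2; pts.append(tuple(p))
        # Overall centroid: all three diets fed equally long.
        pts.append((total / 3,) * 3)
        # Axial points: 2/3 of the period on one diet, 1/6 on each other.
        for i in range(3):
            p = [total / 6] * 3; p[i] = 2 * total / 3; pts.append(tuple(p))
        return pts

    design = augmented_simplex_centroid()
    assert len(design) == 10
    assert all(abs(sum(p) - 48.0) < 1e-9 for p in design)
    ```

    Because every point sums to the same total, only proportions matter, which is exactly why Scheffé-type simplex regression models (with no intercept) are fitted to the responses.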

  10. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Archibald, Richard

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  11. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  12. Angular rate optimal design for the rotary strapdown inertial navigation system.

    PubMed

    Yu, Fei; Sun, Qian

    2014-04-22

    Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. As one of the key parameters, the rotating angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform. The analysis showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. To minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.

  13. Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System

    PubMed Central

    Yu, Fei; Sun, Qian

    2014-01-01

    Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. As one of the key parameters, the rotating angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform. The analysis showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. To minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115

  14. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  15. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
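
    The effect-size uncertainty driving this work is easy to quantify with a normal-approximation power formula. The sketch below is not the authors' optimality criterion, just a fixed-design power calculation showing how power erodes when the true effect is smaller than the optimistic planning value:

    ```python
    from math import sqrt, erf

    def normal_cdf(z):
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def power_two_arm(delta, sigma, n_per_arm, z_alpha=1.959964):
        """Approximate power of a fixed two-arm design under a one-sided
        z-test at alpha = 0.025: Phi(delta/sigma * sqrt(n/2) - z_alpha)."""
        return normal_cdf(delta / sigma * sqrt(n_per_arm / 2.0) - z_alpha)

    # Hypothetical trial sized for about 80% power at an optimistic
    # standardized effect delta = 0.5 (sigma = 1, 63 subjects per arm).
    p_planned = power_two_arm(0.5, 1.0, 63)
    p_smaller = power_two_arm(0.35, 1.0, 63)   # true effect 30% smaller
    assert p_planned > p_smaller
    ```

    Evaluating such a curve across the plausible effect-size range is the starting point for comparing group sequential, promising-zone, and re-estimation designs on robustness of power.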

  16. Design optimization of an ironless inductive position sensor for the LHC collimators

    NASA Astrophysics Data System (ADS)

    Danisi, A.; Masi, A.; Losito, R.; Perriard, Y.

    2013-09-01

    The Ironless Inductive Position Sensor (I2PS) is an air-cored displacement sensor which has been conceived to be totally immune to external DC/slowly-varying magnetic fields. It can thus be used as a valid alternative to Linear Variable Differential Transformers (LVDTs), which can show a position error in magnetic environments. In addition, since it retains the excellent properties of LVDTs, the I2PS can be used in harsh environments, such as nuclear plants, plasma control and particle accelerators. This paper focuses on the design optimization of the sensor, considering the CERN LHC Collimators as application. In particular, the optimization comes after a complete review of the electromagnetic and thermal modeling of the sensor, as well as the proper choice of the reading technique. The design optimization stage is firmly based on these preliminary steps. Therefore, the paper summarises the sensor's complete development, from its modeling to its actual implementation. A set of experimental measurements demonstrates the sensor's performances to be those expected in the design phase.

  17. Polymeric behavior evaluation of PVP K30-poloxamer binary carrier for solid dispersed nisoldipine by experimental design.

    PubMed

    Kyaw Oo, May; Mandal, Uttam K; Chatterjee, Bappaditya

    2017-02-01

    A high-melting-point polymeric carrier without plasticizer is unsuitable for solid dispersion (SD) by the melting method, and a combined polymer-plasticizer carrier significantly affects drug solubility and the tableting properties of the SD. The aim was to evaluate and optimize, by experimental design, the combined effect of a binary carrier consisting of PVP K30 and poloxamer 188 on nisoldipine solubility and the tensile strength of amorphous SD compacts (SD(compact)). SD of nisoldipine (SD(nisol)) was prepared by melt mixing with different amounts of PVP K30 and poloxamer. A 3(2) factorial design was employed with nisoldipine solubility and tensile strength of SD(compact) as response variables. Statistical optimization with Design-Expert software and characterization of SD(nisol) by ATR-FTIR, DSC and microscopy were performed. PVP K30:poloxamer at a ratio of 3.73:6.63 was selected as the optimized binary polymeric carrier, giving a nisoldipine solubility of 115 μg/mL and a tensile strength of 1.19 N/m(2). PVP K30 had a significant positive effect on both responses. Increasing the poloxamer concentration beyond a certain level decreased nisoldipine solubility and the tensile strength of SD(compact). An optimized PVP K30-poloxamer binary composition for the SD carrier was developed. Tensile strength of SD(compact) can be considered as a response variable in experimental designs for optimizing SD.

  18. Implications of optimization cost for balancing exploration and exploitation in global search and for experimental optimization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Anirban

    Global optimization based on expensive and time-consuming simulations or experiments usually cannot be carried out to convergence, but must be stopped because of time constraints, or because the cost of additional function evaluations exceeds the benefits of improving the objective(s). This dissertation sets out to explore the implications of such budget and time constraints on the balance between exploration and exploitation and on the decision of when to stop. Three aspects are considered in terms of their effects on the balance between exploration and exploitation: 1) the history of the optimization, 2) a fixed evaluation budget, and 3) cost as a part of the objective function. To this end, this research develops modifications to the surrogate-based optimization technique, the Efficient Global Optimization algorithm, that better control the balance between exploration and exploitation, along with stopping criteria facilitated by these modifications. The focus then shifts to experimental optimization, which shares the issues of cost and time constraints. Through a study on optimization of thrust and power for a small flapping wing for micro air vehicles, important differences and similarities between experimental and simulation-based optimization are identified. The most important difference is that reduction of noise in experiments becomes a major time and cost issue; a second difference is that parallelism as a way to cut cost is more challenging. The experimental optimization reveals the tendency of the surrogate to display optimistic bias near the surrogate optimum, a tendency then verified to also occur in simulation-based optimization.

  19. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific tuning parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is suited to cases where accuracy is more essential than convergence speed.
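
    TLBO's "parameter-less" claim refers to the absence of algorithm-specific knobs (no inertia weight, no crossover rate); only population size and iteration count remain. A minimal sketch follows, minimizing a toy sphere function rather than an IIR error surface; this is not the authors' MATLAB code:

    ```python
    import random

    def tlbo_minimize(f, bounds, pop=20, iters=100, seed=1):
        """Minimal teaching-learning-based optimization sketch with the
        standard teacher and learner phases and greedy acceptance."""
        rnd = random.Random(seed)
        dim = len(bounds)
        clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
        X = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
        F = [f(x) for x in X]
        for _ in range(iters):
            teacher = X[F.index(min(F))]
            mean = [sum(x[d] for x in X) / pop for d in range(dim)]
            for i in range(pop):
                # Teacher phase: move toward the teacher, away from the mean.
                Tf = rnd.randint(1, 2)   # teaching factor, 1 or 2
                cand = clip([X[i][d] + rnd.random() * (teacher[d] - Tf * mean[d])
                             for d in range(dim)])
                fc = f(cand)
                if fc < F[i]:
                    X[i], F[i] = cand, fc
                # Learner phase: learn from a randomly chosen peer.
                j = rnd.randrange(pop)
                if j != i:
                    sign = 1 if F[j] < F[i] else -1
                    cand = clip([X[i][d] + sign * rnd.random() * (X[j][d] - X[i][d])
                                 for d in range(dim)])
                    fc = f(cand)
                    if fc < F[i]:
                        X[i], F[i] = cand, fc
        return min(F)

    best = tlbo_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
    ```

    For the filter application, `f` would instead be the mean squared error between the unknown plant's output and the candidate IIR filter's output over a test signal.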

  20. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that will result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This guides the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
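
    The forward step of such a methodology (propagating input uncertainty to output statistics) can be sketched with plain Monte Carlo sampling. The thermal model and tolerances below are invented for illustration, not taken from the abstract:

    ```python
    import random
    import statistics

    def junction_temperature(power_w, theta_ja):
        """Toy thermal model: T_junction = T_ambient + power * resistance
        (25 C ambient, theta_ja in K/W)."""
        return 25.0 + power_w * theta_ja

    # Hypothetical uncertain inputs: dissipated power and junction-to-
    # ambient thermal resistance, both modeled as independent Gaussians.
    rnd = random.Random(42)
    samples = [junction_temperature(rnd.gauss(2.0, 0.1), rnd.gauss(20.0, 1.0))
               for _ in range(20000)]

    mean = statistics.fmean(samples)
    std = statistics.stdev(samples)
    # The output statistics tell the designer whether the chosen input
    # tolerances keep the junction temperature in an acceptable band.
    assert 64 < mean < 66   # nominal: 25 + 2.0 * 20.0 = 65 C
    ```

    Sensitivities and the inverse problem (allowable input ranges for a given output band) build on exactly this kind of propagated distribution, typically with cheaper surrogates such as polynomial chaos in place of brute-force sampling.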

  1. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and almost 30 dB when using two barriers, is achieved compared with conventional sound barriers.

  2. Development and optimization of a self-microemulsifying drug delivery system for atorvastatin calcium by using D-optimal mixture design.

    PubMed

    Yeom, Dong Woo; Song, Ye Seul; Kim, Sung Rae; Lee, Sang Gon; Kang, Min Hyung; Lee, Sangkil; Choi, Young Wook

    2015-01-01

    In this study, we developed and optimized a self-microemulsifying drug delivery system (SMEDDS) formulation for improving the dissolution and oral absorption of atorvastatin calcium (ATV), a poorly water-soluble drug. Solubility and emulsification tests were performed to select a suitable combination of oil, surfactant, and cosurfactant. A D-optimal mixture design was used to optimize the concentration of components used in the SMEDDS formulation for achieving excellent physicochemical characteristics, such as small droplet size and high dissolution. The optimized ATV-loaded SMEDDS formulation containing 7.16% Capmul MCM (oil), 48.25% Tween 20 (surfactant), and 44.59% Tetraglycol (cosurfactant) significantly enhanced the dissolution rate of ATV in different types of medium, including simulated intestinal fluid, simulated gastric fluid, and distilled water, compared with ATV suspension. Good agreement was observed between predicted and experimental values for mean droplet size and percentage of the drug released in 15 minutes. Further, pharmacokinetic studies in rats showed that the optimized SMEDDS formulation considerably enhanced the oral absorption of ATV, with 3.4-fold and 4.3-fold increases in the area under the concentration-time curve and time taken to reach peak plasma concentration, respectively, when compared with the ATV suspension. Thus, we successfully developed an optimized ATV-loaded SMEDDS formulation by using the D-optimal mixture design that could potentially be used for improving the oral absorption of poorly water-soluble drugs.

  3. Optimization design of multiphase pump impeller based on combined genetic algorithm and boundary vortex flux diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-ya; Cai, Shu-jie; Li, Yong-jiang; Zhang, Yong-xue

    2017-12-01

    A novel optimization design method for the multiphase pump impeller is proposed through combining the quasi-3D hydraulic design (Q3DHD), the boundary vortex flux (BVF) diagnosis, and the genetic algorithm (GA). The BVF diagnosis based on the Q3DHD is used to evaluate the objective function. Numerical simulations and hydraulic performance tests are carried out to compare the impeller designed only by the Q3DHD method and that optimized by the presented method. The comparisons of both the flow fields simulated under the same condition show that (1) the pressure distribution in the optimized impeller is more reasonable and the gas-liquid separation is more efficiently inhibited, (2) the scales of the gas pocket and the vortex decrease remarkably for the optimized impeller, (3) the unevenness of the BVF distributions near the shroud of the original impeller is effectively eliminated in the optimized impeller. The experimental results show that the differential pressure and the maximum efficiency of the optimized impeller are increased by 4% and 2.5%, respectively. Overall, the study indicates that the optimization design method proposed in this paper is feasible.

  4. Integrating uniform design and response surface methodology to optimize thiacloprid suspension

    PubMed Central

    Li, Bei-xing; Wang, Wei-chang; Zhang, Xian-peng; Zhang, Da-xia; Mu, Wei; Liu, Feng

    2017-01-01

    A model 25% suspension concentrate (SC) of thiacloprid was adopted to evaluate an integrative approach of uniform design and response surface methodology. Tersperse2700, PE1601, xanthan gum and veegum were the four experimental factors, and the aqueous separation ratio and viscosity were the two dependent variables. Linear and quadratic polynomial models of stepwise regression and partial least squares were adopted to test the fit of the experimental data. Verification tests revealed satisfactory agreement between the experimental and predicted data. The measured values for the aqueous separation ratio and viscosity were 3.45% and 278.8 mPa·s, respectively, and the relative errors of the predicted values were 9.57% and 2.65%, respectively (prepared under the proposed conditions). Comprehensive benefits could also be obtained by appropriately adjusting the amount of certain adjuvants based on practical requirements. Integrating uniform design and response surface methodology is an effective strategy for optimizing SC formulas. PMID:28383036

  5. Program Aids Analysis And Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1994-01-01

    NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.

  6. Design approaches to experimental mediation

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  7. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g., Halberstadt, 2010; Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.

  8. Experimental analysis of the performance of optimized fin structures in a latent heat energy storage test rig

    NASA Astrophysics Data System (ADS)

    Johnson, Maike; Hübner, Stefan; Reichmann, Carsten; Schönberger, Manfred; Fiß, Michael

    2017-06-01

    Energy storage systems are a key technology for developing a more sustainable energy supply system and lowering overall CO2 emissions. Among the variety of storage technologies, high temperature phase change material (PCM) storage is a promising option with a wide range of applications. PCM storage units using an extended finned-tube storage concept have been designed and techno-economically optimized for solar thermal power plant operations. These finned-tube components were experimentally tested in order to validate the optimized design and the simulation models used. Analysis of the charging and discharging characteristics of the storage at the pilot scale gives insight into the heat distribution both axially and radially in the storage material, thereby allowing for a realistic validation of the design. The design was optimized for discharging of the storage, as this is the more critical operation mode in power plant applications. The data show good agreement between the model and the experiments for discharging.

  9. Optimizing experimental procedures for quantitative evaluation of crop plant performance in high throughput phenotyping systems

    PubMed Central

    Junker, Astrid; Muraya, Moses M.; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Klukas, Christian; Melchinger, Albrecht E.; Meyer, Rhonda C.; Riewe, David; Altmann, Thomas

    2015-01-01

    Detailed and standardized protocols for plant cultivation in environmentally controlled conditions are an essential prerequisite to conduct reproducible experiments with precisely defined treatments. Setting up appropriate and well defined experimental procedures is thus crucial for the generation of solid evidence and indispensable for successful plant research. Non-invasive and high throughput (HT) phenotyping technologies offer the opportunity to monitor and quantify performance dynamics of several hundreds of plants at a time. Compared to small scale plant cultivations, HT systems have much higher demands, from a conceptual and a logistic point of view, on experimental design, as well as the actual plant cultivation conditions, and the image analysis and statistical methods for data evaluation. Furthermore, cultivation conditions need to be designed to elicit plant performance characteristics corresponding to those under natural conditions. This manuscript describes critical steps in the optimization of procedures for HT plant phenotyping systems. Starting with the model plant Arabidopsis, HT-compatible methods were tested and optimized with regard to growth substrate, soil coverage, watering regime, and experimental design (considering environmental inhomogeneities) in automated plant cultivation and imaging systems. As revealed by metabolite profiling, plant movement did not affect the plants' physiological status. Based on these results, procedures for maize HT cultivation and monitoring were established. Variation of maize vegetative growth in the HT phenotyping system matched well with that observed in the field. The presented results outline important issues to be considered in the design of HT phenotyping experiments for model and crop plants. The study thereby provides guidelines for the setup of HT experimental procedures, which are required for the generation of reliable and reproducible data of phenotypic variation for a broad range of applications.

  10. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359
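A minimal sketch of the family of exchange algorithms alluded to above (the kind of "current algorithms for finding optimal designs" that can be reused with the MqLE). The model here is a hypothetical two-parameter linear model f(x) = (1, x) with a D-optimality criterion, not the paper's 4PL/quasi-likelihood setting; a real application would substitute the appropriate information matrix.

```python
# Sketch: a simple candidate-exchange search (in the spirit of Fedorov's
# method) for an exact D-optimal design on a 1-D candidate grid.
import itertools

def info_det(design):
    # det of X'X for the two-parameter model f(x) = (1, x):
    # X'X = [[n, sum x], [sum x, sum x^2]]
    n = len(design)
    s1 = sum(design)
    s2 = sum(x * x for x in design)
    return n * s2 - s1 * s1

def fedorov_exchange(candidates, n, iters=50):
    design = list(candidates[:n])            # arbitrary starting design
    for _ in range(iters):
        improved = False
        # try replacing each support point with each candidate point
        for i, c in itertools.product(range(n), candidates):
            trial = design[:]
            trial[i] = c
            if info_det(trial) > info_det(design) + 1e-12:
                design, improved = trial, True
        if not improved:
            break
    return sorted(design)

grid = [i / 10 for i in range(11)]           # candidate points 0.0 .. 1.0
print(fedorov_exchange(grid, 4))             # mass piles up at the extremes
```

For this toy model the exchange drives the four runs to the interval endpoints, the classical D-optimal answer for a straight-line fit.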

  11. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    PubMed Central

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
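The projection idea behind a PSO for mixture designs can be sketched as follows: particles move freely under the usual PSO update, and each support point is then projected back onto the probability simplex so every design stays a valid mixture. The first-order Scheffé model with an |det| D-criterion is an illustrative stand-in here, not the paper's ProjPSO implementation.

```python
import random

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sum=1, >=0)."""
    u = sorted(v, reverse=True)
    rho, cumsum = 0, 0.0
    for i, ui in enumerate(u, 1):
        cumsum += ui
        if ui + (1.0 - cumsum) / i > 0:
            rho = i
    theta = (1.0 - sum(u[:rho])) / rho
    return [max(x + theta, 0.0) for x in v]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def criterion(design):
    # D-criterion for a first-order Scheffe mixture model: |det X|,
    # maximized (value 1) when the design sits at the simplex vertices
    return abs(det3(design))

def proj_pso(n_particles=30, iters=300, seed=1):
    rng = random.Random(seed)
    def rand_design():
        return [project_simplex([rng.random() for _ in range(3)]) for _ in range(3)]
    X = [rand_design() for _ in range(n_particles)]
    V = [[[0.0] * 3 for _ in range(3)] for _ in range(n_particles)]
    P = [[row[:] for row in x] for x in X]           # personal bests
    g = max(P, key=criterion)                        # global best
    for _ in range(iters):
        for k in range(n_particles):
            for i in range(3):
                for j in range(3):
                    r1, r2 = rng.random(), rng.random()
                    V[k][i][j] = (0.7 * V[k][i][j]
                                  + 1.5 * r1 * (P[k][i][j] - X[k][i][j])
                                  + 1.5 * r2 * (g[i][j] - X[k][i][j]))
            # move, then project every support point back onto the simplex
            X[k] = [project_simplex([X[k][i][j] + V[k][i][j] for j in range(3)])
                    for i in range(3)]
            if criterion(X[k]) > criterion(P[k]):
                P[k] = [row[:] for row in X[k]]
        g = max(P, key=criterion)
    return g, criterion(g)

best, val = proj_pso()
print(round(val, 3))   # approaches 1 as the design nears the simplex vertices
```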

  12. Optimization of digital designs

    NASA Technical Reports Server (NTRS)

    Miles, Lowell H. (Inventor); Whitaker, Sterling R. (Inventor)

    2009-01-01

    An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.

  13. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  14. Sparse and optimal acquisition design for diffusion MRI and beyond

    PubMed Central

    Koay, Cheng Guan; Özarslan, Evren; Johnson, Kevin M.; Meyerand, M. Elizabeth

    2012-01-01

    strategy was found to be effective in finding the optimum configuration. It was found that the square design is the most robust (i.e., with stable condition numbers and A-optimal measures under varying experimental conditions) among many other possible designs of the same sample size. Under the same performance evaluation, the square design was found to be more robust than the widely used sampling schemes similar to those of 3D radial MRI and of diffusion spectrum imaging (DSI). Conclusions: A novel optimality criterion for sparse multiple-shell acquisition and quasi-multiple-shell designs in diffusion MRI and an effective search strategy for finding the best configuration have been developed. The results are very promising, interesting, and practical for diffusion MRI acquisitions. PMID:22559620

  15. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    Multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.
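The global criterion method named above can be illustrated with a toy sizing tradeoff (hypothetical mass and deflection scalings for a square-section cantilever, not Rao's actual problems): each objective is normalized by its individual optimum, and the design minimizing the distance to that ideal point is selected.

```python
# Global criterion method for two conflicting objectives: mass grows with
# section size a, tip deflection shrinks with it (stiffness ~ a^4).
def mass(a):
    return a * a                 # proportional to cross-section area

def deflection(a):
    return 1.0 / a ** 4          # proportional to 1/I for a square section

def global_criterion(a, ideal_m, ideal_d):
    # squared distance from the ideal point, objectives normalized by ideals
    return (((mass(a) - ideal_m) / ideal_m) ** 2
            + ((deflection(a) - ideal_d) / ideal_d) ** 2)

grid = [0.5 + 0.01 * i for i in range(151)]       # a in [0.5, 2.0]
ideal_m = min(mass(a) for a in grid)              # best of each objective alone
ideal_d = min(deflection(a) for a in grid)
a_star = min(grid, key=lambda a: global_criterion(a, ideal_m, ideal_d))
print(round(a_star, 2))                           # compromise section size
```

The compromise lands strictly between the two single-objective optima (a = 0.5 for mass alone, a = 2.0 for deflection alone), which is the defining behavior of the method.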

  16. On the Constitutive Response Characterization for Composite Materials Via Data-Driven Design Optimization

    Treesearch

    John G. Michopoulos; John G. Hermanson; Athanasios Iliopoulos; Samuel Lambrakos; Tomonari Furukawa

    2011-01-01

    In the present paper we focus on demonstrating the use of design optimization for the constitutive characterization of anisotropic material systems such as polymer matrix composites, with or without damage. All approaches are based on the availability of experimental data originating from mechatronic material testing systems that can expose specimens to...

  17. Telemanipulator design and optimization software

    NASA Astrophysics Data System (ADS)

    Cote, Jean; Pelletier, Michel

    1995-12-01

    For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only has to be found once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the choice of the initial position of the teleoperator is usually found empirically, which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted moving the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better designed robot configuration could minimize these movements and save time. This paper will present two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.

  18. Experimental validation of a new heterogeneous mechanical test design

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests for collecting the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure based on an indicator that evaluates the heterogeneity and richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system providing the strain states on the specimen surface. The identification strategy is accomplished with the data obtained from the experimental tests and the results are compared to a reference numerical solution.

  19. Design, analysis, optimization and control of rotor tip flows

    NASA Astrophysics Data System (ADS)

    Maesschalck, Cis Guy M. De

    Developments in turbomachinery focus on efficiency and reliability enhancements, while reducing production costs. In spite of the many noteworthy experimental and numerical investigations over the past decades, the turbine tip design presents numerous challenges to engine manufacturers, and remains the primary factor defining the machine durability and the periodic removal of the turbine components during overhaul. Due to the hot gases coming from the upstream combustion chamber, the turbine blades are subjected to temperatures far above the metal creep temperature, combined with severe thermal stresses induced within the blade material. Inadequate designs cause early tip burnouts leading to considerable performance degradations, or even a catastrophic turbine failure. Moreover, the leakage spillage, nowadays often exceeding the transonic regime, generates large aerodynamic penalties which are responsible for about one third of the turbine losses. In this view, the current doctoral research exploits the potential of modifying and optimizing the blade tip shape as a means to control the tip leakage flow aerodynamics and manage the heat load distribution over the blade profile, to improve the turbine efficiency and durability. Three main design strategies for unshrouded turbine blade tips were analyzed and optimized: tight running clearances, blade tip contouring, and the use of complex squealer-like geometries. The altered overtip flow physics and heat transfer characteristics were simulated for tight gap sizes as low as 0.5% down to 0.1% of the blade height, occurring during engine transients and soon to be expected due to recent developments in active clearance control strategies. The potential of fully 3D contoured blade top surfaces, which allow the profile to be adapted locally to the changing flow conditions along the camberline, is quantified, first adopting a quasi-3D approach and subsequently a full 3D optimization. For the

  20. True and Quasi-Experimental Designs. ERIC/AE Digest.

    ERIC Educational Resources Information Center

    Gribbons, Barry; Herman, Joan

    Among the different types of experimental design are two general categories: true experimental designs and quasi- experimental designs. True experimental designs include more than one purposively created group, common measured outcomes, and random assignment. Quasi-experimental designs are commonly used when random assignment is not practical or…

  1. MEMS resonant load cells for micro-mechanical test frames: feasibility study and optimal design

    NASA Astrophysics Data System (ADS)

    Torrents, A.; Azgin, K.; Godfrey, S. W.; Topalli, E. S.; Akin, T.; Valdevit, L.

    2010-12-01

    This paper presents the design, optimization and manufacturing of a novel micro-fabricated load cell based on a double-ended tuning fork. The device geometry and operating voltages are optimized for maximum force resolution and range, subject to a number of manufacturing and electromechanical constraints. All optimizations are enabled by analytical modeling (verified by selected finite elements analyses) coupled with an efficient C++ code based on the particle swarm optimization algorithm. This assessment indicates that force resolutions of ~0.5-10 nN are feasible in vacuum (~1-50 mTorr), with force ranges as large as 1 N. Importantly, the optimal design for vacuum operation is independent of the desired range, ensuring versatility. Experimental verifications on a sub-optimal device fabricated using silicon-on-glass technology demonstrate a resolution of ~23 nN at a vacuum level of ~50 mTorr. The device demonstrated in this article will be integrated in a hybrid micro-mechanical test frame for unprecedented combinations of force resolution and range, displacement resolution and range, optical (or SEM) access to the sample, versatility and cost.

  2. Marine Steam Condenser Design Optimization.

    DTIC Science & Technology

    1983-12-01

    to make design decisions to obtain a feasible design. CONMIN, as do most optimizers, requires complete control in determining all iterative design...neutralize all the places where such design decisions are made. By removing the ability for CONDIP to make any design decisions it became totally passive...dependent on CONMIN for design decisions, does not have that capability. Remembering that CONMIN requires a complete once-through analysis in order to

  3. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.

  4. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty

    PubMed Central

    Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.

    2015-01-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm’s scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275
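The greedy measurement-selection step described in these two records can be sketched with a toy one-parameter decay model, y(t) = exp(-k t); the papers' Hes1 and T-cell models, sparse-grid surrogates, and scenario trees are replaced here by direct simulation. The idea shown is only the core greedy move: among representative, data-consistent parameters, pick the sampling times at which the candidate dynamics disagree most.

```python
import math

def greedy_measurements(ks, candidate_times, budget):
    """Pick measurement times that maximally spread the data-consistent
    candidate dynamics y(t) = exp(-k*t), one greedy choice at a time."""
    chosen = []
    for _ in range(budget):
        def spread(t):
            # variance of the candidate trajectories at time t
            ys = [math.exp(-k * t) for k in ks]
            m = sum(ys) / len(ys)
            return sum((y - m) ** 2 for y in ys)
        t_best = max((t for t in candidate_times if t not in chosen), key=spread)
        chosen.append(t_best)
    return sorted(chosen)

ks = [0.2, 0.5, 1.0, 2.0]                 # representative data-consistent rates
times = [0.5 * i for i in range(1, 21)]   # candidate sampling grid, 0.5 .. 10
print(greedy_measurements(ks, times, 3))  # mid-range times discriminate best
```

Very early times (all trajectories near 1) and very late times (all near 0) are uninformative; the greedy rule concentrates the measurement budget where the candidate dynamics actually diverge.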

  5. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
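A minimal DE/rand/1/bin loop of the kind described (not the NASA code, which couples the optimizer to a Navier-Stokes solver and neural networks) can be sketched as follows, exercised on the multimodal Rastrigin function:

```python
import math
import random

def differential_evolution(f, bounds, np_=30, F=0.8, CR=0.9, gens=300, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of two
    random members, binomially cross with the target, keep the better one."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(d)          # guarantee one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(d)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:                 # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    ibest = min(range(np_), key=lambda i: cost[i])
    return pop[ibest], cost[ibest]

# Rastrigin: many local optima, global minimum 0 at the origin
rastrigin = lambda x: 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                                        for xi in x)
x, fx = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
print(fx)   # close to 0
```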

  6. Optimal design of a thermally stable composite optical bench

    NASA Technical Reports Server (NTRS)

    Gray, C. E., Jr.

    1985-01-01

    The Lidar Atmospheric Sensing Experiment will be performed aboard an ER-2 aircraft; the lidar system used will be mounted on a lightweight, thermally stable graphite/epoxy optical bench whose design is presently subjected to analytical study and experimental validation. Attention is given to analytical methods for the selection of such expected laminate properties as the thermal expansion coefficient, the apparent in-plane moduli, and ultimate strength. For a symmetric laminate in which one of the lamina angles remains variable, an optimal lamina angle is selected to produce a design laminate with a near-zero coefficient of thermal expansion. Finite elements are used to model the structural concept of the design, with a view to the optical bench's thermal structural response as well as the determination of the degree of success in meeting the experiment's alignment tolerances.

  7. Design Optimization of Composite Structures under Uncertainty

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2003-01-01

    Design optimization under uncertainty is computationally expensive and is also challenging in terms of alternative formulation. The work under the grant focused on developing methods for design against uncertainty that are applicable to composite structural design with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and simultaneous design of structure and inspection periods for fail-safe structures.

  8. Box-Behnken statistical design to optimize thermal performance of energy storage systems

    NASA Astrophysics Data System (ADS)

    Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid

    2018-05-01

    Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where the cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the phase change material. Process optimization of the experimental conditions (inlet air temperature and velocity and number of slabs) was carried out by means of the Box-Behnken design (BBD) of response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen to be the responses. Both of the responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. The maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e., an inlet temperature of 32.5, an air velocity of 1.98, and 7 slabs.
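The Box-Behnken design itself is easy to construct in coded units: every pair of factors is crossed at the +/-1 levels while the remaining factors sit at their midpoints, plus replicated center runs. For three factors, as studied here (inlet temperature, air velocity, number of slabs), that gives 15 runs; the run count and center-point replication below are a common convention, not necessarily the exact plan used in this study.

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Coded Box-Behnken design: +/-1 factorial pairs, other factors at 0."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(center_runs)]   # replicated center points
    return runs

design = box_behnken(3)
print(len(design))   # 12 edge runs + 3 center points = 15
```

Each coded row is then mapped linearly onto the actual factor ranges before running the experiment, and a second-order response surface is fitted to the measured responses.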

  9. Simulation-Driven Design Approach for Design and Optimization of Blankholder

    NASA Astrophysics Data System (ADS)

    Sravan, Tatipala; Suddapalli, Nikshep R.; Johan, Pilthammar; Mats, Sigvant; Christian, Johansson

    2017-09-01

    Reliable design of stamping dies is desired for efficient and safe production. The design of stamping dies is today mostly based on casting feasibility, although it can also be based on criteria such as fatigue, stiffness, safety, and economy. The current work presents an approach built on Simulation Driven Design, enabling Design Optimization to address this issue. A structural finite element model of a stamping die, used to produce doors for Volvo V70/S80 car models, is studied. This die had developed cracks during its usage. To understand the behaviour of the stress distribution in the stamping die, structural analysis of the die is conducted and critical regions with high stresses are identified. The results from the structural FE-models are compared with analytical calculations pertaining to the fatigue properties of the material. To arrive at an optimum design with increased stiffness and lifetime, topology and free-shape optimization are performed. In the optimization routine, the identified critical regions of the die are set as design variables. Other optimization variables are set to maintain manufacturability of the resultant stamping die. Thereafter a CAD model is built based on the geometrical results from the topology and free-shape optimizations. Then the CAD model is subjected to structural analysis to visualize the new stress distribution. This process is iterated until a satisfactory result is obtained. The final results show a reduction in stress levels of 70% with a more homogeneous distribution. Even though the mass of the die is increased by 17%, overall a stiffer die with a better lifetime is obtained. Finally, by reflecting on the entire process, a coordinated approach to handle such situations efficiently is presented.

  10. Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming

    NASA Astrophysics Data System (ADS)

    Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita

    2018-03-01

    We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor with high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
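The overall GA loop with a pluggable selection operator can be sketched as follows. Since the details of the paper's "Luck-Choose" rule are not given here, standard tournament selection stands in for it, and a toy bit-counting fitness replaces the simulated gate fidelity; swapping in a different `select` function is exactly where a custom mechanism would plug in.

```python
import random

def onemax(bits):
    """Toy fitness stand-in for a simulated gate fidelity."""
    return sum(bits)

def tournament(pop, fits, rng, k=3):
    """Standard tournament selection; a custom rule such as the paper's
    'Luck-Choose' mechanism would replace this operator."""
    picks = rng.sample(range(len(pop)), k)
    return pop[max(picks, key=lambda i: fits[i])]

def genetic_algorithm(n_bits=30, pop_size=40, gens=60, pmut=0.02, seed=0,
                      select=tournament):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [onemax(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(pop, fits, rng), select(pop, fits, rng)
            cut = rng.randrange(1, n_bits)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < pmut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=onemax)

best = genetic_algorithm()
print(onemax(best))   # at or near the 30-bit optimum
```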

  11. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
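
The "numerical derivatives with complex variables" mentioned above refers to the complex-step method, which can be sketched in a few lines. The cost function here is a hypothetical scalar stand-in, not the fuel-cell model.

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h.
    Unlike finite differences there is no subtractive cancellation,
    so h can be taken extremely small and the result is accurate
    to machine precision."""
    return f(complex(x, h)).imag / h

# Hypothetical smooth cost function standing in for a fuel-cell response.
def cost(x):
    return x**3 + cmath.exp(x)

deriv = complex_step_derivative(cost, 1.0)   # approximates 3*x^2 + e^x at x = 1
exact = 3.0 + cmath.exp(1.0).real
```

For many design variables, the adjoint method described in the dissertation replaces this one-derivative-per-variable loop with a single extra solve.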

  12. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial-flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure is presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  13. Automatic Optimization of Wayfinding Design.

    PubMed

    Huang, Haikun; Lin, Ni-Ching; Barrett, Lorenzo; Springer, Darian; Wang, Hsueh-Cheng; Pomplun, Marc; Yu, Lap-Fai

    2017-10-10

    Wayfinding signs play an important role in guiding users navigating a virtual environment and in helping pedestrians find their way around a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, as is the wayfinding design of a real-world architectural site. The many possible navigation scenarios, and the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed, considering human agents' visibility and the possibility of making navigation mistakes. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts. We evaluate our results by comparing different wayfinding designs and show that our optimized designs can guide pedestrians to their destinations effectively. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any "blind zone" with additional signs.

  14. A design optimization process for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Fox, George; Duquette, William H.

    1990-01-01

    The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.

  15. Electrochemical production and use of free chlorine for pollutant removal: an experimental design approach.

    PubMed

    Antonelli, Raissa; de Araújo, Karla Santos; Pires, Ricardo Francisco; Fornazari, Ana Luiza de Toledo; Granato, Ana Claudia; Malpass, Geoffroy Roger Pointer

    2017-10-28

    The present paper presents the study of (1) the optimization of electrochemical free-chlorine production using an experimental design approach, and (2) the application of the optimum conditions obtained to the photo-assisted electrochemical degradation of simulated textile effluent. In the experimental design, the influence of inter-electrode gap, pH, NaCl concentration and current was considered. It was observed that the four variables studied are significant for the process, with NaCl concentration and current being the most significant for free chlorine production. The maximum free chlorine production was obtained at a current of 2.33 A and an NaCl concentration of 0.96 mol dm-3. The application of the optimized conditions with simultaneous UV irradiation resulted in up to 83.1% Total Organic Carbon removal and 100% colour removal over 180 min of electrolysis. The results indicate that a systematic (statistical) approach to the electrochemical treatment of pollutants can save time and reagents.
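
A screening design like the one described, four factors each at a low and a high level, can be enumerated mechanically. The sketch below builds a 2^4 full factorial run table; the low/high settings are hypothetical placeholders, not the values used in the study.

```python
from itertools import product

def two_level_factorial(factors):
    """Enumerate all low/high combinations of the given factors.
    factors: dict mapping factor name -> (low, high) tuple."""
    names = list(factors)
    runs = []
    for levels in product(*[factors[n] for n in names]):
        runs.append(dict(zip(names, levels)))
    return runs

# Hypothetical low/high settings for the four variables studied.
design = two_level_factorial({
    "gap_mm": (5, 20),
    "pH": (3, 11),
    "NaCl_mol_dm3": (0.05, 1.0),
    "current_A": (0.5, 2.5),
})
# A 2^4 full factorial yields 16 runs; effect estimates come from
# contrasting the responses at low vs. high levels of each factor.
```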

  16. Development of a fast, lean and agile direct pelletization process using experimental design techniques.

    PubMed

    Politis, Stavros N; Rekkas, Dimitrios M

    2017-04-01

    A novel hot-melt direct pelletization method was developed, characterized and optimized, using statistical thinking and experimental design tools. Mixtures of carnauba wax (CW) and HPMC K100M were spheronized using melted Gelucire 50/13 as a binding material (BM). Experimentation was performed sequentially; a fractional factorial design was set up initially to screen the factors affecting the process, namely spray rate, quantity of BM, rotor speed, type of rotor disk, lubricant-glidant presence, additional spheronization time, powder feeding rate and quantity. Of the eight factors assessed, three were further studied during process optimization (spray rate, quantity of BM and powder feeding rate), at different ratios of the solid mixture of CW and HPMC K100M. The study demonstrated that the novel hot-melt process is fast, efficient, reproducible and predictable. Therefore, it can be adopted in a lean and agile manufacturing setting for the production of flexible pellet dosage forms with various release rates easily customized between immediate and modified delivery.

  17. Experimental validation of 3D printed material behaviors and their influence on the structural topology design

    NASA Astrophysics Data System (ADS)

    Yang, Kai Ke; Zhu, Ji Hong; Wang, Chuang; Jia, Dong Sheng; Song, Long Long; Zhang, Wei Hong

    2018-05-01

    The purpose of this paper is to investigate structures achieved by topology optimization and their fabrication by 3D printing, considering the particular features of the material microstructures and macro mechanical performance. Combining digital image correlation and optical microscopy, this paper experimentally explores the anisotropies of stiffness and strength in a polymer material 3D-printed by stereolithography (SLA) and a titanium material printed by selective laser melting (SLM). Standard specimens and typical structures obtained by topology optimization were fabricated along different building directions. On the one hand, the experimental results for the SLA-produced structures showed stable properties and clearly anisotropic behaviour in stiffness, ultimate strength and fracture location. Further structural designs were performed using topology optimization with the particular mechanical behaviors of the SLA-printed materials taken into account, which resulted in better structural performance compared with designs optimized using an `ideal' isotropic material model. On the other hand, this paper tested the mechanical behaviors of SLM-printed multiscale lattice structures fabricated using the same metal powder and the same machine. The structural stiffness values are generally similar while the strength behaviors differ, mainly because of the irregular surface quality of the tiny structural branches of the lattice. The above evidence clearly shows that consideration of the particular behaviors of 3D printed materials is indispensable in structural design and optimization in order to improve structural performance and strengthen practical significance.

  19. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.

  20. Weight optimal design of lateral wing upper covers made of composite materials

    NASA Astrophysics Data System (ADS)

    Barkanov, Evgeny; Eglītis, Edgars; Almeida, Filipe; Bowering, Mark C.; Watson, Glenn

    2016-09-01

    The present investigation is devoted to the development of a new optimal design of lateral wing upper covers made of advanced composite materials, with special emphasis on closer conformity of the developed finite element analysis and operational requirements for aircraft wing panels. In the first stage, 24 weight optimization problems based on linear buckling analysis were solved for the laminated composite panels with three types of stiffener, two stiffener pitches and four load levels, taking into account manufacturing, reparability and damage tolerance requirements. In the second stage, a composite panel with the best weight/design performance from the previous study was verified by nonlinear buckling analysis and optimization to investigate the effect of shear and fuel pressure on the performance of stiffened panels, and their behaviour under skin post-buckling. Three rib-bay laminated composite panels with T-, I- and HAT-stiffeners were modelled with ANSYS, NASTRAN and ABAQUS finite element codes to study their buckling behaviour as a function of skin and stiffener lay-ups, stiffener height, stiffener top and root width. Owing to the large dimension of numerical problems to be solved, an optimization methodology was developed employing the method of experimental design and response surface technique. Optimal results obtained in terms of cross-sectional areas were verified successfully using ANSYS and ABAQUS shared-node models and a NASTRAN rigid-linked model, and were used later to estimate the weight of the Advanced Low Cost Aircraft Structures (ALCAS) lateral wing upper cover.
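
The response-surface step mentioned above, fitting a cheap surrogate to a handful of expensive finite element evaluations and then optimizing the surrogate, can be sketched in one dimension. The response function here is a hypothetical stand-in for the panel analysis, not the ALCAS model.

```python
import numpy as np

# Hypothetical "expensive" structural response as a function of a
# single design variable (e.g. stiffener height), standing in for
# a full finite element evaluation.
def expensive_response(x):
    return (x - 2.0) ** 2 + 5.0

# 1) Sample the design space at a handful of design points
#    (the method of experimental design chooses these points).
x_samples = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_samples = expensive_response(x_samples)

# 2) Fit a quadratic response surface (polynomial surrogate).
a, b, c = np.polyfit(x_samples, y_samples, deg=2)

# 3) Optimize the cheap surrogate instead of the expensive model:
#    for a*x^2 + b*x + c the stationary point is x = -b / (2a).
x_opt = -b / (2 * a)
```

In the study this idea is applied in many dimensions (skin and stiffener lay-ups, stiffener height and widths), where each surrogate evaluation replaces a full buckling analysis.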

  1. Design principles and operating principles: the yin and yang of optimal functioning.

    PubMed

    Voit, Eberhard O

    2003-03-01

    Metabolic engineering has as a goal the improvement of yield of desired products from microorganisms and cell lines. This goal has traditionally been approached with experimental biotechnological methods, but it is becoming increasingly popular to precede the experimental phase by a mathematical modeling step that allows objective pre-screening of possible improvement strategies. The models are either linear and represent the stoichiometry and flux distribution in pathways, or they are non-linear and account for the full kinetic behavior of the pathway, which is often significantly affected by regulatory signals. Linear flux analysis is simpler and requires less input information than a full kinetic analysis, and the question arises whether the consideration of non-linearities is really necessary for devising optimal strategies for yield improvements. The article analyzes this question with a generic, representative pathway. It shows that flux split ratios, which are the key criterion for linear flux analysis, are essentially sufficient for unregulated, but not for regulated, branch points. The interrelationships between regulatory design on one hand and optimal patterns of operation on the other suggest the investigation of operating principles that complement design principles, much as a user's manual complements the hardwiring of electronic equipment.

  2. Quality by design: optimization of a freeze-drying cycle via design space in case of heterogeneous drying behavior and influence of the freezing protocol.

    PubMed

    Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Brayard, Philippe; Chouvenc, Pierre; Woinet, Bertrand

    2013-02-01

    This paper shows how to optimize the primary drying phase, for both product quality and drying time, of a parenteral formulation via design space. A non-steady-state model, parameterized with experimentally determined heat and mass transfer coefficients, is used to define the design space when the heat transfer coefficient varies with the position of the vial in the array. The calculations recognize both equipment and product constraints, and also take into account model parameter uncertainty. Examples are given of cycles designed for the same formulation, but varying the freezing conditions and the freeze-dryer scale. These are then compared in terms of drying time. Furthermore, the impact of inter-vial variability on the design space, and therefore on the optimized cycle, is addressed. In this regard, a simplified method is presented for cycle design, which reduces the experimental effort required for system qualification. The use of mathematical modeling is demonstrated to be very effective not only for cycle development, but also for solving problems of process transfer. This study showed that inter-vial variability remains significant when vials are loaded on plastic trays, and how this variability can be taken into account during process design.

  3. The effects of experimental pain and induced optimism on working memory task performance.

    PubMed

    Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L

    2016-07-01

    Pain can interrupt and deteriorate executive task performance. We have previously shown that experimentally induced optimism can diminish the deteriorating effect of cold pressor pain on a subsequent working memory task (i.e., the operation span task). In two successive experiments we sought further evidence for the protective role of optimism against pain-induced working memory impairments. We used another working memory task (i.e., the 2-back task) that was performed either after or during pain induction. Study 1 employed a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain)×2 (pre-score vs. post-score) mixed factorial design. In half of the participants optimism was induced by the Best Possible Self (BPS) manipulation, which required them to write about and visualize a future life in which everything has turned out for the best. In the control condition, participants wrote about and visualized a typical day in their life (TD). Next, participants completed either the cold pressor task (CPT) or a warm water control task (WWCT). Before (baseline) and after the CPT or WWCT, participants' working memory performance was measured with the 2-back task. The 2-back task measures the ability to monitor and update working memory representations by asking participants to indicate whether the current stimulus corresponds to the stimulus presented two stimuli earlier. Study 2 had a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain) mixed factorial design. After receiving the BPS or control manipulation, participants completed the 2-back task twice: once with painful heat stimulation, and once without any stimulation (counter-balanced order). Continuous heat stimulation was used, with temperatures oscillating around 1°C above and 1°C below the individual pain threshold. In study 1, the results did not show an effect of cold pressor pain on subsequent 2-back task performance. Results of study 2 indicated that heat pain impaired concurrent 2-back task performance. However, no evidence was found

  4. The Experimental Design Ability Test (EDAT)

    ERIC Educational Resources Information Center

    Sirum, Karen; Humburg, Jennifer

    2011-01-01

    Higher education goals include helping students develop evidence based reasoning skills; therefore, scientific thinking skills such as those required to understand the design of a basic experiment are important. The Experimental Design Ability Test (EDAT) measures students' understanding of the criteria for good experimental design through their…

  5. Quasi experimental designs in pharmacist intervention research.

    PubMed

    Krass, Ines

    2016-06-01

    Background: In the field of pharmacist intervention research it is often difficult to conform to the rigorous requirements of "true experimental" models, especially the requirement of randomization. When randomization is not feasible, a practice-based researcher can choose from a range of "quasi-experimental designs", i.e., non-randomised and at times non-controlled. Objective: The aim of this article was to provide an overview of quasi-experimental designs, discuss their strengths and weaknesses, and investigate their application in pharmacist intervention research over the previous decade. Results: In the literature, quasi-experimental studies may be classified into five broad categories: quasi-experimental designs without control groups; quasi-experimental designs that use control groups with no pre-test; quasi-experimental designs that use control groups and pre-tests; interrupted time series; and stepped wedge designs. Quasi-experimental study designs have consistently featured in the evolution of pharmacist intervention research. The most commonly applied of all quasi-experimental designs in the practice-based research literature are the one-group pre-post-test design and the non-equivalent control group design (i.e., untreated control group with dependent pre-tests and post-tests); these have been used to test the impact of pharmacist interventions in general medications management as well as in specific disease states. Conclusion: Quasi-experimental studies have a role to play as proof of concept, in the pilot phases of interventions, when testing different intervention components, especially in complex interventions. They serve to develop an understanding of possible intervention effects: while in isolation they yield weak evidence of clinical efficacy, taken collectively they help build a body of evidence in support of the value of pharmacist interventions across different practice settings and countries. However, when a traditional RCT is not feasible for

  6. Spin bearing retainer design optimization

    NASA Technical Reports Server (NTRS)

    Boesiger, Edward A.; Warner, Mark H.

    1991-01-01

    The dynamic behavior of spin bearings for momentum wheels (control-moment gyroscopes, reaction wheel assemblies) is critical to satellite stability and life. Repeated bearing retainer instabilities hasten lubricant deterioration and can lead to premature bearing failure and/or unacceptable vibration. These instabilities are typically marked by increases in torque, temperature, audible noise, and vibration induced into the bearing cartridge. Ball retainer design can be optimized to minimize these occurrences. A retainer was designed using a previously successful smaller retainer as an example. Analytical methods were then employed to predict its behavior and optimize its configuration.

  7. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE PAGES

    Graf, Peter A.; Billups, Stephen

    2017-07-24

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.
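
The "potentially optimal rectangles" idea, keeping cells that no other cell beats on both size (favoring global search) and best objective value (favoring local search), amounts to a Pareto filter. The sketch below shows that filter on hypothetical (size, value) pairs; the full DIRECT rule adds a Lipschitz-slope condition not modeled here.

```python
def potentially_optimal(cells):
    """Keep cells not dominated in the (size, value) sense: a cell
    is discarded if some other cell is at least as large AND has a
    strictly better (lower) objective value, or strictly larger AND
    at least as good. The survivors form the Pareto front balancing
    global search (size) against local search (value)."""
    keep = []
    for size, val in cells:
        dominated = any(
            (s >= size and v < val) or (s > size and v <= val)
            for s, v in cells
        )
        if not dominated:
            keep.append((size, val))
    return keep

# Hypothetical cells: (cell size, best objective value found in cell).
cells = [(1.0, 5.0), (0.5, 3.0), (0.25, 1.0), (0.25, 4.0), (0.125, 0.9)]
front = potentially_optimal(cells)
# (0.25, 4.0) is dominated by the equally sized but better (0.25, 1.0).
```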

  9. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study includes multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and the parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially, whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and includes four parameters. The three OED/PE strategies are considered and the impact of the design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor
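
The value of an optimal design can be made concrete with the D-optimality criterion det(J^T J), where J holds the sensitivities of the model output with respect to the parameters. The sketch below uses a simple two-parameter exponential growth model (not the four-parameter CTMI) and compares a clustered sampling schedule against one spread across the transient; larger det(J^T J) means a tighter joint confidence region.

```python
import numpy as np

def d_criterion(times, A=1.0, r=0.5):
    """D-optimality criterion det(J^T J) for the two-parameter
    growth model y = A * exp(r * t); larger is better."""
    t = np.asarray(times, dtype=float)
    # Analytical sensitivities of y with respect to A and r.
    dy_dA = np.exp(r * t)
    dy_dr = A * t * np.exp(r * t)
    J = np.column_stack([dy_dA, dy_dr])
    return np.linalg.det(J.T @ J)

clustered = [0.0, 0.1, 0.2, 0.3]   # early samples only
spread = [0.0, 1.0, 2.0, 3.0]      # samples across the transient
# Spreading the samples makes the two sensitivity columns less
# collinear, so the design is far more informative about (A, r).
```

An OED/PE procedure searches over admissible schedules (here, the measurement times) to maximize such a criterion before the experiment is run.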

  10. An equivalent method for optimization of particle tuned mass damper based on experimental parametric study

    NASA Astrophysics Data System (ADS)

    Lu, Zheng; Chen, Xiaoyi; Zhou, Ying

    2018-04-01

    A particle tuned mass damper (PTMD) is a creative combination of the widely used tuned mass damper (TMD) and the efficient particle damper (PD) in the vibration control area. The performance of a one-storey steel frame with an attached PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effect is investigated, and it is shown that the attenuation level depends significantly on the filling ratio of particles. Based on this experimental parametric study, some guidelines for optimization of the PTMD, mainly concerning the filling ratio, are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
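
The paper's equivalent-single-particle optimization is not detailed in the abstract, but the classical starting point for tuning any TMD-like absorber is Den Hartog's closed-form rules, sketched below for reference. These assume an undamped primary structure under harmonic excitation, assumptions a PTMD design procedure would refine.

```python
import math

def den_hartog_tmd(mass_ratio):
    """Classical Den Hartog tuning for a TMD on an undamped primary
    structure under harmonic excitation:
      optimal frequency ratio  f_opt = 1 / (1 + mu)
      optimal damping ratio    z_opt = sqrt(3*mu / (8*(1 + mu)**3))
    where mu is the auxiliary-to-primary mass ratio."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)
    z_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, z_opt

f_opt, z_opt = den_hartog_tmd(0.05)  # 5% auxiliary mass
# The damper is tuned slightly below the structural frequency, with a
# modest damping ratio; a PTMD adds the particle filling ratio on top.
```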

  11. Application of optimal control theory to the design of broadband excitation pulses for high-resolution NMR.

    PubMed

    Skinner, Thomas E; Reiss, Timo O; Luy, Burkhard; Khaneja, Navin; Glaser, Steffen J

    2003-07-01

    Optimal control theory is considered as a methodology for pulse sequence design in NMR. It provides the flexibility for systematically imposing desirable constraints on spin system evolution and therefore has a wealth of applications. We have chosen an elementary example to illustrate the capabilities of the optimal control formalism: broadband, constant phase excitation which tolerates miscalibration of RF power and variations in RF homogeneity relevant for standard high-resolution probes. The chosen design criteria were transformation of I(z)-->I(x) over resonance offsets of +/- 20 kHz and RF variability of +/-5%, with a pulse length of 2 ms. Simulations of the resulting pulse transform I(z)-->0.995I(x) over the target ranges in resonance offset and RF variability. Acceptably uniform excitation is obtained over a much larger range of RF variability (approximately 45%) than the strict design limits. The pulse performs well in simulations that include homonuclear and heteronuclear J-couplings. Experimental spectra obtained from 100% 13C-labeled lysine show only minimal coupling effects, in excellent agreement with the simulations. By increasing pulse power and reducing pulse length, we demonstrate experimental excitation of 1H over +/-32 kHz, with phase variations in the spectra <8 degrees and peak amplitudes >93% of maximum. Further improvements in broadband excitation by optimized pulses (BEBOP) may be possible by applying more sophisticated implementations of the optimal control formalism.

  12. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE PAGES

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    2017-11-09

    Here, we present a general optimization-based framework for (i) ab initio and experimental data-driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
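
The multistart strategy with a ridge penalty can be illustrated at a much smaller scale. The sketch below fits a hypothetical two-parameter rate law (a stand-in for the paper's microkinetic model) by L2-penalized least squares from many random starting points, then groups the converged solutions by rounding, a crude stand-in for the hierarchical clustering step:

```python
import numpy as np

def rate(theta, x):
    """Hypothetical rate law r = a*x / (1 + b*x), standing in for a kinetic model."""
    a, b = theta
    return a * x / (1.0 + b * x)

def loss_grad(theta, x, y, lam):
    """Ridge-penalized squared error and its analytical gradient."""
    a, b = theta
    r = rate(theta, x) - y
    d_a = x / (1.0 + b * x)
    d_b = -a * x**2 / (1.0 + b * x) ** 2
    loss = np.sum(r**2) + lam * np.sum(theta**2)
    grad = np.array([2.0 * np.sum(r * d_a), 2.0 * np.sum(r * d_b)]) + 2.0 * lam * theta
    return loss, grad

def fit(theta0, x, y, lam, iters=500):
    """Gradient descent with an adaptive step that accepts only improving moves."""
    theta, step = theta0.copy(), 0.1
    loss, grad = loss_grad(theta, x, y, lam)
    for _ in range(iters):
        trial = theta - step * grad
        l2, g2 = loss_grad(trial, x, y, lam)
        if l2 < loss:
            theta, loss, grad = trial, l2, g2
            step *= 1.2
        else:
            step *= 0.5
    return theta, loss

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 25)
y = rate(np.array([2.0, 0.5]), x) + rng.normal(0.0, 0.01, x.size)  # synthetic data

starts = rng.uniform(0.1, 5.0, (20, 2))                # multistart exploration
sols = [fit(t0, x, y, lam=1e-3) for t0 in starts]
clusters = {tuple(np.round(th, 2)) for th, _ in sols}  # crude solution grouping
best = min(sols, key=lambda s: s[1])[0]
print(len(clusters), best)
```

The best cluster should sit near the generating parameters (2, 0.5); the real framework replaces this least-squares objective with a stiff DAE forward model and exact sensitivities.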

  13. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    Here, we present a general optimization-based framework for (i) ab initio and experimental data-driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  14. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
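
The core idea, replacing a nonlinear structural optimization by a sequence of linear programs built from first-order sensitivities plus move limits, can be sketched on a toy sizing problem. The two-variable "weight versus stress" model and all numbers below are illustrative assumptions, and the tiny vertex-enumeration routine stands in for a real LP solver:

```python
import numpy as np
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Min c.d s.t. A d <= b by vertex enumeration (fine for tiny 2-D LPs)."""
    best, best_val = None, np.inf
    for i, j in combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        d = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ d <= b + 1e-9) and c @ d < best_val:
            best, best_val = d, c @ d
    return best

# Toy sizing problem standing in for the paper's truss design: minimize the
# "weight" x1 + x2 subject to a hypothetical constraint 1/x1 + 1/x2 <= 1.
x = np.array([5.0, 1.5])
delta = 0.5                                   # move limit (trust region)
for _ in range(60):
    g = 1.0 / x[0] + 1.0 / x[1]
    grad_g = np.array([-1.0 / x[0] ** 2, -1.0 / x[1] ** 2])
    # Linearized constraint g + grad_g.d <= 1 plus move-limit box |d| <= delta
    A = np.vstack([grad_g[None, :], np.eye(2), -np.eye(2)])
    b = np.concatenate([[1.0 - g], np.full(4, delta)])
    d = solve_lp_2d(np.ones(2), A, b)         # objective gradient of x1 + x2
    if d is None or np.linalg.norm(d) < 1e-6:
        break
    x = x + d
    delta *= 0.95                             # shrink move limits toward convergence
print(x, x.sum())                             # settles near the optimum x1 = x2 = 2
```

Each iteration linearizes the constraint from its current-point sensitivities, exactly the role the eigenvalue sensitivities play in the paper's method, and the shrinking move limits damp the zigzag along the constraint boundary.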

  15. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    PubMed

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O2 uptake rates, CO2 evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started from around 10 for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. Copyright © 2011 Elsevier Ltd. All rights reserved.
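
Mixture experimental design can be illustrated with a quadratic Scheffé model. The sketch below fits such a model to a noise-free synthetic "degradability" response on a {3,2} simplex-lattice design and then grid-searches the simplex for the best blend; the component names and coefficients are made up for illustration, not taken from this study:

```python
import numpy as np
from itertools import product

# Simplex-lattice {3,2} design: proportions of, e.g., sludge, food waste, amendment
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5]])

def scheffe_terms(P):
    """Model matrix for the quadratic Scheffé polynomial (no intercept)."""
    x1, x2, x3 = P.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

true_b = np.array([40.0, 55.0, 20.0, 60.0, -10.0, 5.0])  # made-up coefficients
y = scheffe_terms(X) @ true_b                             # synthetic degradability

beta, *_ = np.linalg.lstsq(scheffe_terms(X), y, rcond=None)

# Grid-search the simplex for the blend maximizing the predicted response
grid = [(a, b) for a, b in product(np.linspace(0, 1, 21), repeat=2)
        if a + b <= 1 + 1e-9]
cands = np.array([[a, b, 1.0 - a - b] for a, b in grid])
pred = scheffe_terms(cands) @ beta
best = cands[np.argmax(pred)]
print(best, round(float(pred.max()), 2))
```

With six design points and six coefficients the fit is exact; the positive x1*x2 interaction drives the optimum to a two-component blend, the same kind of synergy the study reports for the 1:1 sludge/food-waste mixture.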

  16. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  17. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    PubMed

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
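
For a finite candidate pool, the Pareto set of best trade-offs reduces to a non-domination filter. The sketch below scores hypothetical circuit designs on two made-up metrics (both to be minimized); it illustrates the Pareto concept only, not the paper's mixed-integer dynamic optimization formulation:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points (minimizing every column)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some point is <= in all metrics and < in at least one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical designs scored on (response time, expression cost)
designs = np.array([[1.0, 9.0], [2.0, 4.0], [3.0, 3.0],
                    [4.0, 4.5], [5.0, 1.0], [2.5, 8.0]])
front = pareto_front(designs)
print(front)   # -> [0, 1, 2, 4]
```

Designs 3 and 5 are dominated (another design is at least as good on both metrics and strictly better on one), so the remaining four form the trade-off front a designer would choose from.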

  18. Experimental design based response surface methodology optimization of ultrasonic assisted adsorption of safranin O by tin sulfide nanoparticle loaded on activated carbon.

    PubMed

    Roosta, M; Ghaedi, M; Daneshfar, A; Sahraei, R

    2014-03-25

    In this research, the adsorption rate of safranine O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by different techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions, set as 4 min of sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L(-1) SO, made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g(-1)). A good agreement between experimental and predicted data was observed. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows that the Langmuir model is suitable for describing the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption process followed the pseudo-second-order and intraparticle diffusion models well. Copyright © 2013. Published by Elsevier B.V.
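
The desirability-function step of RSM can be sketched directly: each response is mapped to a 0-1 desirability and candidates are ranked by the geometric mean. The candidate settings, predicted responses, and target bands below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def desirability(y, lo, hi):
    """Larger-the-better Derringer-Suich desirability: 0 below lo, 1 above hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical candidate settings with predicted removal (%) and capacity (mg/g)
candidates = {
    "3 min, 0.020 g, pH 6": (92.0, 44.0),
    "4 min, 0.024 g, pH 7": (98.0, 50.0),
    "5 min, 0.030 g, pH 8": (96.0, 41.0),
}

def overall(resp):
    d1 = desirability(resp[0], 80.0, 99.0)    # removal target band (assumed)
    d2 = desirability(resp[1], 35.0, 55.0)    # capacity target band (assumed)
    return float(np.sqrt(d1 * d2))            # geometric mean of desirabilities

best = max(candidates, key=lambda k: overall(candidates[k]))
print(best, round(overall(candidates[best]), 3))
```

The geometric mean penalizes any candidate that is poor on even one response, which is why the middle setting, strong on both removal and capacity, wins here.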

  19. Experimental design based response surface methodology optimization of ultrasonic assisted adsorption of safranin O by tin sulfide nanoparticle loaded on activated carbon

    NASA Astrophysics Data System (ADS)

    Roosta, M.; Ghaedi, M.; Daneshfar, A.; Sahraei, R.

    2014-03-01

    In this research, the adsorption rate of safranine O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by different techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve the adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions, set as 4 min of sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L-1 SO, made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g-1). A good agreement between experimental and predicted data was observed. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows that the Langmuir model is suitable for describing the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption process followed the pseudo-second-order and intraparticle diffusion models well.

  20. Experimental optimization of directed field ionization

    NASA Astrophysics Data System (ADS)

    Liu, Zhimin Cheryl; Gregoric, Vincent C.; Carroll, Thomas J.; Noel, Michael W.

    2017-04-01

    The state distribution of an ensemble of Rydberg atoms is commonly measured using selective field ionization. The resulting time resolved ionization signal from a single energy eigenstate tends to spread out due to the multiple avoided Stark level crossings atoms must traverse on the way to ionization. The shape of the ionization signal can be modified by adding a perturbation field to the main field ramp. Here, we present experimental results of the manipulation of the ionization signal using a genetic algorithm. We address how both the genetic algorithm and the experimental parameters were adjusted to achieve an optimized result. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377.

  1. Multidisciplinary design optimization - An emerging new engineering discipline

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1993-01-01

    A definition of the multidisciplinary design optimization (MDO) is introduced, and the functionality and relationship of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.

  2. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. We then use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design a model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
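
The flavor of the evolutionary algorithms discussed here can be conveyed in a few lines. The sketch below is a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation, elitist survivor selection) minimizing a toy objective; it is not GATool itself, and all parameter values are arbitrary choices:

```python
import random

def evolve(fitness, n_genes, pop_size=40, gens=60, pm=0.2, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # binary tournament selection (lower fitness wins)
            p1 = min(rng.sample(pop, 2), key=fitness)
            p2 = min(rng.sample(pop, 2), key=fitness)
            child = [(a + b) / 2 if rng.random() < 0.5 else a
                     for a, b in zip(p1, p2)]            # blend/uniform crossover
            child = [g + rng.gauss(0, 0.3) if rng.random() < pm else g
                     for g in child]                     # Gaussian mutation
            nxt.append(child)
        pop = sorted(pop + nxt, key=fitness)[:pop_size]  # elitist survivor selection
    return min(pop, key=fitness)

# Toy objective standing in for a beam-physics merit function
sphere = lambda x: sum(g * g for g in x)
best = evolve(sphere, n_genes=3)
print(best, sphere(best))
```

Note the attractive features the abstract lists: the objective is treated as a black box (no gradients), and elitism makes the best-so-far monotone even under noisy evaluation.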

  3. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  4. The role of the optimization process in illumination design

    NASA Astrophysics Data System (ADS)

    Gauvin, Michael A.; Jacobsen, David; Byrne, David J.

    2015-07-01

    This paper examines the role of the optimization process in illumination design. We will discuss why the starting point of the optimization process is crucial to a better design and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method will be used to demonstrate optimization methods with focus on using interactive design tools to create better starting points to streamline the optimization process.

  5. Stimulated Brillouin scattering materials, experimental design and applications: A review

    NASA Astrophysics Data System (ADS)

    Bai, Zhenxu; Yuan, Hang; Liu, Zhaohong; Xu, Pengbai; Gao, Qilin; Williams, Robert J.; Kitzler, Ondrej; Mildren, Richard P.; Wang, Yulei; Lu, Zhiwei

    2018-01-01

    Stimulated Brillouin scattering (SBS), a third-order nonlinear optical effect, is extensively exploited and rapidly developing in the field of lasers and optoelectronics. A large number of theoretical and experimental studies on SBS have been carried out in the past decades. In particular, the exploration of new SBS materials and new types of SBS modulation methods has proceeded simultaneously, as the properties of different materials have great influence on SBS performance metrics such as generation threshold, Brillouin amplification efficiency, frequency shift, and breakdown threshold. This article provides a comprehensive review of the characteristics of different types of SBS materials, SBS applications, experimental design methods, and parameter optimization methods, and is expected to provide reference and guidance for SBS-related experiments.

  6. A Library of Optimization Algorithms for Organizational Design

    DTIC Science & Technology

    2005-01-01

    This paper presents a library of algorithms to solve a broad range of optimization problems arising in the normative design of organizations to execute a specific mission. The use of specific optimization algorithms for different phases of the design process

  7. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  8. On the proper study design applicable to experimental balneology.

    PubMed

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies apply the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  9. Optimal designs of staggered dean vortex micromixers.

    PubMed

    Chen, Jyh Jian; Chen, Chun Huei; Shie, Shian Ruei

    2011-01-01

    A novel parallel laminar micromixer with a two-dimensional staggered Dean Vortex micromixer is optimized and fabricated in our study. Dean vortices induced by centrifugal forces in curved rectangular channels cause fluids to produce secondary flows. The split-and-recombination (SAR) structures of the flow channels and the impinging effects result in the reduction of the diffusion distance of two fluids. Three different designs of a curved channel micromixer are introduced to evaluate the mixing performance of the designed micromixer. Mixing performances are demonstrated by means of a pH indicator using an optical microscope and fluorescent particles via a confocal microscope at different flow rates corresponding to Reynolds numbers (Re) ranging from 0.5 to 50. The comparison between the experimental data and numerical results shows a very reasonable agreement. At a Re of 50, the mixing length at the sixth segment, corresponding to the downstream distance of 21.0 mm, can be achieved in a distance 4 times shorter than when the Re equals 1. An optimization of this micromixer is performed with two geometric parameters. These are the angle between the lines from the center to two intersections of two consecutive curved channels, θ, and the angle between two lines of the centers of three consecutive curved channels, ϕ. It can be found that the maximal mixing index is related to the maximal value of the sum of θ and ϕ, which is equal to 139.82°.

  10. Optimal Designs of Staggered Dean Vortex Micromixers

    PubMed Central

    Chen, Jyh Jian; Chen, Chun Huei; Shie, Shian Ruei

    2011-01-01

    A novel parallel laminar micromixer with a two-dimensional staggered Dean Vortex micromixer is optimized and fabricated in our study. Dean vortices induced by centrifugal forces in curved rectangular channels cause fluids to produce secondary flows. The split-and-recombination (SAR) structures of the flow channels and the impinging effects result in the reduction of the diffusion distance of two fluids. Three different designs of a curved channel micromixer are introduced to evaluate the mixing performance of the designed micromixer. Mixing performances are demonstrated by means of a pH indicator using an optical microscope and fluorescent particles via a confocal microscope at different flow rates corresponding to Reynolds numbers (Re) ranging from 0.5 to 50. The comparison between the experimental data and numerical results shows a very reasonable agreement. At a Re of 50, the mixing length at the sixth segment, corresponding to the downstream distance of 21.0 mm, can be achieved in a distance 4 times shorter than when the Re equals 1. An optimization of this micromixer is performed with two geometric parameters. These are the angle between the lines from the center to two intersections of two consecutive curved channels, θ, and the angle between two lines of the centers of three consecutive curved channels, ϕ. It can be found that the maximal mixing index is related to the maximal value of the sum of θ and ϕ, which is equal to 139.82°. PMID:21747691

  11. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, and some variation was noticed in the designs, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
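
The weight-versus-reliability trade-off described above can be sketched with Monte Carlo simulation on a one-member "structure". The load and strength distributions below are invented for illustration; bisection finds the smallest cross-section (a proxy for weight) meeting each reliability target, and the required area grows steeply as the target approaches 1:

```python
import random

def reliability(area, n=5000, seed=1):
    """Monte Carlo estimate of P(stress <= strength) for a bar of given area."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        load = rng.gauss(100.0, 15.0)        # assumed load scatter
        strength = rng.gauss(250.0, 25.0)    # assumed material scatter
        if load / area <= strength:          # stress = load / area
            ok += 1
    return ok / n

def area_for(target, lo=0.1, hi=5.0):
    """Bisect for the smallest cross-section meeting a reliability target
    (monotone in area because the fixed seed reuses the same samples)."""
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if reliability(mid) >= target:
            hi = mid
        else:
            lo = mid
    return hi

for r in (0.5, 0.9, 0.999):
    print(r, round(area_for(r), 3))
```

Sweeping the target from near 0 to near 1 traces the abstract's inverted-S curve: the area (weight) is modest at the mean-valued design and diverges as the failure rate is pushed toward zero.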

  12. A case study on topology optimized design for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Gebisa, A. W.; Lemu, H. G.

    2017-12-01

    Topology optimization is an optimization method that employs mathematical tools to optimize the material distribution in a part to be designed. Earlier developments of topology optimization assumed conventional manufacturing techniques, which have limitations in producing complex geometries; this has prevented topology optimization from being fully realized. With the emergence of additive manufacturing (AM) technologies, which build a part layer upon layer directly from its three-dimensional (3D) model data, producing complex geometry is no longer an issue. Realization of topology optimization through AM provides full design freedom for design engineers. The article focuses on a topology-optimized design approach for additive manufacturing, with a case study on the lightweight design of a jet engine bracket. The study shows that topology optimization is a powerful design technique to reduce the weight of a product while maintaining the design requirements, provided additive manufacturing is considered.

  13. Media milling process optimization for manufacture of drug nanoparticles using design of experiments (DOE).

    PubMed

    Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj

    2015-01-01

    Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. The responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the software Design-Expert® to evaluate both the repeatability and reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). A desirability function was used to optimize the response variables, and the predicted responses were in agreement with experimental values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
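
A face-centered CCD (axial points at alpha = 1) is easy to generate directly: for 3 factors it gives 8 factorial corners, 6 face-centered axial points, and replicated center points, all in coded units. A minimal sketch, with the center-point count chosen arbitrarily:

```python
import numpy as np
from itertools import product

def face_centered_ccd(k, n_center=3):
    """Face-centered CCD: 2^k factorial corners, 2k axial points at alpha=1, centers."""
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.vstack([v * e for e in np.eye(k) for v in (-1.0, 1.0)])
    centers = np.zeros((n_center, k))
    return np.vstack([corners, axial, centers])

# 3 coded factors, e.g. motor speed, pump speed, bead volume
D = face_centered_ccd(3)
print(D.shape)   # 8 + 6 + 3 = 17 runs by 3 factors
```

Each coded column is then mapped linearly onto the natural range of its factor; with alpha = 1 every run stays inside the factorial cube, which is why this variant suits equipment whose settings cannot exceed the corner levels.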

  14. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  15. Development and optimization of a self-microemulsifying drug delivery system for atorvastatin calcium by using d-optimal mixture design

    PubMed Central

    Yeom, Dong Woo; Song, Ye Seul; Kim, Sung Rae; Lee, Sang Gon; Kang, Min Hyung; Lee, Sangkil; Choi, Young Wook

    2015-01-01

    In this study, we developed and optimized a self-microemulsifying drug delivery system (SMEDDS) formulation for improving the dissolution and oral absorption of atorvastatin calcium (ATV), a poorly water-soluble drug. Solubility and emulsification tests were performed to select a suitable combination of oil, surfactant, and cosurfactant. A d-optimal mixture design was used to optimize the concentration of components used in the SMEDDS formulation for achieving excellent physicochemical characteristics, such as small droplet size and high dissolution. The optimized ATV-loaded SMEDDS formulation containing 7.16% Capmul MCM (oil), 48.25% Tween 20 (surfactant), and 44.59% Tetraglycol (cosurfactant) significantly enhanced the dissolution rate of ATV in different types of medium, including simulated intestinal fluid, simulated gastric fluid, and distilled water, compared with ATV suspension. Good agreement was observed between predicted and experimental values for mean droplet size and percentage of the drug released in 15 minutes. Further, pharmacokinetic studies in rats showed that the optimized SMEDDS formulation considerably enhanced the oral absorption of ATV, with 3.4-fold and 4.3-fold increases in the area under the concentration-time curve and time taken to reach peak plasma concentration, respectively, when compared with the ATV suspension. Thus, we successfully developed an optimized ATV-loaded SMEDDS formulation by using the d-optimal mixture design, which could potentially be used for improving the oral absorption of poorly water-soluble drugs. PMID:26089663

  16. Robust Airfoil Optimization in High Resolution Design Space

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon L.

    2003-01-01

    Robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet yields a fairly smooth airfoil shape, and (3) it allows the user to make a trade-off between the level of optimization and the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that our strategy produces reasonable airfoil shapes that are similar to the original airfoils, but these new shapes provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that our strategy produces airfoils better than or equal to any designs produced by traditional design methods.

  17. Development of a semidefined growth medium for Pedobacter cryoconitis BG5 using statistical experimental design.

    PubMed

    Ong, Magdalena; Ongkudon, Clarence M; Wong, Clemente Michael Vui Ling

    2016-10-02

    Pedobacter cryoconitis BG5 is a psychrophile isolated from cold environments and capable of proliferating and growing well at low temperatures. Its cellular products have found a broad spectrum of applications, including in food, medicine, and bioremediation. It is therefore important to develop a high-cell-density cultivation strategy coupled with an optimized growth medium for P. cryoconitis BG5. To date, there has been no published report on the design and optimization of a growth medium for P. cryoconitis, hence the objective of this research project. A preliminary screening of four commercially available media, namely tryptic soy broth, R2A, Luria Bertani broth, and nutrient broth, was conducted to formulate the basal medium. Based on this screening, tryptone, glucose, NaCl, and K2HPO4, along with three additional nutrients (yeast extract, MgSO4, and NH4Cl), were identified to form the basal medium, which was further analyzed by a Plackett-Burman experimental design. A central composite experimental design using response surface methodology was adopted to optimize the tryptone, yeast extract, and NH4Cl concentrations in the formulated growth medium. Statistical data analysis showed a high regression factor of 0.84 with a predicted optimum optical density (600 nm) of 7.5 using 23.7 g/L of tryptone, 8.8 g/L of yeast extract, and 0.7 g/L of NH4Cl. The optimized medium for P. cryoconitis BG5 was tested, and the observed optical density was 7.8. The cost-effectiveness of the optimized medium was determined as 6.25 unit prices per gram of cells produced in a 250-ml Erlenmeyer flask.
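
    The response-surface step above amounts to fitting a full quadratic model to responses measured at central-composite design points and solving for the stationary point. A sketch in two coded factors; the response function and its optimum at (0.5, -0.3) are invented stand-ins, not the paper's OD600 data:

```python
import numpy as np

# Central composite design in 2 coded factors (alpha = sqrt(2)) plus a center point.
a = np.sqrt(2)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-a, 0], [a, 0], [0, -a], [0, a], [0, 0]])

# Hypothetical response surface (stand-in for measured optical density).
def response(x1, x2):
    return 7.5 - (x1 - 0.5) ** 2 - 0.8 * (x2 + 0.3) ** 2

y = response(X[:, 0], X[:, 1])

# Full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2 — fit by least squares.
M = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b = np.linalg.lstsq(M, y, rcond=None)[0]

# Stationary point of the fitted quadratic: solve grad = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -np.array([b[1], b[2]]))
```

    The nine CCD runs are exactly enough (with three degrees of freedom to spare) to estimate the six quadratic coefficients.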

  18. Optimizing the taste-masked formulation of acetaminophen using sodium caseinate and lecithin by experimental design.

    PubMed

    Hoang Thi, Thanh Huong; Lemdani, Mohamed; Flament, Marie-Pierre

    2013-09-10

    In a previous study of ours, the association of sodium caseinate and lecithin was demonstrated to be promising for masking the bitterness of acetaminophen via drug encapsulation. The encapsulating mechanism was suggested to be based on the segregation of multicomponent droplets occurring during spray-drying. The spray-dried particles delayed drug release within the mouth during the first minutes after administration and hence masked the bitterness. Indeed, taste-masking is achieved if, within a time frame of 1-2 min, the drug substance either is not released or is released in an amount below the human threshold for identifying its bad taste. The aim of this work was (i) to evaluate the effect of various processing and formulation parameters on the taste-masking efficiency and (ii) to determine the formulation giving the optimal taste-masking effect. The four input variables investigated were inlet temperature (X1), spray flow (X2), sodium caseinate amount (X3) and lecithin amount (X4). The percentage of drug released during the first 2 min was considered as the response variable (Y). A 2⁴ full factorial design was applied and allowed screening for the most influential variables, i.e. sodium caseinate amount and lecithin amount. Optimization of these two variables was therefore conducted by a simplex approach. The SEM and DSC results for spray-dried powder prepared under optimal conditions showed that the drug was well encapsulated. The drug release during the first 2 min decreased significantly, 7-fold less than for the unmasked drug particles. Therefore, the optimal formulation providing the best taste-masking effect was successfully achieved. Copyright © 2013 Elsevier B.V. All rights reserved.
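
    Screening a 2⁴ full factorial reduces to comparing the mean response at the high and low level of each factor. A sketch with simulated release data in which only X3 and X4 matter, mirroring the study's finding; the coefficients and noise level are assumed, not the study's measurements:

```python
import itertools
import numpy as np

# 2^4 full factorial in coded levels -1/+1 for (X1, X2, X3, X4).
runs = np.array(list(itertools.product([-1, 1], repeat=4)))

# Hypothetical response: % drug released in 2 min; only X3 and X4 matter here.
rng = np.random.default_rng(1)
y = 20 - 5 * runs[:, 2] - 3 * runs[:, 3] + rng.normal(0, 0.5, len(runs))

# Main effect of factor j = mean(y at level +1) - mean(y at level -1).
effects = [y[runs[:, j] == 1].mean() - y[runs[:, j] == -1].mean() for j in range(4)]
```

    With 16 runs the two active factors stand out clearly against the noise, which is what justifies following up with a simplex search on X3 and X4 alone.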

  19. Assay optimization: a statistical design of experiments approach.

    PubMed

    Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B

    2007-03-01

    With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.

  20. Design and Optimization of Composite Gyroscope Momentum Wheel Rings

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.

  1. Strategies for global optimization in photonics design.

    PubMed

    Vukovic, Ana; Sewell, Phillip; Benson, Trevor M

    2010-10-01

    This paper reports on two important issues that arise in the context of the global optimization of photonic components where large problem spaces must be investigated. The first is the implementation of a fast simulation method and associated matrix solver for assessing particular designs and the second, the strategies that a designer can adopt to control the size of the problem design space to reduce runtimes without compromising the convergence of the global optimization tool. For this study an analytical simulation method based on Mie scattering and a fast matrix solver exploiting the fast multipole method are combined with genetic algorithms (GAs). The impact of the approximations of the simulation method on the accuracy and runtime of individual design assessments and the consequent effects on the GA are also examined. An investigation of optimization strategies for controlling the design space size is conducted on two illustrative examples, namely, 60° and 90° waveguide bends based on photonic microstructures, and their effectiveness is analyzed in terms of a GA's ability to converge to the best solution within an acceptable timeframe. Finally, the paper describes some particular optimized solutions found in the course of this work.

  2. Autonomous optimal trajectory design employing convex optimization for powered descent on an asteroid

    NASA Astrophysics Data System (ADS)

    Pinson, Robin Marie

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise in designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low-thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model or a Newtonian model. The trajectory design algorithm needs to be robust and efficient, guaranteeing that a trajectory is produced and that the calculations are completed in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum-propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable enough to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex-optimization-based method that rapidly generates the propellant-optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which determines the trajectory. The propellant-optimal problem was formulated as a second-order cone program, a subset of convex optimization, through relaxation techniques, including a slack variable, a change of variables, and the incorporation of a successive solution method.
Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed
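
    The convexification idea can be shown on a 1-D toy: a double integrator descending under constant gravity, with propellant approximated by the L1 norm of thrust and the absolute value split into nonnegative parts so that a linear-programming solver applies. All numbers below (gravity, initial state, thrust bound, horizon) are invented; the thesis solves the full 3-D problem as a second-order cone program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 1-D powered descent: x'' = u - g, Euler-discretized. Minimize sum |u_k| dt
# subject to landing at x = 0 with v = 0. Writing u = u_plus - u_minus with
# u_plus, u_minus >= 0 makes the L1 objective linear (a standard convexification).
N, dt, g = 50, 1.0, 0.1              # steps, step size, asteroid-scale gravity (assumed)
x0, v0, u_max = 100.0, -5.0, 1.0     # initial altitude, initial velocity, thrust bound

w = np.array([N - 1 - j for j in range(N)], float)    # influence of u_j on final position
A_eq = np.zeros((2, 2 * N))
A_eq[0, :N], A_eq[0, N:] = dt, -dt                    # final-velocity row
A_eq[1, :N], A_eq[1, N:] = dt * dt * w, -dt * dt * w  # final-position row
b_eq = np.array([g * N * dt - v0,
                 -x0 - N * v0 * dt + g * dt * dt * w.sum()])

c = np.full(2 * N, dt)                                # propellant proxy: sum |u_j| dt
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, u_max)] * (2 * N), method="highs")
u = res.x[:N] - res.x[N:]
```

    The optimal profile is the expected coast/burn structure; in the full problem the thrust-magnitude bound becomes a second-order cone constraint rather than a box bound.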

  3. Design and Optimization Method of a Two-Disk Rotor System

    NASA Astrophysics Data System (ADS)

    Huang, Jingjing; Zheng, Longxi; Mei, Qing

    2016-04-01

    An integrated analytical method based on multidisciplinary optimization software Isight and general finite element software ANSYS was proposed in this paper. Firstly, a two-disk rotor system was established and the mode, humorous response and transient response at acceleration condition were analyzed with ANSYS. The dynamic characteristics of the two-disk rotor system were achieved. On this basis, the two-disk rotor model was integrated to the multidisciplinary design optimization software Isight. According to the design of experiment (DOE) and the dynamic characteristics, the optimization variables, optimization objectives and constraints were confirmed. After that, the multi-objective design optimization of the transient process was carried out with three different global optimization algorithms including Evolutionary Optimization Algorithm, Multi-Island Genetic Algorithm and Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained at the specified constraints. Meanwhile, the accuracy and calculation numbers of different optimization algorithms were compared. The optimization results indicated that the rotor vibration reached the minimum value and the design efficiency and quality were improved by the multidisciplinary design optimization in the case of meeting the design requirements, which provided the reference to improve the design efficiency and reliability of the aero-engine rotor.

  4. Design of high productivity antibody capture by protein A chromatography using an integrated experimental and modeling approach.

    PubMed

    Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G

    2012-06-15

    An integrated experimental and modeling approach for the design of high productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required in order to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high-performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the best design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5 mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (or approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Designing optimal cell factories: integer programming couples elementary mode analysis with regulation

    PubMed Central

    2012-01-01

    Background: Elementary mode (EM) analysis is ideally suited for metabolic engineering as it allows an unbiased decomposition of metabolic networks into biologically meaningful pathways. Recently, constrained minimal cut sets (cMCS) have been introduced to derive optimal design strategies for strain improvement by using the full potential of EM analysis. However, this approach does not allow for the inclusion of regulatory information. Results: Here we present an alternative, novel and simple method for the prediction of cMCS, which accounts for boolean transcriptional regulation. We use binary linear programming and show that the design of a regulated, optimal metabolic network of minimal functionality can be formulated as a standard optimization problem, where EMs and regulation show up as constraints. We validated our tool by optimizing ethanol production in E. coli. Our study showed that up to 70% of the predicted cMCS contained non-enzymatic, non-annotated reactions, which are difficult to engineer. These cMCS are automatically excluded by our approach using simple weight functions. Finally, due to efficient preprocessing, the binary program remains computationally feasible. Conclusions: We used integer programming to predict efficient deletion strategies for metabolically engineering a production organism. Our formulation uses the full potential of cMCS but adds flexibility to the design process. In particular, our method allows regulatory information to be integrated into the metabolic design process and explicitly favors experimentally feasible deletions. Our method remains manageable even if millions or potentially billions of EMs enter the analysis. We demonstrated that our approach is able to correctly predict the most efficient designs for ethanol production in E. coli. PMID:22898474
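
    Stripped of the binary linear program, the cMCS computation is a constrained hitting-set problem: delete a minimal set of reactions that disables every undesired elementary mode while leaving the desired modes intact. A brute-force sketch on a made-up five-reaction network (the paper's formulation scales this up with integer programming, weights, and regulatory constraints):

```python
from itertools import combinations

# Toy network: elementary modes as reaction sets (hypothetical labels).
undesired = [{"r1", "r2"}, {"r2", "r3"}, {"r4"}]   # modes to eliminate
desired = [{"r1", "r3", "r5"}]                     # modes that must survive

reactions = sorted(set().union(*undesired, *desired))

def is_cmcs(cut):
    """A constrained MCS intersects every undesired EM and no desired EM."""
    return (all(cut & em for em in undesired)
            and not any(cut & em for em in desired))

# Exhaustive search by increasing size, standing in for the binary linear program.
cmcs = next(set(c) for k in range(1, len(reactions) + 1)
            for c in map(set, combinations(reactions, k)) if is_cmcs(c))
```

    In the binary-program view, each reaction gets a 0/1 deletion variable, each undesired EM contributes a covering constraint, and each desired EM an exclusion constraint; weights on the variables then steer the solver away from hard-to-engineer deletions.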

  6. Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adopted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.

  7. Design-Optimization and Material Selection for a Proximal Radius Fracture-Fixation Implant

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Xie, X.; Arakere, G.; Grujicic, A.; Wagner, D. W.; Vallejo, A.

    2010-11-01

    The problem of optimal size, shape, and placement of a proximal radius-fracture fixation-plate is addressed computationally using a combined finite-element/design-optimization procedure. To expand the set of physiological loading conditions experienced by the implant during normal everyday activities of the patient, beyond those typically covered by the pre-clinical implant-evaluation testing procedures, the case of a wheel-chair push exertion is considered. Toward that end, a musculoskeletal multi-body inverse-dynamics analysis of a human propelling a wheelchair is carried out. The results obtained are used as input to a finite-element structural analysis for evaluation of the maximum stress and fatigue life of the parametrically defined implant design. While optimizing the design of the radius-fracture fixation-plate, realistic functional requirements pertaining to the attainment of the required level of the device safety factor and longevity/lifecycle were considered. It is argued that the type of analyses employed in the present work should be: (a) used to complement the standard experimental pre-clinical implant-evaluation tests (the tests which normally include a limited number of daily-living physiological loading conditions and which rely on single pass/fail outcomes/decisions with respect to a set of lower-bound implant-performance criteria) and (b) integrated early in the implant design and material/manufacturing-route selection process.

  8. Optimization of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design

    NASA Astrophysics Data System (ADS)

    Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.

    2016-04-01

    Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Studying the impact of several factors simultaneously is a time-consuming and costly process. The use of experimental design methods can be useful in minimizing the number of experiments and finding locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode (RDE) was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, an L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. Scanning electron microscopy and energy-dispersive x-ray spectroscopy verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the Taguchi analysis, a mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate at an electrode rotation rate of 1000 rpm was found to be the optimum condition.
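
    Taguchi analysis boils down to computing a signal-to-noise ratio per run and comparing its mean at each factor level. A sketch with a two-level, three-factor array and invented inhibition efficiencies (the paper used a four-level L16 array; a 2-level full factorial keeps the sketch short and is equally orthogonal):

```python
import itertools
import numpy as np

# Three factors at two coded levels; the 2^3 full factorial is an orthogonal array.
runs = np.array(list(itertools.product([0, 1], repeat=3)))

# Hypothetical inhibition efficiencies (%) for the 8 runs.
eff = np.array([62.0, 70.0, 68.0, 75.0, 80.0, 88.0, 85.0, 93.0])

# Taguchi "larger is better" signal-to-noise ratio per run (single replicate).
sn = -10 * np.log10(1.0 / eff**2)

# Mean S/N at each level of each factor; keep the level with the higher mean.
best_levels = [int(sn[runs[:, j] == 1].mean() > sn[runs[:, j] == 0].mean())
               for j in range(3)]
```

    With replicates, the per-run S/N averages 1/y² over the repeats, so the analysis rewards both high mean efficiency and low variability.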

  9. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses. The

  10. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

    Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The

  11. Optimization of minoxidil microemulsions using fractional factorial design approach.

    PubMed

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs, using the desirability function approach, for developing topical microemulsions. Minoxidil (MX) was used as a model drug and limonene as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4) on the MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1 and X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of the responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the overall desirability function, an optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 and a high level of X3.
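
    The desirability-function step combines several responses into one score: each response is mapped to a 0-1 desirability and the scores are merged by a geometric mean (the Derringer approach). A sketch with hypothetical formulation data and specification limits, all invented for illustration:

```python
import numpy as np

def desirability_max(y, low, high):
    """Larger-is-better desirability: 0 at or below `low`, 1 at or above `high`."""
    return float(np.clip((y - low) / (high - low), 0.0, 1.0))

# Hypothetical candidates: (solubility mg/ml, flux ug/cm2/h, deposition ug/cm2).
candidates = {
    "ME-1": (18.0, 40.0, 5.0),
    "ME-2": (25.0, 55.0, 9.0),
    "ME-3": (22.0, 70.0, 6.5),
}
specs = [(10, 30), (30, 80), (3, 10)]   # (low, high) limits per response

def overall(resp):
    d = [desirability_max(y, lo, hi) for y, (lo, hi) in zip(resp, specs)]
    return float(np.prod(d)) ** (1 / len(d))   # geometric mean of desirabilities

best = max(candidates, key=lambda k: overall(candidates[k]))
```

    The geometric mean is deliberate: any single unacceptable response (desirability 0) zeroes the overall score, which an arithmetic mean would not.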

  12. Application of Plackett-Burman and Doehlert designs for optimization of selenium analysis in plasma with electrothermal atomic absorption spectrometry.

    PubMed

    El Ati-Hellal, Myriam; Hellal, Fayçal; Hedhili, Abderrazek

    2014-10-01

    The aim of this study was the optimization of selenium determination in plasma samples by electrothermal atomic absorption spectrometry (ETAAS) using experimental design methodology. Eleven variables that could influence selenium analysis in human blood plasma by ETAAS were evaluated with a Plackett-Burman experimental design. These factors were selected from the sample preparation, furnace program and chemical modification steps. Both the absorbance and background signals were chosen as responses in the screening approach. A Doehlert design was used for method optimization. Results showed that only the ashing temperature had a statistically significant effect on the selected responses. Optimization with the Doehlert design allowed the development of a reliable method for selenium analysis with ETAAS. Samples were diluted 1/10 with 0.05% (v/v) Triton X-100 + 2.5% (v/v) HNO3 solution. Optimized ashing and atomization temperatures for the nickel modifier were 1070°C and 2270°C, respectively. A detection limit of 2.1 μg L⁻¹ Se was obtained. The accuracy of the method was checked by the analysis of selenium in Seronorm™ Trace element quality control serum level 1. The developed procedure was applied for the analysis of total selenium in fifteen plasma samples with the standard addition method. Concentrations ranged between 24.4 and 64.6 μg L⁻¹, with a mean of 42.6 ± 4.9 μg L⁻¹. The use of experimental designs allowed the development of a cheap and accurate method for selenium analysis in plasma that could be applied routinely in clinical laboratories. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
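
    What makes Plackett-Burman screening of 11 factors possible in only 12 runs is the design's construction: cyclic shifts of a single generator row plus a closing row of -1s, giving mutually orthogonal columns. A sketch using the classic 12-run generator; the effect helper is a generic illustration, not the paper's analysis code:

```python
import numpy as np

# 12-run Plackett-Burman design from the classic cyclic generator row.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
rows = [np.roll(gen, k) for k in range(11)]    # 11 cyclic shifts
rows.append(-np.ones(11, dtype=int))           # closing row of all -1
D = np.array(rows)                             # 12 runs x 11 two-level factors

# Main effect of factor j on a response y: mean(y at +1) - mean(y at -1).
def effect(D, y, j):
    col = D[:, j]
    return y[col == 1].mean() - y[col == -1].mean()
```

    Orthogonality (D'D = 12·I) is what lets each main effect be estimated independently of the other ten.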

  13. Experimental design to evaluate directed adaptive mutation in Mammalian cells.

    PubMed

    Bordonaro, Michael; Chiaro, Christopher R; May, Tobias

    2014-12-09

    from several pilot experiments. Cell growth and DNA sequence data indicate that we have identified a cell clone that exhibits several suitable characteristics, although further study is required to identify a more optimal cell clone. The experimental approach is based on a quantum biological model of basis-dependent selection describing a novel mechanism of adaptive mutation. This project is currently inactive due to lack of funding. However, consistent with the objective of early reports, we describe a proposed study that has not produced publishable results, but is worthy of report because of the hypothesis, experimental design, and protocols. We outline the project's rationale and experimental design, with its strengths and weaknesses, to stimulate discussion and analysis, and lay the foundation for future studies in this field.

  14. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  15. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  16. Optimal design of reverse osmosis module networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maskan, F.; Wiley, D.E.; Johnston, L.P.M.

    2000-05-01

    The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost for the process units, and the operating costs associated with energy consumption and maintenance. Several dual-stage reverse osmosis systems were optimized and compared. It was found that the optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
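
    The "constrained multivariable nonlinear optimization" can be sketched with a generic solver: maximize annual profit over operating variables subject to bounds and a feasibility constraint. The two-variable profit model and every coefficient below are invented placeholders, not the paper's system model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the module-network model: profit as a function of feed
# pressure p (bar) and water recovery r. All coefficients are assumed.
def neg_profit(z):
    p, r = z
    permeate = 100.0 * r * (1 - np.exp(-p / 30.0))   # m3/h, toy flux model
    revenue = 0.5 * permeate                          # $/h from permeate
    energy = 0.02 * p * 100.0 / 0.8                   # pump power cost, toy
    return -(revenue - 0.05 * energy)                 # minimize the negative

# Toy osmotic-pressure-style constraint: recovery is limited at low pressure.
cons = [{"type": "ineq", "fun": lambda z: z[0] - 40.0 * z[1]}]
res = minimize(neg_profit, x0=[50.0, 0.5], bounds=[(10, 80), (0.1, 0.9)],
               constraints=cons, method="SLSQP")
```

    In this toy model the optimum sits at the recovery bound with the pressure balancing marginal permeate revenue against pumping cost, echoing the paper's finding that the best networks are the ones producing the most permeate.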

  17. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    In this work I present a new type of adaptive tracking controller that employs dynamic optimization to optimize the current value of the controller action for temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, comprising the mass and heat balance equations, and then add cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for cooling action is specified as a steady state and a design specification. Later we make a correction in the dynamics, where the material balance is manipulated to use feed concentration as a system parameter as an adaptive control measure, in order to avoid actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation, which forms the adaptive control measure.
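
    The core idea of optimizing the current control action against a model prediction can be illustrated with a toy first-order heat balance; the "reactor" model and every constant below are invented for illustration and are not the paper's CSTR equations.

```python
# At each sample, pick the coolant temperature that minimizes the
# predicted one-step tracking error, then apply it and repeat.
a, q, dt = 0.5, 2.0, 0.1          # heat-loss rate, reaction heat, time step (invented)
T, T_ref = 350.0, 340.0           # current and target reactor temperature (K)

def predict(T, Tc):               # toy one-step heat balance
    return T + dt * (-a * (T - Tc) + q)

history = []
for _ in range(50):               # closed loop: optimize, apply, repeat
    candidates = [280.0 + 0.5 * k for k in range(200)]   # coolant grid (K)
    Tc = min(candidates, key=lambda c: (predict(T, c) - T_ref) ** 2)
    T = predict(T, Tc)
    history.append(T)

print(round(T, 2))                # settles near the 340 K setpoint
```

    A grid search stands in here for the dynamic optimization step; the structure (predict, optimize the current action, apply) is the point, not the solver.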

  18. Development and Optimization of HPLC Analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in Pharmaceutical Dosage Forms Using Experimental Design.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2016-11-01

    A new simple, sensitive, rapid and accurate gradient reversed-phase high-performance liquid chromatography method with photodiode array detection (RP-HPLC-DAD) was developed and validated for simultaneous analysis of Metronidazole (MNZ), Spiramycin (SPY), Diloxanide furoate (DIX) and Cliquinol (CLQ) using statistical experimental design. Initially, a resolution V fractional factorial design was used to screen five independent factors: column temperature (°C), pH, phosphate buffer concentration (mM), flow rate (ml/min) and the initial fraction of mobile phase B (%). pH, flow rate and the initial fraction of mobile phase B were identified as significant using analysis of variance. The optimum separation conditions, determined with the aid of a central composite design, were: (1) initial mobile phase composition: phosphate buffer/methanol (50/50, v/v), (2) phosphate buffer concentration 50 mM, (3) pH 4.72, (4) column temperature 30°C and (5) mobile phase flow rate 0.8 ml min⁻¹. Excellent linearity was observed for all of the standard calibration curves, and the correlation coefficients were above 0.9999. Limits of detection for the analyzed compounds ranged between 0.02 and 0.11 μg ml⁻¹; limits of quantitation ranged between 0.06 and 0.33 μg ml⁻¹. The proposed method showed good prediction ability. The optimized method was validated according to ICH guidelines. Three commercially available tablets were analyzed, showing good % recovery and %RSD.
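
    As a hypothetical illustration of the screening stage, a 2^(5-1) resolution V fractional factorial design for five factors can be generated by aliasing the fifth factor with the product of the other four (defining relation E = ABCD); the factor names are taken from the abstract, the construction is standard DoE practice rather than the authors' code.

```python
# Build a 16-run, five-factor, resolution V fractional factorial design.
from itertools import product

factors = ["temperature", "pH", "buffer_mM", "flow_rate", "init_B_frac"]
runs = []
for a, b, c, d in product([-1, 1], repeat=4):
    e = a * b * c * d          # generator E = ABCD gives resolution V
    runs.append((a, b, c, d, e))

print(len(runs))  # 16 runs instead of the 2**5 = 32 of a full factorial
```

    Each tuple is one chromatographic run at low (-1) or high (+1) factor settings; main effects estimated from these 16 runs are aliased only with four-factor interactions.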

  19. Optimal shielding design for minimum materials cost or mass

    DOE PAGES

    Woolley, Robert D.

    2015-12-02

    The mathematical underpinnings of cost optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
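
    In illustrative notation (not the paper's), for a one-dimensional shield in which u(x) selects the local material, c(u) is its cost density, Σ(u) its attenuation coefficient, and φ(x) the flux, the Hamiltonian and Pontryagin conditions referred to above take the form:

```latex
J[u] = \int_0^L c\bigl(u(x)\bigr)\,dx
\quad \text{subject to} \quad
\frac{d\phi}{dx} = -\Sigma(u)\,\phi,
\qquad
H(\phi,\lambda,u) = c(u) - \lambda\,\Sigma(u)\,\phi,
\qquad
\frac{d\lambda}{dx} = -\frac{\partial H}{\partial \phi} = \lambda\,\Sigma(u),
\qquad
u^*(x) = \arg\min_{u}\, H\bigl(\phi(x),\lambda(x),u\bigr).
```

    The costate λ(x) prices the flux constraint, and the pointwise minimization of H over admissible materials is the optimality condition the paper's heuristic algorithm iterates toward.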

  20. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is available only as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.

  1. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
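
    The two optimality criteria compared above can be made concrete: for a candidate design with Fisher information matrix I, D-optimality maximizes det(I) while A-optimality minimizes trace(I⁻¹). The matrices below are toy values, not item-calibration results.

```python
# Two toy 2x2 information matrices with the same determinant but
# different traces of the inverse, so the criteria rank them differently.
import numpy as np

def d_criterion(info):                 # larger is better under D-optimality
    return np.linalg.det(info)

def a_criterion(info):                 # smaller is better under A-optimality
    return np.trace(np.linalg.inv(info))

design_1 = np.array([[4.0, 0.0], [0.0, 1.0]])
design_2 = np.array([[2.0, 0.0], [0.0, 2.0]])

print(d_criterion(design_1), a_criterion(design_1))   # det 4, trace(inv) 1.25
print(d_criterion(design_2), a_criterion(design_2))   # det 4, trace(inv) 1.0
```

    Both designs are equally D-optimal, yet design_2 is strictly better under A-optimality, which is why the simulation study above can find different calibration speeds for the two criteria.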

  2. Regression analysis as a design optimization tool

    NASA Technical Reports Server (NTRS)

    Perley, R.

    1984-01-01

    The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.

  3. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  4. Optimization of Polyplex Formation between DNA Oligonucleotide and Poly(ʟ-Lysine): Experimental Study and Modeling Approach.

    PubMed

    Vasiliu, Tudor; Cojocaru, Corneliu; Rotaru, Alexandru; Pricope, Gabriela; Pinteala, Mariana; Clima, Lilia

    2017-06-17

    The polyplexes formed by nucleic acids and polycations have received great attention owing to their potential application in gene therapy. In our study, we report experimental results and modeling outcomes regarding the optimization of polyplex formation between double-stranded DNA (dsDNA) and poly(ʟ-Lysine) (PLL). The quantification of the binding efficiency during polyplex formation was performed by processing of the images captured from the gel electrophoresis assays. The design of experiments (DoE) and response surface methodology (RSM) were employed to investigate the coupling effect of key factors (pH and N/P ratio) affecting the binding efficiency. According to the experimental observations and response surface analysis, the N/P ratio showed a major influence on binding efficiency compared to pH. Model-based optimization calculations along with the experimental confirmation runs unveiled the maximal binding efficiency (99.4%) achieved at pH 5.4 and N/P ratio 125. To support the experimental data and reveal insights into the molecular mechanism responsible for the polyplex formation between dsDNA and PLL, molecular dynamics simulations were performed at pH 5.4 and 7.4.
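
    The RSM step above can be sketched as fitting a second-order polynomial in the two factors (pH and N/P ratio) and reading off its stationary point. The design points and responses below are synthetic, generated from an assumed quadratic peaking near the paper's reported optimum, purely to show the mechanics.

```python
# Fit a full quadratic response surface on a 3x3 face-centred grid and
# solve grad = 0 for the stationary (maximum) point.
import numpy as np

pts = [(ph, npr) for ph in (4.4, 5.4, 6.4) for npr in (75, 125, 175)]
X = np.array(pts, dtype=float)
# synthetic binding-efficiency responses from a hidden quadratic (assumption)
y = 99.0 - 4.0 * (X[:, 0] - 5.4) ** 2 - 0.001 * (X[:, 1] - 125.0) ** 2

A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b0, b1, b2, b3, b4, b5 = np.linalg.lstsq(A, y, rcond=None)[0]

H = np.array([[2 * b3, b5], [b5, 2 * b4]])   # Hessian of the fitted surface
opt = np.linalg.solve(H, [-b1, -b2])          # stationary point
print(opt)  # ≈ [5.4, 125.0]
```

    With noise-free synthetic data the fitted surface recovers the assumed optimum exactly; with real electrophoresis data the same algebra gives the model-based optimum the authors then confirmed experimentally.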

  6. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO is modelled on the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution would be the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performances of the CSO based designed FIR filters have proven to be superior to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that the CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performances of the designed filters.
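
    The seeking/tracing structure described above can be compressed into a short sketch, applied to a toy quadratic instead of FIR coefficient design. The mixture ratio, seeking pool size, velocity clamp and other parameters are assumptions, not the paper's settings.

```python
# Minimal Cat Swarm Optimization sketch: each iteration, a cat either
# traces (chases the global best with a velocity update) or seeks
# (samples local copies of itself and keeps the best one).
import random

random.seed(0)
DIM, CATS, ITERS, MR = 2, 10, 60, 0.3   # MR: fraction of cats tracing

def fitness(x):                          # toy objective: sphere function
    return sum(v * v for v in x)

cats = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(CATS)]
vels = [[0.0] * DIM for _ in range(CATS)]
best = min(cats, key=fitness)[:]         # copy of the best position so far

for _ in range(ITERS):
    for i, cat in enumerate(cats):
        if random.random() < MR:         # tracing mode
            for d in range(DIM):
                vels[i][d] += random.random() * 2.0 * (best[d] - cat[d])
                vels[i][d] = max(-2.0, min(2.0, vels[i][d]))  # clamp velocity
                cat[d] += vels[i][d]
        else:                            # seeking mode
            copies = [[v + random.gauss(0, 0.2) for v in cat] for _ in range(5)]
            cats[i] = min(copies + [cat], key=fitness)
    leader = min(cats, key=fitness)
    if fitness(leader) < fitness(best):
        best = leader[:]

print(fitness(best))   # small residual near the optimum at the origin
```

    For the actual filter design problem, the position vector would hold the M impulse response coefficients and the fitness would measure deviation from the ideal frequency response.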

  7. Relation between experimental and non-experimental study designs. HB vaccines: a case study

    PubMed Central

    Jefferson, T.; Demicheli, V.

    1999-01-01

    STUDY OBJECTIVE: To examine the relation between experimental and non- experimental study design in vaccinology. DESIGN: Assessment of each study design's capability of testing four aspects of vaccine performance, namely immunogenicity (the capacity to stimulate the immune system), duration of immunity conferred, incidence and seriousness of side effects, and number of infections prevented by vaccination. SETTING: Experimental and non-experimental studies on hepatitis B (HB) vaccines in the Cochrane Vaccines Field Database. RESULTS: Experimental and non-experimental vaccine study designs are frequently complementary but some aspects of vaccine quality can only be assessed by one of the types of study. More work needs to be done on the relation between study quality and its significance in terms of effect size.   PMID:10326054

  8. Experimental Optimal Single Qubit Purification in an NMR Quantum Information Processor

    PubMed Central

    Hou, Shi-Yao; Sheng, Yu-Bo; Feng, Guan-Ru; Long, Gui-Lu

    2014-01-01

    High-quality single qubits are the building blocks of quantum information processing, but they are vulnerable to environmental noise. To overcome noise, purification techniques, which generate qubits with higher purities from qubits with lower purities, have been proposed. Purification has attracted much interest and been widely studied. However, the full experimental demonstration of the optimal single qubit purification protocol proposed by Cirac, Ekert and Macchiavello [Phys. Rev. Lett. 82, 4344 (1999); the CEM protocol] more than one and a half decades ago still remains an experimental challenge, as it requires more complicated networks and a higher level of precision control. In this work, we design an experiment scheme that realizes the CEM protocol with explicit symmetrization of the wave functions. The purification scheme was successfully implemented in a nuclear magnetic resonance quantum information processor. The experiment fully demonstrated the purification protocol, and showed that it is an effective way of protecting qubits against errors and decoherence.

  9. Blanket design and optimization demonstrations of the first wall/blanket/shield design and optimization system (BSDOS).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Nuclear Engineering Division

    2005-05-01

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, dimensions of the different blanket zones, and different requirements of the selected materials for a satisfactory performance are the main parameters, which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be simultaneously considered in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design and satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide the state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates its use. In addition, the use of the optimization capability of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this paper, examples are presented, which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design capabilities.

  11. Topology Optimization - Engineering Contribution to Architectural Design

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2017-10-01

    The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain defined by the shape of a structure is divided into sub-regions, to which different materials are assigned. During the design process material is relocated, but only within its selected region. The proposed idea has been inspired by architectural designs such as multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are mathematical, discrete idealizations of physical systems. Engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction between cells takes place only among neighbouring cells and is governed by simple, local update rules based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated by selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrated a significant influence of the material sub-region locations on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of numerical calculations show that this approach produces different results compared with the classical one.
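
    A toy illustration of the local update rule idea: each cell adjusts its material density based only on how its own "strain energy" compares with its neighbours' average. The energy field is random stand-in data, the rule and step size are invented, and the periodic wrap at the grid edges is a simplification; none of this is the paper's scheme.

```python
# Cellular-automaton-style density update on a 10x10 lattice: add material
# where the local energy exceeds the von Neumann neighbourhood average.
import numpy as np

rng = np.random.default_rng(0)
density = np.full((10, 10), 0.5)          # start from uniform half-density
energy = rng.random((10, 10))             # stand-in for per-cell strain energy

for _ in range(20):
    nbr = (np.roll(energy, 1, 0) + np.roll(energy, -1, 0) +
           np.roll(energy, 1, 1) + np.roll(energy, -1, 1)) / 4.0
    density += 0.05 * np.sign(energy - nbr)    # purely local update rule
    density = np.clip(density, 0.0, 1.0)       # keep densities physical

print(density.min(), density.max())   # cells drift toward solid (1) or void (0)
```

    The essential property on display is locality: no global gradient or sensitivity field is assembled, which is what makes the Cellular Automata approach an alternative to gradient-based topology optimization.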

  12. High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
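
    The surrogate-then-optimize pattern above can be sketched with a cheap polynomial surrogate standing in for the trained neural networks, and a toy one-dimensional "lift vs. flap deflection" curve in place of the Navier-Stokes data; all functions and numbers are assumptions.

```python
# Train a cheap surrogate on a few "expensive" evaluations, then run the
# optimizer against the surrogate instead of the costly solver.
import numpy as np
from scipy.optimize import minimize_scalar

def expensive_lift(deflection):          # stand-in for one CFD evaluation
    return 2.0 - 0.05 * (deflection - 12.0) ** 2

samples = np.linspace(0.0, 25.0, 11)     # small "training set" of solver runs
surrogate = np.polynomial.Polynomial.fit(samples, expensive_lift(samples), deg=2)

# Gradient-based search runs on the surrogate, not the expensive code.
res = minimize_scalar(lambda d: -surrogate(d), bounds=(0.0, 25.0),
                      method="bounded")
print(res.x)  # ≈ 12.0, the deflection maximizing predicted lift
```

    Once the surrogate is fitted, repeated optimization runs from different starting points cost almost nothing, which is the source of the 83% savings reported above.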

  13. Advanced optimal design concepts for composite material aircraft repair

    NASA Astrophysics Data System (ADS)

    Renaud, Guillaume

    The application of an automated optimization approach for bonded composite patch design is investigated. To do so, a finite element computer analysis tool to evaluate patch design quality was developed. This tool examines both the mechanical and the thermal issues of the problem. The optimized shape is obtained with a bi-quadratic B-spline surface that represents the top surface of the patch. Additional design variables corresponding to the ply angles are also used. Furthermore, a multi-objective optimization approach was developed to treat multiple and uncertain loads. This formulation aims at designing according to the most unfavorable mechanical and thermal loads. The problem of finding the optimal patch shape for several situations is addressed. The objective is to minimize a stress component at a specific point in the host structure (plate) while ensuring acceptable stress levels in the adhesive. A parametric study is performed in order to identify the effects of various shape parameters on the quality of the repair and its optimal configuration. The effects of mechanical loads and service temperature are also investigated. Two bonding methods are considered, as they imply different thermal histories. It is shown that the proposed techniques are effective and inexpensive for analyzing and optimizing composite patch repairs. It is also shown that thermal effects should not only be present in the analysis, but that they play a paramount role on the resulting quality of the optimized design. In all cases, the optimized configuration results in a significant reduction of the desired stress level by deflecting the loads away from rather than over the damage zone, as is the case with standard designs. Furthermore, the automated optimization ensures the safety of the patch design for all considered operating conditions.

  14. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  15. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework and include four analysis routines to estimate the lift and drag of a supersonic airfoil, plus a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  16. Central Composite Design Optimization of Zinc Removal from Contaminated Soil, Using Citric Acid as Biodegradable Chelant.

    PubMed

    Asadzadeh, Farrokh; Maleki-Kaklar, Mahdi; Soiltanalinejad, Nooshin; Shabani, Farzin

    2018-02-08

    Citric acid (CA) was evaluated in terms of its efficiency as a biodegradable chelating agent in removing zinc (Zn) from heavily contaminated soil, using a soil washing process. To determine preliminary ranges of the variables in the washing process, single-factor experiments were carried out with different CA concentrations, pH levels and washing times. Optimization of the batch washing conditions followed, using a response surface methodology (RSM) based central composite design (CCD) approach. CCD-predicted values and experimental results showed strong agreement, with an R² value of 0.966. Maximum removal of 92.8% occurred with a CA concentration of 167.6 mM, pH of 4.43, and washing time of 30 min as the optimal variable values. A leaching column experiment followed, to examine the efficiency of the optimum conditions established by the CCD model. A comparison of the two soil washing techniques indicated that the removal efficiency of the column experiment (85.8%) closely matched that of the batch experiment (92.8%). The methodology supporting this experimentation for optimizing Zn removal may be useful in the design of protocols for practical engineering soil decontamination applications.

  17. Experimental optimization during SERS application

    NASA Astrophysics Data System (ADS)

    Laha, Ranjit; Das, Gour Mohan; Ranjan, Pranay; Dantham, Venkata Ramanaiah

    2018-05-01

    The well-known surface-enhanced Raman scattering (SERS) technique requires considerable experimental optimization for its proper implementation. In this report, we demonstrate efficient SERS using gold nanoparticles (AuNPs) on a quartz plate. The AuNPs were prepared by depositing a direct-current-sputtered Au thin film followed by suitable annealing. The parameters varied for obtaining the best SERS effect were (1) the numerical aperture of the Raman objective lens and (2) the sputtering duration of the Au film. It was found that AuNPs formed from the Au layer deposited for 40 s, together with a Raman objective lens of magnification 50X, are the best combination for obtaining an efficient SERS effect.

  18. Optimal design of vertebrate and insect sarcomeres.

    PubMed

    Otten, E

    1987-01-01

    This paper offers a model for the normalized length-tension relation of a muscle fiber based upon sarcomere design. Comparison with measurements published by Gordon et al. ('66) shows an accurate fit as long as the inhomogeneity of sarcomere length in a single muscle fiber is taken into account. Sequential change of filament length and the length of the cross-bridge-free zone leads the model to suggest that most vertebrate sarcomeres tested match the condition of optimal construction for the output of mechanical energy over a full sarcomere contraction movement. Joint optimization of all three morphometric parameters suggests that a slightly better (0.3%) design is theoretically possible. However, this theoretical sarcomere, optimally designed for the conversion of energy, has a low normalized contraction velocity; it provides a poorer match to the combined functional demands of high energy output and high contraction velocity than the real sarcomeres of vertebrates. The sarcomeres in fish myotomes appear to be built suboptimally for isometric contraction, but built optimally for that shortening velocity generating maximum power. During swimming, these muscles do indeed contract concentrically only. The sarcomeres of insect asynchronous flight muscles contract only slightly. They are not built optimally for maximum output of energy across the full range of contraction encountered in vertebrate sarcomeres, but are built almost optimally for the contraction range that they do in fact employ.
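
    The kind of normalized length-tension relation the model reproduces can be sketched as a piecewise-linear curve. The breakpoints below are approximate frog-sarcomere values commonly quoted from Gordon et al. (1966); they are illustrative, not the paper's fitted model.

```python
# Piecewise-linear normalized length-tension curve over sarcomere length (um).
def normalized_tension(sl_um):
    # (sarcomere length, normalized tension) breakpoints; values approximate
    pts = [(1.27, 0.0), (1.67, 0.84), (2.0, 1.0), (2.25, 1.0), (3.65, 0.0)]
    if sl_um <= pts[0][0] or sl_um >= pts[-1][0]:
        return 0.0                     # no overlap-based tension outside range
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= sl_um <= x1:
            return y0 + (y1 - y0) * (sl_um - x0) / (x1 - x0)

print(normalized_tension(2.1))  # 1.0 on the plateau
```

    The plateau between roughly 2.0 and 2.25 um is the region of maximal cross-bridge overlap; the paper's argument concerns how filament lengths position a sarcomere's working range relative to such a curve.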

  19. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

    As storage data density in hard-disk drives (HDDs) increases for constant or shrinking drive sizes, precise positioning of HDD heads becomes an increasingly relevant issue for ensuring that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. Thus, the aim of this work is to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads, by finding the optimal placement of base-plate and piezoelectric material for high-precision positioning. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' states. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness, and avoidance of resonance phenomena. These requirements are achieved through formulations that maximize displacements, minimize structural compliance, and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results confirming the feasibility of this approach.
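
    The "rational approximation of material properties" between void and filled states can be sketched as a RAMP-style interpolation of stiffness against pseudo-density; the stiffness values and penalization factor below are illustrative assumptions, not the paper's actual parameters.

```python
def ramp(rho, e_solid=1.0, e_void=1e-9, q=8.0):
    """RAMP-style interpolation: pseudo-density rho in [0, 1] maps to a
    stiffness between 'void' (e_void) and 'filled' (e_solid).

    q controls how strongly intermediate densities are penalized, pushing
    the optimizer toward clear 0/1 designs. All values are illustrative.
    """
    return e_void + (e_solid - e_void) * rho / (1.0 + q * (1.0 - rho))
```

    Because `ramp(0.5)` yields far less than half the solid stiffness, intermediate "gray" material is uneconomical and the optimized layout tends toward a crisp void/filled topology.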

  20. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

    The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.

  1. Multiobjective hyper heuristic scheme for system design and optimization

    NASA Astrophysics Data System (ADS)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics, developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the certainty of reaching a globally optimal solution. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as the low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
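
    The management of low-level meta-heuristics by a stochastic function can be sketched, in a heavily simplified single-objective form, as a selection loop; the move operators below are generic stand-ins, not the article's actual GA/SA/swarm implementations or its multiobjective machinery.

```python
import random

def hyper_heuristic_minimize(f, x0, n_iter=500, seed=0):
    """Toy single-objective hyper-heuristic: a stochastic selector picks
    one of several low-level move operators each iteration (generic
    stand-ins for GA / SA / swarm steps) and keeps the move only if it
    improves the objective f."""
    rng = random.Random(seed)
    moves = [
        lambda x: x + rng.gauss(0.0, 1.0),     # large exploratory step
        lambda x: x + rng.gauss(0.0, 0.1),     # small exploitative step
        lambda x: x * rng.uniform(0.5, 1.5),   # rescaling move
    ]
    x, fx = x0, f(x0)
    for _ in range(n_iter):
        y = rng.choice(moves)(x)               # stochastic move selection
        fy = f(y)
        if fy < fx:                            # greedy acceptance
            x, fx = y, fy
    return x, fx
```

    The hyper-heuristic layer only decides *which* operator to apply next; mixing operators of different step sizes is what hedges against any single meta-heuristic stagnating.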

  2. Experimental verification of Space Platform battery discharger design optimization

    NASA Astrophysics Data System (ADS)

    Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.

    The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.

  3. Experimental verification of Space Platform battery discharger design optimization

    NASA Technical Reports Server (NTRS)

    Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.

    1991-01-01

    The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.

  4. Optimal Design of Gradient Materials and Bi-Level Optimization of Topology Using Targets (BOTT)

    NASA Astrophysics Data System (ADS)

    Garland, Anthony

    The objective of this research is to understand the fundamental relationships necessary to develop a method to optimize both the topology and the internal gradient material distribution of a single object while meeting constraints and conflicting objectives. Functionally graded material (FGM) objects possess continuously varying material properties throughout the object, allowing an engineer to tailor individual regions of an object to specific mechanical properties by locally modifying the internal material composition. A variety of techniques exists for topology optimization, and several methods exist for FGM optimization, but combining the two is difficult. Understanding the relationship between topology and material gradient optimization enables the selection of an appropriate model and the development of algorithms that allow engineers to design high-performance parts that meet design objectives better than optimized homogeneous-material objects. For this research effort, topology optimization means finding the optimal connected structure with an optimal shape; FGM optimization means finding the optimal macroscopic material properties within an object. Tailoring the material constitutive matrix as a function of position results in gradient properties. Once the target macroscopic properties are known, a mesostructure or a particular material nanostructure can be found which gives the target material properties at each macroscopic point. This research demonstrates that topology and gradient materials can be optimized together for a single part. The algorithms use a discretized model of the domain and gradient-based optimization algorithms. In addition, when considering two conflicting objectives, the algorithms generate clear 'features' within a single part. This tailoring of material properties within different areas of a single part (automated design of 'features') using computational design tools is a novel benefit.

  5. A quality by design approach to optimization of emulsions for electrospinning using factorial and D-optimal designs.

    PubMed

    Badawi, Mariam A; El-Khordagui, Labiba K

    2014-07-16

    Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion's pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments, as a risk-based proactive approach, to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3³ factorial design to optimize emulsion composition for phase stability, and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables (emulsifier blend composition, organic:aqueous phase ratio and polymer concentration) had a significant effect (p<0.05) on emulsion CQAs, with the emulsifier blend composition exerting prominent main and interaction effects. Scanning electron microscopy (SEM) of emulsion-electrospun nanofibers and desirability functions allowed modeling of emulsion CQAs to predict electrospinnable formulations. The QbD approach successfully built quality into electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with the desired morphological characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.
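
    The 3³ full factorial design used for the screening stage can be enumerated directly; the -1/0/+1 coding below is conventional, and the factor names merely echo the abstract rather than the study's actual level settings.

```python
from itertools import product

# Coded levels for a 3**3 full factorial design. Factor names echo the
# abstract; the mapping of coded levels to real settings (ratios,
# concentrations) is left unspecified here.
factors = {
    "emulsifier_blend": [-1, 0, 1],   # Span 60 : SLS ratio, coded
    "phase_ratio":      [-1, 0, 1],   # organic : aqueous ratio, coded
    "pcl_conc":         [-1, 0, 1],   # PCL concentration, coded
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 27 runs = 3**3 combinations
```

    Every main effect, two-way interaction, and quadratic trend of the three factors is estimable from these 27 runs, which is what makes the full factorial a natural screening step before the D-optimal refinement.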

  6. Optimizing micromixer design for enhancing dielectrophoretic microconcentrator performance.

    PubMed

    Lee, Hsu-Yi; Voldman, Joel

    2007-03-01

    We present an investigation into optimizing micromixer design for enhancing dielectrophoretic (DEP) microconcentrator performance. DEP-based microconcentrators use the dielectrophoretic force to collect particles on electrodes. Because the DEP force generated by electrodes decays rapidly away from the electrodes, DEP-based microconcentrators are only effective at capturing particles from a limited cross section of the input liquid stream. Adding a mixer can circulate the input liquid, increasing the probability that particles will drift near the electrodes for capture. Because mixers for DEP-based microconcentrators aim to circulate particles, rather than mix two species, design specifications for such mixers may be significantly different from that for conventional mixers. Here we investigated the performance of patterned-groove micromixers on particle trapping efficiency in DEP-based microconcentrators numerically and experimentally. We used modeling software to simulate the particle motion due to various forces on the particle (DEP, hydrodynamic, etc.), allowing us to predict trapping efficiency. We also conducted trapping experiments and measured the capture efficiency of different micromixer configurations, including the slanted groove, staggered herringbone, and herringbone mixers. Finally, we used these analyses to illustrate the design principles of mixers for DEP-based concentrators.

  7. Optimizing Within-Subject Experimental Designs for jICA of Multi-Channel ERP and fMRI

    PubMed Central

    Mangalathu-Arumana, Jain; Liebenthal, Einat; Beardsley, Scott A.

    2018-01-01

    Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated as a function of imaging SNR, number of independent representations of the ERP/fMRI data, relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and type of sources (varying parametrically and non-parametrically across representations of the data), using computer simulations. Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10–30), as in a mixed block/event related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear or uncoupled, did not in itself impact jICA performance, and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions, in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected. PMID:29410611

  8. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  9. Experimental designs for detecting synergy and antagonism between two drugs in a pre-clinical study.

    PubMed

    Sperrin, Matthew; Thygesen, Helene; Su, Ting-Li; Harbron, Chris; Whitehead, Anne

    2015-01-01

    The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre-clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre-clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes, and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach in which the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still outperform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in monotherapy parameters leads to an optimal design with design points that are more spread out. Copyright © 2015 John Wiley & Sons, Ltd.
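
    A minimal sketch of two ingredients described above: a Hill-type monotherapy response with common asymptotes (all parameter values illustrative) and log-normally scattered design points around the dose expected to give a 50% response, in the spirit of the pragmatic placement strategy.

```python
import math
import random

def hill(dose, e0=0.0, emax=1.0, ec50=1.0, h=2.0):
    """Hill monotherapy response with common lower/upper asymptotes
    e0 and emax (all parameter values here are illustrative)."""
    if dose == 0:
        return e0
    return e0 + (emax - e0) / (1.0 + (ec50 / dose) ** h)

def lognormal_ray_doses(ec50, n=5, sigma=0.5, seed=1):
    """Pragmatic-style placement: n doses scattered log-normally around
    the dose giving a 50% response (the EC50), along one ray."""
    rng = random.Random(seed)
    return sorted(ec50 * math.exp(rng.gauss(0.0, sigma)) for _ in range(n))
```

    Note that `hill(ec50) == 0.5 * (e0 + emax)`, which is why clustering design points near the EC50 concentrates information where the response curve is steepest.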

  10. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.

  11. Relation between experimental and non-experimental study designs. HB vaccines: a case study.

    PubMed

    Jefferson, T; Demicheli, V

    1999-01-01

    To examine the relation between experimental and non-experimental study design in vaccinology. Assessment of each study design's capability of testing four aspects of vaccine performance, namely immunogenicity (the capacity to stimulate the immune system), duration of immunity conferred, incidence and seriousness of side effects, and number of infections prevented by vaccination. Experimental and non-experimental studies on hepatitis B (HB) vaccines in the Cochrane Vaccines Field Database. Experimental and non-experimental vaccine study designs are frequently complementary but some aspects of vaccine quality can only be assessed by one of the types of study. More work needs to be done on the relation between study quality and its significance in terms of effect size.

  12. Effect and interaction study of acetamiprid photodegradation using experimental design.

    PubMed

    Tassalit, Djilali; Chekir, Nadia; Benhabiles, Ouassila; Mouzaoui, Oussama; Mahidine, Sarah; Merzouk, Nachida Kasbadji; Bentahar, Fatiha; Khalil, Abbas

    2016-10-01

    An experimental design methodology was applied, using the MODDE 6.0 software, to study acetamiprid photodegradation as a function of the operating parameters: the initial concentration of acetamiprid, the concentration and type of catalyst used, and the initial pH of the medium. The results showed the importance of the pollutant concentration's effect on the acetamiprid degradation rate. The amount and type of catalyst used also have a considerable influence on the elimination kinetics of this pollutant. The degradation of acetamiprid, an environmental pesticide pollutant, via UV irradiation in the presence of titanium dioxide was assessed and optimized using response surface methodology with a D-optimal design. The acetamiprid degradation ratio was found to be sensitive to the different factors studied. The maximum discoloration under the optimum operating conditions was determined to be 99% after 300 min of UV irradiation.

  13. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived using the direct differentiation method and are also solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle, subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  14. Towards robust optimal design of storm water systems

    NASA Astrophysics Data System (ADS)

    Marquez Calvo, Oscar; Solomatine, Dimitri

    2015-04-01

    This study focuses on the design of a storm water or combined sewer system. Such a system should be able to handle most storms properly, minimizing the damage caused by flooding when the system lacks the capacity to cope with rain water at peak times. This is a multi-objective optimization problem: we have to take into account the minimization of construction costs, the minimization of damage costs due to flooding, and possibly other criteria. One of the most important factors influencing the design of storm water systems is the expected amount of water to deal with. It is common for this infrastructure to be developed with the capacity to cope with events that occur once in, say, 10 or 20 years - so-called design rainfall events. However, rainfall is a random variable, and such uncertainty is typically not taken explicitly into account in optimization. Design rainfall data is based on historical rainfall records, but this data often rests on unreliable measurements or insufficient historical information; moreover, rainfall patterns are changing regardless of the historical record. There are also other sources of uncertainty influencing design, for example leakage from pipes and the accumulation of sediments in pipes. In the context of storm water or combined sewer system design or rehabilitation, a robust optimization technique should be able to find the best design (or rehabilitation plan) within the available budget while taking into account uncertainty in the variables used to design the system. In this work we consider various approaches to robust optimization proposed by various authors (Gabrel, Murat, Thiele 2013; Beyer, Sendhoff 2007) and test a novel method, ROPAR (Solomatine 2012), to analyze robustness. References: Beyer, H.G., & Sendhoff, B. (2007). Robust optimization - A comprehensive survey. Comput. Methods Appl. Mech. Engrg., 3190-3218. Gabrel, V.; Murat, C., Thiele, A. (2014
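
    The idea of designing for uncertain rainfall rather than a single design storm can be sketched as scenario sampling; the cost model and rainfall distribution below are entirely illustrative assumptions, not the ROPAR method itself.

```python
import random

def robust_capacity(candidates, n_scenarios=2000, seed=42):
    """Toy robust design: choose a pipe capacity minimizing construction
    cost plus *expected* flooding damage over sampled rainfall peaks,
    rather than sizing for a single design storm. All costs and the
    rainfall distribution are illustrative assumptions."""
    rng = random.Random(seed)
    # Random peak rainfall volumes (arbitrary units); sampling captures
    # the tail that a single "once in 10 years" design value would miss.
    rain = [rng.lognormvariate(2.0, 0.4) for _ in range(n_scenarios)]
    best = None
    for cap in candidates:
        construction = 1.0 * cap                       # cost grows with size
        damage = sum(50.0 * max(0.0, r - cap) for r in rain) / n_scenarios
        total = construction + damage
        if best is None or total < best[1]:
            best = (cap, total)
    return best  # (capacity, expected total cost)
```

    A fuller treatment would also sample leakage and sedimentation scenarios and trace the whole Pareto front between construction and damage costs instead of summing them into one objective.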

  15. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, the design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional expression kinetics of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability, and its potential as a systematic optimal bioprocess design tool, were demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhancements in pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.

  16. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eder, D C; Whitman, P K; Koniges, A E

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDSs. However, DDSs cannot stop large, faster moving fragments. We have experimentally demonstrated one shrapnel mitigation technique, showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDSs. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  17. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI

    PubMed Central

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert

    2016-01-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, no approach is available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies, optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method, while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence to the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and efficiently estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs, with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at the group level. Supporting simulation analyses provided evidence of the robustness of the Bayesian optimization approach in scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients, and can be used with multiple imaging modalities in humans and animals. PMID:26804778

  18. Geometry Modeling and Grid Generation for Design and Optimization

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1998-01-01

    Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.

  19. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension, and in certain cases non-convex. For the objective of minimal reactive power, an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and improved performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463
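
    A toy finite-dimensional analogue of this constrained minimization makes the structure of the result visible. The sketch below minimizes ohmic power over a handful of loop currents subject to a required field at the receiver; the resistances, field gains and required field are invented for illustration. The Lagrange conditions give currents proportional to g_i / R_i, which the code compares against a naive equal-current design meeting the same constraint.

```python
# Minimize sum(R_i * I_i^2) subject to sum(g_i * I_i) = B_req, where g_i is
# each loop's field contribution per unit current. All numbers are illustrative.
R = [1.0, 1.5, 2.0]   # loop resistances (ohm)
g = [0.8, 1.0, 1.2]   # field per unit current at the receiver (arbitrary units)
B_req = 2.0           # required field

# Stationarity: 2*R_i*I_i = lam*g_i  =>  I_i = lam*g_i/(2*R_i); lam from the constraint.
lam = 2 * B_req / sum(gi * gi / Ri for gi, Ri in zip(g, R))
I_opt = [lam * gi / (2 * Ri) for gi, Ri in zip(g, R)]
P_opt = sum(Ri * Ii ** 2 for Ri, Ii in zip(R, I_opt))

# Naive comparison: the same current in every loop, chosen to meet the constraint.
I_eq = B_req / sum(g)
P_eq = sum(Ri * I_eq ** 2 for Ri in R)
```

The optimal distribution meets the field constraint exactly while consuming less power than the equal-current design, the same qualitative effect the abstract reports at coil scale.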

  20. Optimal design and dynamic impact tests of removable bollards

    NASA Astrophysics Data System (ADS)

    Chen, Suwen; Liu, Tianyi; Li, Guoqiang; Liu, Qing; Sun, Jianyun

    2017-10-01

    Anti-ram bollard systems, which are installed around buildings and infrastructure, can prevent unauthorized vehicles from entering, maintain distance from vehicle-borne improvised explosive devices (VBIED) and reduce the corresponding damage. Compared with a fixed bollard system, a removable bollard system provides more flexibility as it can be removed when needed. This paper first proposes a new type of K4-rated removable anti-ram bollard system. To simulate the collision of a vehicle hitting the bollard system, a finite element model was then built and verified through comparison of numerical simulation results and existing experimental results. Based on the orthogonal design method, the factors influencing the safety and economy of this proposed system were examined and sorted according to their importance. An optimal design scheme was then produced. Finally, to validate the effectiveness of the proposed design scheme, four dynamic impact tests, including two front impact tests and two side impact tests, were conducted according to BSI Specifications. The residual rotation angles of the specimen are smaller than 30° and satisfy the requirements of the BSI Specification.

  1. Optimization process in helicopter design

    NASA Technical Reports Server (NTRS)

    Logan, A. H.; Banerjee, D.

    1984-01-01

    In optimizing a helicopter configuration, Hughes Helicopters uses a program called Computer Aided Sizing of Helicopters (CASH), written and updated over the past ten years, and used as an important part of the preliminary design process of the AH-64. First, measures of effectiveness must be supplied to define the mission characteristics of the helicopter to be designed. Then CASH allows the designer to rapidly and automatically develop the basic size of the helicopter (or other rotorcraft) for the given mission. This enables the designer and management to assess the various tradeoffs and to quickly determine the optimum configuration.

  2. Intelligent design optimization of a shape-memory-alloy-actuated reconfigurable wing

    NASA Astrophysics Data System (ADS)

    Lagoudas, Dimitris C.; Strelec, Justin K.; Yen, John; Khan, Mohammad A.

    2000-06-01

    The unique thermal and mechanical properties offered by shape memory alloys (SMAs) present exciting possibilities in the field of aerospace engineering. When properly trained, SMA wires act as linear actuators by contracting when heated and returning to their original shape when cooled. It has been shown experimentally that the overall shape of an airfoil can be altered by activating several attached SMA wire actuators. This shape-change can effectively increase the efficiency of a wing in flight at several different flow regimes. To determine the necessary placement of these wire actuators within the wing, an optimization method that incorporates a fully-coupled structural, thermal, and aerodynamic analysis has been utilized. Due to the complexity of the fully-coupled analysis, intelligent optimization methods such as genetic algorithms have been used to efficiently converge to an optimal solution. The genetic algorithm used in this case is a hybrid version with global search and optimization capabilities augmented by the simplex method as a local search technique. For the reconfigurable wing, each chromosome represents a realizable airfoil configuration and its genes are the SMA actuators, described by their location and maximum transformation strain. The genetic algorithm has been used to optimize this design problem to maximize the lift-to-drag ratio for a reconfigured airfoil shape.
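
    The genetic-algorithm workflow described above can be sketched compactly. The fitness function here is an invented stand-in for the expensive coupled structural-thermal-aerodynamic evaluation, the "chromosome" is a vector of three actuator positions with a hypothetical best placement, and the simplex local-search refinement of the hybrid scheme is omitted for brevity.

```python
import random

random.seed(1)

TARGET = [0.2, 0.5, 0.8]   # hypothetical best actuator placement (normalized chord)

def fitness(genes):
    # Stand-in for the lift-to-drag evaluation: best when genes match TARGET.
    err = sum((gn - t) ** 2 for gn, t in zip(genes, TARGET))
    return 1.0 / (1.0 + err)

def mutate(genes, sigma=0.05):
    return [min(1.0, max(0.0, gn + random.gauss(0.0, sigma))) for gn in genes]

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.random() for _ in range(3)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]   # elitism: carry the best designs forward unchanged
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(20)]
    pop = elite + children

best = max(pop, key=fitness)
```

In the real problem each fitness call is a coupled simulation, which is exactly why the abstract resorts to a GA rather than gradient methods.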

  3. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
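
    The sensitivity-equation idea can be seen on a scalar model problem (not the paper's PDE setting): differentiate the state equation with respect to the design parameter, integrate state and sensitivity with the same scheme, and assemble the objective gradient. When both equations are discretized consistently, the gradient matches a finite-difference check of the discrete objective.

```python
# Model problem: state u' = -a*u, u(0) = 1; objective J(a) = u(T)^2.
# Differentiating w.r.t. a gives the sensitivity equation s' = -a*s - u,
# s(0) = 0, and dJ/da = 2*u(T)*s(T).
def gradient_via_sensitivity(a, T=1.0, n=1000):
    dt = T / n
    u, s = 1.0, 0.0
    for _ in range(n):
        u_new = u - dt * a * u        # forward Euler on the state
        s = s - dt * (a * s + u)      # same scheme on the sensitivity
        u = u_new
    return 2 * u * s

def objective(a, T=1.0, n=1000):
    dt = T / n
    u = 1.0
    for _ in range(n):
        u -= dt * a * u
    return u * u

a = 0.8
grad = gradient_via_sensitivity(a)
h = 1e-5
fd = (objective(a + h) - objective(a - h)) / (2 * h)   # finite-difference check
```

Because the discrete sensitivity update is the exact derivative of the discrete state update, the two gradients agree far more tightly than the discretization error itself, illustrating the "consistent derivatives" point in the abstract.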

  4. Designing optimal universal pulses using second-order, large-scale, non-linear optimization

    NASA Astrophysics Data System (ADS)

    Anand, Christopher Kumar; Bain, Alex D.; Curtis, Andrew Thomas; Nie, Zhenghua

    2012-06-01

    Recently, RF pulse design using first-order and quasi-second-order methods has been actively investigated. We present a full second-order design method capable of incorporating relaxation and inhomogeneity in both B0 and B1. Our model is formulated as a generic optimization problem, making it easy to incorporate diverse pulse sequence features. To tame the computational cost, we present a method of calculating second derivatives in at most a constant multiple of the first-derivative calculation time; this is further accelerated by using symbolic solutions of the Bloch equations. We illustrate the relative merits and performance of quasi-Newton and full second-order optimization with a series of examples, showing that even a pulse already optimized using other methods can be visibly improved. To be useful in CPMG experiments, a universal refocusing pulse should be independent of the delay time and insensitive to the relaxation time and RF inhomogeneity. We design such a pulse and show that, using it, we can obtain reliable R2 measurements for offsets within ±γB1. Finally, we compare our optimal refocusing pulse with other published refocusing pulses in CPMG experiments.

  5. Optimising reversed-phase liquid chromatographic separation of an acidic mixture on a monolithic stationary phase with the aid of response surface methodology and experimental design.

    PubMed

    Wang, Y; Harrison, M; Clark, B J

    2006-02-10

    An optimization strategy for the separation of an acidic mixture on a monolithic stationary phase is presented, with the aid of experimental design and response surface methodology (RSM). An orthogonal array design (OAD) OA16 (2^15) was used to choose the significant parameters for the optimization. The significant factors were optimized using a central composite design (CCD), and quadratic models relating the dependent and independent parameters were built. The mathematical models were tested on a number of simulated data sets and had coefficients of determination R2 > 0.97 (n = 16). On applying the optimization strategy, the factor effects were visualized as three-dimensional (3D) response surfaces and contour plots. The optimal condition was achieved in less than 40 min by using the monolithic packing with a mobile phase of methanol/20 mM phosphate buffer pH 2.7 (25.5/74.5, v/v). The method showed good agreement between the experimental data and predicted values throughout the studied parameter space and was suitable for optimization studies on the monolithic stationary phase for acidic compounds.
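
    The core RSM step is a least-squares fit of a quadratic model followed by locating its stationary point. A minimal single-factor sketch, with noise-free synthetic data standing in for the chromatographic responses, shows the mechanics:

```python
# Fit y = b0 + b1*x + b2*x^2 to synthetic responses at coded CCD-style levels,
# then locate the stationary point of the fitted surface.
xs = [-2, -1, 0, 1, 2]                            # coded factor levels
ys = [0.3 + 1.2 * x - 0.8 * x * x for x in xs]    # synthetic (noise-free) data

# Normal equations X^T X b = X^T y for the quadratic basis [1, x, x^2].
X = [[1.0, x, x * x] for x in xs]
A = [[sum(X[r][i] * X[r][j] for r in range(5)) for j in range(3)] for i in range(3)]
rhs = [sum(X[r][i] * ys[r] for r in range(5)) for i in range(3)]

# Gaussian elimination with partial pivoting on the 3x3 system.
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, 3):
        f = A[r][col] / A[col][col]
        A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
        rhs[r] -= f * rhs[col]
b = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    b[i] = (rhs[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]

x_opt = -b[1] / (2 * b[2])   # stationary point of the fitted quadratic
```

With several factors the basis simply gains cross-terms, and the 3D response surfaces and contour plots in the abstract are views of the same fitted polynomial.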

  6. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly smaller number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
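
    The orthogonal-array-plus-S/N bookkeeping is small enough to show directly. The sketch below uses an L4 array for three two-level factors, a larger-is-better signal-to-noise ratio per run, and factor main effects on S/N; the responses are invented so that factor A dominates.

```python
import math

# L4 orthogonal array: levels of factors A, B, C for each of four runs.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
responses = [10.0, 11.0, 15.0, 16.0]   # invented response per run

def sn_larger_is_better(ys):
    # Taguchi larger-is-better S/N ratio: -10*log10(mean(1/y^2)).
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

sn = [sn_larger_is_better([y]) for y in responses]

def main_effect(factor):
    # Mean S/N at level 2 minus mean S/N at level 1 for the given factor.
    hi = [s for row, s in zip(L4, sn) if row[factor] == 2]
    lo = [s for row, s in zip(L4, sn) if row[factor] == 1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(f) for f in range(3)]
```

Because the array is orthogonal, each factor's effect is estimated from a balanced slice of the same four runs, which is how a full 2^3 study is compressed into half the experiments.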

  7. Considering RNAi experimental design in parasitic helminths.

    PubMed

    Dalzell, Johnathan J; Warnock, Neil D; McVeigh, Paul; Marks, Nikki J; Mousley, Angela; Atkinson, Louise; Maule, Aaron G

    2012-04-01

    Almost a decade has passed since the first report of RNA interference (RNAi) in a parasitic helminth. Whilst much progress has been made with RNAi informing gene function studies in disparate nematode and flatworm parasites, substantial and seemingly prohibitive difficulties have been encountered in some species, hindering progress. An appraisal of current practices, trends and ideals of RNAi experimental design in parasitic helminths is both timely and necessary for a number of reasons: firstly, the increasing availability of parasitic helminth genome/transcriptome resources means there is a growing need for gene function tools such as RNAi; secondly, fundamental differences and unique challenges exist for parasite species which do not apply to model organisms; thirdly, the inherent variation in experimental design, and reported difficulties with reproducibility undermine confidence. Ideally, RNAi studies of gene function should adopt standardised experimental design to aid reproducibility, interpretation and comparative analyses. Although the huge variations in parasite biology and experimental endpoints make RNAi experimental design standardization difficult or impractical, we must strive to validate RNAi experimentation in helminth parasites. To aid this process we identify multiple approaches to RNAi experimental validation and highlight those which we deem to be critical for gene function studies in helminth parasites.

  8. Using experimental design to define boundary manikins.

    PubMed

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition behind that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. Literature sources differ in how many manikins should be defined and in what way. A field similar to the boundary case method is experimental design, in which the relationships between the factors affecting a process are studied systematically. This paper examines the possibility of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adapted to be used together with a confidence region and its axes. The results show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups of manikins depends heavily on the number of key measurements but also on the type of chosen experimental design.
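
    One common variant of the boundary-case computation places manikins on the axes of a confidence ellipse. The sketch below does this for two key measurements; the means, standard deviations and correlation are illustrative values, not from a real anthropometric survey.

```python
import math

# Illustrative bivariate statistics for (stature in mm, body mass in kg).
mean = [1755.0, 75.0]
sd_stature, sd_mass, rho = 48.0, 12.0, 0.5   # assumed correlation
cov = [[sd_stature ** 2, rho * sd_stature * sd_mass],
       [rho * sd_stature * sd_mass, sd_mass ** 2]]
k = 2.0   # axis half-length in Mahalanobis units (sets the accommodation level)

# Closed-form eigen-decomposition of the symmetric 2x2 covariance matrix.
a, b, c = cov[0][0], cov[0][1], cov[1][1]
disc = math.sqrt((a - c) ** 2 + 4 * b * b)
eigvals = [(a + c + disc) / 2, (a + c - disc) / 2]
eigvecs = []
for lam in eigvals:
    vx, vy = b, lam - a          # eigenvector of [[a, b], [b, c]] (valid for b != 0)
    n = math.hypot(vx, vy)
    eigvecs.append((vx / n, vy / n))

# Boundary manikins: +/- k standard units along each ellipse axis.
manikins = []
for lam, (vx, vy) in zip(eigvals, eigvecs):
    r = k * math.sqrt(lam)
    manikins.append((mean[0] + r * vx, mean[1] + r * vy))
    manikins.append((mean[0] - r * vx, mean[1] - r * vy))
```

Adding more key measurements grows the ellipse into a hyper-ellipsoid and the manikin group with it, which is the size explosion the abstract attributes to the number of key measurements.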

  9. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  10. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  11. Optimization design of LED heat dissipation structure based on strip fins

    NASA Astrophysics Data System (ADS)

    Xue, Lingyun; Wan, Wenbin; Chen, Qingguang; Rao, Huanle; Xu, Ping

    2018-03-01

    To solve the heat dissipation problem of LED, a radiator structure based on strip fins is designed and a method to optimize the structure parameters of the strip fins is proposed in this paper. The combination of RBF neural networks and the particle swarm optimization (PSO) algorithm is used for modeling and optimization respectively. During the experiment, 150 datasets of LED junction temperature for different values of the structure parameters (number, length, width and height of the strip fins) were obtained with ANSYS software. An RBF neural network was then applied to build the non-linear regression model, and the structure parameters were optimized with this model using the particle swarm optimization algorithm. The experimental results show that the lowest LED junction temperature reaches 43.88 °C when the number of hidden layer nodes in the RBF neural network is 10, the two learning factors in the particle swarm optimization algorithm are 0.5 and 0.5, the inertia factor is 1 and the maximum number of iterations is 100; the number of fins is then 64 in an 8*8 distribution, and the length, width and height of the fins are 4.3 mm, 4.48 mm and 55.3 mm respectively. To validate the modeling and optimization results, the LED junction temperature at the optimized structure parameters was simulated; the result is 43.592 °C, which approximately equals the optimized result. Compared with an ordinary plate-fin radiator, whose temperature is 56.38 °C, the optimized structure greatly enhances heat dissipation performance.
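
    The PSO step can be sketched independently of the surrogate it optimizes. Below, a cheap quadratic bowl stands in for the RBF junction-temperature model, over two normalized fin parameters; the swarm constants are common textbook values, not the paper's settings.

```python
import random

random.seed(2)

def surrogate(p):
    # Hypothetical stand-in for the trained RBF surrogate (minimum at (0.3, 0.6)).
    return (p[0] - 0.3) ** 2 + (p[1] - 0.6) ** 2

n, iters = 30, 150
pos = [[random.random(), random.random()] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]                 # each particle's best position
gbest = min(pos, key=surrogate)[:]          # swarm-wide best position

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]                                   # inertia
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])  # cognitive
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))    # social
            pos[i][d] += vel[i][d]
        if surrogate(pos[i]) < surrogate(pbest[i]):
            pbest[i] = pos[i][:]
        if surrogate(pos[i]) < surrogate(gbest):
            gbest = pos[i][:]
```

The point of the surrogate-plus-PSO pairing in the abstract is exactly that each `surrogate` call is cheap, so the swarm can afford thousands of evaluations that an ANSYS run could not.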

  12. A numerical and experimental study on optimal design of multi-DOF viscoelastic supports for passive vibration control in rotating machinery

    NASA Astrophysics Data System (ADS)

    Ribeiro, Eduardo Afonso; Lopes, Eduardo Márcio de Oliveira; Bavastri, Carlos Alberto

    2017-12-01

    Viscoelastic materials have played an important role in passive vibration control. Nevertheless, the use of such materials in supports of rotating machines, aiming at controlling vibration, is more recent, mainly when these supports present additional complexities like multiple degrees of freedom and require accurate models to predict the dynamic behavior of viscoelastic materials working in a broad band of frequencies and temperatures. Previously, the authors proposed a methodology for an optimal design of viscoelastic supports (VES) for vibration suppression in rotordynamics, which improves the dynamic prediction accuracy and the calculation speed, and models VES as complex structures. However, a comprehensive numerical study of the dynamics of rotor-VES systems, regarding the types and combinations of translational and rotational degrees of freedom (DOFs), accompanied by the corresponding experimental validation, is still lacking. This paper presents such a study considering different types and combinations of DOFs, the number of additional masses/inertias, and the kind and association of the applied viscoelastic materials (VEMs). The results - regarding unbalance frequency response, transmissibility and displacement under static loads - lead to: 1) treating VES as complex structures, which improves the efficacy of passive vibration control; and 2) identifying the best configuration of DOFs and the best choice and association of VEMs for practical passive vibration control and load resistance. The outcomes of the conducted experimental validation attest to the accuracy of the proposed methodology.

  13. Experimental design for the formulation and optimization of novel cross-linked oilispheres developed for in vitro site-specific release of Mentha piperita oil.

    PubMed

    Sibanda, Wilbert; Pillay, Viness; Danckwerts, Michael P; Viljoen, Alvaro M; van Vuuren, Sandy; Khan, Riaz A

    2004-03-12

    A Plackett-Burman design was employed to develop and optimize a novel crosslinked calcium-aluminum-alginate-pectinate oilisphere complex as a potential system for the in vitro site-specific release of Mentha piperita, an essential oil used for the treatment of irritable bowel syndrome. The physicochemical and textural properties (dependent variables) of this complex were found to be highly sensitive to changes in the concentration of the polymers (0%-1.5% wt/vol), crosslinkers (0%-4% wt/vol), and crosslinking reaction times (0.5-6 hours) (independent variables). Particle size analysis indicated both unimodal and bimodal populations with the highest frequency of 2 mm oilispheres. Oil encapsulation ranged from 6 to 35 mg/100 mg oilispheres. Gravimetric changes of the crosslinked matrix indicated significant ion sequestration and loss in an exponential manner, while matrix erosion followed Higuchi's cube root law. Among the various measured responses, the total fracture energy was the most suitable optimization objective (R2 = 0.88, Durbin-Watson Index = 1.21%, Coefficient of Variation (CV) = 33.21%). The Lagrangian technique produced no significant differences (P > .05) between the experimental and predicted total fracture energy values (0.0150 vs 0.0107 J). An artificial neural network, used as an alternative tool for predicting the total fracture energy, was highly accurate (final mean square error of the optimal network epoch approximately 0.02). Fused-coated optimized oilispheres produced a 4-hour lag phase followed by zero-order kinetics (n > 0.99), whereby analysis of release data indicated that diffusion (Fickian constant k1 = 0.74 vs relaxation constant k2 = 0.02) was the predominant release mechanism.
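
    Screening designs of the Plackett-Burman family are two-level orthogonal arrays built from Hadamard matrices. As a sketch (for run sizes that are powers of two, where the Sylvester construction applies), the following builds an 8-run design for up to 7 factors; the 12-run design actually typical of Plackett-Burman screens uses a different, cyclic construction not shown here.

```python
def hadamard(n):
    """Sylvester Hadamard matrix of order n (n must be a power of two)."""
    H = [[1]]
    while len(H) < n:
        # H_{2m} = [[H, H], [H, -H]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H8 = hadamard(8)
# Drop the all-ones column; the remaining 7 columns are the factor settings
# (+1/-1) for an 8-run screening design.
design = [row[1:] for row in H8]
```

Orthogonality of the columns is what lets main effects of up to seven factors be estimated from only eight runs, the same economy the abstract exploits with its three factors.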

  14. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    PubMed

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate
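
    The balanced-versus-unbalanced power comparison is easy to reproduce by simulation. The sketch below uses a two-sample z-test with known unit variance (a simplification of the severity-estimate distributions in the study) at the same total sample size of 40; the effect size is arbitrary.

```python
import random

random.seed(3)

def power(n1, n2, delta, sims=5000):
    """Monte Carlo power of a two-sided two-sample z-test (sigma = 1, alpha = 0.05)."""
    rejections = 0
    se = (1 / n1 + 1 / n2) ** 0.5
    for _ in range(sims):
        m1 = sum(random.gauss(0.0, 1.0) for _ in range(n1)) / n1
        m2 = sum(random.gauss(delta, 1.0) for _ in range(n2)) / n2
        if abs(m2 - m1) / se > 1.96:
            rejections += 1
    return rejections / sims

# Same total of 40 observations, split evenly versus unevenly.
p_balanced = power(20, 20, 0.6)
p_unbalanced = power(10, 30, 0.6)
```

The standard error sqrt(1/n1 + 1/n2) is minimized by the even split, so the balanced design wins at any fixed total sample size, matching the result reported in the abstract.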

  15. Optimization of Pressurized Liquid Extraction of Three Major Acetophenones from Cynanchum bungei Using a Box-Behnken Design

    PubMed Central

    Li, Wei; Zhao, Li-Chun; Sun, Yin-Shi; Lei, Feng-Jie; Wang, Zi; Gui, Xiong-Bin; Wang, Hui

    2012-01-01

    In this work, pressurized liquid extraction (PLE) of three acetophenones (4-hydroxyacetophenone, baishouwubenzophenone, and 2,4-dihydroxyacetophenone) from Cynanchum bungei (ACB) was investigated. The optimal conditions for extraction of ACB were obtained using a Box-Behnken design, consisting of 17 experimental points, as follows: ethanol (100%) as the extraction solvent at a temperature of 120 °C and an extraction pressure of 1500 psi, using one extraction cycle with a static extraction time of 17 min. The extracted samples were analyzed by high-performance liquid chromatography using a UV detector. Under these optimal conditions, the experimental values agreed with the values predicted by analysis of variance. The ACB extraction yield with optimal PLE was higher than that obtained by Soxhlet extraction and heat-reflux extraction methods. The results suggest that the PLE method provides a good alternative for acetophenone extraction. PMID:23203079
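
    A three-factor Box-Behnken design has a simple generating rule: for each pair of factors, run a 2x2 factorial at the +/-1 levels with the third factor held at its centre, then add centre replicates. With five centre points this yields the 17 runs mentioned in the abstract. A sketch in coded units:

```python
from itertools import combinations, product

def box_behnken(n_factors=3, n_center=5):
    """Box-Behnken design in coded (-1, 0, +1) units."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b        # pair at +/-1, remaining factor at centre
            runs.append(run)
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return runs

design = box_behnken()   # 12 edge midpoints + 5 centre replicates = 17 runs
```

Mapping the coded levels back to solvent strength, temperature and pressure gives the actual run sheet; the centre replicates supply the pure-error estimate used in the analysis of variance.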

  16. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  17. A general-purpose optimization program for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.

  18. Mechanical design optimization of a single-axis MOEMS accelerometer based on a grating interferometry cavity for ultrahigh sensitivity

    NASA Astrophysics Data System (ADS)

    Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan; Yang, Guoguang

    2016-08-01

    The ultrahigh static displacement-acceleration sensitivity of a mechanical sensing chip is essential primarily for an ultrasensitive accelerometer. In this paper, an optimal design applied to a single-axis MOEMS accelerometer consisting of a grating interferometry cavity and a micromachined sensing chip is presented. The micromachined sensing chip is composed of a proof mass along with its mechanical cantilever suspension and substrate. The dimensional parameters of the sensing chip, including the length, width, thickness and position of the cantilevers, are evaluated and optimized both analytically and by finite-element-method (FEM) simulation to yield an unprecedented acceleration-displacement sensitivity. Compared with one of the most sensitive single-axis MOEMS accelerometers reported in the literature, the optimal mechanical design can yield a profound sensitivity improvement with an equal footprint area, specifically, a 200% improvement in displacement-acceleration sensitivity with moderate resonant frequency and dynamic range. The modified design was microfabricated, packaged with the grating interferometry cavity and tested. The experimental results demonstrate that the MOEMS accelerometer with the modified design can achieve an acceleration-displacement sensitivity of about 150 μm/g and an acceleration sensitivity of greater than 1500 V/g, which validates the effectiveness of the optimal design.
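
    The tradeoff behind this optimization is the standard spring-mass relation: static sensitivity x/a = m/k = 1/omega0^2, so a quoted displacement sensitivity pins down the resonant frequency. A back-of-envelope check for the ~150 um/g figure:

```python
import math

# Static sensitivity of a spring-mass sensing chip: x/a = m/k = 1/omega0^2.
sensitivity = 150e-6 / 9.81          # 150 um per g, converted to m per (m/s^2)
omega0 = 1.0 / math.sqrt(sensitivity)
f0 = omega0 / (2 * math.pi)          # implied resonant frequency in Hz
```

This gives a resonant frequency of roughly 41 Hz, illustrating why the abstract couples its sensitivity claim to a statement about "moderate resonant frequency and dynamic range": raising one necessarily lowers the other.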

  19. Optimization of Simplex Atomizer Inlet Port Configuration through Computational Fluid Dynamics and Experimental Study for Aero-Gas Turbine Applications

    NASA Astrophysics Data System (ADS)

    Marudhappan, Raja; Chandrasekhar, Udayagiri; Hemachandra Reddy, Koni

    2017-10-01

    The design of a plain-orifice simplex atomizer for use in the annular combustion system of a 1100 kW turboshaft engine is optimized. The discrete flow field of jet fuel inside the swirl chamber of the atomizer and up to 1.0 mm downstream of the atomizer exit is simulated using commercial Computational Fluid Dynamics (CFD) software. The Euler-Euler multiphase model is used to solve two sets of momentum equations for the liquid and gaseous phases, and the volume fraction of each phase is tracked throughout the computational domain. The atomizer design is optimized after performing several 2D axisymmetric analyses with swirl, and the optimized inlet port design parameters are used for 3D simulation. The Volume Of Fluid (VOF) multiphase model is used in the simulation. The orifice exit diameter is 0.6 mm. The atomizer is fabricated with the optimized geometric parameters. The performance of the atomizer is tested in the laboratory. The experimental observations are compared with the results obtained from 2D and 3D CFD simulations. The simulated velocity components, pressure field, streamlines and air core dynamics along the atomizer axis are compared to previous research works and found satisfactory. The work has led to a novel approach to the design of pressure swirl atomizers.

  20. Adaptive design optimization: a mutual information-based approach to model discrimination in cognitive science.

    PubMed

    Cavagnaro, Daniel R; Myung, Jay I; Pitt, Mark A; Kujala, Janne V

    2010-04-01

    Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
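
    The mutual-information utility is easy to illustrate for a binary-outcome retention experiment. The sketch below compares two candidate retention models (power versus exponential, with invented parameters) under a uniform model prior and picks the lag maximizing I(M; Y); this omits the parameter uncertainty and posterior simulation machinery the letter actually develops.

```python
import math

def h(p):
    """Binary entropy in nats."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

# Two retention models with fixed (invented) parameters: probability of recall
# after a lag of t time units.
def p_power(t, a=0.9, b=0.4):
    return a * (t + 1) ** (-b)

def p_exp(t, a=0.9, b=0.4):
    return a * math.exp(-b * t)

def mutual_information(t):
    # I(M; Y) = H(Y) - E_M[H(Y | M)] with a uniform prior over the two models.
    p1, p2 = p_power(t), p_exp(t)
    return h((p1 + p2) / 2) - (h(p1) + h(p2)) / 2

candidates = [0, 1, 2, 5, 10, 20]
utilities = {t: mutual_information(t) for t in candidates}
best_t = max(candidates, key=mutual_information)
```

At t = 0 both models predict the same recall rate, so the design carries no information about which model is true; the utility peaks at intermediate-to-long lags where the predictions diverge most, which is the intuition the letter's three interpretations formalize.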

  1. Optimal control model predictions of system performance and attention allocation and their experimental validation in a display design study

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Govindaraj, T.

    1980-01-01

    The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.

  2. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Kim, Yongnam; Hall, Courtney E.; Su, Dan

    2017-01-01

    Randomized controlled trials (RCTs) and quasi-experimental designs like regression discontinuity (RD) designs, instrumental variable (IV) designs, and matching and propensity score (PS) designs are frequently used for inferring causal effects. It is well known that the features of these designs facilitate the identification of a causal estimand…

  3. Experimental Test Rig for Optimal Control of Flexible Space Robotic Arms

    DTIC Science & Technology

    2016-12-01

    was used to refine the test bed design and the experimental workflow. Three concepts incorporated various strategies to design a robust flexible link... designed to perform the experimentation. The first and second concepts use traditional elastic springs in varying configurations while a third uses a

  4. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  5. Risk assessment and experimental design in the development of a prolonged release drug delivery system with paliperidone.

    PubMed

    Iurian, Sonia; Turdean, Luana; Tomuta, Ioan

    2017-01-01

    This study focuses on the development of a drug product based on a risk assessment-based approach, within the quality by design paradigm. A prolonged release system was proposed for paliperidone (Pal) delivery, containing Kollidon® SR as an insoluble matrix agent and hydroxypropyl cellulose, hydroxypropyl methylcellulose (HPMC), or sodium carboxymethyl cellulose as a hydrophilic polymer. The experimental part was preceded by the identification of potential sources of variability through Ishikawa diagrams, and failure mode and effects analysis was used to deliver the critical process parameters that were further optimized by design of experiments. A D-optimal design was used to investigate the effects of the Kollidon SR ratio (X1), the type of hydrophilic polymer (X2), and the percentage of hydrophilic polymer (X3) on the percentages of dissolved Pal over 24 h (Y1-Y9). Effects expressed as regression coefficients and response surfaces were generated, along with a design space for the preparation of a target formulation in an experimental area with low error risk. The optimal formulation contained 27.62% Kollidon SR and 8.73% HPMC and achieved the prolonged release of Pal, with low burst effect, at ratios that were very close to the ones predicted by the model. Thus, the parameters with the highest impact on the final product quality were studied, and safe ranges were established for their variations. Finally, a risk mitigation and control strategy was proposed to assure the quality of the system, by constant process monitoring.
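    D-optimal designs like the one above are typically found by exchange algorithms that maximize the determinant of the information matrix |X'X| over a candidate set. The sketch below is a generic illustration, not the study's actual design: it uses a hypothetical two-factor grid and a quadratic response-surface model, and runs a simple Fedorov-style point exchange.

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate points for two coded quantitative factors on a coarse grid
# (hypothetical stand-ins for, e.g., matrix-agent ratio and polymer percentage).
grid = np.array([(u, v) for u in np.linspace(-1, 1, 5)
                        for v in np.linspace(-1, 1, 5)])

def model_matrix(pts):
    """Quadratic response-surface model: 1, u, v, u*v, u^2, v^2."""
    u, v = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), u, v, u * v, u**2, v**2])

def d_criterion(idx):
    X = model_matrix(grid[idx])
    return np.linalg.det(X.T @ X)  # D-criterion |X'X|

# Fedorov-style exchange: start from a random 10-run design and swap
# design points for candidates while the determinant strictly improves.
n_runs = 10
design = list(rng.choice(len(grid), n_runs, replace=False))
improved = True
while improved:
    improved = False
    for i in range(n_runs):
        for cand in range(len(grid)):
            trial = design.copy()
            trial[i] = cand
            if d_criterion(trial) > d_criterion(design) + 1e-9:
                design = trial
                improved = True
```

    The exchange typically pushes support points toward the corners and edges of the region, which is the familiar signature of D-optimal designs for polynomial models.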

  6. Optimization, an Important Stage of Engineering Design

    ERIC Educational Resources Information Center

    Kelley, Todd R.

    2010-01-01

    A number of leaders in technology education have indicated that a major difference between the technological design process and the engineering design process is analysis and optimization. The analysis stage of the engineering design process is when mathematical models and scientific principles are employed to help the designer predict design…

  7. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates, for the first time, the combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using a d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, wherein the ANN model exhibited better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged.

  8. Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    In this paper numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green's function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials, and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing what experimental conditions will be needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.

  9. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI.

    PubMed

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R; Anagnostopoulos, Christoforos; Faisal, Aldo A; Montana, Giovanni; Leech, Robert

    2016-04-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence to the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs, with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at the group level. Supporting simulation analyses provided evidence of the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients, and can be used with multiple imaging modalities in humans and animals. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
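    The first study's stochastic-approximation idea can be illustrated in miniature. The sketch below is a generic Kiefer-Wolfowitz iteration on a made-up one-dimensional "response curve" (the target function, gain sequences, and noise level are all my assumptions, not the paper's): it climbs a noisy objective using only finite-difference probes, the same principle of adjusting the stimulus online from noisy measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up noisy response curve with an unknown optimum at x = 0.6;
# in the real framework this would be a measured neural response.
def noisy_response(x):
    return np.exp(-((x - 0.6) ** 2) / 0.05) + rng.normal(scale=0.02)

x = 0.2  # initial stimulus parameter
for n in range(1, 501):
    a_n = 0.5 / n              # decaying step size
    c_n = 0.2 / n ** (1 / 3)   # decaying finite-difference width
    grad = (noisy_response(x + c_n) - noisy_response(x - c_n)) / (2 * c_n)
    x = float(np.clip(x + a_n * grad, 0.0, 1.0))
# x should now sit close to the optimum near 0.6
```

    The shrinking step sizes average out measurement noise while the shrinking probe width reduces finite-difference bias, which is what allows convergence without any model of the response.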

  10. Integrated structure/control law design by multilevel optimization

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.; Schmidt, David K.

    1989-01-01

    A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.

  11. High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models

    NASA Technical Reports Server (NTRS)

    Manning, Valerie Michelle

    1999-01-01

    The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamics and structural analyses, and integration within a collaborative optimization framework. It represents the most extensive application of the method to date.

  12. Design Optimization of Hybrid FRP/RC Bridge

    NASA Astrophysics Data System (ADS)

    Papapetrou, Vasileios S.; Tamijani, Ali Y.; Brown, Jeff; Kim, Daewon

    2018-04-01

    The hybrid bridge consists of a Reinforced Concrete (RC) slab supported by U-shaped Fiber Reinforced Polymer (FRP) girders. Previous studies on similar hybrid bridges constructed in the United States and Europe seem to substantiate these hybrid designs for lightweight, high strength, and durable highway bridge construction. In the current study, computational and optimization analyses were carried out to investigate six composite material systems consisting of E-glass and carbon fibers. Optimization constraints are determined by stress, deflection and manufacturing requirements. Finite Element Analysis (FEA) and optimization software were utilized, and a framework was developed to run the complete analyses in an automated fashion. Prior to that, FEA validation of previous studies on similar U-shaped FRP girders that were constructed in Poland and Texas is presented. A finer optimization analysis is performed for the case of the Texas hybrid bridge. The optimization outcome of the hybrid FRP/RC bridge shows the appropriate composite material selection and cross-section geometry that satisfies all the applicable Limit States (LS) and, at the same time, results in the lightest design. Critical limit states show that shear stress criteria determine the optimum design for bridge spans less than 15.24 m and deflection criteria control the design for longer spans. Increased side wall thickness can reduce maximum observed shear stresses, but leads to a high weight penalty. A taller cross-section and a thicker girder base can efficiently lower the observed deflections and normal stresses. Finally, substantial weight savings can be achieved by the optimization framework if base and side-wall thickness are treated as independent variables.

  13. Validation of scaffold design optimization in bone tissue engineering: finite element modeling versus designed experiments.

    PubMed

    Uth, Nicholas; Mueller, Jens; Smucker, Byran; Yousefi, Azizeh-Mitra

    2017-02-21

    This study reports the development of biological/synthetic scaffolds for bone tissue engineering (TE) via 3D bioplotting. These scaffolds were composed of poly(L-lactic-co-glycolic acid) (PLGA), type I collagen, and nano-hydroxyapatite (nHA) in an attempt to mimic the extracellular matrix of bone. The solvent used for processing the scaffolds was 1,1,1,3,3,3-hexafluoro-2-propanol. The produced scaffolds were characterized by scanning electron microscopy, microcomputed tomography, thermogravimetric analysis, and unconfined compression test. This study also sought to validate the use of finite-element optimization in COMSOL Multiphysics for scaffold design. Scaffold topology was simplified to three factors: nHA content, strand diameter, and strand spacing. These factors affect the ability of the scaffold to bear mechanical loads and how porous the structure can be. Twenty-four scaffolds were constructed according to an I-optimal, split-plot designed experiment (DE) in order to generate experimental models of the factor-response relationships. Within the design region, the DE and COMSOL models agreed in their recommended optimal nHA (30%) and strand diameter (460 μm). However, the two methods disagreed by more than 30% in strand spacing (908 μm for DE; 601 μm for COMSOL). Seven scaffolds were 3D-bioplotted to validate the predictions of DE and COMSOL models (4.5-9.9 MPa measured moduli). The predictions for these scaffolds showed relative agreement for scaffold porosity (mean absolute percentage error of 4% for DE and 13% for COMSOL), but were substantially poorer for scaffold modulus (51% for DE; 21% for COMSOL), partly due to some simplifying assumptions made by the models. Expanding the design region in future experiments (e.g., higher nHA content and strand diameter), developing an efficient solvent evaporation method, and exerting a greater control over layer overlap could allow developing PLGA-nHA-collagen scaffolds to meet the mechanical requirements for

  14. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
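    As a concrete instance of the power and sample-size calculations the paper discusses, the sketch below (my own illustration, not the paper's SAS programs) computes the power of a two-sided two-sample z-test, a normal approximation to the t-test, and then searches for the smallest per-group sample size that reaches a target power. The effect size and standard deviation are made-up numbers.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(n_per_group, delta, sd, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a mean
    difference delta with common standard deviation sd."""
    z = NormalDist()
    se = sd * sqrt(2.0 / n_per_group)          # SE of the mean difference
    z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided critical value
    ncp = delta / se                           # standardized effect
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)

def n_for_power(target, delta, sd, alpha=0.05):
    """Smallest per-group n whose approximate power meets the target."""
    n = 2
    while power_two_sample(n, delta, sd, alpha) < target:
        n += 1
    return n

# e.g. detecting a 5-unit mean difference with sd = 10 at 80% power
n = n_for_power(0.80, delta=5, sd=10)
```

    The normal approximation slightly understates the t-test's required n for small samples, which is one of the pitfalls the paper's interactive programs help researchers avoid.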

  15. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.

  16. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
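    The life-cycle cost logic described above can be sketched numerically. The following is a hedged illustration only (the cost figures, discount rate, degradation rate, and O&M escalation are invented, and this is not the paper's actual model): it levelizes discounted cost over discounted energy, and picks the lifetime at which extending the plant another year no longer lowers the levelized cost, i.e. where the marginal cost of an extra year exceeds the life-cycle average.

```python
def lcoe(capital, annual_om, annual_kwh, n_years, r=0.08,
         degradation=0.01, om_growth=0.10):
    """Levelized (life-cycle) cost of energy over n_years:
    discounted costs divided by discounted energy, with module output
    degrading and O&M cost escalating each year (assumed rates)."""
    cost = capital + sum(annual_om * (1 + om_growth) ** (t - 1) / (1 + r) ** t
                         for t in range(1, n_years + 1))
    energy = sum(annual_kwh * (1 - degradation) ** (t - 1) / (1 + r) ** t
                 for t in range(1, n_years + 1))
    return cost / energy

# Logical plant lifetime: the horizon minimizing levelized energy cost.
best = min(range(5, 41), key=lambda n: lcoe(2000.0, 40.0, 1500.0, n))
```

    Because escalating O&M raises the marginal cost of each added year while degradation shrinks its energy, the levelized cost is U-shaped in lifetime and the minimum marks the logical plant lifetime.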

  17. Integrated design optimization research and development in an industrial environment

    NASA Astrophysics Data System (ADS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-04-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  18. Integrated design optimization research and development in an industrial environment

    NASA Technical Reports Server (NTRS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-01-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  19. Design optimization of GaAs betavoltaic batteries

    NASA Astrophysics Data System (ADS)

    Chen, Haiyanag; Jiang, Lan; Chen, Xuyuan

    2011-06-01

    GaAs junctions are designed and fabricated for betavoltaic batteries. The design is optimized according to the characteristics of GaAs interface states and the diffusion length of carriers in the depletion region of GaAs. Under illumination by a 10 mCi cm-2 63Ni source, the open circuit voltage of the optimized batteries is about 0.3 V. It is found that the GaAs interface states induce depletion layers on P-type GaAs surfaces. The depletion layer along the P+PN+ junction edge isolates the perimeter surface from the bulk junction, which tends to significantly reduce the battery dark current and leads to a high open circuit voltage. The short circuit current density of the optimized junction is about 28 nA cm-2, which indicates a carrier diffusion length of less than 1 µm. The overall results show that multi-layer P+PN+ junctions are the preferred structures for GaAs betavoltaic battery design.

  20. Spray-drying nanocapsules in presence of colloidal silica as drying auxiliary agent: formulation and process variables optimization using experimental designs.

    PubMed

    Tewa-Tagne, Patrice; Degobert, Ghania; Briançon, Stéphanie; Bordes, Claire; Gauvrit, Jean-Yves; Lanteri, Pierre; Fessi, Hatem

    2007-04-01

    A spray-drying process was used for the development of dried polymeric nanocapsules. The purpose of this research was to investigate the effects of formulation and process variables on the resulting powder characteristics in order to optimize them. Experimental designs were used in order to estimate the influence of formulation parameters (nanocapsules and silica concentrations) and process variables (inlet temperature, spray-flow air, feed flow rate and drying air flow rate) on spray-dried nanocapsules when using silica as drying auxiliary agent. The interactions among the formulation parameters and process variables were also studied. Responses analyzed for computing these effects and interactions were outlet temperature, moisture content, operation yield, particle size, and particulate density. Additional qualitative responses (particle morphology, powder behavior) were also considered. Nanocapsules and silica concentrations were the main factors influencing the yield, particulate density and particle size. In addition, they were the only variables involved in significant interactions between two different variables. None of the studied variables had a major effect on the moisture content, while the interaction between nanocapsules and silica in the feed was of primary interest and determinant for both the qualitative and quantitative responses. The particle morphology depended on the feed formulation but was unaffected by the process conditions. This study demonstrated that spray-drying nanocapsules with silica as a drying auxiliary agent yields dried particles of micron size. The optimization of the process and formulation variables resulted in a considerable improvement of product yield while minimizing the moisture content.

  1. DESIGN AND OPTIMIZATION OF A REFRIGERATION SYSTEM

    EPA Science Inventory

    The paper discusses the design and optimization of a refrigeration system, using a mathematical model of a refrigeration system modified to allow its use with the optimization program. The model was developed using only algebraic equations so that it could be used with the optimiz...

  2. Optimal lay-up design of variable stiffness laminated composite plates by a layer-wise optimization technique

    NASA Astrophysics Data System (ADS)

    Houmat, A.

    2018-02-01

    The optimal lay-up design for the maximum fundamental frequency of variable stiffness laminated composite plates is investigated using a layer-wise optimization technique. The design variables are two fibre orientation angles per ply. Thin plate theory is used in conjunction with a p-element to calculate the fundamental frequencies of symmetrically and antisymmetrically laminated composite plates. Comparisons with existing optimal solutions for constant stiffness symmetrically laminated composite plates show excellent agreement. It is observed that the maximum fundamental frequency can be increased considerably using variable stiffness design as compared to constant stiffness design. In addition, optimal lay-ups for the maximum fundamental frequency of variable stiffness symmetrically and antisymmetrically laminated composite plates with different aspect ratios and various combinations of free, simply supported and clamped edge conditions are presented. These should prove a useful benchmark for optimal lay-ups of variable stiffness laminated composite plates.

  3. System design optimization for a Mars-roving vehicle and perturbed-optimal solutions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Pavarini, C.

    1974-01-01

    Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.

  4. Imparting Desired Attributes by Optimization in Structural Design

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2003-01-01

    Commonly available optimization methods typically produce a single optimal design as a constrained minimum of a particular objective function. However, in engineering design practice it is quite often important to explore as much of the design space as possible with respect to many attributes to find out what behaviors are possible and not possible within the initially adopted design concept. The paper shows that the very simple method of the sum of objectives is useful for such exploration. By geometrical argument it is demonstrated that if every weighting coefficient is allowed to change its magnitude and its sign then the method returns a set of designs that are all feasible, diverse in their attributes, and include the Pareto and non-Pareto solutions, at least for convex cases. Numerical examples in the paper include a case of an aircraft wing structural box with thousands of degrees of freedom and constraints, and over 100 design variables, whose attributes are structural mass, volume, displacement, and frequency. The method is inherently suitable for parallel, coarse-grained implementation that enables exploration of the design space in the elapsed time of a single structural optimization.

  5. Design, simulation, and optimization of an RGB polarization independent transmission volume hologram

    NASA Astrophysics Data System (ADS)

    Mahamat, Adoum Hassan

    Volume phase holographic (VPH) gratings have been designed for use in many areas of science and technology such as optical communication, medical imaging, spectroscopy and astronomy. The goal of this dissertation is to design a volume phase holographic grating that provides diffraction efficiencies of at least 70% for the entire visible wavelengths and higher than 90% for red, green, and blue light when the incident light is unpolarized. First, the complete design, simulation and optimization of the volume hologram are presented. The optimization is done using a Monte Carlo analysis to solve for the index modulation needed to provide higher diffraction efficiencies. The solutions are determined by solving the diffraction efficiency equations given by Kogelnik's two-wave coupled-wave theory. The hologram is further optimized using the rigorous coupled-wave analysis to correct for effects of absorption omitted by Kogelnik's method. Second, the fabrication or recording process of the volume hologram is described in detail. The active region of the volume hologram is created by interference of two coherent beams within the thin film. Third, the experimental setup and measurement of some properties, including the diffraction efficiencies of the volume hologram and the thickness of the active region, are described. Fourth, the polarimetric response of the volume hologram is investigated. The polarization study is developed to provide insight into the effect of the refractive index modulation on the polarization state and diffraction efficiency of incident light.

  6. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  7. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components that will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space and thus enable prediction of the entire response surface. A new strategy for adding multiple interior points for symmetric mixture models is also proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
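
    A minimal sketch of the D-optimality bookkeeping for a three-component Scheffé quadratic model, assuming the standard minimally supported design (vertices plus edge midpoints) and the overall centroid as an added interior point; this is illustrative only, not the authors' construction:

```python
import numpy as np
from itertools import combinations

def scheffe_quadratic(points):
    """Model matrix for the Scheffé quadratic mixture model: the linear
    blending terms x_i plus the cross-product terms x_i * x_j."""
    pts = np.asarray(points, dtype=float)
    cross = [pts[:, i] * pts[:, j]
             for i, j in combinations(range(pts.shape[1]), 2)]
    return np.column_stack([pts] + cross)

def d_criterion(points):
    """log-determinant of the normalized information matrix X'X / n."""
    X = scheffe_quadratic(points)
    return np.linalg.slogdet(X.T @ X / len(X))[1]

# Minimally supported design for three components: vertices plus edge
# midpoints, one support point per model parameter (no Lack of Fit df).
minimal = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
           (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
# Extension with an interior point (the overall centroid), which buys
# degrees of freedom for checking fit away from the simplex boundary.
augmented = minimal + [(1/3, 1/3, 1/3)]
```

    Comparing d_criterion across candidate augmentations is the basic trade-off the paper studies: interior points cost some D-efficiency but enable prediction over the whole simplex.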

  8. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several handheld calculators can be used to perform the same design iteration. (SK)

  9. Experimental Investigation and Optimization of Response Variables in WEDM of Inconel - 718

    NASA Astrophysics Data System (ADS)

    Karidkar, S. S.; Dabade, U. A.

    2016-02-01

    Effective utilisation of Wire Electrical Discharge Machining (WEDM) technology is a challenge for modern manufacturing industries. New materials with ever higher strengths and capabilities are being developed to fulfil customers' needs. Inconel-718 is one such material, used extensively in aerospace applications such as gas turbines, rocket motors, and spacecraft, as well as in nuclear reactors and pumps. This paper deals with the experimental investigation of optimal machining parameters in WEDM for surface roughness, kerf width, and dimensional deviation using DoE, namely the Taguchi methodology with an L9 orthogonal array. Keeping the peak current constant at 70 A, the effects of the other process parameters on the above response variables were analysed. The experimental results were statistically analysed using Minitab-16 software. Analysis of Variance (ANOVA) shows pulse on time to be the most influential parameter, followed by wire tension, whereas spark gap set voltage is observed to be non-influential. A multi-objective optimization technique, Grey Relational Analysis (GRA), yields optimal machining parameters of pulse on time 108 machine units, spark gap set voltage 50 V, and wire tension 12 g for the response variables considered in the experimental analysis.
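
    The Grey Relational Analysis step can be sketched as follows; the response values are made-up placeholders, not the paper's WEDM measurements:

```python
import numpy as np

def grey_relational_grade(responses, smaller_better, zeta=0.5):
    """Grey Relational Analysis: normalize each response, measure each
    run's deviation from the ideal, and average the grey relational
    coefficients into a single grade per run."""
    Y = np.asarray(responses, dtype=float)
    norm = np.empty_like(Y)
    for j, sb in enumerate(smaller_better):
        lo, hi = Y[:, j].min(), Y[:, j].max()
        norm[:, j] = (hi - Y[:, j]) / (hi - lo) if sb else (Y[:, j] - lo) / (hi - lo)
    delta = 1.0 - norm                       # deviation from the ideal run
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)

# Placeholder responses: surface roughness and kerf width for three runs,
# both smaller-is-better; the highest grade marks the preferred setting.
runs = [[2.1, 0.30], [1.8, 0.33], [1.5, 0.28]]
grades = grey_relational_grade(runs, smaller_better=[True, True])
best_run = int(np.argmax(grades))
```

    Collapsing several responses into one grade is what lets a single Taguchi-style ranking drive a multi-objective choice.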

  10. Mirrors design, analysis and manufacturing of the 550mm Korsch telescope experimental model

    NASA Astrophysics Data System (ADS)

    Huang, Po-Hsuan; Huang, Yi-Kai; Ling, Jer

    2017-08-01

    In 2015, NSPO (National Space Organization) began to develop the sub-meter-resolution optical remote sensing instrument of the next-generation optical remote sensing satellite, the follow-on to FORMOSAT-5. Upgraded from the Ritchey-Chrétien Cassegrain telescope optical system of FORMOSAT-5, the experimental optical system of the advanced optical remote sensing instrument was enhanced to an off-axis Korsch telescope optical system consisting of five mirrors: (1) M1: 550mm diameter aperture primary mirror, (2) M2: secondary mirror, (3) M3: off-axis tertiary mirror, (4) FM1 and FM2: two folding flat mirrors, for the purposes of limiting the overall volume, reducing the mass, and providing a long focal length and excellent optical performance. By the end of 2015, we had implemented several important techniques, including optical system design, opto-mechanical design, FEM and multi-physics analysis, and an optimization system, in order to conduct a preliminary study and begin to develop and design these large, lightweight aspheric mirrors and flat mirrors. The lightweight mirror design and opto-mechanical interface design were completed in August 2016. We then manufactured and polished these experimental model mirrors in Taiwan; all five mirrors were completed as spherical surfaces by the end of 2016. Aspheric figuring, assembling tests, and optical alignment verification of these mirrors will be done with a Korsch telescope experimental structure model in 2018.

  11. PrimerDesign-M: A multiple-alignment based multiple-primer design tool for walking across variable genomes

    DOE PAGES

    Yoon, Hyejin; Leitner, Thomas

    2014-12-17

    Analyses of entire viral genomes or mtDNA require comprehensive design of many primers across the genome. In addition, simultaneous optimization of several DNA primer design criteria may improve overall experimental efficiency and downstream bioinformatic processing. To achieve these goals, we developed PrimerDesign-M. It includes several options for multiple-primer design, allowing researchers to efficiently design walking primers that cover long DNA targets, such as entire HIV-1 genomes, and that are optimized simultaneously for the genetic diversity in multiple alignments and for experimental design constraints given by the user. PrimerDesign-M can also design primers that include DNA barcodes and minimize primer dimerization. PrimerDesign-M finds optimal primers for highly variable DNA targets and facilitates design flexibility by suggesting alternative designs to adapt to experimental conditions.

  12. Spectrophotometric determination of triclosan based on diazotization reaction: response surface optimization using Box-Behnken design.

    PubMed

    Kaur, Inderpreet; Gaba, Sonal; Kaur, Sukhraj; Kumar, Rajeev; Chawla, Jyoti

    2018-05-01

    A spectrophotometric method based on diazotization of aniline with triclosan has been developed for the determination of triclosan in water samples. The diazotization process involves two steps: (1) reaction of aniline with sodium nitrite in an acidic medium to form the diazonium ion and (2) reaction of the diazonium ion with triclosan to form a yellowish-orange azo compound in an alkaline medium. The resulting yellowish-orange product has a maximum absorption at 352 nm, which allows the determination of triclosan in aqueous solution in the linear concentration range of 0.1-3.0 μM with R² = 0.998. The concentrations of hydrochloric acid, sodium nitrite, and aniline were optimized for the diazotization reaction to achieve good spectrophotometric determination of triclosan. The optimization of experimental conditions for spectrophotometric determination of triclosan in terms of the concentrations of sodium nitrite, hydrochloric acid, and aniline was also carried out using a Box-Behnken design of response surface methodology, and the results obtained were in agreement with the experimentally optimized values. The proposed method was then successfully applied for analyses of triclosan content in water samples.
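
    A Box-Behnken design like the one used here can be generated mechanically in coded (-1, 0, +1) units; this generic sketch is not tied to the paper's specific factors or levels:

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Box-Behnken design in coded units: a 2^2 factorial at +/-1 for
    each pair of factors with the remaining factors held at the center,
    plus replicated center runs."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

design = box_behnken(3)   # 12 edge-midpoint runs + 3 center runs
```

    Because every run sits on an edge midpoint or at the center, no factor is ever pushed to an extreme corner, which is often why Box-Behnken is preferred for chemistry work.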

  13. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
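
    Response surface methodology as used above amounts to fitting a full quadratic model to the computer experiments and then optimizing the fitted surface. A minimal least-squares sketch, with a synthetic response standing in for the CFD results:

```python
import numpy as np
from itertools import combinations

def quadratic_model_matrix(X):
    """Full quadratic response-surface model matrix: intercept, linear,
    pure quadratic, and two-factor interaction columns."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Least-squares coefficients of the quadratic response surface."""
    return np.linalg.lstsq(quadratic_model_matrix(X),
                           np.asarray(y, dtype=float), rcond=None)[0]

# Synthetic "computer experiments" on a 3^2 grid with a known response.
X = [[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)]
y = [1.0 + 2.0 * a - b ** 2 for a, b in X]
beta = fit_rsm(X, y)
pred = float(quadratic_model_matrix([[0.5, 0.5]]) @ beta)  # fitted surface
```

    Once beta is in hand, the expensive simulations are replaced by cheap surface evaluations, which is what makes design-space optimization (and the discovery that the optimum lies outside the current space) affordable.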

  14. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
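
    On a discretized design space, an approximate D-optimal design can also be found with the classic multiplicative algorithm; this is a generic illustration of the discretization-based approach, not the paper's SDP/NLP formulations:

```python
import numpy as np

def d_optimal_weights(F, iters=2000):
    """Multiplicative algorithm for an approximate D-optimal design on a
    discretized design space. F holds one regression vector f(x_i) per
    candidate point; returns the design weights."""
    n, p = F.shape
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        M = F.T @ (w[:, None] * F)                    # information matrix
        var = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)
        w = w * var / p                               # multiplicative update
    return w

# Linear model y = b0 + b1*x on a grid over [-1, 1]: the D-optimal design
# is known to put half the weight at each endpoint.
x = np.linspace(-1.0, 1.0, 21)
F = np.column_stack([np.ones_like(x), x])
w = d_optimal_weights(F)
```

    The update leaves the total weight at one and inflates weight wherever the standardized prediction variance exceeds the number of parameters, exactly the quantity the equivalence theorem says must be leveled at the optimum.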

  15. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.

  16. Rotor design optimization using a free wake analysis

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Boschitsch, Alexander H.; Wachspress, Daniel A.; Chua, Kiat

    1993-01-01

    The aim of this effort was to develop a comprehensive performance optimization capability for tiltrotor and helicopter blades. The analysis incorporates the validated EHPIC (Evaluation of Hover Performance using Influence Coefficients) model of helicopter rotor aerodynamics within a general linear/quadratic programming algorithm that allows optimization using a variety of objective functions involving the performance. The resulting computer code, EHPIC/HERO (HElicopter Rotor Optimization), improves upon several features of the previous EHPIC performance model and allows optimization utilizing a wide spectrum of design variables, including twist, chord, anhedral, and sweep. The new analysis supports optimization of a variety of objective functions, including weighted measures of rotor thrust, power, and propulsive efficiency. The fundamental strength of the approach is that an efficient search for improved versions of the baseline design can be carried out while retaining the demonstrated accuracy inherent in the EHPIC free wake/vortex lattice performance analysis. Sample problems are described that demonstrate the success of this approach for several representative rotor configurations in hover and axial flight. Features that were introduced to convert earlier demonstration versions of this analysis into a generally applicable tool for researchers and designers are also discussed.

  17. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  18. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
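
    A conventional 3-2-1-1 multistep input of the kind used for comparison above can be generated as a square wave with pulse widths in a 3:2:1:1 ratio; the base pulse width, amplitude, and sample rate below are arbitrary assumptions, not values from the flight tests:

```python
import numpy as np

def multistep_input(pattern, dt, amplitude, sample_rate):
    """Alternating-sign square-wave multistep input: pulse k holds
    sign * amplitude for pattern[k] * dt seconds."""
    samples, sign = [], 1.0
    for width in pattern:
        samples += [sign * amplitude] * int(round(width * dt * sample_rate))
        sign = -sign
    return np.array(samples)

# A 3-2-1-1 input with an assumed 0.5 s base pulse width, sampled at 100 Hz.
u = multistep_input((3, 2, 1, 1), dt=0.5, amplitude=1.0, sample_rate=100)
```

    Choosing dt sets which frequency band the input excites, which is the knob an optimal input design tunes far more finely than this fixed pattern.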

  19. Optimal design of composite hip implants using NASA technology

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.

    1993-01-01

    Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The original NASA in-house codes were developed for the optimization of aerospace structures. The adapted code, called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters which substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.

  20. Quality by design: optimization of liquid-filled pH-responsive macroparticles using Draper-Lin composite design.

    PubMed

    Rafati, Hasan; Talebpour, Zahra; Adlnasab, Laleh; Ebrahimi, Samad Nejad

    2009-07-01

    In this study, pH-responsive macroparticles incorporating peppermint oil (PO) were prepared using a simple emulsification/polymer precipitation technique. The formulations were examined for their properties, and the desired quality was then achieved using a quality by design (QbD) approach. For this purpose, a Draper-Lin small composite design study was employed to investigate the effect of four independent variables, including the PO-to-water ratio and the concentrations of pH-sensitive polymer (hydroxypropyl methylcellulose phthalate), acid, and plasticizer, on the encapsulation efficiency and PO loading. The analysis of variance showed that the polymer concentration was the most important variable for encapsulation efficiency (p < 0.05). Multiple regression analysis of the results led to equations that adequately described the influence of the independent variables on the selected responses. Furthermore, the desirability function was employed as an effective tool for transforming each response separately and encompassing all of these responses in an overall desirability function for global optimization of the encapsulation process. The optimized macroparticles were predicted to yield 93.4% encapsulation efficiency and 72.8% PO loading, remarkably close to the experimental values of 89.2% and 69.5%, respectively.
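
    The desirability-function step can be sketched with Derringer-style ramps combined through a geometric mean; the bounds and targets below are hypothetical, not the paper's settings:

```python
import numpy as np

def desirability_larger_is_better(y, low, target, s=1.0):
    """Derringer-style desirability for a larger-is-better response:
    0 below `low`, 1 above `target`, a power-s ramp in between."""
    d = ((y - low) / (target - low)) ** s
    return float(np.clip(d, 0.0, 1.0))

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any single response
    at zero drives the overall score to zero."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))

# Hypothetical bounds/targets for the two responses in the study:
# encapsulation efficiency and oil loading, both larger-is-better.
d1 = desirability_larger_is_better(90.0, low=50.0, target=100.0)
d2 = desirability_larger_is_better(70.0, low=40.0, target=80.0)
D = overall_desirability([d1, d2])
```

    The geometric mean is what makes the method a true compromise criterion: a formulation cannot buy a high overall score by excelling on one response while failing another.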

  1. Quasi-experimental study designs series-paper 9: collecting data from quasi-experimental studies.

    PubMed

    Aloe, Ariel M; Becker, Betsy Jane; Duvendack, Maren; Valentine, Jeffrey C; Shemilt, Ian; Waddington, Hugh

    2017-09-01

    To identify variables that must be coded when synthesizing primary studies that use quasi-experimental (QE) designs. All quasi-experimental designs. When designing a systematic review of QE studies, potential sources of heterogeneity, both theory-based and methodological, must be identified. We outline key components of inclusion criteria for syntheses of quasi-experimental studies. We provide recommendations for coding content-relevant and methodological variables and outline the distinction between bivariate effect sizes and partial (i.e., adjusted) effect sizes. The designs and controls used are viewed as of greatest importance. Potential sources of bias and confounding are also addressed. Careful consideration must be given to inclusion criteria and the coding of theoretical and methodological variables during the design phase of a synthesis of quasi-experimental studies. The success of a meta-regression analysis relies on the data available to the meta-analyst. Omission of critical moderator variables (i.e., effect modifiers) will undermine the conclusions of a meta-analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A novel constrained H2 optimization algorithm for mechatronics design in flexure-linked biaxial gantry.

    PubMed

    Ma, Jun; Chen, Si-Lu; Kamaldin, Nazir; Teo, Chek Sing; Tay, Arthur; Mamun, Abdullah Al; Tan, Kok Kiong

    2017-11-01

    The biaxial gantry is widely used in many industrial processes that require high-precision Cartesian motion. The conventional rigid-link version suffers from breakdown of the joints if any de-synchronization between the two carriages occurs. To prevent this potential risk, a flexure-linked biaxial gantry is designed to allow a small rotation angle of the cross-arm. Nevertheless, chattering of the control signals or inappropriate design of the flexure joint may induce resonant modes of the end-effector. Thus, in this work, the design requirements in terms of tracking accuracy, biaxial synchronization, and resonant mode suppression are achieved by integrated optimization of the stiffness of the flexures and the PID controller parameters for a class of point-to-point reference trajectories with the same dynamics but different steps. From here, an H2 optimization problem with defined constraints is formulated, and an efficient iterative solver is proposed by hybridizing direct computation of the constrained projection gradient with a line search for the optimal step size. Comparative experimental results obtained on the testbed are presented to verify the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
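
    The general idea of hybridizing a projection gradient with a line search can be illustrated on a toy box-constrained quadratic; this generic sketch is not the authors' constrained H2 solver:

```python
import numpy as np

def projected_gradient(f, grad, project, x0, step0=1.0, shrink=0.5, iters=100):
    """Minimize f subject to constraints encoded by a projection operator,
    using projected gradient steps with a backtracking line search."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(iters):
        g = grad(x)
        step = step0
        x_new = project(x - step * g)
        while f(x_new) >= f(x) and step > 1e-12:   # backtrack until descent
            step *= shrink
            x_new = project(x - step * g)
        if f(x_new) >= f(x):                        # no descent step found
            break
        x = x_new
    return x

# Toy box-constrained quadratic: minimize ||x - c||^2 over 0 <= x <= 1.
c = np.array([1.5, -0.3, 0.4])
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)
x_opt = projected_gradient(f, grad, project, x0=[0.5, 0.5, 0.5])
```

    The projection keeps every iterate feasible, while the line search supplies the step-size selection that a fixed-step gradient method lacks.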

  3. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  4. Dental implant customization using numerical optimization design and 3-dimensional printing fabrication of zirconia ceramic.

    PubMed

    Cheng, Yung-Chang; Lin, Deng-Huei; Jiang, Cho-Pei; Lin, Yuan-Min

    2017-05-01

    This study proposes a new methodology for dental implant customization consisting of numerical geometric optimization and 3-dimensional printing fabrication of zirconia ceramic. In the numerical modeling, exogenous factors for implant shape include the thread pitch, thread depth, maximal diameter of the implant neck, and body size. Endogenous factors are bone density, cortical bone thickness, and non-osseointegration. An integration procedure, including the uniform design method, Kriging interpolation, and a genetic algorithm, is applied to optimize the geometry of dental implants. The threshold of minimal micromotion for optimization evaluation was 100 μm. The optimized model is imported into the 3-dimensional slurry printer to fabricate the zirconia green body (powder weakly bonded by polymer) of the implant. The sintered implant is obtained using a 2-stage sintering process. Twelve models were constructed according to the uniform design method and their micromotion behavior was simulated using finite element modeling. The uniform design models yield a set of exogenous factors that provides the minimal micromotion (30.61 μm), taken as the suitable model. Kriging interpolation and the genetic algorithm then modified the exogenous factors of the suitable model, resulting in an optimized model with 27.11 μm micromotion. Experimental results show that the 3-dimensional slurry printer successfully fabricated the green body of the optimized model, but the accuracy of the sintered part still needs to be improved. In addition, the scanning electron microscopy morphology shows a stabilized t-phase microstructure, and the average compressive strength of the sintered part is 632.1 MPa. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Optimal Design for Informative Protocols in Xenograft Tumor Growth Inhibition Experiments in Mice.

    PubMed

    Lestini, Giulia; Mentré, France; Magni, Paolo

    2016-09-01

    Tumor growth inhibition (TGI) models are increasingly used during preclinical drug development in oncology for the in vivo evaluation of antitumor effect. Tumor sizes are measured in xenografted mice, often only during and shortly after treatment, thus preventing correct identification of some TGI model parameters. Our aims were (i) to evaluate the importance of including measurements during tumor regrowth and (ii) to investigate the proportions of mice included in each arm. For these purposes, optimal design theory based on the Fisher information matrix implemented in PFIM4.0 was applied. Published xenograft experiments, involving different drugs, schedules, and cell lines, were used to help optimize experimental settings and parameters using the Simeoni TGI model. For each experiment, a two-arm design, i.e., control versus treatment, was optimized with or without the constraint of not sampling during tumor regrowth, i.e., "short" and "long" studies, respectively. In long studies, measurements could be taken up to 6 g of tumor weight, whereas in short studies the experiment was stopped 3 days after the end of treatment. Predicted relative standard errors were smaller in long studies than in corresponding short studies. Some optimal measurement times were located in the regrowth phase, highlighting the importance of continuing the experiment after the end of treatment. In the four-arm designs, the results showed that the proportions of control and treated mice can differ. To conclude, making measurements during tumor regrowth should become a general rule for informative preclinical studies in oncology, especially when a delayed drug effect is suspected.

  6. Optimal design for informative protocols in xenograft tumor growth inhibition experiments in mice

    PubMed Central

    Lestini, Giulia; Mentré, France; Magni, Paolo

    2016-01-01

    Tumor growth inhibition (TGI) models are increasingly used during preclinical drug development in oncology for the in vivo evaluation of antitumor effect. Tumor sizes are measured in xenografted mice, often only during and shortly after treatment, thus preventing correct identification of some TGI model parameters. Our aims were i) to evaluate the importance of including measurements during tumor regrowth; ii) to investigate the proportions of mice included in each arm. For these purposes, optimal design theory based on the Fisher information matrix implemented in PFIM4.0 was applied. Published xenograft experiments, involving different drugs, schedules and cell lines, were used to help optimize experimental settings and parameters using the Simeoni TGI model. For each experiment, a two-arm design, i.e. control vs treatment, was optimized with or without the constraint of not sampling during tumor regrowth, i.e. “short” and “long” studies, respectively. In long studies, measurements could be taken up to 6 grams of tumor weight, whereas in short studies the experiment was stopped three days after the end of treatment. Predicted relative standard errors were smaller in long studies than in corresponding short studies. Some optimal measurement times were located in the regrowth phase, highlighting the importance of continuing the experiment after the end of treatment. In the four-arm designs, the results showed that the proportions of control and treated mice can differ. To conclude, making measurements during tumor regrowth should become a general rule for informative preclinical studies in oncology, especially when a delayed drug effect is suspected. PMID:27306546

  7. Design, Kinematic Optimization, and Evaluation of a Teleoperated System for Middle Ear Microsurgery

    PubMed Central

    Miroir, Mathieu; Nguyen, Yann; Szewczyk, Jérôme; Sterkers, Olivier; Bozorg Grayeli, Alexis

    2012-01-01

    Middle ear surgery involves the smallest and the most fragile bones of the human body. Since microsurgical gestures and a submillimetric precision are required in these procedures, the outcome can be potentially improved by robotic assistance. Today, there is no commercially available device in this field. Here, we describe a method to design a teleoperated assistance robotic system dedicated to the middle ear surgery. Determination of design specifications, the kinematic structure, and its optimization are detailed. The robot-surgeon interface and the command modes are provided. Finally, the system is evaluated by realistic tasks in experimental dedicated settings and in human temporal bone specimens. PMID:22927789

  8. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  9. Evaluation of thermal conductivity of MgO-MWCNTs/EG hybrid nanofluids based on experimental data by selecting optimal artificial neural networks

    NASA Astrophysics Data System (ADS)

    Vafaei, Masoud; Afrand, Masoud; Sina, Nima; Kalbasi, Rasool; Sourani, Forough; Teimouri, Hamid

    2017-01-01

    In this paper, the thermal conductivity ratio of MgO-MWCNTs/EG hybrid nanofluids is predicted by an optimal artificial neural network at solid volume fractions of 0.05%, 0.1%, 0.15%, 0.2%, 0.4% and 0.6% in the temperature range of 25-50 °C. First, thirty-six experimental data points were used to determine the thermal conductivity ratio of the hybrid nanofluid. Then, four artificial neural networks with 6, 8, 10 and 12 neurons in the hidden layer were designed to predict the thermal conductivity ratio of the nanofluid. Comparison of the four ANN predictions with the experimental data showed that the ANN with 12 neurons in the hidden layer was the best model. Moreover, the results obtained from the best ANN indicated a maximum deviation margin of 0.8%.
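
    The model-selection procedure described in this record, training several candidate models of increasing capacity and keeping the one with the smallest maximum deviation from the measurements, can be sketched as follows. This is a minimal illustration on synthetic data: the measurements, the polynomial feature map (standing in for networks of different hidden-layer sizes), and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 36 measurements: thermal conductivity ratio
# as a smooth function of temperature (25-50 C) and volume fraction (0.05-0.6 %).
T = rng.uniform(25, 50, 36)
phi = rng.uniform(0.05, 0.6, 36)
k_ratio = 1 + 0.12 * phi + 0.002 * T * phi + rng.normal(0, 0.002, 36)

def design(T, phi, degree):
    """Polynomial feature map; degree plays the role of model capacity
    (analogous to the number of hidden neurons)."""
    cols = [np.ones_like(T)]
    for d in range(1, degree + 1):
        for i in range(d + 1):
            cols.append((T ** i) * (phi ** (d - i)))
    return np.column_stack(cols)

best = None
for degree in (1, 2, 3, 4):                # candidate model capacities
    X = design(T, phi, degree)
    coef, *_ = np.linalg.lstsq(X, k_ratio, rcond=None)
    pred = X @ coef
    # Selection criterion: maximum relative deviation, in percent.
    max_dev = np.max(np.abs(pred - k_ratio) / k_ratio) * 100
    if best is None or max_dev < best[1]:
        best = (degree, max_dev)

print(f"best capacity: {best[0]}, max deviation margin: {best[1]:.2f}%")
```

    In practice a proper comparison would use held-out data rather than the training residuals, but the selection logic (compare candidates by their worst-case deviation) is the same.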

  10. Optimal chroma-like channel design for passive color image splicing detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin

    2012-12-01

    Image splicing is one of the most common image forgeries in daily life, and powerful image manipulation tools are making it ever easier to perform. Several methods have been proposed for image splicing detection, and all of them operate on existing color channels. However, splicing artifacts vary across color channels, so the choice of color model matters for splicing detection. In this article, instead of selecting an existing color model, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve higher detection rates than those extracted from traditional color channels.

  11. DETERMINING A ROBUST D-OPTIMAL DESIGN FOR TESTING FOR DEPARTURE FROM ADDITIVITY IN A MIXTURE OF FOUR PFAAS

    EPA Science Inventory

    Our objective was to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. Of particular focus to this research project is whether an environmentally relevant mixture of four PFAAs with long half-liv...

  12. Optimal Control Design Advantages Utilizing Two-Degree-of-Freedom Controllers

    DTIC Science & Technology

    1993-12-01

    AFIT/GAE/ENY/93D-27 (AD-A273 839). Optimal Control Design Advantages Utilizing Two-Degree-of-Freedom Controllers. Thesis by Michael J. Stephens, presented to the Faculty of the Graduate School, Air Force Institute of Technology. The two-degree-of-freedom design showed reduced sensitivity to measurement noises compared to the 1-DOF model.

  13. Optimal design of dampers within seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Qian, Hui; Song, Wali; Wang, Liqiang

    2009-07-01

    An improved multi-objective genetic algorithm for structural passive control system optimization is proposed. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. For constrained problems, a dominance-based penalty function method is proposed that incorporates information on an individual's status (feasible or infeasible), its position in the search space, and its distance from the Pareto optimal set. The proposed approach is used for the optimal design of a six-storey building with shape memory alloy dampers subjected to earthquake excitation. The number and positions of dampers are chosen as the design variables, and the number of dampers and the peak relative inter-storey drift are considered as the objective functions. Numerical results generate a set of non-dominated solutions.
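
    The core ingredients of this record, Pareto dominance and a dominance-based binary tournament over the two objectives (number of dampers, peak inter-storey drift), can be sketched as follows. The population, layouts, and objective values are hypothetical toy data, not the paper's six-storey example.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def tournament_select(pop, objs):
    """Binary tournament: prefer the dominating individual, else pick randomly."""
    i, j = random.sample(range(len(pop)), 2)
    if dominates(objs[i], objs[j]):
        return pop[i]
    if dominates(objs[j], objs[i]):
        return pop[j]
    return pop[random.choice((i, j))]

# Toy population: each individual is a damper layout; objectives are
# (number of dampers, peak inter-storey drift) -- both minimized.
random.seed(1)
pop = ["layout_%d" % k for k in range(6)]
objs = [(2, 0.9), (3, 0.7), (4, 0.5), (2, 0.6), (5, 0.4), (3, 0.8)]

# The non-dominated (Pareto) set of this toy population.
front = [p for p, o in zip(pop, objs)
         if not any(dominates(q, o) for q in objs if q != o)]
print("non-dominated layouts:", front)  # layout_2, layout_3, layout_4
```

    A full implementation would add the constraint-handling penalty and crossover/mutation operators; the dominance check above is the piece the selection operator is built on.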

  14. Experimental optimization of the number of blocks by means of algorithms parameterized by confidence interval in popcorn breeding.

    PubMed

    Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A

    2013-06-27

    The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian popcorn genotypes, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all possible combinations within the 7 blocks, and the confidence intervals of the parameters of interest were then calculated for all simulated data sets. The optimal number of repetitions for each trait was taken to be the number at which all estimates of the parameters in question fell within the confidence interval. The estimated number of repetitions varied with the parameter estimated, the variable evaluated, and the environment, ranging from 2 to 7. Only the expansion capacity traits in the Colégio Agrícola environment (for residual variance and coefficient of variation) and the number of ears per plot in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we conclude that 6 repetitions are optimal for obtaining high experimental precision.
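
    The bootstrap logic of this record, resampling blocks and asking how few repetitions still give estimates inside the full-design confidence interval, can be sketched as follows. The block means, the 90% inside-fraction threshold, and the simplified acceptance criterion are all hypothetical illustrations, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical block means for one trait (e.g., grain yield) across the
# 7 complete blocks of the randomized complete block design.
blocks = np.array([82.1, 79.4, 85.0, 80.7, 83.3, 78.9, 84.2])

def bootstrap_means(data, r, n_boot=2000):
    """Bootstrap distribution of the trait mean when only r blocks are used."""
    return np.array([rng.choice(data, size=r, replace=True).mean()
                     for _ in range(n_boot)])

# Reference 95% confidence interval using all 7 blocks.
ref = bootstrap_means(blocks, 7)
lo, hi = np.quantile(ref, [0.025, 0.975])

# Smallest r whose bootstrap estimates fall inside the reference interval
# sufficiently often (a simplified stand-in for "all estimates within the CI").
optimal = 7
for r in range(2, 8):
    means = bootstrap_means(blocks, r)
    if np.mean((means >= lo) & (means <= hi)) >= 0.90:
        optimal = r
        break

print("optimal number of repetitions:", optimal)
```

    The paper applies this per parameter, per trait, and per environment, which is why the answer ranged from 2 to 7 repetitions.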

  15. Design, Optimization, and Evaluation of Integrally-Stiffened Al-2139 Panel with Curved Stiffeners

    NASA Technical Reports Server (NTRS)

    Havens, David; Shiyekar, Sandeep; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert

    2011-01-01

    A curvilinear stiffened panel was designed, manufactured, and tested in the Combined Load Test Fixture at NASA Langley Research Center. The panel is representative of a large wing engine pylon rib and was optimized for minimum mass subjected to three combined load cases. The optimization included constraints on web buckling, material yielding, crippling or local stiffener failure, and damage tolerance using a new analysis tool named EBF3PanelOpt. Testing was performed for the critical combined compression-shear loading configuration. The panel was loaded beyond initial buckling, and strains and out-of-plane displacements were extracted from a total of 20 strain gages and 6 linear variable displacement transducers. The VIC-3D system was utilized to obtain full field displacements/strains in the stiffened side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high fidelity nonlinear finite element analysis. The experimental data were also compared with linear elastic finite element results of the panel/test-fixture assembly. Overall, the panel buckled very near to the predicted load in the web regions.

  16. Estimating parameters with pre-specified accuracies in distributed parameter systems using optimal experiment design

    NASA Astrophysics Data System (ADS)

    Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den

    2016-08-01

    Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.

  17. Determining optimal operation parameters for reducing PCDD/F emissions (I-TEQ values) from the iron ore sintering process by using the Taguchi experimental design.

    PubMed

    Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh

    2008-07-15

    This study is the first to use the Taguchi experimental design to identify the optimal operating condition for reducing polychlorinated dibenzo-p-dioxin and dibenzofuran (PCDD/F) formation during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt%), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (sinter, hematite, or limonite), were selected for conducting experiments in a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant optimal combination (Wc = 6.5 wt%, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% by reference to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt%, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). Through ANOVA analysis, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs) (accounting for 74.7% of the total contribution of the four selected parameters). The optimal combination also slightly enhanced both sinter productivity and sinter strength (30.3 t/m2/day and 72.4%, respectively) by reference to those obtained under the reference operating condition (29.9 t/m2/day and 72.2%, respectively). These results further ensure the applicability of the optimal combination for real-scale sinter production without interfering with sinter productivity or sinter strength.
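
    The Taguchi analysis in this record, running an orthogonal array over the four factors, computing each factor's percent contribution via ANOVA sums of squares, and picking the best level per factor, can be sketched as follows. The L9(3^4) array is the standard one; the emission-factor responses are hypothetical numbers, not the paper's data.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: rows = runs, columns = factors
# (Wc, Ps, Hb, hearth-layer type), levels coded 0/1/2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
# Hypothetical PCDD/F emission factors measured for each of the 9 runs.
y = np.array([4.1, 3.6, 3.9, 2.8, 3.0, 2.5, 3.3, 3.1, 3.5])

grand = y.mean()
names = ["Wc", "Ps", "Hb", "hearth"]

# ANOVA sum of squares for each factor's main effect (3 runs per level).
ss = []
for c in range(4):
    ss_c = sum(3 * (y[L9[:, c] == lv].mean() - grand) ** 2 for lv in range(3))
    ss.append(ss_c)

total = sum(ss)
for name, s in zip(names, ss):
    print(f"{name}: {100 * s / total:.1f}% contribution")

# Optimal level per factor = the level with the lowest mean emission.
best = {n: int(np.argmin([y[L9[:, c] == lv].mean() for lv in range(3)]))
        for c, n in enumerate(names)}
print("optimal levels:", best)
```

    Because four 3-level factors saturate the 8 degrees of freedom of an L9, the factor sums of squares add up exactly to the total corrected sum of squares, which is what lets each factor be reported as a percent contribution, as with the 74.7% attributed to Wc in the record.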

  18. Modeling and optimal designs for dislocation and radiation tolerant single and multijunction solar cells

    NASA Astrophysics Data System (ADS)

    Mehrotra, A.; Alemu, A.; Freundlich, A.

    2011-02-01

    Crystalline defects (e.g., dislocations or grain boundaries) as well as electron- and proton-induced defects reduce the minority carrier diffusion length, which in turn degrades solar cell efficiency. Hetero-epitaxial or metamorphic III-V devices with low dislocation density have high BOL efficiencies, but electron-proton radiation degrades their EOL efficiencies. By optimizing the device design (emitter-base thickness, doping), we can obtain highly dislocated metamorphic devices that are radiation resistant. Here we have modeled III-V single and multijunction solar cells using drift-diffusion equations, considering experimental III-V material parameters, dislocation density, 1 MeV equivalent electron radiation doses, thicknesses, and doping concentrations. Reducing device thickness increases the EOL efficiency of high-dislocation-density solar cells. By optimizing the device design, we can obtain nearly the same EOL efficiencies from highly dislocated solar cells as from defect-free III-V multijunction solar cells. For example, an optimized defect-free GaAs solar cell gives 11.2% EOL efficiency (under a typical 5x10^15 cm^-2 1 MeV electron fluence), while an optimized GaAs solar cell with high dislocation density (10^8 cm^-2) gives 10.6% EOL efficiency. The approach provides an additional degree of freedom in the design of high-efficiency space cells and could in turn relax the need for thick defect-filtering buffers in metamorphic devices.
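
    The qualitative effect in this record, that heavily dislocated cells degrade proportionally less under radiation, follows from treating dislocations and displacement damage as additive 1/L^2 terms in the standard diffusion-length damage relation 1/L^2 = 1/L0^2 + K*phi. The sketch below uses a hypothetical damage coefficient K and a hypothetical dislocation proportionality constant c_d; only the fluence and dislocation density are taken from the record.

```python
import math

def degraded_L(L0, K, fluence):
    """Minority-carrier diffusion length after irradiation, from the
    damage relation 1/L^2 = 1/L0^2 + K * phi
    (phi: 1 MeV-equivalent electron fluence in cm^-2)."""
    return 1.0 / math.sqrt(1.0 / L0**2 + K * fluence)

def dislocation_limited_L(L, rho, c_d=0.25):
    """Fold in dislocation recombination as an extra 1/L^2 term
    proportional to the dislocation density rho
    (c_d is a hypothetical constant chosen for illustration)."""
    return 1.0 / math.sqrt(1.0 / L**2 + c_d * rho)

L0 = 5e-4          # 5 um starting diffusion length, in cm (illustrative)
K = 1e-9           # hypothetical dimensionless damage coefficient
phi = 5e15         # 1 MeV electron fluence, cm^-2 (as in the record)

for rho in (0.0, 1e8):                     # defect-free vs 1e8 cm^-2
    bol = dislocation_limited_L(L0, rho)
    eol = dislocation_limited_L(degraded_L(L0, K, phi), rho)
    print(f"rho={rho:.0e}: BOL L = {bol * 1e4:.2f} um, EOL/BOL = {eol / bol:.2f}")
```

    With these illustrative numbers the dislocated cell starts from a shorter diffusion length but retains a larger fraction of it after irradiation, because the radiation-induced 1/L^2 term is small compared to the dislocation term already present.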

  19. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computational intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  20. A preliminary study for the development and optimization by experimental design of an in vitro method for prediction of drug buccal absorption.

    PubMed

    Mura, Paola; Orlandini, Serena; Cirri, Marzia; Maestrelli, Francesca; Mennini, Natascia; Casella, Giada; Furlanetto, Sandra

    2018-06-15

    The work was aimed at developing an in vitro method able to provide rapid and reliable evaluation of drug absorption through the buccal mucosa. An absorption simulator apparatus equipped with an artificial membrane was purpose-developed by experimental design. The apparent permeation coefficient (Papp) through excised porcine buccal mucosa of naproxen, selected as the model drug, was the target value to reproduce with the artificial membrane. The multivariate approach enabled systematic evaluation of the effect on the response (Papp) of simultaneous variations of the variables (type of lipid components used for support impregnation and their relative amounts). A screening phase followed by a response-surface study allowed optimization of the lipid-mixture composition to obtain the desired Papp value, and definition of a design space where all mixture-component combinations fulfilled the desired target at a fixed probability level. The method offers a useful tool for quick screening in the early stages of drug discovery and/or in preformulation studies, improving efficiency and the chance of success in the development of buccal delivery systems. Further studies with other model drugs are planned to confirm the buccal absorption predictive capacity of the developed membrane. Copyright © 2018 Elsevier B.V. All rights reserved.