Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
Optimizing Experimental Designs: Finding Hidden Treasure.
Technology Transfer Automated Retrieval System (TEKTRAN)
Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...
Achieving optimal SERS through enhanced experimental design
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.
2016-01-01
One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is no single set of SERS conditions that is universal. This means that experimental optimisation for the optimum SERS response is essential. Most researchers optimise one factor at a time, where a single parameter is altered before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905
Ceramic processing: Experimental design and optimization
NASA Technical Reports Server (NTRS)
Weiser, Martin W.; Lauben, David N.; Madrid, Philip
1992-01-01
The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
Simultaneous optimal experimental design for in vitro binding parameter estimation.
Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C
2013-10-01
We performed simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared with the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as with the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
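As a toy illustration of the D-criterion used in studies like the one above (not the PopED workflow itself), the sketch below scores candidate pairs of measurement times for a hypothetical one-site association model y = Bmax(1 - exp(-k*t)) by the determinant of J'J, the Fisher information matrix under unit error variance. All parameter values and time grids here are assumptions for illustration.

```python
import itertools
import numpy as np

def model(theta, t):
    # One-site association curve: y = Bmax * (1 - exp(-k_obs * t))
    bmax, k = theta
    return bmax * (1.0 - np.exp(-k * t))

def d_criterion(theta, times, eps=1e-6):
    # Determinant of J'J (Fisher information, unit error variance),
    # with the Jacobian J built by central finite differences.
    t = np.asarray(times, dtype=float)
    J = np.empty((t.size, len(theta)))
    for j in range(len(theta)):
        hi = list(theta); lo = list(theta)
        hi[j] += eps; lo[j] -= eps
        J[:, j] = (model(hi, t) - model(lo, t)) / (2 * eps)
    return np.linalg.det(J.T @ J)

theta0 = (10.0, 0.5)                      # nominal Bmax and k_obs (assumed)
candidates = np.linspace(0.5, 20.0, 40)   # candidate sampling times
best = max(itertools.combinations(candidates, 2),
           key=lambda ts: d_criterion(theta0, ts))
print(best)
```

The brute-force pair search stands in for a real optimizer; the pattern of the winning design (one time near the curve's bend, one late time) is what a D-optimal two-point design for this model looks like.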
Optimal active vibration absorber: Design and experimental results
NASA Technical Reports Server (NTRS)
Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.
1992-01-01
An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.
Optimizing an experimental design for an electromagnetic experiment
NASA Astrophysics Data System (ADS)
Roux, Estelle; Garcia, Xavier
2013-04-01
Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we obtain about the target and reducing the cost of the experiment, subject to a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). A distinctive feature of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations has the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we obtain about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
Experimental Verification of Structural-Acoustic Modelling and Design Optimization
NASA Astrophysics Data System (ADS)
MARBURG, S.; BEER, H.-J.; GIER, J.; HARDTKE, H.-J.; RENNERT, R.; PERRET, F.
2002-05-01
A number of papers have been published on the simulation of structural-acoustic design optimization. However, extensive work is required to verify these results in practical applications. Herein, a steel box of 1.0×1.1×1.5 m with an external beam structure welded on three surface plates was investigated. This investigation included experimental modal analysis and experimental measurements of certain noise transfer functions (sound pressure at points inside the box due to force excitation at the beam structure). Using these experimental data, the finite element model of the structure was tuned to provide similar results. With a first structural mode at less than 20 Hz, the reliable frequency range was identified as extending up to about 60 Hz. Evidently, the finite element model could not be further improved by mesh refinement alone. The tuning process is explained in detail, since a number of changes helped to improve the model while others did not. Although the box might appear to be a rather simple structure, it must be considered a complex one for simulation purposes. A defined modification of the physical model verified the simulation model. In a final step, the optimal location of stiffening beam structures was predicted by simulation, and their effect on the noise transfer function was experimentally verified. This paper critically discusses modelling techniques that are applied for the structural-acoustic simulation of sedan bodies.
Prediction uncertainty and optimal experimental design for learning dynamical systems
NASA Astrophysics Data System (ADS)
Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.
2016-06-01
Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
Optimization of preservatives in a topical formulation using experimental design.
Rahali, Y; Pensé-Lhéritier, A-M; Mielcarek, C; Bensouda, Y
2009-12-01
Optimizing the preservative regime for a preparation requires the antimicrobial effectiveness of several preservative combinations to be determined. In this study, three preservatives were tested: benzoic acid, sorbic acid and benzyl alcohol. Their preservative effects were evaluated using the antimicrobial preservative efficacy test (challenge test) of the European Pharmacopeia (EP). A D-optimal mixture design was used to provide maximum information from a limited number of experiments. The results of this study were analysed with the help of the Design Expert software and enabled us to formulate emulsions satisfying both requirements A and B of the EP.
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Sánchez, M S; Sarabia, L A; Ortiz, M C
2012-11-19
Experimental designs for a given task should be selected on the basis of the problem being solved and of criteria that measure their quality. There are several such criteria because there are several aspects to be taken into account when making a choice. The most used criteria are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria related to the joint estimation of the coefficients, or the I- and G-criteria related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these several criteria that measure the performance of the design in different respects. Technically this is a problem of multi-criteria optimization, which can be tackled from different viewpoints. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an algorithm based on evolutionary algorithms to find the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them Pareto-optimal in the criteria required by the user. Besides, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, as is usual in exchange algorithms.
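The Pareto-optimality test at the heart of such multi-criteria approaches can be sketched directly. The function below is a generic non-dominated filter, not the authors' evolutionary algorithm, and the design scores (an information criterion and a cost, both to be minimized) are hypothetical.

```python
import numpy as np

def pareto_front(scores):
    # scores: (n_designs, n_criteria), all criteria to be minimized.
    # A design is Pareto-optimal if no other design is <= in every
    # criterion and strictly < in at least one.
    s = np.asarray(scores, dtype=float)
    keep = []
    for i, row in enumerate(s):
        dominated = np.any(np.all(s <= row, axis=1) &
                           np.any(s < row, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Toy example: [1 - D-criterion quality, survey cost] for four designs
scores = [[1.0, 5.0],   # cheap but poor information
          [0.4, 9.0],
          [0.4, 12.0],  # dominated by the design above it
          [0.2, 14.0]]
print(pareto_front(scores))  # -> [0, 1, 3]
```

An evolutionary algorithm would apply a filter like this to each generation, keeping the non-dominated designs as the evolving estimate of the Pareto front.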
Experimental verification of Space Platform battery discharger design optimization
NASA Technical Reports Server (NTRS)
Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.
1991-01-01
The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.
OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS
Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J
2005-08-31
For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDSs. However, DDSs cannot stop large, fast-moving fragments. We have experimentally demonstrated one shrapnel mitigation technique, showing that it is possible to direct fast-moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDSs. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.
1999-01-01
Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.
Analytical and experimental performance of optimal controller designs for a supersonic inlet
NASA Technical Reports Server (NTRS)
Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.
1973-01-01
The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.
Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah
2014-10-16
The D-optimal mixture experimental design was employed to optimize the melting point of a natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components, pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w), was investigated with respect to the melting point of the lipstick formulation. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factors, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point) owing to its role in heat endurance. The quadratic polynomial model fit the experimental data adequately.
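A quadratic (Scheffé-type) mixture model of the kind fitted above can be sketched with ordinary least squares. The component fractions and melting-point responses below are invented for illustration and are not the paper's data; a three-component mixture stands in for the five-component lipstick formulation.

```python
import numpy as np
from itertools import combinations

def scheffe_design_matrix(X):
    # Scheffé quadratic mixture model: y = sum(b_i x_i) + sum(b_ij x_i x_j).
    # Columns: the component fractions plus all pairwise products.
    X = np.asarray(X, dtype=float)
    pairs = [X[:, i] * X[:, j]
             for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack([X] + pairs)

# Hypothetical 3-component fractions (each row sums to 1) and
# hypothetical melting-point responses in deg C.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3]])
y = np.array([60.0, 45.0, 70.0, 50.0, 66.0, 55.0, 56.0])

A = scheffe_design_matrix(X)                  # 7 runs x 6 coefficients
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
print(round(float(np.linalg.norm(A @ coef - y)), 4))
```

Note the model has no intercept: because the fractions sum to one, a constant term would be redundant, which is the standard Scheffé parameterization a D-optimal mixture design targets.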
Chang, Liang-Cheng; Chu, Hone-Jay; Lin, Yu-Pin; Chen, Yu-Wen
2010-10-01
This research develops an optimal design model of a groundwater network using a genetic algorithm (GA) and a modified Newton approach, based on the experimental design concept. The goal of experimental design is to minimize parameter uncertainty, represented by the determinant of the covariance matrix of the estimated parameters. The design problem is constrained by a specified cost and solved by the GA and a parameter identification model. The latter estimates optimal parameter values and the associated sensitivity matrices. The general problem is simplified into two classes of network design problems: an observation network design problem and a pumping network design problem. Results explore the relationship between the experimental design and the physical processes. The proposed model provides an alternative for solving optimization problems in groundwater experimental design. PMID:19757116
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN
An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...
Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction
NASA Astrophysics Data System (ADS)
Ushijima, T.; Cheng, W.; Yeh, W. W.
2012-12-01
An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem which employs a maximal information criterion to choose among competing designs subject to the specified design constraints. Because of the combinatorial search required in this optimization problem, given a realistic, large-scale groundwater model, the dimensionality of the optimal design problem becomes very large and can be difficult if not impossible to solve using mathematical programming techniques such as integer programming or the simplex method with relaxation. Global search techniques, such as genetic algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls to the groundwater model, this approach may still be infeasible for finding the optimal design in a realistic groundwater model. Proper Orthogonal Decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. For a one-dimensional test case, the GA, integer programming, and an exhaustive search produced identical results, demonstrating that the GA is a valid method for a global optimum search and has potential for solving large-scale optimal design problems. Additional results show that the algorithm using the GA with POD model reduction finds the optimal solution several orders of magnitude faster than the same algorithm without POD model reduction. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.
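A minimal sketch of POD model reduction via the singular value decomposition, under assumptions: the snapshot matrix is synthetic (random rank-3 data standing in for correlated groundwater model runs), and the basis is truncated at the numerical rank rather than by an energy threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: columns are full model states (e.g. heads at 200
# grid nodes) from 30 training runs; its rank is low because the runs
# are strongly correlated (here, exactly rank 3 by construction).
modes = rng.standard_normal((200, 3))
weights = rng.standard_normal((3, 30))
snapshots = modes @ weights

# POD basis: left singular vectors of the snapshot matrix, truncated
# at the numerical rank (where the singular values collapse).
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(svals > 1e-10 * svals[0]))
basis = U[:, :r]

# The reduced model works in r coordinates instead of 200.
state = snapshots[:, 0]
reduced = basis.T @ state        # project down to r dimensions
recon = basis @ reduced          # lift back; exact for in-span states
print(r)                         # -> 3
```

In the design loop, each GA candidate is then evaluated against the r-dimensional reduced model instead of the full model, which is where the reported orders-of-magnitude speedup comes from.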
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.
1993-01-01
An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers: static, dynamic, and LQG. The nominal phase-0 Controls-Structures Interaction Evolutionary Model (CEM) structure is redesigned to minimize the average control power required to maintain a specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesigned structure, the phase-1 CEM, was assembled and tested against the phase-0 CEM. It is analytically and experimentally demonstrated that the integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control design is demonstrated.
Krstulovich, S.F.
1987-10-31
This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis.
Hadidi, Naghmeh; Kobarfard, Farzad; Nafissi-Varcheh, Nastaran; Aboofazeli, Reza
2011-01-01
In this study, noncovalent functionalization of single-walled carbon nanotubes (SWCNTs) with phospholipid-polyethylene glycols (Pl-PEGs) was performed to improve the solubility of SWCNTs in aqueous solution. Two kinds of PEG derivatives, i.e., Pl-PEG 2000 and Pl-PEG 5000, were used for the PEGylation process. An experimental design technique (D-optimal design and second-order polynomial equations) was applied to investigate the effect of variables on PEGylation and the solubility of SWCNTs. The type of PEG derivative was selected as a qualitative parameter, and the PEG/SWCNT weight ratio and sonication time were applied as quantitative variables for the experimental design. Optimization was performed for two responses, aqueous solubility and loading efficiency. The grafting of PEG to the carbon nanostructure was determined by thermogravimetric analysis, Raman spectroscopy, and scanning electron microscopy. Aqueous solubility and loading efficiency were determined by ultraviolet-visible spectrophotometry and measurement of free amine groups, respectively. Results showed that Pl-PEGs were grafted onto SWCNTs. Aqueous solubility of 0.84 mg/mL and loading efficiency of nearly 98% were achieved for the prepared Pl-PEG 5000-SWCNT conjugates. Evaluation of functionalized SWCNTs showed that our noncovalent functionalization protocol could considerably increase aqueous solubility, which is an essential criterion in the design of a carbon nanotube-based drug delivery system and its biodistribution.
Optimal design and experimental analyses of a new micro-vibration control payload-platform
NASA Astrophysics Data System (ADS)
Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen
2016-07-01
This paper presents a new payload-platform for precision devices, which possesses the capability of isolating complex space micro-vibration in the low-frequency range below 5 Hz. The novel payload-platform, equipped with smart material actuators, is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic circuit nonlinearity. Then, the dynamic model of the driving element is established using the Lagrange method, and the performance of the designed payload-platform is further discussed through the combination of a controlled auto-regressive moving average (CARMA) model with a modified generalized predictive control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform has impressive potential for micro-vibration isolation.
Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration
NASA Technical Reports Server (NTRS)
Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.
1999-01-01
The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FLO67 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database of off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free stream Mach number, M(sub infinity), of 2.55 as well as the design Mach number, M(sub infinity)=2.4. Data over a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Transonic and low supersonic Mach number data, M(sub infinity)=0.6 to 1.2, were gathered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identified the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.
Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests
NASA Astrophysics Data System (ADS)
Roux, E.; Garcia, X.
2014-04-01
Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, given a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration, which motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics and its potential for reservoir monitoring.
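The non-dominated sorting at the core of NSGA-II rests on Pareto dominance between objective vectors. A minimal sketch of that idea (the objective values are invented for illustration and are not from the CSEM survey problem):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep the non-dominated designs, as NSGA-II's first front would.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two objectives to minimise, e.g. survey cost and model-resolution error
# (hypothetical values for five candidate designs).
designs = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(designs))  # [(1, 5), (2, 3), (4, 1)]
```

The full algorithm adds crowding-distance sorting and genetic operators on top of this dominance test.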
Ahmad, A L; Ideris, N; Ooi, B S; Low, S C; Ismail, A
2016-01-01
Statistical experimental design was employed to optimize the preparation conditions of polyvinylidene fluoride (PVDF) membranes. The three variables considered were polymer concentration, dissolving temperature, and casting thickness; the response variable was membrane-protein binding. The optimum preparation for the PVDF membrane was a polymer concentration of 16.55 wt%, a dissolving temperature of 27.5°C, and a casting thickness of 450 µm. The statistical model exhibits a deviation between the predicted and actual responses of less than 5%. Further characterization of the formed PVDF membrane showed that its morphology was in line with the membrane-protein binding performance. PMID:27088961
Formulation and optimization by experimental design of eco-friendly emulsions based on d-limonene.
Pérez-Mosqueda, Luis M; Trujillo-Cayado, Luis A; Carrillo, Francisco; Ramírez, Pablo; Muñoz, José
2015-04-01
d-Limonene is a naturally occurring solvent that can replace more polluting chemicals in agrochemical formulations. In the present work, a comprehensive study of the influence of the dispersed phase mass fraction, ϕ, and of the surfactant/oil ratio, R, on the emulsion stability and droplet size distribution of d-limonene-in-water emulsions stabilized by a non-ionic triblock copolymer surfactant has been carried out. A 3² full factorial experimental design was conducted in order to optimize the emulsion formulation. The independent variables ϕ and R were studied in the ranges 10-50 wt% and 0.02-0.1, respectively. The emulsions studied were mainly destabilized by both creaming and Ostwald ripening. Therefore, the initial droplet size and an overall destabilization parameter, the so-called turbiscan stability index, were used as dependent variables. The optimal formulation, combining minimum droplet size and maximum stability, was achieved at ϕ=50 wt% and R=0.062. Furthermore, response surface methodology allowed us to obtain a formulation yielding sub-micron emulsions using a single-step rotor/stator homogenizer process instead of the more commonly used two-step emulsification methods. In addition, the optimal formulation was further improved against Ostwald ripening by adding silicone oil to the dispersed phase. The combination of these experimental findings allowed us to gain a deeper insight into the stability of these emulsions, which can be applied to the rational development of new formulations with potential application in agrochemicals.
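A 3² full factorial design simply enumerates every combination of the factor levels. A minimal sketch (the level values echo the ranges quoted above; the exact levels the authors ran are an assumption):

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate every run of a full factorial design.

    levels_per_factor: one list of level values per factor.
    Returns a list of tuples, one per experimental run.
    """
    return list(product(*levels_per_factor))

# Two factors at three levels each, spanning the quoted ranges:
# dispersed phase mass fraction phi (wt%) and surfactant/oil ratio R.
phi_levels = [10, 30, 50]        # wt% (assumed level spacing)
r_levels = [0.02, 0.06, 0.10]    # surfactant/oil ratio (assumed level spacing)
runs = full_factorial([phi_levels, r_levels])
print(len(runs))  # 3^2 = 9 runs
```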
Idris, Abubakr M.; Assubaie, Fahad N.; Sultan, Salah M.
2007-01-01
An experimental design optimization approach was utilized to develop a sequential injection analysis (SIA) method for promazine assay in bulk and pharmaceutical formulations. The method was based on the oxidation of promazine by Ce(IV) in sulfuric acid media, resulting in a spectrophotometrically detectable species at 512 nm. A 3³ full factorial design and response surface methods were applied to optimize the experimental conditions potentially controlling the analysis. The optimum conditions obtained were 1.0 × 10⁻⁴ M sulfuric acid, 0.01 M Ce(IV), and a 10 μL/s flow rate. Good analytical parameters were obtained, including a range of linearity of 1–150 μg/mL, linearity with correlation coefficient 0.9997, accuracy with mean recovery 98.2%, repeatability with RSD 1.4% (n = 7 consecutive injections), intermediate precision with RSD 2.1% (n = 5 runs over a week), limit of detection 0.34 μg/mL, limit of quantification 0.93 μg/mL, and a sampling frequency of 23 samples/h. The results were validated against the British Pharmacopoeia method, and comparable results were obtained. The provided SIA method offers the advantages of the technique with respect to rapidity, reagent/sample saving, and safety both in solution handling and for the environment. PMID:18350124
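The response surface methods mentioned here amount to fitting a second-order polynomial to the measured response by least squares. A toy sketch with synthetic, noiseless data (the factor names and coefficients are invented for illustration, not the promazine data):

```python
import numpy as np

def quadratic_design_matrix(x1, x2):
    # Second-order (quadratic) response-surface model in two coded factors:
    # y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
# Noiseless toy response with known coefficients:
y = 2.0 + 1.5 * x1 - 0.5 * x2 + 0.8 * x1 * x2 - 1.2 * x1**2 - 0.3 * x2**2
coef, *_ = np.linalg.lstsq(quadratic_design_matrix(x1, x2), y, rcond=None)
print(np.round(coef, 3))  # recovers [2.0, 1.5, -0.5, 0.8, -1.2, -0.3]
```

The stationary point of the fitted surface (set the gradient to zero) gives the predicted optimum conditions.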
Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana
2015-09-28
The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing the images captured from gel electrophoresis assays. A central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of the experimental data and response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as the concentrations of dsDNA and B-PEI as well as the initial pH of the solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic scale, a molecular dynamics simulation has been carried out. According to the computational results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between the macromolecules, stabilizing the polyplex structure.
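A central composite design combines a two-level factorial core with axial (star) points and replicated centre runs. A generic sketch in coded units (run counts are illustrative, not the authors' exact design):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=1):
    """Points of a circumscribed central composite design in coded units.

    k factors: 2^k factorial corners, 2k axial (star) points at distance
    alpha, plus replicated centre points. alpha defaults to the rotatable
    value (2^k)^(1/4).
    """
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = [tuple(map(float, c)) for c in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(tuple(pt))
    centers = [tuple([0.0] * k)] * n_center
    return corners + axial + centers

pts = central_composite(3, n_center=6)
print(len(pts))  # 8 corners + 6 axial + 6 centre points = 20 runs
```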
NASA Astrophysics Data System (ADS)
Ebigbo, Anozie; Padalkina, Katharina; Seidler, Ralf; Thorwart, Martin; Niederau, Jan; Marquart, Gabriele; Dini, Ivano
2015-04-01
We study an area of high heat flow adjacent to the Larderello-Travale and Mt. Amiata geothermal fields in southern Tuscany (Italy) with respect to conductive and advective heat transport in various rock units. We constructed a geological three-dimensional gridded model, assigned rock properties deduced from logging data in nearby boreholes and from petrophysical lab measurements on rock samples, and applied numerical simulation techniques to resolve the subsurface temperature field and identify rock units with high fluid flow. We calibrated the model with available temperature-depth data from a few shallow and two deep boreholes. We found two rock units (i.e. two depth regions) with permeabilities on the order of 10⁻¹⁴ m² and considerable fluid flow. In the upper region, fluid flow is mainly driven by topography-related pressure gradients, while in the deeper layer convective heat transport prevails, caused by a deep heat source due to a young granitic intrusion. In a second step, we study the problem of finding optimal sites for a slim hole to measure a temperature-depth profile for determining the (effective) permeability of a certain rock unit which is not intersected by the slim hole. This question is tackled with methods from optimal experimental design (OED) applied to the numerical simulation model. OED demands the calculation of the Fisher matrix, which depends on the slim hole location and the expected permeability of the rock unit in question. An optimization criterion allows finding the optimal locations for a slim hole that minimize the error in determining the permeability of the rock unit. For our study reservoir, optimal slim hole locations coincide with regions of high flow rates and large deviations from the mean temperature of the reservoir layer in question.
Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.
Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan
2014-01-01
The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets, using an experimental mixture design, for the treatment of Helicobacter pylori, enabled by prolonged gastric residence time and a controlled plasma level. Ten different formulations were generated based on different molecular weights of hypromellose (HPMC K100, K4M, K15M) using a simplex lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas-generating agents. Tablets were prepared by the wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th hour (CDR 8th) were statistically analyzed to obtain the optimized formulation (OF). The optimized formulation, which gave a floating lag time lower than 15 s and a total floating time of more than 10 h, was analyzed and compared with the target for CDR 8th (80%). Good agreement was shown between the predicted and actual values of CDR 8th, with a variation lower than 1%. The activity of the clarithromycin-containing optimized formula against H. pylori was quantified using a well diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and the clarithromycin activity. PMID:25272652
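A simplex lattice design places candidate mixtures at evenly spaced proportions that sum to one. A sketch for three components (the {3, 2} lattice degree is an assumption for illustration; the exact lattice used with the three HPMC grades is not stated here):

```python
from itertools import combinations_with_replacement

def simplex_lattice(q, m):
    """{q, m} simplex-lattice mixture design: proportions are multiples of
    1/m and sum to 1 across the q mixture components."""
    points = set()
    for combo in combinations_with_replacement(range(q), m):
        counts = [combo.count(i) for i in range(q)]
        points.add(tuple(c / m for c in counts))
    return sorted(points)

# Three components (e.g. three HPMC grades) in a {3, 2} lattice: the three
# pure components plus the three 50:50 binary blends.
for p in simplex_lattice(3, 2):
    print(p)
```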
Doehlert experimental design applied to optimization of light emitting textile structures
NASA Astrophysics Data System (ADS)
Oguz, Yesim; Cochrane, Cedric; Koncar, Vladan; Mordon, Serge R.
2016-07-01
A light emitting fabric (LEF) has been developed for photodynamic therapy (PDT) for the treatment of dermatologic diseases such as actinic keratosis (AK). A successful PDT requires homogeneous and reproducible light with controlled power and wavelength on the treated skin area. Due to the shape of the human body, traditional PDT with external light sources is unable to deliver homogeneous light everywhere on the skin (head vertex, hand, etc.). For better light delivery homogeneity, plastic optical fibers (POFs) have been woven into textile in order to emit the injected light laterally. Previous studies confirmed that the light power can be locally controlled by modifying the radius of POF macro-bendings within the textile structure. The objective of this study is to optimize the distribution of macro-bendings over the LEF surface in order to increase the light intensity (mW/cm2) and to guarantee the best possible light delivery homogeneity over the LEF, two goals that are often contradictory. Fifteen experiments have been carried out with a Doehlert experimental design involving response surface methodology (RSM). The proposed models are fitted to the experimental data to enable the optimal setup of the warp yarn tensions.
Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni
2010-06-01
The objective of this study was the application of the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting. A low-cost material, compost from the organic fraction of municipal solid waste, was used as amendment, with pyrene as model pollutant. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) organic matter h(-1)). Stable compost permitted almost complete degradation of pyrene in a short time (10 days). Results indicated that compost stability is a key parameter to optimize PAH biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) organic matter h(-1)) and soil:compost mixing ratio, respectively.
NASA Astrophysics Data System (ADS)
Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas
2016-04-01
The statistical significance of any model-data comparison strongly depends on the quality of the data used and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional, time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with the time and location of the measurements. We compared the obtained estimates of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and in the model output itself with respect to the uncertainty in the measurement data, using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
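The (Fisher) information matrix mentioned above can be assembled from the model's parameter sensitivities and the measurement covariance, and candidate measurement schedules compared via a D-optimality score. A toy sketch with an exponential-decay model (illustrative only, not the biogeochemical model itself):

```python
import numpy as np

def fisher_information(jacobian, noise_var):
    """Fisher information matrix for a least-squares parameter estimate:
    F = J^T W J, with W the inverse measurement covariance (diagonal here)."""
    W = np.diag(1.0 / np.asarray(noise_var))
    return jacobian.T @ W @ jacobian

def d_criterion(F):
    # D-optimality: maximise det(F), i.e. minimise the volume of the
    # parameter confidence ellipsoid.
    return np.linalg.det(F)

# Toy model y = exp(-k t) observed at candidate times; the sensitivity to
# the single parameter k is dy/dk = -t exp(-k t).
k = 0.5
def jac(times):
    t = np.asarray(times, float)
    return (-t * np.exp(-k * t)).reshape(-1, 1)

early = [0.1, 0.2, 0.3]   # all observations bunched at the start
spread = [0.5, 2.0, 4.0]  # observations spread over the decay
var = [0.01] * 3
print(d_criterion(fisher_information(jac(early), var)) <
      d_criterion(fisher_information(jac(spread), var)))  # True: spread times are more informative
```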
Development and optimization of quercetin-loaded PLGA nanoparticles by experimental design
TEFAS, LUCIA RUXANDRA; TOMUŢĂ, IOAN; ACHIM, MARCELA; VLASE, LAURIAN
2015-01-01
Background and aims Quercetin is a flavonoid with good antioxidant activity, and exhibits various important pharmacological effects. The aim of the present work was to study the influence of formulation factors on the physicochemical properties of quercetin-loaded polymeric nanoparticles in order to optimize the formulation. Materials and methods The nanoparticles were prepared by the nanoprecipitation method. A 3-factor, 3-level Box-Behnken design was employed in this study considering poly(D,L-lactic-co-glycolic) acid (PLGA) concentration, polyvinyl alcohol (PVA) concentration and the stirring speed as independent variables. The responses were particle size, polydispersity index, zeta potential and encapsulation efficiency. Results The PLGA concentration seemed to be the most important factor influencing quercetin-nanoparticle characteristics. Increasing PLGA concentration led to an increase in particle size, as well as encapsulation efficiency. On the other hand, it exhibited a negative influence on the polydispersity index and zeta potential. The PVA concentration and the stirring speed had only a slight influence on particle size and polydispersity index. However, PVA concentration had an important negative effect on the encapsulation efficiency. Based on the results obtained, an optimized formulation was prepared, and the experimental values were comparable to the predicted ones. Conclusions The overall results indicated that PLGA concentration was the main factor influencing particle size, while entrapment efficiency was predominantly affected by the PVA concentration. PMID:26528074
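A three-factor, three-level Box-Behnken design of the kind used here runs a 2² factorial on each pair of factors while holding the remaining factor at its centre level, plus centre replicates. A sketch in coded units (the centre-point count is a free choice, not the authors' exact number):

```python
from itertools import combinations, product

def box_behnken(k, n_center=1):
    """Box-Behnken design in coded units: for each pair of factors run the
    2^2 factorial at +/-1 with all other factors held at 0, plus centre runs."""
    runs = []
    for pair in combinations(range(k), 2):
        for levels in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[pair[0]], pt[pair[1]] = levels
            runs.append(tuple(pt))
    runs += [tuple([0] * k)] * n_center
    return runs

# Three factors (e.g. PLGA conc., PVA conc., stirring speed in coded units):
design = box_behnken(3, n_center=3)
print(len(design))  # 3 pairs x 4 runs + 3 centre points = 15 runs
```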
Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar
2014-01-01
Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows a poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the poor bioavailability issue with the drug. A response surface methodology approach was applied for the optimization of the transfersomes, using a Box-Behnken experimental design. Phospholipon® 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as the barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, with an entrapment efficiency of 91.00%±4.90%, and a transdermal flux of 6.5±1.1 μg/cm2/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of the amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes and an ethanolic phosphate-buffered saline. A differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy proved an enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, as compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride
Lewis, Jeffrey; Sjöstrom, Jan
2010-06-25
Soil column experiments in both the saturated and unsaturated regimes are widely used for applied and theoretical studies in such diverse fields as transport model evaluation, fate and transport of pesticides, explosives, microbes, heavy metals and non aqueous phase liquids, and for evapotranspiration studies. The apparent simplicity of constructing soil columns conceals a number of technical issues which can seriously affect the outcome of an experiment, such as the presence or absence of macropores, artificial preferential flow paths, non-ideal infiltrate injection and unrealistic moisture regimes. This review examines the literature to provide an analysis of the state of the art for constructing both saturated and unsaturated soil columns. Common design challenges are discussed and best practices for potential solutions are presented. This article discusses both basic principles and the practical advantages and disadvantages of various experimental approaches. Both repacked and monolith-type columns are discussed. The information in this review will assist soil scientists, hydrogeologists and environmental professionals in optimizing the construction and operation of soil column experiments in order to achieve their objectives, while avoiding serious design flaws which can compromise the integrity of their results. PMID:20452088
Drover, Vincent J; Bottaro, Christina S
2008-12-01
A suite of 12 widely used pharmaceuticals (ibuprofen, diclofenac, naproxen, bezafibrate, gemfibrozil, ofloxacin, norfloxacin, carbamazepine, primidone, sulphamethazine, sulphadimethoxine and sulphamethoxazole) commonly found in environmental waters were separated by highly sulphated CD-modified MEKC (CD-MEKC) with UV detection. An experimental design method, face-centred composite design, was employed to minimize run time without sacrificing resolution. Using an optimized BGE composed of 10 mM ammonium hydrogen phosphate, pH 11.5, 69 mM SDS, 6 mg/mL sulphated beta-CD and 8.5% v/v isopropanol, a separation voltage of 30 kV and a 48.5 cm x 50 microm id bare silica capillary at 30 degrees C allowed baseline separation of the 12 analytes in a total analysis time of 6.7 min. Instrument LODs in the low milligram per litre range were obtained, and when combined with offline preconcentration by SPE, LODs were between 4 and 30 microg/L.
Trovó, Alam G; Silva, Tatiane F S; Gomes, Oswaldo; Machado, Antonio E H; Neto, Waldomiro Borges; Muller, Paulo S; Daniel, Daniela
2013-01-01
The degradation of caffeine in different kinds of effluents via the photo-Fenton process was investigated at lab scale and in a solar pilot plant. The treatment conditions (caffeine, Fe(2+) and H(2)O(2) concentrations) were defined by experimental design. The optimized conditions for each variable, obtained using the response factor (% mineralization), were: 52.0 mg L(-1) caffeine, 10.0 mg L(-1) Fe(2+) and 42.0 mg L(-1) H(2)O(2) (replaced in kinetic experiments). Under these conditions, in ultrapure water (UW), the caffeine concentration reached the quantitation limit (0.76 mg L(-1)) after 20 min, and 78% mineralization was obtained after 120 min of reaction. Using the same conditions, the matrix influence (surface water, SW, and sewage treatment plant effluent, STP) on caffeine degradation was also evaluated. Total removal of caffeine in SW was reached in the same time as in UW (after 20 min), while 40 min was necessary in STP. Although lower mineralization rates were verified for a high organic load, under the same operational conditions less H(2)O(2) was necessary to mineralize the dissolved organic carbon as the initial organic load increased. A high efficiency of the photo-Fenton process was also observed in caffeine degradation by solar photocatalysis using a CPC reactor, with intermediates of low toxicity, demonstrating that the photo-Fenton process can be a viable alternative for caffeine removal from wastewater.
NASA Astrophysics Data System (ADS)
Hu, Enzhu; Bartsev, Sergey I.; Zhao, Ming; Liu, Professor Hong
The conceptual scheme of an experimental bioregenerative life support system (BLSS) for planetary exploration was designed, consisting of four elements: human metabolism, higher plants, silkworms and waste treatment. Fifteen kinds of higher plants, such as wheat, rice, soybean, lettuce and mulberry, were selected as the regenerative component of the BLSS, providing the crew with air, water and vegetable food. Silkworms, which produce animal nutrition for the crew, were fed mulberry leaves during the first three instars and lettuce leaves during the last two instars. The inedible biomass of the higher plants, human wastes and silkworm feces were composted into a soil-like substrate, which can be reused for higher-plant cultivation. Salt, sugar and some household materials such as soap and shampoo would be provided from outside. To support the steady state of the BLSS, the same amount and elementary composition of dehydrated wastes were removed periodically. The balance of matter flows between the BLSS components was described by a system of algebraic equations. The mass flows between the components were optimized in Excel spreadsheets using the Solver tool. The numerical method used in this study was Newton's method.
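Newton's method for a system of balance equations can be sketched as follows; the two-flow balance below is invented for illustration and is not the authors' actual BLSS system:

```python
import numpy as np

def newton_solve(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a system of nonlinear balance equations f(x) = 0."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # Newton step: solve J(x) * dx = f(x), then x <- x - dx.
        x = x - np.linalg.solve(jac(x), fx)
    return x

# Hypothetical two-flow balance: the flows must satisfy
# x0 + x1 = 3 (total mass) and x0 * x1 = 2 (a coupling constraint).
f = lambda x: np.array([x[0] + x[1] - 3.0, x[0] * x[1] - 2.0])
jac = lambda x: np.array([[1.0, 1.0], [x[1], x[0]]])
print(np.round(newton_solve(f, jac, [0.5, 2.5]), 6))  # converges to flows [1, 2]
```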
Integrated Bayesian Experimental Design
NASA Astrophysics Data System (ADS)
Fischer, R.; Dreier, H.; Dinklage, A.; Kurzan, B.; Pasch, E.
2005-11-01
Any scientist planning experiments wants to optimize the design of a future experiment with respect to best performance within the scheduled experimental scenarios. Bayesian Experimental Design (BED) aims at finding optimal experimental settings based on an information-theoretic utility function. Optimal design parameters are found by maximizing an expected utility function in which the future data and the parameters of the physical scenarios of interest are marginalized. The goal of the Integrated Bayesian Experimental Design (IBED) concept is to combine experiments as early as the design phase in order to mutually exploit the benefits of the other experiments. The Bayesian Integrated Data Analysis (IDA) concept of linking interdependent measurements, which provides a validated data base and exploits synergetic effects, will be used to design meta-diagnostics. An example is given by the Thomson scattering (TS) and interferometry (IF) diagnostics individually, and by a set of both. In finding the optimal experimental design for the meta-diagnostic, TS and IF, the strengths of both experiments can be combined to synergistically increase the reliability of the results.
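For the simplest linear-Gaussian measurement model the expected utility (expected information gain) of a design has a closed form, which makes the BED idea concrete; realistic cases like the TS/IF meta-diagnostic marginalize future data numerically. A hedged sketch (the model and numbers are illustrative only):

```python
import math

def expected_info_gain(design_gain, prior_var, noise_var):
    """Expected Kullback-Leibler information gain for the conjugate
    linear-Gaussian model y = g*theta + noise, theta ~ N(0, prior_var),
    noise ~ N(0, noise_var).  The posterior variance is
    (1/prior_var + g^2/noise_var)^-1, so the expected utility reduces to
    0.5 * ln(1 + g^2 * prior_var / noise_var) with no Monte Carlo needed."""
    return 0.5 * math.log(1.0 + design_gain**2 * prior_var / noise_var)

# A design whose measurement is more sensitive to the parameter (larger g)
# has a higher expected utility.
print(expected_info_gain(2.0, 1.0, 0.1) > expected_info_gain(0.5, 1.0, 0.1))  # True
```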
Kośmider, Alicja; Białas, Wojciech; Kubiak, Piotr; Drożdżyńska, Agnieszka; Czaczyk, Katarzyna
2012-02-01
A two-step statistical experimental design was employed to optimize the medium for vitamin B(12) production from crude glycerol by Propionibacterium freudenreichii ssp. shermanii. In the first step, using Plackett-Burman design, five of 13 tested medium components (calcium pantothenate, NaH(2)PO(4)·2H(2)O, casein hydrolysate, glycerol and FeSO(4)·7H(2)O) were identified as factors having significant influence on vitamin production. In the second step, a central composite design was used to optimize levels of medium components selected in the first step. Valid statistical models describing the influence of significant factors on vitamin B(12) production were established for each optimization phase. The optimized medium provided a 93% increase in final vitamin concentration compared to the original medium. PMID:22178491
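A Plackett-Burman screening design of the kind used in the first step is built from a cyclic generator row. The 13 factors screened in this study would need at least a 16-run design; the standard 12-run construction below (up to 11 factors) shows the principle:

```python
def plackett_burman12():
    """12-run Plackett-Burman design for up to 11 two-level factors, built
    by cyclically shifting the standard generator row and appending a final
    row of all -1."""
    gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]  # 11 cyclic shifts
    rows.append([-1] * 11)                           # closing low-level row
    return rows

design = plackett_burman12()
# Every factor column is balanced: six +1 and six -1 settings.
print(all(sum(col) == 0 for col in zip(*design)))  # True
```

Main effects are then estimated as the difference between the mean response at the high and low settings of each column.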
Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos
2011-09-01
The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture contents on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O(2) uptake rates, CO(2) evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste mixtures at 1:1 ratio (ww, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started from around 10, for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. PMID:21565440
Optimal experimental design for nano-particle atom-counting from high-resolution STEM images.
De Backer, A; De Wael, A; Gonnissen, J; Van Aert, S
2015-04-01
In the present paper, the principles of detection theory are used to quantify the probability of error for atom-counting from high-resolution scanning transmission electron microscopy (HR STEM) images. Binary and multiple hypothesis testing have been investigated in order to determine the limits to the precision with which the number of atoms in a projected atomic column can be estimated. The probability of error has been calculated when using STEM images, scattering cross-sections or peak intensities as a criterion to count atoms. Based on this analysis, we conclude that scattering cross-sections perform almost as well as images and better than peak intensities. Furthermore, the optimal STEM detector design can be derived for atom-counting using the expression for the probability of error. We show that for very thin objects LAADF is optimal and that for thicker objects the optimal inner detector angle increases.
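The binary-hypothesis probability of error for deciding between columns of n and n+1 atoms can be illustrated with two equiprobable Gaussian hypotheses on, say, a scattering cross-section. A simplified sketch (equal variances assumed; the paper's image-based statistics are richer):

```python
import math

def prob_error_two_gaussians(mu1, mu2, sigma):
    """Minimum probability of error for deciding between two equiprobable
    Gaussian hypotheses N(mu1, sigma^2) vs N(mu2, sigma^2), e.g. the
    scattering cross-section of n vs n+1 atoms.  With separation
    d = |mu2 - mu1| / sigma, the optimal (midpoint) threshold gives
    Pe = Q(d/2) = 0.5 * erfc(d / (2*sqrt(2)))."""
    d = abs(mu2 - mu1) / sigma
    return 0.5 * math.erfc(d / (2.0 * math.sqrt(2.0)))

# Better-separated column signals give a smaller counting error.
print(prob_error_two_gaussians(10.0, 14.0, 1.0) <
      prob_error_two_gaussians(10.0, 11.0, 1.0))  # True
```

A detector geometry that widens the gap between the n and n+1 signal distributions therefore lowers the atom-counting error, which is the sense in which the detector design can be optimized.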
NASA Astrophysics Data System (ADS)
Russo, S.; Krastev, V. K.; Jannelli, E.; Falcucci, G.
2016-06-01
In this work, the design and optimization of an experimental test bench for the characterization of impulsive water-entry problems are presented. Currently, most experimental apparatus allow impact tests only under specific conditions. Our test bench allows testing of rigid and compliant bodies and performing experiments on floating or sinking structures, in free fall or under dynamic motion control. The experimental apparatus is characterized by the adoption of accelerometers, encoders, position sensors and, above all, FBG (fiber Bragg grating) sensors that, together with a high-speed camera, provide accurate and fast data acquisition for the dissection of structural deformations and hydrodynamic loadings under a broad set of experimental conditions.
Huang, S K; Garza, N R
1995-06-01
Optimization of both sensitivity and ionization softness for the Hewlett-Packard particle-beam liquid chromatography-mass spectrometry interface has been achieved by using a statistical experimental design with response surface modeling. Conditions for both optimized sensitivity and ionization softness were found to occur at 55-lb/in.(2) nebulizer flow, 35°C desolvation chamber temperature with approximately 45% organic modifier in the presence of 0.02-F ammonium acetate and a liquid chromatography flow rate of 0.2 mL/min.
NASA Astrophysics Data System (ADS)
Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok
2015-08-01
In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, total mass of the MR brake, and cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rate. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, mechanical performances regarding the response time, braking torque and cruising temperature are experimentally evaluated.
NASA Astrophysics Data System (ADS)
Singh, Harinder J.; Hu, Wei; Wereley, Norman M.; Glass, William
2014-12-01
A linear stroke adaptive magnetorheological energy absorber (MREA) was designed, fabricated and tested for intense impact conditions with piston velocities up to 8 m s-1. The performance of the MREA was characterized using dynamic range, which is defined as the ratio of maximum on-state MREA force to the off-state MREA force. Design optimization techniques were employed in order to maximize the dynamic range at high impact velocities such that MREA maintained good control authority. Geometrical parameters of the MREA were optimized by evaluating MREA performance on the basis of a Bingham-plastic analysis incorporating minor losses (BPM analysis). Computational fluid dynamics and magnetic FE analysis were conducted to verify the performance of passive and controllable MREA force, respectively. Subsequently, high-speed drop testing (0-4.5 m s-1 at 0 A) was conducted for quantitative comparison with the numerical simulations. Refinements to the nonlinear BPM analysis were carried out to improve prediction of MREA performance.
Imandi, Sarat Babu; Bandaru, Veera Venkata Ratnam; Somalanka, Subba Rao; Bandaru, Sita Ramalakshmi; Garapati, Hanumantha Rao
2008-07-01
Statistical experimental designs were applied for the optimization of medium constituents for citric acid production by Yarrowia lipolytica NCIM 3589 in solid state fermentation (SSF) using pineapple waste as the sole substrate. Using Plackett-Burman design, yeast extract, moisture content of the substrate, KH(2)PO(4) and Na(2)HPO(4) were identified as significant variables which highly influenced citric acid production and these variables were subsequently optimized using a central composite design (CCD). The optimum conditions were found to be yeast extract 0.34 (%w/w), moisture content of the substrate 70.71 (%), KH(2)PO(4) 0.64 (%w/w) and Na(2)HPO(4) 0.69 (%w/w). Citric acid production at these optimum conditions was 202.35 g/kg ds (g citric acid produced/kg of dried pineapple waste as substrate).
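A Plackett-Burman screening design like the one used above to pick out the significant medium constituents can be generated from a standard cyclic generator row. The 12-run design below handles up to 11 two-level factors; it is a generic sketch of the construction, not the authors' exact matrix.

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors:
    11 cyclic shifts of the standard generator row plus an all-minus row."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

# Each column is balanced (six +1, six -1) and every pair of columns is
# orthogonal, so 11 main effects can be screened with only 12 runs before
# the few significant factors move on to a central composite design.
design = plackett_burman_12()
```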
Hajji, Mohamed; Rebai, Ahmed; Gharsallah, Néji; Nasri, Moncef
2008-07-01
Medium composition and culture conditions for bleach-stable alkaline protease production by Aspergillus clavatus ES1 were optimized. Two statistical methods were used. A Plackett-Burman design was applied to find the key ingredients and conditions for the best yield. Response surface methodology (RSM) including a full factorial design was used to determine the optimal concentrations and conditions. Results indicated that Mirabilis jalapa tubers powder (MJTP), culture temperature, and initial medium pH had significant effects on the production. Under the proposed optimized conditions, the protease experimental yield (770.66 U/ml) closely matched the yield predicted by the statistical model (749.94 U/ml) with R(2)=0.98. The optimum operating conditions obtained from the RSM were an MJTP concentration of 10 g/l, pH 8.0, and a temperature of 30 degrees C; Sardinella heads and viscera flour (SHVF) and other salts were used at low levels. The medium optimization contributed about a 14.0-fold higher yield than that of the unoptimized medium (starch 5 g/l, yeast extract 2 g/l, temperature 30 degrees C, and pH 6.0; 56 U/ml). More interestingly, the optimization was carried out with by-product sources, which may result in cost-effective production of alkaline protease by the strain.
Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie
2016-01-01
The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges. PMID:27458364
Venkata Mohan, S; Venkateswar Reddy, M
2013-01-01
Optimizing different factors is crucial for enhancement of mixed culture bioplastics (polyhydroxyalkanoates (PHA)) production. Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence and specific function of eight important factors (iron, glucose concentration, VFA concentration, VFA composition, nitrogen concentration, phosphorous concentration, pH, and microenvironment) on bioplastics production. Factor variation was considered at three levels (2(1) × 3(7)) with a symbolic experimental matrix [L(18); 18 experimental trials]. All the factors were assigned three levels except iron concentration (2(1)). Among all the factors, microenvironment influenced bioplastics production substantially (contributing 81%), followed by pH (11%) and glucose concentration (2.5%). Validation experiments were performed with the obtained optimum conditions, which resulted in improved PHA production. Good substrate degradation (as COD) of 68% was registered during PHA production. Dehydrogenase and phosphatase enzymatic activities were monitored during process operation. PMID:23201522
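The percentage contributions reported above (e.g., microenvironment at 81%, pH at 11%) come from partitioning the total variation in the response among factors, as in a Taguchi-style ANOVA. A minimal sketch of that calculation follows, using toy data rather than the study's measurements.

```python
def percent_contribution(levels, response):
    """Share of the total sum of squares explained by one factor
    (between-level SS / total SS), used to rank factors in a Taguchi
    orthogonal-array analysis."""
    n = len(response)
    grand = sum(response) / n
    ss_total = sum((y - grand) ** 2 for y in response)
    ss_factor = 0.0
    for lev in set(levels):
        ys = [y for l, y in zip(levels, response) if l == lev]
        mean = sum(ys) / len(ys)
        ss_factor += len(ys) * (mean - grand) ** 2
    return 100.0 * ss_factor / ss_total

# A factor whose level settings track the response perfectly contributes
# 100%; one whose level means all equal the grand mean contributes 0%.
```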
Characterization and optimization of acoustic filter performance by experimental design methodology.
Gorenflo, Volker M; Ritter, Joachim B; Aeschliman, Dana S; Drouin, Hans; Bowen, Bruce D; Piret, James M
2005-06-20
Acoustic cell filters operate at high separation efficiencies with minimal fouling and have provided a practical alternative for up to 200 L/d perfusion cultures. However, the operation of cell retention systems depends on several settings that should be adjusted depending on the cell concentration and perfusion rate. The impact of operating variables on the separation efficiency of a 10-L acoustic separator was characterized using a factorial design of experiments. For the recirculation mode of separator operation, bioreactor cell concentration, perfusion rate, power input, stop time and recirculation ratio were studied using a fractional factorial 2(5-1) design, augmented with axial and center point runs. One complete replicate of the experiment was carried out, consisting of 32 more runs, at 8 runs per day. Separation efficiency was the primary response and it was fitted by a second-order model using restricted maximum likelihood estimation. By backward elimination, the model equation for both experiments was reduced to 14 significant terms. The response surface model for the separation efficiency was tested using additional independent data to check the accuracy of its predictions, to explore robust operation ranges and to optimize separator performance. A recirculation ratio of 1.5 and a stop time of 2 s improved the separator performance over a wide range of separator operation. At a power input of 5 W, the range of perfusion rates with robust, high separation efficiency (95% or higher) was extended to over 8 L/d. The reproducible model testing results over a total period of 3 months illustrate both the stable separator performance and the applicability of the model developed to long-term perfusion cultures.
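Fitting a second-order model to a response such as separation efficiency, as done above, amounts to regression on linear, interaction, and squared terms of the coded factors. The sketch below uses ordinary least squares on synthetic two-factor data as a stand-in for the restricted maximum likelihood fit used in the study.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns for a full second-order model in two coded factors:
    intercept, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))            # coded factor settings
true_beta = np.array([2.0, 0.5, -1.0, 0.3, -0.8, 0.2])
y = quadratic_design_matrix(X) @ true_beta          # noiseless toy response
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
# beta recovers true_beta here; with real, noisy data, insignificant terms
# would then be dropped by backward elimination as in the study.
```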
Ben Taheur, Fadia; Fdhila, Kais; Elabed, Hamouda; Bouguerra, Amel; Kouidhi, Bochra; Bakhrouf, Amina; Chaieb, Kamel
2016-04-01
Three bacterial strains (TE1, TD3 and FB2) were isolated from date palm (degla), pistachio and barley. The presence of nitrate reductase (narG) and nitrite reductase (nirS and nirK) genes in the selected strains was detected by PCR. Molecular identification based on 16S rDNA sequencing was applied to identify positive strains. In addition, a D-optimal mixture experimental design was used to determine the optimal formulation of probiotic bacteria for the denitrification process. Strains harboring denitrification genes were identified as: TE1, Agrococcus sp LN828197; TD3, Cronobacter sakazakii LN828198; and FB2, Pediococcus pentosaceus LN828199. PCR results revealed that all strains carried the nirS gene. However, only C. sakazakii LN828198 and Agrococcus sp LN828197 harbored the nirK and narG genes, respectively. Moreover, the studied bacteria were able to form biofilms on abiotic surfaces to different degrees. Process optimization showed that the most significant reduction of nitrate was 100%, with 14.98% COD consumption and 5.57 mg/l nitrite accumulation. Meanwhile, the response values were optimized and showed that the most optimal combination was 78.79% C. sakazakii LN828198 (curve value), 21.21% P. pentosaceus LN828199 (curve value) and absence (0%) of Agrococcus sp LN828197 (curve value). PMID:26893037
El-Naggar, Noura El-Ahmady; Abdelwahed, Nayera A M
2014-01-01
Central composite design was chosen to determine the combined effects of four process variables (AgNO3 concentration, incubation period, pH level and inoculum size) on the extracellular biosynthesis of silver nanoparticles (AgNPs) by Streptomyces viridochromogenes. Statistical analysis of the results showed that incubation period, initial pH level and inoculum size had significant effects (P<0.05) on the biosynthesis of silver nanoparticles at their individual level. The maximum biosynthesis of silver nanoparticles was achieved at a concentration of 0.5% (v/v) of 1 mM AgNO3, incubation period of 96 h, initial pH of 9 and inoculum size of 2% (v/v). After optimization, the biosynthesis of silver nanoparticles was improved by approximately 5-fold as compared to that of the unoptimized conditions. The synthetic process of silver nanoparticle generation using the reduction of aqueous Ag+ ion by the culture supernatants of S. viridochromogenes was quite fast, and silver nanoparticles were formed immediately by the addition of AgNO3 solution (1 mM) to the cell-free supernatant. Initial characterization of silver nanoparticles was performed by visual observation of color change from yellow to intense brown color. UV-visible spectrophotometry for measuring surface plasmon resonance showed a single absorption peak at 400 nm, which confirmed the presence of silver nanoparticles. Fourier Transform Infrared Spectroscopy analysis provided evidence for proteins as possible reducing and capping agents for stabilizing the nanoparticles. Transmission Electron Microscopy revealed the extracellular formation of spherical silver nanoparticles in the size range of 2.15-7.27 nm. Compared to the cell-free supernatant, the biosynthesized AgNPs revealed superior antimicrobial activity against Gram-negative, Gram-positive bacterial strains and Candida albicans.
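A central composite design of the kind used above combines a two-level factorial core, axial "star" points, and centre replicates. The generator below works in coded units and defaults to the rotatable axial distance; it is a generic sketch, not the study's specific run table.

```python
from itertools import product

def central_composite(k, alpha=None, n_center=6):
    """Central composite design in coded units: 2**k factorial corners,
    2*k axial points at +/-alpha, and n_center centre points.
    alpha defaults to the rotatable value (2**k) ** 0.25."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25
    runs = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    for i in range(k):
        for s in (-alpha, alpha):
            axial = [0.0] * k
            axial[i] = s
            runs.append(axial)
    runs.extend([[0.0] * k] * n_center)
    return runs

# Four factors, as in the AgNP study: 16 corner + 8 axial + 6 centre = 30
# runs, enough to fit a full second-order model with pure-error estimation.
design = central_composite(4)
```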
Laferriere, Craig; Ravenscroft, Neil; Wilson, Seanette; Combrink, Jill; Gordon, Lizelle; Petre, Jean
2011-10-01
The introduction of type b Haemophilus influenzae conjugate vaccines into routine vaccination schedules has significantly reduced the burden of this disease; however, widespread use in developing countries is constrained by vaccine costs, and there is a need for a simple and high-yielding manufacturing process. The vaccine is composed of purified capsular polysaccharide conjugated to an immunogenic carrier protein. To improve the yield and rate of the reductive amination conjugation reaction used to make this vaccine, some of the carboxyl groups of the carrier protein, tetanus toxoid, were modified to hydrazides, which are more reactive than the ε-amine of lysine. Other reaction parameters, including the ratio of the reactants, the size of the polysaccharide, the temperature and the salt concentration, were also investigated. Experimental design was used to minimize the number of experiments required to optimize all these parameters and obtain the conjugate in high yield with target characteristics. It was found that increasing the reactant ratio and decreasing the size of the polysaccharide increased the polysaccharide:protein mass ratio in the product. Temperature and salt concentration did not improve this ratio. These results are consistent with a diffusion-controlled rate-limiting step in the conjugation reaction. Excessive modification of tetanus toxoid with hydrazide was correlated with reduced yield and lower free polysaccharide. This was attributed to a greater tendency for precipitation, possibly due to changes in the isoelectric point. Experimental design and multiple regression helped identify key parameters to control and thereby optimize this conjugation reaction.
Gao, Ping; Witt, Martha J; Haskell, Roy J; Zamora, Kathryn M; Shifflett, John R
2004-08-01
Response surface methodology (RSM) was applied to optimize the self-emulsifying drug delivery system (SEDDS) containing 25% (w/w) Drug A, a model drug with a high lipophilicity and low water solubility. The key objective of this study was to identify an optimal SEDDS formulation that: 1) possesses a minimum concentration of the surfactant and a maximum concentration of lipid and 2) generates a fine emulsion and eliminates large size droplets (> or = 1 microm) upon dilution with an aqueous medium. Three ingredient variables [PEG 400, Cremophor EL, and a mixture of glycerol dioleate (GDO), and glycerol monooleate (GMO)] were included in the experimental design, while keeping the other ingredients at a fixed level (25% Drug A, 6% ethanol, 3% propylene glycol, 4% water, and 2% tromethamine) in the SEDDS formulation. Dispersion performance of these formulations upon dilution with a simulated gastrointestinal fluid was measured, and the population of the large droplets was used as the primary response for statistical modeling. The results of this mixture study revealed significant interactions among the three ingredients, and their individual levels in the formulation collectively dictated the dispersion performance. The fitted response surface model predicted an optimal region of the SEDDS formulation compositions that generate fine emulsions and essentially eliminates large droplets upon dilution. The predicted optimal 25% Drug A-SEDDS formulations with the levels of Cremophor EL ranging from 40-44%, GDO/GMO ranging from 10-13%, and PEG 400 ranging from 2.7-9.0% were selected and prepared. The dispersion experiment results confirmed the prediction of this model and identified potential optimal formulations for further development. This work demonstrates that RSM is an efficient approach for optimization of the SEDDS formulation.
Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C
2014-06-01
A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function.
Pereira, Francisco B; Guimarães, Pedro M R; Teixeira, José A; Domingues, Lucília
2010-10-01
Statistical experimental designs were used to develop a medium based on corn steep liquor (CSL) and other low-cost nutrient sources for high-performance very high gravity (VHG) ethanol fermentations by Saccharomyces cerevisiae. The critical nutrients were initially selected according to a Plackett-Burman design and the optimized medium composition (44.3 g/L CSL; 2.3 g/L urea; 3.8 g/L MgSO₄·7H₂O; 0.03 g/L CuSO₄·5H₂O) for maximum ethanol production by the laboratory strain CEN.PK 113-7D was obtained by response surface methodology, based on a three-level four-factor Box-Behnken design. The optimization process resulted in significantly enhanced final ethanol titre, productivity and yeast viability in batch VHG fermentations (up to 330 g/L glucose) with CEN.PK113-7D and with industrial strain PE-2, which is used for bio-ethanol production in Brazil. Strain PE-2 was able to produce 18.6±0.5% (v/v) ethanol with a corresponding productivity of 2.4±0.1g/L/h. This study provides valuable insights into cost-effective nutritional supplementation of industrial fuel ethanol VHG fermentations.
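A three-level Box-Behnken design like the four-factor one used above places a 2×2 factorial on each pair of factors while holding the remaining factors at their centre value, avoiding extreme corner runs. The generator below is a generic sketch in coded units.

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Box-Behnken design for k three-level factors in coded units:
    +/-1 combinations for every factor pair (others held at 0),
    plus n_center centre runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs.extend([[0] * k] * n_center)
    return runs

# Four factors: 6 pairs x 4 runs + 3 centre points = 27 runs, and no run
# pushes more than two factors away from the centre simultaneously.
design = box_behnken(4)
```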
Deruaz, D.; Bannier, A.; Pionchon, C.
1995-08-01
This paper deals with the optimal conditions for the detection of {sup 15}N, determined using a four-factor experimental design, from [2-{sup 13}C, 1,3-{sup 15}N]caffeine measured with an atomic emission detector (AED) coupled to gas chromatography (GC). Owing to the capability of a photodiode array, the AED can simultaneously detect several elements using their specific emission lines within a wavelength range of 50 nm. Thus, the emissions of {sup 15}N and {sup 14}N are simultaneously detected at 420.17 nm and 421.46 nm, respectively. Four independent experimental factors were tested: (1) helium flow rate (plasma gas); (2) methane pressure (reactant gas); (3) oxygen pressure; (4) hydrogen pressure. It was shown that these four gases had a significant influence on the analytical response of {sup 15}N. The linearity of the detection was determined using {sup 15}N amounts ranging from 1.52 pg to 19 ng under the optimal conditions obtained from the experimental design. The limit of detection was studied using different methods: it was 1.9 pg/s of {sup 15}N according to the IUPAC (International Union of Pure and Applied Chemistry) method, 2.3 pg/s by the method of Quimby and Sullivan, and 29 pg/s by that of Oppenheimer. For each determination, an internal standard, 1-isobutyl-3,7-dimethylxanthine, was used. The results clearly demonstrate that GC-AED is sensitive and selective enough to detect and measure {sup 15}N-labelled molecules after gas chromatographic separation.
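The IUPAC convention mentioned above defines the detection limit from the blank noise and the calibration sensitivity. A minimal sketch of that calculation follows; the numeric values in the comment are illustrative, not taken from the study.

```python
def lod_iupac(blank_sd, slope, k=3.0):
    """IUPAC-style limit of detection: k times the standard deviation of
    blank measurements divided by the calibration slope (k = 3 by
    convention)."""
    return k * blank_sd / slope

# e.g. a blank noise of 0.1 signal units and a calibration slope of
# 2.0 signal units per pg/s give a detection limit of 0.15 pg/s
# (illustrative numbers only).
```

Alternative conventions (such as those of Quimby and Sullivan, or Oppenheimer, cited above) differ mainly in how the blank noise term and the confidence multiplier are defined, which is why the three reported limits differ.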
Lee, Kwang-Min; Gilmore, David F
2006-11-01
The statistical design of experiments (DOE) is a collection of predetermined settings of the process variables of interest, which provides an efficient procedure for planning experiments. Experiments on biological processes typically produce long sequences of successive observations on each experimental unit (plant, animal, bioreactor, fermenter, or flask) in response to several treatments (combinations of factors). Cell culture and other biotech-related experiments are usually performed with a repeated-measures experimental design coupled with different levels of several process factors to investigate dynamic biological processes. Data collected from this design can be analyzed by several kinds of general linear model (GLM) statistical methods, such as multivariate analysis of variance (MANOVA), univariate ANOVA (time split-plot analysis with randomization restriction), and analysis of orthogonal polynomial contrasts of the repeated factor (linear coefficient analysis). Finally, a regression model is introduced to describe responses over time to the different treatments, along with model residual analysis. Statistical analysis of bioprocesses with repeated measurements can help investigate environmental factors and effects on physiological processes, aiding the analysis and optimization of biotechnology production. PMID:17159235
Hu, Xuebin; Luo, Kun; Wang, Jianai; Wu, Zhengsong; Liang, Yanjie; Ling, Jianjun
2015-04-01
In this study, FLUENT software was used to simulate the flow regime of an integrated sludge thickening and digestion reactor. To optimize the flow regime, the combinational effect of key parameters of the reactor structure was investigated with an L16 (4(5)) orthogonal test. The reactor was then redesigned based on the optimization results, and a series of experiments was conducted to study the treatment effect with sludge dosage rates of 12, 18, 24, and 30%. The operation results showed that the reactor obtained the best treatment efficiency when the sludge dosage rate was 24%. At this dosage, the water content of the sludge decreased from 99.1% to 91.8%, with organic matter content (volatile solids [VS]/total solids) decreasing to 21.2% and average gas production (CH4 62.66%, CO2 11.56%, N2 23.91%, O2 1.59%) reaching 231.3 L/kg VS. Therefore, the results implied that the optimized reactor has good effects on sludge thickening and digestion.
Silva, Karen T.; Leão, Pedro E.; Abreu, Fernanda; López, Jimmy A.; Gutarra, Melissa L.; Farina, Marcos; Bazylinski, Dennis A.; Freire, Denise M. G.
2013-01-01
The growth and magnetosome production of the marine magnetotactic vibrio Magnetovibrio blakemorei strain MV-1 were optimized through a statistics-based experimental factorial design. In the optimized growth medium, maximum magnetite yields of 64.3 mg/liter in batch cultures and 26 mg/liter in a bioreactor were obtained. PMID:23396329
Chen, Xixian; Zhang, Congqiang; Zou, Ruiyang; Zhou, Kang; Stephanopoulos, Gregory; Too, Heng Phon
2013-01-01
In vitro synthesis of chemicals and pharmaceuticals using enzymes is of considerable interest, as these biocatalysts facilitate a wide variety of reactions under mild conditions with excellent regio-, chemo- and stereoselectivities. A significant challenge in a multi-enzymatic reaction is the need to optimize the various steps involved simultaneously so as to obtain a high yield of product. In this study, statistical experimental design was used to guide the optimization of a total synthesis of amorpha-4,11-diene (AD) using multiple enzymes in the mevalonate pathway. A combinatorial approach guided by Taguchi orthogonal array design identified the local optimum enzymatic activity ratio for Erg12:Erg8:Erg19:Idi:IspA to be 100∶100∶1∶25∶5, with a constant concentration of amorpha-4,11-diene synthase (Ads, 100 mg/L). The model also identified an unexpected inhibitory effect of farnesyl pyrophosphate synthase (IspA), whose activity was negatively correlated with AD yield. This was due to the precipitation of farnesyl pyrophosphate (FPP), the product of IspA. Response surface methodology was then used to optimize IspA and Ads activities simultaneously so as to minimize the accumulation of FPP, and the results showed Ads to be a critical factor. By increasing the concentration of Ads, a complete conversion (∼100%) of mevalonic acid (MVA) to AD was achieved. Monovalent ions and pH were effective means of significantly enhancing the specific Ads activity and specific AD yield. The results from this study represent the first in vitro reconstitution of the mevalonate pathway for the production of an isoprenoid, and the approaches developed herein may be used to produce other isopentenyl pyrophosphate (IPP)/dimethylallyl pyrophosphate (DMAPP) based products. PMID:24278153
Heterogeneity of the gut microbiome in mice: guidelines for optimizing experimental design
Laukens, Debby; Brinkman, Brigitta M.; Raes, Jeroen; De Vos, Martine; Vandenabeele, Peter
2015-01-01
Targeted manipulation of the gut flora is increasingly being recognized as a means to improve human health. Yet, the temporal dynamics and intra- and interindividual heterogeneity of the microbiome represent experimental limitations, especially in human cross-sectional studies. Therefore, rodent models represent an invaluable tool to study the host–microbiota interface. Progress in technical and computational tools to investigate the composition and function of the microbiome has opened a new era of research and we gradually begin to understand the parameters that influence variation of host-associated microbial communities. To isolate true effects from confounding factors, it is essential to include such parameters in model intervention studies. Also, explicit journal instructions to include essential information on animal experiments are mandatory. The purpose of this review is to summarize the factors that influence microbiota composition in mice and to provide guidelines to improve the reproducibility of animal experiments. PMID:26323480
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
L'Hocine, Lamia; Pitre, Mélanie
2016-03-01
A D-optimal design was constructed to optimize allergen extraction efficiency simultaneously from roasted, non-roasted, defatted, and non-defatted almond, hazelnut, peanut, and pistachio flours using three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various conditions of ionic strength, buffer-to-protein ratio, extraction temperature, and extraction duration. Statistical analysis showed that roasting and non-defatting significantly lowered protein recovery for all nuts. Increasing the temperature and the buffer-to-protein ratio during extraction significantly increased protein recovery, whereas increasing the extraction time had no significant impact. The impact of the three buffers on protein recovery varied significantly among the nuts. Depending on the extraction conditions, protein recovery varied from 19% to 95% for peanut, 31% to 73% for almond, 17% to 64% for pistachio, and 27% to 88% for hazelnut. A modulation by the buffer type and ionic strength of the protein and immunoglobulin E binding profiles of extracts was evidenced, where high protein recovery levels did not always correlate with high immunoreactivity.
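D-optimal designs like the one above are chosen to maximize det(X'X) over a candidate set of runs, which minimizes the volume of the confidence region for the model coefficients. Production software uses exchange algorithms (e.g., Fedorov-type), but the idea can be sketched with a simple greedy forward selection; this is a heuristic illustration, not the algorithm used in the study.

```python
import numpy as np

def d_optimal_greedy(candidates, n_runs):
    """Greedily pick n_runs rows from a candidate design matrix so that
    det(X'X) is (locally) maximized. A tiny ridge keeps the determinant
    well defined at early, rank-deficient steps. A sketch only."""
    chosen, remaining = [], list(range(len(candidates)))
    p = candidates.shape[1]
    for _ in range(n_runs):
        best, best_det = None, -np.inf
        for idx in remaining:
            X = candidates[chosen + [idx]]
            d = np.linalg.det(X.T @ X + 1e-9 * np.eye(p))
            if d > best_det:
                best, best_det = idx, d
        chosen.append(best)
        remaining.remove(best)
    return chosen

# For a straight-line model y = b0 + b1*x with candidates x in {-1, 0, 1},
# two runs are best spent at the extremes x = -1 and x = +1.
candidates = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
picked = d_optimal_greedy(candidates, 2)
```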
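The D-optimal criterion used in the entry above selects, from a candidate set of runs, the subset that maximizes det(X'X) for the assumed regression model. A minimal sketch of that idea, with a hypothetical two-factor quadratic model and a 3x3 candidate grid (not the actual allergen-extraction design):

```python
import numpy as np
from itertools import combinations

# Candidate runs: full 3x3 grid over two coded factors at levels -1, 0, +1
levels = [-1.0, 0.0, 1.0]
candidates = np.array([(a, b) for a in levels for b in levels])

def model_matrix(points):
    # Two-factor quadratic response-surface model: 1, A, B, AB, A^2, B^2
    a, b = points[:, 0], points[:, 1]
    return np.column_stack([np.ones(len(points)), a, b, a * b, a**2, b**2])

def d_criterion(points):
    X = model_matrix(points)
    return np.linalg.det(X.T @ X)  # larger means smaller parameter variances

# Exhaustive search over all 6-run subsets is feasible here (84 subsets);
# real D-optimal software uses exchange algorithms instead.
best = max(combinations(range(len(candidates)), 6),
           key=lambda idx: d_criterion(candidates[list(idx)]))
```

The chosen subset supports estimation of all six model coefficients, which an arbitrary 6-run subset (e.g. one missing a factor level) may not.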
NASA Astrophysics Data System (ADS)
Girish, B. M.; Satish, B. M.; Sarapure, Sadanand; Basawaraj
2016-06-01
In the present paper, the statistical investigation on wear behavior of magnesium alloy (AZ91) hybrid metal matrix composites using the Taguchi technique is reported. The composites were reinforced with SiC and graphite particles of average size 37 μm. The specimens were processed by the stir casting route. Dry sliding wear of the hybrid composites was tested on a pin-on-disk tribometer under dry conditions at different normal loads (20, 40, and 60 N), sliding speeds (1.047, 1.57, and 2.09 m/s), and compositions (1, 2, and 3 wt pct each of SiC and graphite). The design of experiments approach using the Taguchi technique was employed to statistically analyze the wear behavior of the hybrid composites. Signal-to-noise ratio and analysis of variance were used to investigate the influence of the parameters on the wear rate.
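The Taguchi signal-to-noise analysis mentioned above reduces each factor level's replicate measurements to a single S/N value; for a wear rate, the smaller-the-better form applies. A sketch with made-up wear numbers (illustrative only, not the paper's data):

```python
import numpy as np

# Hypothetical wear-rate replicates (mm^3/m) at three normal-load levels (N)
wear = {
    20: [0.0021, 0.0023],
    40: [0.0034, 0.0036],
    60: [0.0051, 0.0049],
}

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

ratios = {load: sn_smaller_is_better(y) for load, y in wear.items()}
best_load = max(ratios, key=ratios.get)  # highest S/N corresponds to least wear
```

In a full Taguchi analysis these per-level S/N values feed a main-effects plot and ANOVA to rank factor influence.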
Kim, Nam Ah; An, In Bok; Lee, Sang Yeol; Park, Eun-Seok; Jeong, Seong Hoon
2012-09-01
In this study, the structural stability of hen egg white lysozyme in solution at various pH levels and in different types of buffers, including acetate, phosphate, histidine, and Tris, was investigated by means of differential scanning calorimetry (DSC). Reasonable pH values were selected from the buffer ranges and were analyzed statistically through design of experiments (DoE). Four factors were used to characterize the thermograms: calorimetric enthalpy (ΔH), temperature at maximum heat flux (Tm), van't Hoff enthalpy (ΔHv), and apparent activation energy of the protein solution (Eapp). It was possible to calculate Eapp through mathematical elaboration of the Lumry-Eyring model by changing the scan rate. The transition temperature of the protein solution, Tm, increased when the scan rate was faster. When comparing the Tm, ΔHv, ΔH, and Eapp of lysozyme in various pH ranges and buffers with different priorities, lysozyme in acetate buffer at pH 4.767 (scenario 9) to pH 4.969 (scenario 11) exhibited the highest thermodynamic stability. Through this experiment, we found a significant difference in the thermal stability of lysozyme in various pH ranges and buffers, and also a new approach to investigating the physical stability of proteins by DoE.
D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.
Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...
NASA Astrophysics Data System (ADS)
Furniss, S. G.
1989-10-01
While an SSTO with airbreathing propulsion for initial acceleration may greatly reduce future payload launch costs, such vehicles exhibit extreme sensitivity to design assumptions; the process of vehicle optimization is, accordingly, a difficult one. Attention is presently given to the role in optimization of the design mission, fuselage geometry, and the means employed to furnish adequate pitch and directional control. The requirements influencing wing design and scaling are also discussed. The Saenger and Hotol designs are the illustrative cases noted in this generalizing consideration of the SSTO-optimization process.
NASA Astrophysics Data System (ADS)
Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.
2016-04-01
Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Studying the impact of different factors simultaneously is a time- and cost-consuming process. The use of experimental design methods can be useful in minimizing the number of experiments and finding locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode, RDE, was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, the L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. Scanning electron microscopy and energy dispersive x-ray spectroscopy verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the Taguchi analysis, the optimum conditions were found to be a mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate at an electrode rotation rate of 1000 rpm.
Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.
1980-01-01
PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 μs at a 60-Hz repetition rate, resulting in a 100-μA average beam current. The total cost of the accelerator is estimated to be approximately $10 million.
NASA Astrophysics Data System (ADS)
Kaviani Darani, Masoume; Bastani, Saeed; Ghahari, Mehdi; Kardar, Pooneh
2015-11-01
Ultraviolet (UV) emissions of hydrothermally synthesized NaYF4:Yb3+,Tm3+ upconversion crystals were optimized using response surface methodology experimental design. Each design comprised 9 runs in which two factors, (1) Tm3+ ion concentration and (2) pH value, were investigated for 3 different ligands. With the UV upconversion emissions as responses, their intensities were maximized separately. XRD, SEM, FTIR, and fluorescence spectroscopy were used to study the crystal structure, morphology, and luminescence properties. From the photoluminescence spectra, emissions centered at 347, 364, 452, 478, 648 and 803 nm were observed. The results show that increasing each DOE factor up to an optimum value increased the emission intensity, beyond which it declined, and each design yielded a suggested optimum for UV emission.
Optimization of digital designs
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)
2009-01-01
An application-specific integrated circuit is optimized by translating a first representation of its digital design into a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.
Gonnissen, J.; De Backer, A.; Martinez, G. T.; Van Aert, S.; Dekker, A. J. den; Rosenauer, A.; Sijbers, J.
2014-08-11
We report an innovative method to explore the optimal experimental settings to detect light atoms from scanning transmission electron microscopy (STEM) images. Since light elements play a key role in many technologically important materials, such as lithium-battery devices or hydrogen storage applications, much effort has been made to optimize the STEM technique in order to detect light elements. To this end, classical performance criteria, such as contrast or signal-to-noise ratio, are often discussed, aiming at improvements in direct visual interpretability. However, when images are interpreted quantitatively, one needs an alternative criterion, which we derive based on statistical detection theory. Using realistic simulations of technologically important materials, we demonstrate the benefits of the proposed method and compare the results with existing approaches.
NASA Technical Reports Server (NTRS)
Johannsen, G.; Govindaraj, T.
1980-01-01
The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.
Tebani, Abdellah; Schmitz-Afonso, Isabelle; Rutledge, Douglas N; Gonzalez, Bruno J; Bekri, Soumeya; Afonso, Carlos
2016-03-24
High-resolution mass spectrometry coupled with pattern recognition techniques is an established tool to perform comprehensive metabolite profiling of biological datasets. This paves the way for new, powerful and innovative diagnostic approaches in the post-genomic era and molecular medicine. However, interpreting untargeted metabolomic data requires robust, reproducible and reliable analytical methods to translate results into biologically relevant and actionable knowledge. The analyses of biological samples were developed based on ultra-high performance liquid chromatography (UHPLC) coupled to ion mobility - mass spectrometry (IM-MS). A strategy for optimizing the analytical conditions for untargeted UHPLC-IM-MS methods is proposed using an experimental design approach. Optimization experiments were conducted through a screening process designed to identify the factors that have significant effects on the selected responses (total number of peaks and number of reliable peaks). For this purpose, full and fractional factorial designs were used while partial least squares regression was used for experimental design modeling and optimization of parameter values. The total number of peaks yielded the best predictive model and is used for optimization of parameters setting.
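The screening stage described above relies on fractional factorial designs, which trade aliasing for fewer runs by deriving extra factor columns from sign products of the base columns. A minimal generator-based sketch (the generator choice is hypothetical, not the one used in the UHPLC-IM-MS study):

```python
from itertools import product

def fractional_factorial(k, generators):
    """Two-level fractional factorial: enumerate the base factors, then derive
    each remaining column from a generator sign product (e.g. 'abc' means the
    new column is the elementwise product of columns a, b, and c)."""
    base = k - len(generators)
    runs = []
    for signs in product([-1, 1], repeat=base):
        row = list(signs)
        for gen in generators:
            sign = 1
            for letter in gen:
                sign *= row[ord(letter) - ord('a')]
            row.append(sign)
        runs.append(row)
    return runs

# 2^(4-1) half fraction with generator D = ABC: 8 runs instead of 16
design = fractional_factorial(4, ['abc'])
```

The cost of the saved runs is that D is confounded with the ABC interaction, which is why screening designs are followed by response-surface experiments for the significant factors.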
Optimal designs for copula models
Perrone, E.; Müller, W.G.
2016-01-01
Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing experimental conditions, and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616
Chen, Zhen-Min; Li, Qing; Liu, Hua-Mei; Yu, Na; Xie, Tian-Jian; Yang, Ming-Yuan; Shen, Ping; Chen, Xiang-Dong
2010-02-01
Bacillus subtilis spore preparations are promising probiotics and biocontrol agents, which can be used in plants, animals, and humans. The aim of this work was to optimize the nutritional conditions using a statistical approach for the production of B. subtilis (WHK-Z12) spores. Our preliminary experiments show that corn starch, corn flour, and wheat bran were the best carbon sources. Using Plackett-Burman design, corn steep liquor, soybean flour, and yeast extract were found to be the best nitrogen source ingredients for enhancing spore production and were studied for further optimization using central composite design. The key medium components in our optimization medium were 16.18 g/l of corn steep liquor, 17.53 g/l of soybean flour, and 8.14 g/l of yeast extract. The improved medium produced spores at levels as high as (1.52 ± 0.06) × 10^10 spores/ml under flask cultivation conditions, and (1.56 ± 0.07) × 10^10 spores/ml could be achieved in a 30-l fermenter after 40 h of cultivation. To the best of our knowledge, these results compared favorably to the documented spore yields produced by B. subtilis strains. PMID:19697022
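The central composite design used in the entry above augments a two-level factorial with axial and center points so that quadratic effects become estimable. A sketch of the coded run list (generic; the actual factor ranges for corn steep liquor, soybean flour, and yeast extract were chosen by the authors):

```python
from itertools import product

def central_composite(k, alpha=1.682, n_center=1):
    """Coded points of a central composite design for k factors:
    2^k factorial corners, 2k axial points at +/-alpha, plus center runs.
    alpha = 1.682 makes the k = 3 design rotatable."""
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            point = [0.0] * k
            point[i] = s
            axial.append(point)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

design = central_composite(3)  # 8 corners + 6 axial + 1 center = 15 runs
```

The coded levels are then mapped linearly onto the real factor ranges (g/l of each nutrient) before the runs are executed.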
Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert
2016-01-01
Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies, optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence to the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs, each lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at the group level. Supporting simulation analyses provided evidence of the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients, and can be used with multiple imaging modalities in humans and animals. PMID:26804778
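The first study above used a stochastic approximation method to drive stimuli toward a target brain state. A toy sketch of the Kiefer-Wolfowitz scheme on a hypothetical noisy response curve (the peak location, noise level, and gain sequences are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_response(x):
    """Hypothetical noisy target-brain-state score, peaked at stimulus x = 0.6."""
    return -(x - 0.6) ** 2 + 0.01 * rng.standard_normal()

# Kiefer-Wolfowitz stochastic approximation: climb a noisy objective with
# finite-difference gradient estimates and decaying gain/perturbation sizes.
x = 0.1
for n in range(1, 201):
    a_n = 0.5 / n                 # gain sequence
    c_n = 0.1 / n ** (1 / 3)      # finite-difference perturbation
    grad = (neural_response(x + c_n) - neural_response(x - c_n)) / (2 * c_n)
    x = min(1.0, max(0.0, x + a_n * grad))
```

Each iteration corresponds to presenting two perturbed stimuli and measuring the evoked response; the decaying sequences are what make the iterates settle near the optimum despite the measurement noise.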
Hydraulic fracture design optimization
Lee, Tae-Soo; Advani, S.H.
1992-01-01
This research and development investigation, sponsored by US DOE and the oil and gas industry, extends previously developed hydraulic fracture geometry models and applied energy related characteristic time concepts towards the optimal design and control of hydraulic fracture geometries. The primary objective of this program is to develop rational criteria, by examining the associated energy rate components during the hydraulic fracture evolution, for the formulation of stimulation treatment design along with real-time fracture configuration interpretation and control.
Oberoi, Harinder Singh; Vadlani, Praveen Venkata; Madl, Ronald L; Saida, Lavudi; Abeykoon, Jithma P
2010-03-24
Orange peels were evaluated as a fermentation feedstock, and process conditions for enhanced ethanol production were determined. Primary hydrolysis of orange peel powder (OPP) was carried out at acid concentrations from 0 to 1.0% (w/v) at 121 degrees C and 15 psi for 15 min. High-performance liquid chromatography analysis of sugars and inhibitory compounds showed a higher production of hydroxymethylfurfural and acetic acid and a decrease in sugar concentration when the acid level was beyond 0.5% (w/v). Secondary hydrolysis of pretreated biomass obtained from primary hydrolysis was carried out at 0.5% (w/v) acid. Response surface methodology (RSM) using three factors and a two-level central composite design was employed to optimize the effect of pH, temperature, and fermentation time on ethanol production from OPP hydrolysate at the shake flask level. On the basis of results obtained from the optimization experiment and numerical optimization software, a validation study was carried out in a 2 L batch fermenter at pH 5.4 and a temperature of 34 degrees C for 15 h. The hydrolysate obtained from the primary and secondary hydrolysis processes was fermented separately employing parameters optimized through RSM. Ethanol yields of 0.25 g/g on a biomass basis (YP/X) and 0.46 g/g on a substrate-consumed basis (YP/S) and a promising volumetric ethanol productivity of 3.37 g/L/h were attained using this process at the fermenter level, which shows promise for further scale-up studies. PMID:20158208
Zainal-Abideen, M; Aris, A; Yusof, F; Abdul-Majid, Z; Selamat, A; Omar, S I
2012-01-01
In this study of coagulation operation, a comparison was made between the optimum jar test values for pH, coagulant and coagulant aid obtained from traditional methods (an adjusted one-factor-at-a-time (OFAT) method) and with central composite design (the standard design of response surface methodology (RSM)). Alum (coagulant) and polymer (coagulant aid) were used to treat a water source with very low pH and high aluminium concentration at Sri-Gading water treatment plant (WTP) Malaysia. The optimum conditions for these factors were chosen when the final turbidity, pH after coagulation and residual aluminium were within 0-5 NTU, 6.5-7.5 and 0-0.20 mg/l respectively. Traditional and RSM jar tests were conducted to find their respective optimum coagulation conditions. It was observed that the optimum dose for alum obtained through the traditional method was 12 mg/l, while the value for polymer was set constant at 0.020 mg/l. Through RSM optimization, the optimum dose for alum was 7 mg/l and for polymer was 0.004 mg/l. Optimum pH for the coagulation operation obtained through traditional methods and RSM was 7.6. The final turbidity, pH after coagulation and residual aluminium recorded were all within acceptable limits. The RSM method was demonstrated to be an appropriate approach for the optimization and was validated by a further test.
NASA Astrophysics Data System (ADS)
Singh, Balbir; Radhakrishnan, Jayakrishnan
The need for uniform stress distribution in early super-pressure balloons led to a long period of development of several optimal scientific balloon designs, with significant benefits in public outreach and education for the next generation of scientists, engineers, and technicians. Scientific ballooning is continuing to evolve. The paper is divided into two sections. The first section deals with the design, development, and optimization of a cost-effective high-altitude ultra-long-duration (ULD) scientific balloon built with stiff meridional tendons of a specific material; the numerical calculations and analysis take elastic buckling pressure and anisotropic viscoelastic effects into consideration. The second section deals with the stability of the ULD balloon constructed by joining together a number of gores so that the structure forms a series of bulges and lobes. Under certain conditions at high altitude, the nominal configuration of these structures is unstable, so the analysis must envisage a deployment scenario in which the structure never assumes its intended shape and instead settles into a distorted but more stable configuration. The stability of the structure is shown to depend on the geometry, with calculations carried out using software packages such as MATLAB 2012b and ANSYS 14.0 (APDL).
Fernández, P; Taboada, V; Regenjo, M; Morales, L; Alvarez, I; Carro, A M; Lorenzo, R A
2016-05-30
A simple Ultrasound-Assisted Dispersive Liquid-Liquid Microextraction (UA-DLLME) method is presented for the simultaneous determination of six second-generation antidepressants in plasma by Ultra Performance Liquid Chromatography with Photodiode Array Detection (UPLC-PDA). The main factors that potentially affect DLLME were optimized by a screening design followed by a response surface design and desirability functions. The optimal conditions were 2.5 mL of acetonitrile as dispersant solvent, 0.2 mL of chloroform as extractant solvent, 3 min of ultrasound stirring, and an extraction pH of 9.8. Under the optimized conditions, the UPLC-PDA method showed good separation of the antidepressants in 2.5 min and good linearity in the range of 0.02-4 μg/mL, with determination coefficients higher than 0.998. The limits of detection were in the range 4-5 ng/mL. The method precision (n=5) was evaluated, showing relative standard deviations (RSD) lower than 8.1% for all compounds. The average recoveries ranged from 92.5% for fluoxetine to 110% for mirtazapine. The applicability of DLLME/UPLC-PDA was successfully tested on twenty-nine plasma samples from antidepressant consumers. Real samples were analyzed by the proposed method and the results were successfully compared with those obtained by a Liquid-Liquid Extraction-Gas Chromatography-Mass Spectrometry (LLE-GC-MS) method. The results confirmed the presence of venlafaxine in most cases (19 cases), followed by sertraline (3 cases) and fluoxetine (3 cases), at concentrations below toxic levels.
Elements of Bayesian experimental design
Sivia, D.S.
1997-09-01
We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
Gao, Xinlu; Huang, Shanguo; Wei, Yongfeng; Zhai, Wensheng; Xu, Wenjing; Yin, Shan; Gu, Wanyi; Zhou, Jing
2014-12-15
A system for generating and receiving orbital angular momentum (OAM) radio beams, collectively formed by two circular array antennas (CAAs) and effectively optimized by two intensity-controlled masks, is proposed and experimentally investigated. The scheme is effective in blocking unwanted OAM modes and enhancing the power of the received radio signals, which results in a capacity gain for the system and an extended transmission distance of the OAM radio beams. The operating principle of the intensity-controlled masks, which can be regarded as both collimator and filter, is feasible and simple to realize. Numerical simulations of the intensity and phase distributions at each key cross-sectional plane of the radio beams demonstrate the collimated results. The experimental results match well with the theoretical analysis, and the receiving distance of the OAM radio beam at a radio frequency (RF) of 20 GHz is extended to up to 200 times the wavelength of the RF signal, 5 times the originally measured distance. The presented proof-of-concept experiment demonstrates the feasibility of the system.
NASA Astrophysics Data System (ADS)
Karaagac, Oznur; Kockar, Hakan
2016-07-01
Orthogonal design technique was applied to obtain superparamagnetic iron oxide nanoparticles with high saturation magnetization, Ms. Synthesis of the nanoparticles was done in air atmosphere according to the orthogonal table L9(3^4). Magnetic properties of the synthesized nanoparticles were measured by a vibrating sample magnetometer. Structural analysis of the nanoparticles was also carried out by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and transmission electron microscopy (TEM). After the analysis of the magnetic data, the optimized experimental parameters were determined as [Fe+2]/[Fe+3]=6/6, iron ion concentration=1500 mM, base concentration=6.7 M and reaction time=2 min. Magnetic results showed that the synthesis carried out according to the optimized conditions gave the highest Ms of 69.83 emu/g for the nanoparticles synthesized in air atmosphere. Magnetic measurements at 10 K and 300 K showed the sample is superparamagnetic at room temperature. Structural analysis by XRD, FTIR and selected area electron diffraction showed that the sample had the inverse spinel crystal structure of iron oxide. The particle size of the optimized sample determined from the TEM image is 7.0±2.2 nm. The results indicated that the Ms of superparamagnetic iron oxide nanoparticles can be optimized by experimental design with a suitable choice of the synthesis parameters.
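The L9(3^4) orthogonal array used above assigns four three-level factors to nine runs so that every pair of columns contains each level combination exactly once. It can be built from two base columns plus two modular-sum columns:

```python
def l9():
    """L9(3^4) Taguchi orthogonal array: 9 runs, 4 factors at levels 0, 1, 2.
    The third and fourth columns are modular combinations of the base pair."""
    return [[a, b, (a + b) % 3, (a + 2 * b) % 3]
            for a in range(3) for b in range(3)]

array = l9()
```

Each run's coded levels are mapped to the real factor settings ([Fe+2]/[Fe+3] ratio, ion concentration, base concentration, reaction time) before synthesis.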
Fry, Derek J
2014-01-01
Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. PMID:25541547
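Among the minimum set of topics listed above is the randomized block design, in which every treatment appears once per block in an independently randomized order. A small sketch (treatment and block labels are hypothetical):

```python
import random

def randomized_block(treatments, blocks, seed=42):
    """Assign every treatment once within each block, shuffling the order
    independently per block so block effects are not confounded with order."""
    rng = random.Random(seed)
    layout = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)
        layout[block] = order
    return layout

plan = randomized_block(["A", "B", "C", "D"], ["cage1", "cage2", "cage3"])
```

Here each cage is a block (the experimental-unit question the courses emphasize), and the per-block shuffle is the randomization step learners are asked to verify.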
Nevado, B; Ramos-Onsins, S E; Perez-Enciso, M
2014-04-01
Decreasing costs of next-generation sequencing (NGS) experiments have made a wide range of genomic questions open for study with nonmodel organisms. However, experimental designs and analysis of NGS data from less well-known species are challenging because of the lack of genomic resources. In this work, we investigate the performance of alternative experimental designs and bioinformatics approaches in estimating variability and neutrality tests based on the site-frequency-spectrum (SFS) from individual resequencing data. We pay particular attention to challenges faced in the study of nonmodel organisms, in particular the absence of a species-specific reference genome, although phylogenetically close genomes are assumed to be available. We compare the performance of three alternative bioinformatics approaches – genotype calling, genotype–haplotype calling and direct estimation without calling genotypes. We find that relying on genotype calls provides biased estimates of population genetic statistics at low to moderate read depth (2–8X). Genotype–haplotype calling returns more accurate estimates irrespective of the divergence to the reference genome, but requires moderate depth (8–20X). Direct estimation without calling genotypes returns the most accurate estimates of variability and of most SFS tests investigated, including at low read depth (2–4X). Studies without species-specific reference genome should thus aim for low read depth and avoid genotype calling whenever individual genotypes are not essential. Otherwise, aiming for moderate to high depth at the expense of number of individuals, and using genotype–haplotype calling, is recommended. PMID:24795998
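The site frequency spectrum at the core of the comparison above counts, for each derived-allele frequency class, how many variable sites fall into it. A minimal sketch from a toy diploid genotype matrix (the data are invented for illustration; real pipelines work from read-level data, which is the paper's point):

```python
import numpy as np

# Hypothetical genotype matrix: rows = diploid individuals, columns = SNPs,
# entries = count of derived alleles per genotype (0, 1, or 2)
genotypes = np.array([
    [0, 1, 2, 0],
    [1, 1, 0, 0],
    [0, 2, 1, 1],
])

def site_frequency_spectrum(g):
    """Unfolded SFS: number of sites at each derived-allele count 1..2N-1."""
    n_chrom = 2 * g.shape[0]              # 2N chromosomes for N diploids
    counts = g.sum(axis=0)                # derived-allele count per site
    sfs = np.bincount(counts, minlength=n_chrom + 1)
    return sfs[1:n_chrom]                 # drop the monomorphic classes 0 and 2N

sfs = site_frequency_spectrum(genotypes)
```

Neutrality tests such as Tajima's D are then functions of this spectrum, which is why biased genotype calls at low depth propagate into the test statistics.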
Roosta, M; Ghaedi, M; Daneshfar, A; Sahraei, R; Asghari, A
2014-01-01
The present study focused on the removal of methylene blue (MB) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using different techniques such as SEM, XRD, and BET. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time (min) on MB removal were studied using central composite design (CCD), and the optimum experimental conditions were found using a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models showed the suitability and applicability of the Langmuir model. Analysis of the experimental adsorption data with various kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models showed the applicability of the second-order equation. A small amount of the proposed adsorbent (0.01 g) is applicable for successful removal of MB (RE > 95%) in a short time (1.6 min) with high adsorption capacity (104-185 mg g(-1)).
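As a minimal illustration of the isotherm-fitting step described above, the following sketch fits the Langmuir model qe = qmax·KL·Ce/(1 + KL·Ce) to hypothetical equilibrium data with SciPy (the numbers are invented, not taken from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Synthetic equilibrium data (hypothetical: Ce in mg/L, qe in mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = langmuir(Ce, 150.0, 0.12)  # noiseless, generated from known parameters

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[100.0, 0.05])
print(round(qmax, 1), round(KL, 3))  # recovers qmax ~ 150, KL ~ 0.12
```

The same pattern, swapping in the Freundlich or Dubinin-Radushkevich expression, is how competing isotherms are usually compared on a common data set.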
NASA Astrophysics Data System (ADS)
Nguyen, Gia Luong Huu
obtained experimental data, the research studied the control of airflow to regulate the temperature of reactors within the fuel processor. The dynamic model provided a platform to test the dynamic response for different control gains. With sufficient sensing and appropriate control, a rapid response to maintain the temperature of the reactor despite an increase in power was possible. The third part of the research studied the use of a fuel cell in conjunction with photovoltaic panels, and energy storage to provide electricity for buildings. This research developed an optimization framework to determine the size of each device in the hybrid energy system to satisfy the electrical demands of buildings and yield the lowest cost. The advantage of having the fuel cell with photovoltaic and energy storage was the ability to operate the fuel cell at baseload at night, thus reducing the need for large battery systems to shift the solar power produced in the day to the night. In addition, the dispatchability of the fuel cell provided an extra degree of freedom necessary for unforeseen disturbances. An operation framework based on model predictive control showed that the method is suitable for optimizing the dispatch of the hybrid energy system.
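The device-sizing idea above, choosing capacities that meet demand at the lowest capital cost, can be caricatured as a toy linear program; all prices and demands below are hypothetical stand-ins, not values from the research:

```python
from scipy.optimize import linprog

# Toy sizing LP: choose PV, fuel-cell, and battery capacities (kW)
# minimizing capital cost while covering a 40 kW day demand and a
# 30 kW night demand (when PV produces nothing). All numbers invented.
# x = [pv_kw, fc_kw, batt_kw]
c = [800, 1000, 1200]            # $ per kW of capacity
A_ub = [[-1, -1, -1],            # day:   pv + fc + batt >= 40
        [0, -1, -1]]             # night:      fc + batt >= 30
b_ub = [-40, -30]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.status, round(res.fun))  # 0 = solved; fuel cell covers the night
```

Even this caricature reproduces the qualitative conclusion: the dispatchable fuel cell serves the night baseload, so the battery (the most expensive item here) can shrink.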
Hassan, W.; Vensel, F.; Knowles, B.
2006-03-06
The inspection of critical rotating components of aircraft engines has made important advances over the last decade. The development of Phased Array (PA) inspection capability for billet and forging materials used in the manufacturing of critical engine rotating components has been a priority for Honeywell Aerospace. The demonstration of improved PA inspection system sensitivity over what is currently used at the inspection houses is a critical step in the development of this technology and its introduction to the supply base as a production inspection. As described in Part I (in these proceedings), a new phased array transducer was designed and manufactured for optimal inspection of eight inch diameter Ti-6Al-4V billets. After confirming that the transducer was manufactured in accordance with the design specifications, a validation study was conducted to assess the sensitivity improvement of the PA inspection (PAI) over the current capability of Multi-zone (MZ) inspection. The results of this study confirm the significant (≈ 6 dB in FBH # sensitivity) improvement of the PAI sensitivity over that of MZI.
Torres, Ricardo A; Mosteo, Rosa; Pétrier, Christian; Pulgarin, Cesar
2009-03-01
This work presents the application of experimental design to the ultrasonic degradation of alachlor, a pesticide classified as a priority substance by the European Commission within the scope of the Water Framework Directive. The effects of electrical power (20-80 W), pH (3-10) and substrate concentration (10-50 mg L(-1)) were evaluated. At a confidence level of 90%, pH showed a low effect on the initial degradation rate of alachlor, whereas electrical power, pollutant concentration and the interaction of these two parameters were significant. A reduced model taking into account the significant variables and interactions between variables showed a good correlation with the experimental results. Additional experiments conducted in natural and deionised water indicated that alachlor degradation by ultrasound is practically unaffected by the presence of potential *OH radical scavengers: bicarbonate, sulphate, chloride and oxalic acid. In both cases, alachlor was readily eliminated (approximately 75 min). However, after 4 h of treatment only 20% of the initial TOC was removed, showing that the alachlor by-products are recalcitrant to the ultrasonic action. A biodegradability test (BOD5/COD) carried out during the course of the treatment indicated that the ultrasonic system noticeably increases the biodegradability of the initial solution. PMID:18930694
NASA Astrophysics Data System (ADS)
Salehi Taleghani, Sara; Zamani Meymian, Mohammad Reza; Ameri, Mohsen
2016-10-01
In the present research, we report the fabrication, experimental characterization and theoretical analysis of semi- and fully flexible dye-sensitized solar cells (DSSCs) manufactured on bare and roughened stainless steel type 304 (SS304) substrates. The morphological, optical and electrical characterizations confirm the advantage of roughened SS304 over bare SS304 and even over common transparent conducting oxides (TCOs). A significant enhancement of about 51% in power conversion efficiency is obtained for the flexible device based on the roughened SS304 substrate (5.51%) compared to the bare SS304. The effect of roughening the SS304 substrates on electrical transport characteristics is also investigated by means of numerical modeling, with regard to the metal-semiconductor and interfacial resistance arising from the contact between the metallic substrate and the nanocrystalline semiconductor. The numerical modeling results provide a reliable theoretical backbone for the experimental implications, and highlight the stronger effect of series resistance, compared to the Schottky barrier, in lowering the fill factor of the SS304-based DSSCs. The findings of the present study nominate roughened SS304 as a promising replacement for conventional DSSC substrates, as well as introducing a highly accurate modeling framework to design and diagnose treated metallic or non-metallic DSSCs.
Optimized quadrature surface coil designs
Kumar, Ananda; Bottomley, Paul A.
2008-01-01
Background Quadrature surface MRI/MRS detectors comprised of circular loop and figure-8 or butterfly-shaped coils offer improved signal-to-noise-ratios (SNR) compared to single surface coils, and reduced power and specific absorption rates (SAR) when used for MRI excitation. While the radius of the optimum loop coil for performing MRI at depth d in a sample is known, the optimum geometry for figure-8 and butterfly coils is not. Materials and methods The geometries of figure-8 and square butterfly detector coils that deliver the optimum SNR are determined numerically by the electromagnetic method of moments. Figure-8 and loop detectors are then combined to create SNR-optimized quadrature detectors whose theoretical and experimental SNR performance are compared with a novel quadrature detector comprised of a strip and a loop, and with two overlapped loops optimized for the same depth at 3 T. The quadrature detection efficiency and local SAR during transmission for the three quadrature configurations are analyzed and compared. Results The SNR-optimized figure-8 detector has loop radius r8 ∼ 0.6d, so r8/r0 ∼ 1.3 in an optimized quadrature detector at 3 T. The optimized butterfly coil has side length ∼ d and crossover angle of ≥ 150° at the center. Conclusions These new design rules for figure-8 and butterfly coils optimize their performance as linear and quadrature detectors. PMID:18057975
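The design rules stated in the conclusions can be turned into a trivial helper; this sketch simply applies the quoted rules of thumb (r8 ≈ 0.6·d for the figure-8 loop radius and r8/r0 ≈ 1.3 relative to the circular loop) for a hypothetical target depth:

```python
def quadrature_coil_radii(depth_cm):
    """Apply the abstract's stated rules of thumb: figure-8 loop radius
    r8 ~ 0.6*d, and r8/r0 ~ 1.3 for the companion circular loop."""
    r8 = 0.6 * depth_cm
    r0 = r8 / 1.3
    return r8, r0

r8, r0 = quadrature_coil_radii(10.0)  # hypothetical 10 cm target depth
print(round(r8, 2), round(r0, 2))     # -> 6.0 4.62
```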
Muhammad, Syed Aun; Ahmed, Safia; Ismail, Tariq; Hameed, Abdul
2014-01-01
Polypeptide antimicrobials used against topical infections are generally reported to be obtained from mesophilic bacterial species. A thermophilic Geobacillus pallidus SAT4 was isolated from the hot climate of the Sindh Desert, Pakistan, and found to be active against Micrococcus luteus ATCC 10240, Staphylococcus aureus ATCC 6538, Bacillus subtilis NCTC 10400 and Pseudomonas aeruginosa ATCC 49189. The current experiment was designed to optimize the production of a novel thermostable polypeptide by applying the Taguchi statistical approach to various conditions, including incubation time, temperature, pH, aeration rate, and nitrogen and carbon concentrations. The two most important factors affecting antibiotic production were incubation time and nitrogen concentration, along with two interactions: incubation time/pH and incubation time/nitrogen concentration. Activity was evaluated by well diffusion assay. The antimicrobial produced was stable and active even at 55°C. Ammonium sulphate (AS) was used for antibiotic recovery, and the product was desalted by dialysis. The resulting protein was evaluated by SDS-PAGE. It was concluded that the novel thermostable protein produced by Geobacillus pallidus SAT4 is stable at higher temperatures and that its production level can be improved statistically at optimum values of pH, incubation time and nitrogen concentration, the most important factors for antibiotic production.
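One ingredient of the Taguchi approach used above is the "larger is better" signal-to-noise ratio, by which factor settings are ranked; a small sketch with hypothetical inhibition-zone replicates (not data from the study):

```python
import math

def sn_larger_is_better(values):
    """Taguchi 'larger is better' signal-to-noise ratio:
    S/N = -10 * log10(mean(1/y^2)). Higher S/N = better, more robust setting."""
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in values) / len(values))

# Hypothetical inhibition-zone diameters (mm) for two factor settings
setting_a = sn_larger_is_better([18.0, 20.0, 19.0])
setting_b = sn_larger_is_better([12.0, 11.0, 13.0])
print(round(setting_a, 2), round(setting_b, 2))  # setting A wins
```

In a full Taguchi analysis these S/N ratios would be averaged per factor level over an orthogonal array to pick the optimum combination.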
OPTIMAL NETWORK TOPOLOGY DESIGN
NASA Technical Reports Server (NTRS)
Yuen, J. H.
1994-01-01
This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
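The generate-and-test strategy described above can be sketched in a few lines; this toy version enumerates link subsets in increasing total cost and returns the first one that connects every station (the original PASCAL program's constraint handling and efficient ordered generation are omitted):

```python
from itertools import combinations

def connects_all(links, stations):
    """Check that the chosen links form one connected component over all stations."""
    adj = {s: set() for s in stations}
    for a, b, _ in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(stations))]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(adj[s] - seen)
    return seen == set(stations)

def cheapest_topology(links, stations):
    """Enumerate link subsets in increasing total cost; return the first
    acceptable (connected) design -- the cost-optimal one (toy version)."""
    subsets = [c for r in range(1, len(links) + 1)
               for c in combinations(links, r)]
    subsets.sort(key=lambda c: sum(cost for *_, cost in c))
    for combo in subsets:
        if connects_all(combo, stations):
            return combo
    return None

# Hypothetical links: (station, station, cost)
links = [("A", "B", 3), ("B", "C", 2), ("A", "C", 4), ("C", "D", 1)]
best = cheapest_topology(links, {"A", "B", "C", "D"})
print(sum(c for *_, c in best))  # -> 6, using links A-B, B-C, C-D
```

The brute-force enumeration stands in for the paper's efficient ordered generation; the key idea, that the first acceptable candidate in cost order is provably optimal, is the same.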
Experimental design methods for bioengineering applications.
Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri
2016-01-01
Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
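The simplest of the listed methods, the full factorial design, enumerates every combination of factor levels; a minimal sketch with hypothetical bioprocess factors:

```python
from itertools import product

def full_factorial(levels):
    """All level combinations for a full factorial design.
    `levels` maps factor name -> list of levels (hypothetical example)."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

runs = full_factorial({"pH": [5, 7], "temp_C": [30, 37], "aeration": ["low", "high"]})
print(len(runs))  # -> 2*2*2 = 8 runs
```

Fractional factorial, Plackett-Burman and the other designs named above can be seen as principled subsets of this full grid, trading runs for confounded higher-order interactions.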
Kindossi, Janvier Mêlégnonfan; Anihouvi, Victor Bienvenu; Vieira-Dalodé, Générose; Akissoé, Noël Houédougbé; Hounhouigan, Djidjoho Joseph
2016-03-01
Lanhouin is a traditional fermented salted fish made from the spontaneous and uncontrolled fermentation of whole salted cassava fish (Pseudotolithus senegalensis), mainly produced in the coastal regions of West Africa. The combined effects of NaCl content, citric acid concentration, and marination time on the physicochemical and microbiological characteristics of the fish fillet used for Lanhouin production were studied using a Doehlert experimental design, with the objective of preserving its quality and safety. Marination time had significant effects on total viable and lactic acid bacteria counts and on the NaCl content of the marinated fish fillet, while the pH was significantly affected by citric acid concentration and marination duration, with a high coefficient of determination (R² = 0.83). The experiments showed that the best conditions for the fish fillet marination process were a salt ratio of 10 g/100 g, a citric acid concentration of 2.5 g/100 g, and a marination time of 6 h. These optimum marinating conditions give the best quality of marinated fish flesh, leading to the safety of the final fermented product. This pretreatment is necessary in Lanhouin production processes to ensure its safety and quality. PMID:27004115
Achouri, Djamila; Sergent, Michelle; Tonetto, Alain; Piccerelle, Philippe; Andrieu, Véronique; Hornebecq, Virginie
2015-03-01
In the field of keratoconus treatment, a lipid-based liquid crystal nanoparticle system has been developed to improve the preocular retention and ocular bioavailability of riboflavin, a water-soluble drug. The formulation of this ophthalmic drug delivery system was optimized by a simplex-lattice experimental design. The delivery system is composed of three main components, monoacylglycerol (monoolein), poloxamer 407 and water, and two secondary components, riboflavin and glycerol (added to adjust the osmotic pressure). The amounts of the three main components were selected as the factors used to systematically optimize the dependent variables, encapsulation efficiency and particle size. In this way, 12 formulas describing the experimental domain of interest were prepared. Results obtained using small-angle X-ray scattering (SAXS) and cryo-transmission electron microscopy (cryo-TEM) evidenced the presence of nano-objects with either a sponge or an inverted hexagonal structure. In the zone of interest, the percentage of each component was determined so as to obtain both high encapsulation efficiency and small particle size. Two optimized formulations were found, F7 and F11. They are very close in the ternary phase diagram, as both contain 6.83% poloxamer 407, with 44.18% and 42.03% monoolein and 46.29% and 48.44% water for F7 and F11, respectively. These formulations displayed a good compromise between the inputs and outputs investigated.
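A {q, m} simplex-lattice design of the kind used above places candidate mixtures at all proportion vectors that are multiples of 1/m and sum to 1; a minimal, generic sketch (not the study's actual 12-formula design):

```python
from itertools import product
from fractions import Fraction

def simplex_lattice(q, m):
    """{q, m} simplex-lattice design: all q-component mixtures whose
    proportions are multiples of 1/m and sum to 1."""
    levels = [Fraction(i, m) for i in range(m + 1)]
    return [p for p in product(levels, repeat=q) if sum(p) == 1]

points = simplex_lattice(3, 2)  # three components (e.g. oil, surfactant, water)
print(len(points))  # -> 6 candidate blends: 3 vertices + 3 edge midpoints
```

Augmenting such a lattice with interior (centroid) points is the usual way to reach the number of runs needed for response-surface fitting.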
Roosta, M; Ghaedi, M; Shokri, N; Daneshfar, A; Sahraei, R; Asghari, A
2014-01-24
The present study applied experimental design optimization to the removal of malachite green (MG) from aqueous solution by ultrasound-assisted adsorption onto gold nanoparticles loaded on activated carbon (Au-NP-AC). This nanomaterial was characterized using different techniques such as FESEM, TEM, BET, and UV-vis measurements. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g), temperature and sonication time on MG removal were studied using central composite design (CCD), and the optimum experimental conditions were found using a desirability function (DF) combined with response surface methodology (RSM). Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models showed the suitability and applicability of the Langmuir model. The applicability of kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models was tested against the experimental data; the pseudo-second-order and intraparticle diffusion models control the kinetics of the adsorption process. A small amount of the proposed adsorbent (0.015 g) is applicable for successful removal of MG (RE > 99%) in a short time (4.4 min) with high adsorption capacity (140-172 mg g(-1)).
Colombo, Renata; Ferreira, Tanare C R; Ferreira, Renato A; Lanza, Marcos R V
2016-02-01
Mefenamic acid (MEF) is a non-steroidal anti-inflammatory drug indicated for relief of mild to moderate pain, and for the treatment of primary dysmenorrhea. The presence of MEF in raw and sewage waters has been detected worldwide at concentrations exceeding the predicted no-effect concentration. In this study, using experimental designs, different oxidative processes (H2O2, H2O2/UV, Fenton and photo-Fenton) were simultaneously evaluated for MEF degradation efficiency. The influence and interaction effects of the most important variables in the oxidative process (concentration and addition mode of hydrogen peroxide, concentration and type of catalyst, pH, reaction period and presence/absence of light) were investigated. The parameters were determined on the basis of maximum efficiency, to save time and minimize reagent consumption. According to the results, the photo-Fenton process is the best procedure to remove the drug from water. A reaction mixture containing 1.005 mmol L(-1) of ferrioxalate and 17.5 mmol L(-1) of hydrogen peroxide, added at the start of the reaction, at pH 6.1 and with 60 min of degradation, gave the most efficient result, promoting 95% MEF removal. The development and validation of a rapid and efficient qualitative and quantitative HPLC/UV methodology for detecting this pollutant in aqueous solution is also reported. The method can be applied in quality control of water that is generated and/or treated in municipal or industrial wastewater treatment plants.
L'Hocine, Lamia; Pitre, Mélanie
2016-03-01
A full factorial design was used to assess the single and interactive effects of three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various ionic strengths (I) on allergen extractability from and immunoglobulin E (IgE) immunoreactivity of peanut, almond, hazelnut, and pistachio. The results indicated that the type and ionic strength of the buffer had different effects on protein recovery from the nuts under study. Substantial differences in protein profiles, abundance, and IgE-binding intensity with different combinations of pH and ionic strength were found. A significant interaction between pH and ionic strength was observed for pistachio and almond. The optimal buffer system conditions, which maximized the IgE-binding efficiency of allergens and provided satisfactory to superior protein recovery yield and profiles, were carbonate buffer at an ionic strength of I=0.075 for peanut, carbonate buffer at I=0.15 for almond, phosphate buffer at I=0.5 for hazelnut, and borate at I=0.15 for pistachio. The buffer type and its ionic strength could be manipulated to achieve the selective solubility of desired allergens.
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
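The optimizer-wrapped-around-an-analysis-code loop described above can be sketched with a stand-in for the CFD evaluation; the quadratic "drag" surrogate and its design variables below are purely illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_drag(x):
    """Stand-in for a CFD evaluation: 'drag' as a function of two
    design variables (hypothetical quadratic surrogate, minimum at
    thickness 0.12, camber 0.02)."""
    t, c = x
    return (t - 0.12) ** 2 + 2.0 * (c - 0.02) ** 2 + 0.01

# Gradient-free search, as one might use when the analysis code
# provides no derivatives
res = minimize(surrogate_drag, x0=[0.2, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-10})
print(np.round(res.x, 3))  # -> [0.12 0.02]
```

In an ADNO-style workflow the surrogate call would be replaced by a full CFD run, which is why the cost per function evaluation dominates the choice of optimization algorithm.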
Structural Optimization in automotive design
NASA Technical Reports Server (NTRS)
Bennett, J. A.; Botkin, M. E.
1984-01-01
Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.
Tarley, César Ricardo Teixeira; Figueiredo, Eduardo da Costa; Matos, Geraldo Domingues
2005-11-01
The present paper describes the on-line coupling of a flow-injection system to a new technique, thermospray flame furnace AAS (TS-FF-AAS), for the preconcentration and determination of copper in water samples. Copper was preconcentrated onto polyurethane foam (PUF) complexed with ammonium O,O-diethyldithiophosphate (DDTP), while elution was performed using 80% (v/v) ethanol. An experimental design for optimizing the copper preconcentration system was established using a full factorial (2^4) design without replicates for screening and a Doehlert design for optimization, studying four variables: sample pH, DDTP concentration, presence of a coil, and sampling flow rate. The results obtained from the full factorial design, summarized in a Pareto chart, indicate that only the pH and the DDTP concentration, as well as their interaction, influence the system at the 95% confidence level. The proposed method provided a 65-fold preconcentration factor, thus notably improving the detectability of TS-FF-AAS. The detection limit was 0.22 microg/dm3, and the precision, expressed as the relative standard deviation (RSD) for eight independent determinations, was 2.7% and 1.1% for copper solutions containing 5 and 30 microg/dm3, respectively. The procedure was successfully applied to copper determination in water samples. PMID:16317902
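In a two-level full factorial screening like the 2^4 design above, a factor's main effect is simply the difference between the mean response at its high and low levels; a sketch with a hypothetical response model (not the study's data):

```python
import numpy as np
from itertools import product

# Coded 2^4 full factorial design matrix (-1/+1 levels); factor order is
# illustrative: pH, DDTP concentration, coil, flow rate
X = np.array(list(product([-1, 1], repeat=4)))

# Hypothetical response: only the first two factors (and their
# interaction) matter, mimicking the screening conclusion above
y = 50 + 8 * X[:, 0] + 5 * X[:, 1] + 3 * X[:, 0] * X[:, 1]

# Main effect = mean(y at +1) - mean(y at -1), per factor column
main_effects = X.T @ y / (len(y) / 2)
print(main_effects)  # -> [16. 10.  0.  0.]: pH and DDTP dominate
```

Bars of these effect magnitudes against a significance threshold are exactly what a Pareto chart displays.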
A free lunch in linearized experimental design?
NASA Astrophysics Data System (ADS)
Coles, Darrell; Curtis, Andrew
2011-08-01
The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms. This paper examines the influence of the NFL theorems on linearized statistical experimental design (SED). We consider four design algorithms with three different design objective functions to examine their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent to the study of transverse isotropy in many disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. We discuss differences in the performance of each design algorithm, providing a guideline for selecting design algorithms for other problems. As a by-product we demonstrate and discuss the principle of diminishing returns in SED, namely, that the value of experimental design decreases with experiment size. Another outcome of this study is a simple rule-of-thumb for prescribing optimal experiments for ellipse fitting, which bypasses the computational expense of SED. This is used to define a template for optimizing survey designs, under simple assumptions, for Amplitude Variations with Azimuth and Offset (AVAZ) seismics in the specialized problem of fracture characterization, such as is of interest in the petroleum industry. Finally, we discuss the scope of our conclusions for the NFL theorems as they apply to nonlinear and Bayesian SED.
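One common linearized SED criterion is D-optimality, maximizing det(GᵀG) for the linearized forward operator G. This toy sketch compares well-spread versus clustered observation azimuths for a conic (ellipse) fit; the trial ellipse and candidate angles are assumptions for illustration, not the paper's survey templates:

```python
import numpy as np

def conic_row(x, y):
    """Design-matrix row for fitting the conic A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1."""
    return [x * x, x * y, y * y, x, y]

def d_criterion(thetas, a=2.0, b=1.0):
    """log det(G^T G) for observation angles on a trial ellipse (larger is better)."""
    G = np.array([conic_row(a * np.cos(t), b * np.sin(t)) for t in thetas])
    return np.linalg.slogdet(G.T @ G)[1]

spread = np.linspace(0.1, np.pi - 0.1, 8)  # well-spread azimuths
clustered = np.linspace(0.1, 0.6, 8)       # clustered azimuths
print(d_criterion(spread) > d_criterion(clustered))  # -> True
```

This mirrors the paper's rule of thumb: spreading observation azimuths conditions the ellipse-fitting problem far better than clustering them, without needing a full SED optimization.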
Kumar, Neeraj; Shishu
2015-01-25
The study aims to statistically develop a microemulsion system of an antifungal agent, itraconazole, to overcome the shortcomings and adverse effects of currently used therapies. Following preformulation studies such as solubility determination, component selection and pseudoternary phase diagram construction, a 3-factor D-optimal mixture design was used to optimize a microemulsion with desirable formulation characteristics. The factors studied over sixteen experimental trials were the percent contents (w/w) of water, oil and surfactant, whereas the responses investigated were globule size, transmittance, drug skin retention and drug skin permeation in 6 h. The optimized microemulsion (OPT-ME) was incorporated in a Carbopol-based hydrogel to improve topical applicability. Physical characterization of the formulations was performed using particle size analysis, transmission electron microscopy, texture analysis and rheology. Ex vivo studies carried out in Wistar rat skin showed that the optimized formulation enhanced drug skin retention and permeation in 6 h in comparison to a conventional cream and a Capmul 908P oil solution of itraconazole. The in vivo evaluation of the optimized formulation was performed using a standardized Tinea pedis model in Wistar rats, with the results of the pharmacodynamic study obtained in terms of physical manifestations, fungal-burden score, histopathological profiles and oxidative stress. Rapid remission of Tinea pedis was observed in rats treated with the OPT-ME formulation in comparison to commercially available therapies (ketoconazole cream and oral itraconazole solution), indicating the superiority of the microemulsion hydrogel formulation over conventional approaches for treating superficial fungal infections. The formulation was stable for a period of twelve months under refrigeration and at ambient temperature. All results, therefore, suggest that the OPT-ME can prove to be a promising and rapid alternative to conventional
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
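The income-minus-cost optimization can be caricatured in a few lines; the quadratic income and linear cost models below are invented stand-ins for the patent's process models, and the single setpoint variable is a simplification:

```python
from scipy.optimize import minimize_scalar

def income(u):
    """Hypothetical revenue model vs. load setpoint u (stand-in)."""
    return 100.0 * u - 4.0 * u * u

def cost(u):
    """Hypothetical operating-cost model vs. u (stand-in)."""
    return 20.0 * u + 5.0

# Maximize profit = income - cost by minimizing its negative
res = minimize_scalar(lambda u: cost(u) - income(u), bounds=(0.0, 12.0),
                      method="bounded")
print(round(res.x, 2))  # optimum setpoint near u = 10
```

In the described system, the income and cost callables would be replaced by the model-based algorithms (230) and (225), with the optimizer searching over many operating parameters rather than one.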
Design Optimization Toolkit: Users' Manual
Aguilo Valentin, Miguel Alejandro
2014-07-01
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface that allows easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO RAY MIXTURE.
Risk assessors are becoming increasingly aware of the importance of assessing interactions between chemicals in a mixture. Most traditional designs for evaluating interactions are prohibitive when the number of chemicals in the mixture is large. However, evaluation of interacti...
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
NASA Astrophysics Data System (ADS)
Roosta, M.; Ghaedi, M.; Daneshfar, A.; Sahraei, R.
2014-03-01
In this research, the adsorption rate of safranine O (SO) onto tin sulfide nanoparticles loaded on activated carbon (SnS-NP-AC) was accelerated by ultrasound. SnS-NP-AC was characterized by different techniques such as SEM, XRD and UV-Vis measurements. The present results confirm that the ultrasound-assisted adsorption method has a remarkable ability to improve adsorption efficiency. The influence of parameters such as sonication time, adsorbent dosage, pH and initial SO concentration was examined and evaluated by central composite design (CCD) combined with response surface methodology (RSM) and a desirability function (DF). Conducting adsorption experiments at the optimal conditions, set as 4 min of sonication time, 0.024 g of adsorbent, pH 7 and 18 mg L-1 SO, made it possible to achieve a high removal percentage (98%) and a high adsorption capacity (50.25 mg g-1). Good agreement between experimental and predicted data was observed. Fitting the experimental equilibrium data to the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows that the Langmuir model suitably describes the actual adsorption behavior. Kinetic evaluation of the experimental data showed that the adsorption processes were well described by pseudo-second-order and intraparticle diffusion models.
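A central composite design of the kind used above augments a two-level factorial with axial (star) points and center runs. The sketch below generates CCD points in coded units for the study's four variables; the axial distance and number of center-point replicates are illustrative assumptions, not values taken from the paper.

```python
from itertools import product

def central_composite(k, alpha=1.0, center_runs=1):
    """Generate a central composite design in coded units for k factors:
    2^k factorial corners, 2k axial points at +/-alpha, and center runs."""
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centers = [[0.0] * k for _ in range(center_runs)]
    return corners + axial + centers

# Four factors (e.g., sonication time, adsorbent dose, pH, dye concentration):
design = central_composite(4, alpha=2.0, center_runs=1)
print(len(design))   # 25 runs: 16 corners + 8 axial + 1 center
```

A second-order (quadratic) response surface model is then fitted to the responses measured at these runs, which is what makes the RSM optimization and desirability analysis possible.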
Ogunwuyi, O; Adesina, S; Akala, E O
2015-03-01
We report here our efforts on the development of stealth biodegradable crosslinked poly-ε-caprolactone nanoparticles by free radical dispersion polymerization suitable for the delivery of bioactive agents. The uniqueness of the dispersion polymerization technique is that it is surfactant free, thereby obviating the problems known to be associated with the use of surfactants in the fabrication of nanoparticles for biomedical applications. Aided by statistical software for experimental design and analysis, we used a D-optimal mixture statistical experimental design to generate thirty batches of nanoparticles prepared by varying the proportions of the components (poly-ε-caprolactone macromonomer, crosslinker, initiators and stabilizer) in an acetone/water system. Morphology of the nanoparticles was examined using scanning electron microscopy (SEM). Particle size and zeta potential were measured by dynamic light scattering (DLS). Scheffe polynomial models were generated to predict particle size (nm) and particle surface zeta potential (mV) as functions of the proportions of the components. Solutions were returned from simultaneous optimization of the response variables for component combinations to (a) minimize nanoparticle size (small nanoparticles are internalized into diseased organs easily, avoid reticuloendothelial clearance and lung filtration) and (b) maximize the negative zeta potential values, as it is known that, following injection into the blood stream, nanoparticles with a positive zeta potential pose a threat of causing transient embolism and rapid clearance compared to negatively charged particles. In vitro availability isotherms show that the nanoparticles sustained the release of docetaxel for 72 to 120 hours depending on the formulation. The data show that nanotechnology platforms for controlled delivery of bioactive agents can be developed based on the nanoparticles.
Sibanda, Wilbert; Pillay, Viness; Danckwerts, Michael P; Viljoen, Alvaro M; van Vuuren, Sandy; Khan, Riaz A
2004-03-12
A Plackett-Burman design was employed to develop and optimize a novel crosslinked calcium-aluminum-alginate-pectinate oilisphere complex as a potential system for the in vitro site-specific release of Mentha piperita, an essential oil used for the treatment of irritable bowel syndrome. The physicochemical and textural properties (dependent variables) of this complex were found to be highly sensitive to changes in the concentration of the polymers (0%-1.5% wt/vol), crosslinkers (0%-4% wt/vol), and crosslinking reaction times (0.5-6 hours) (independent variables). Particle size analysis indicated both unimodal and bimodal populations, with the highest frequency at 2 mm oilispheres. Oil encapsulation ranged from 6 to 35 mg/100 mg oilispheres. Gravimetric changes of the crosslinked matrix indicated significant ion sequestration and loss in an exponential manner, while matrix erosion followed Higuchi's cube root law. Among the various measured responses, the total fracture energy was the most suitable optimization objective (R2 = 0.88, Durbin-Watson Index = 1.21%, Coefficient of Variation (CV) = 33.21%). The Lagrangian technique produced no significant differences (P > .05) between the experimental and predicted total fracture energy values (0.0150 vs 0.0107 J). An artificial neural network, used as an alternative predictive tool for the total fracture energy, was highly accurate (final mean square error of the optimal network epoch approximately 0.02). Fused-coated optimized oilispheres produced a 4-hour lag phase followed by zero-order kinetics (n > 0.99), whereby analysis of release data indicated that diffusion (Fickian constant k1 = 0.74 vs relaxation constant k2 = 0.02) was the predominant release mechanism. PMID:15198539
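Plackett-Burman designs like the one used above screen many two-level factors in very few runs. A minimal sketch of the standard 8-run construction (cyclic shifts of the published generator row, plus a row of low levels) is shown below; the construction is textbook material, not code from the paper.

```python
def plackett_burman_8():
    """Build the 8-run Plackett-Burman design for up to 7 two-level factors:
    cyclic shifts of the published generator row, plus a final row of -1s."""
    gen = [1, 1, 1, -1, 1, -1, -1]   # N = 8 generator (Plackett & Burman, 1946)
    rows = [[gen[(c - r) % 7] for c in range(7)] for r in range(7)]
    rows.append([-1] * 7)
    return rows

design = plackett_burman_8()
for i in range(7):
    col_i = [row[i] for row in design]
    assert sum(col_i) == 0                # each column is balanced
    for j in range(i + 1, 7):
        col_j = [row[j] for row in design]
        assert sum(a * b for a, b in zip(col_i, col_j)) == 0   # orthogonal
```

Balance and pairwise orthogonality of the columns are what let main effects of up to seven factors be estimated independently from only eight experiments.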
Yiamsawas, Doungporn; Boonpavanitchakul, Kanittha; Kangwansupamonkon, Wiyong
2011-05-15
Research highlights: → Taguchi robust design can be applied to study ZnO nanocrystal growth. → Spherical-like and rod-like ZnO nanocrystals can be obtained from the solvothermal method. → The [NaOH]/[Zn²⁺] ratio plays the most important role in the aspect ratio of the prepared ZnO. -- Abstract: Zinc oxide (ZnO) nanoparticles and nanorods were successfully synthesized by a solvothermal process. Taguchi robust design was applied to study the factors that most strongly influence ZnO nanocrystal growth. The factors studied were the molar concentration ratio of sodium hydroxide to zinc acetate, the amount of polymer template and the molecular weight of the polymer template. Transmission electron microscopy and X-ray diffraction were used to analyze the experimental results. The results show that the concentration ratio of sodium hydroxide to zinc acetate has the greatest effect on ZnO nanocrystal growth.
Asfaram, Arash; Ghaedi, Mehrorang; Goudarzi, Alireza
2016-09-01
A simple, low-cost and ultrasensitive method is described for the simultaneous preconcentration and determination of trace amounts of auramine-O (AO) and malachite green (MG) in aqueous media following accumulation on novel, lower-toxicity nanomaterials by an ultrasound-assisted dispersive solid phase micro-extraction (UA-DSPME) procedure combined with spectrophotometric detection. The Mn-doped ZnS nanoparticles loaded on activated carbon were characterized by field emission scanning electron microscopy (FE-SEM), particle size distribution, X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analyses, and subsequently were used as a green and efficient material for dye accumulation. The contributions of experimental variables such as ultrasonic time, ultrasonic temperature, adsorbent mass, vortex time, ionic strength, pH and elution volume were optimized through experimental design, and the preconcentrated analytes were efficiently eluted by acetone. A preliminary Plackett-Burman design was applied to select the most significant factors; useful information about the main and interaction effects of the significant variables (ultrasonic time, adsorbent mass, elution volume and pH) was then obtained by central composite design combined with response surface analysis, and the optimum experimental conditions were set at pH 8.0, 1.2 mg of adsorbent, 150 μL of eluent and 3.7 min of sonication. Under optimized conditions, the average recoveries (five replicates) for the two dyes (spiked at 500.0 ng mL(-1)) ranged from 92.80% to 97.70% with acceptable RSDs of less than 4.0% over a linear range of 3.0-5000.0 ng mL(-1) for AO and MG in water samples, with regression coefficients (R(2)) of 0.9975 and 0.9977, respectively. Acceptable limits of detection of 0.91 and 0.61 ng mL(-1) for AO and MG, respectively, and high accuracy and repeatability are unique advantages of the present method, improving the figures of merit for their accurate determination at trace levels in complicated matrices.
Yang, Yu; Bai, Wenkun; Chen, Yini; Lin, Yanduan; Hu, Bing
2015-01-01
The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency, low-intensity ultrasound (US)-mediated microvessel disruption, and to identify the combination of the five factors that maximizes the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time all had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: sound intensity, 1.00 W/cm²; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately after treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3^6) orthogonal array design was successfully applied to determine the parameter combination that optimized the API following treatment. Under the optimum orthogonal design conditions, a minimum API of 19.97±2.66 after low-frequency, low-intensity US-mediated blood perfusion blockage was obtained. PMID:26722279
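The factor ranking reported above (sound intensity first, irradiation time last) is the kind of result produced by Taguchi-style range analysis of an orthogonal-array experiment: average the response at each level of each factor, then rank factors by the spread of those level means. A sketch with a small L4(2^3) array and made-up responses (not the study's data):

```python
def range_analysis(runs, responses):
    """Taguchi-style range analysis: average the response at each level of
    each factor, then rank factors by the range (max - min) of level means."""
    n_factors = len(runs[0])
    ranges = []
    for f in range(n_factors):
        by_level = {}
        for run, y in zip(runs, responses):
            by_level.setdefault(run[f], []).append(y)
        means = [sum(v) / len(v) for v in by_level.values()]
        ranges.append(max(means) - min(means))
    return ranges

# Toy L4(2^3) orthogonal array with hypothetical responses; factor 0 dominates.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
y = [10.0, 12.0, 20.0, 22.0]
print(range_analysis(L4, y))   # [10.0, 2.0, 0.0]
```

Because the array is orthogonal, each level mean averages over a balanced mix of the other factors' levels, so the ranges reflect each factor's own influence.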
Kristoffersen, Lena; Skuterud, Bjørn; Larssen, Bente R; Skurtveit, Svetlana; Smith-Kielland, Anne
2005-01-01
A sensitive, fast, simple, and high-throughput enzymatic method for the quantification of ethanol in whole blood on the Hitachi 917 is presented. Alcohol dehydrogenase (ADH) oxidizes ethanol to acetaldehyde using the coenzyme nicotinamide adenine dinucleotide (NAD+), which is concurrently reduced to form NADH. Method development was performed with the aid of factorial design, varying pH and the concentrations of NAD+ and ADH. The linear range increased and the reaction end point decreased with increasing NAD+ concentration and pH. The method was linear in the concentration range 0.0024-0.4220 g/dL. The limits of detection and quantification were 0.0007 g/dL and 0.0024 g/dL, respectively. Relative standard deviations for repeatability and within-laboratory reproducibility were in the ranges 0.7-5.7% and 1.6-8.9%, respectively. The correlation coefficient when compared with a headspace gas chromatography-flame ionization detection method was 0.9903. Analysis of authentic positive blood specimens gave results slightly lower than those of the reference method.
ElShaer, Amr; Mustafa, Shelan; Kasar, Mohamad; Thapa, Sapana; Ghatora, Baljit; Alany, Raid G
2016-04-20
The human eye is one of the most accessible organs in the body; nonetheless, its physiology and associated precorneal factors such as nasolacrimal drainage, blinking, tear film, tear turnover, and induced lacrimation significantly decrease the residence time of any foreign substance, including pharmaceutical dosage forms. Soft contact lenses are promising delivery devices that can sustain drug release and prolong residence time by acting as a geometric barrier to drug diffusion into the tear fluid. This study investigates experimental parameters such as the composition of polymer mixtures, the stabilizer and the amount of active pharmaceutical ingredient in the preparation of a polymeric drug delivery system for the topical ocular administration of prednisolone. To achieve this goal, prednisolone-loaded poly(lactic-co-glycolic acid) (PLGA) nanoparticles were prepared by the single emulsion solvent evaporation method. Prednisolone was quantified using a validated high performance liquid chromatography (HPLC) method. Nanoparticle size was mostly affected by the amount of co-polymer (PLGA) used, whereas drug load was mostly affected by the amount of prednisolone (API) used. A longer homogenization time along with a higher amount of API yielded the smallest nanoparticles. The nanoparticles prepared had an average particle size of 347.1 ± 11.9 nm with a polydispersity index of 0.081. The nanoparticles were then incorporated into the contact lens mixture before lens preparation. Clear and transparent contact lenses were successfully prepared. When the nanoparticle (NP)-loaded contact lenses were compared with control contact lenses (without NPs), a decrease in hydration of 2% (31.2% ± 1.25% hydration for the 0.2 g NP-loaded contact lenses) and in light transmission of 8% (94.5% for unloaded contact lenses vs. 86.23% for contact lenses incorporating 0.2 g NP) was observed. The wettability of the contact lenses remained within the desired value (<90°) even upon incorporation of the NP. NP alone and
Designing an Experimental "Accident"
ERIC Educational Resources Information Center
Picker, Lester
1974-01-01
Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)
Experimental design in analytical chemistry--part II: applications.
Ebrahimi-Najafabadi, Heshmatollah; Leardi, Riccardo; Jalali-Heravi, Mehdi
2014-01-01
This paper reviews the applications of experimental design to optimize some analytical chemistry techniques such as extraction, chromatography separation, capillary electrophoresis, spectroscopy, and electroanalytical methods.
Design optimization of transonic airfoils
NASA Technical Reports Server (NTRS)
Joh, C.-Y.; Grossman, B.; Haftka, R. T.
1991-01-01
Numerical optimization procedures were considered for the design of airfoils in transonic flow based on the transonic small disturbance (TSD) and Euler equations. A sequential approximation optimization technique was implemented with an accurate approximation of the wave drag based on Nixon's coordinate straining approach. A modification of the Euler surface boundary conditions was implemented in order to efficiently compute design sensitivities without remeshing the grid. Two effective design procedures producing converged designs in approximately 10 global iterations were developed: interchanging the roles of the objective function and the constraint, and direct lift maximization with move limits fixed as absolute values of the design variables.
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
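The habitat tool described above couples genetic algorithms with multi-objective analysis. As an illustrative sketch (not the NASA tool itself), a minimal GA with tournament selection, one-point crossover, mutation and elitism, applied to a hypothetical scalarized wall-design objective over a bitstring of layer choices, might look like:

```python
import random

def ga(fitness, n_bits=16, pop_size=30, gens=60, p_mut=0.05, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point crossover,
    bit-flip mutation, and elitism (best individual always carried forward)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                                  # elitism
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)    # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best

# Hypothetical scalarized objective: reward protective layers, penalize mass.
weights = [3, 1, 2, 1] * 4
def f(x):
    return sum(w * b for w, b in zip(weights, x)) - 0.5 * sum(x)

best = ga(f)
```

Elitism guarantees the best fitness never decreases between generations, which is why the "survival of the fittest" search reliably climbs even in the large design spaces the abstract describes.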
Experimental design of a waste glass study
Piepel, G.F.; Redgate, P.E.; Hrma, P.
1995-04-01
A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150°C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.
Sanches, Livia Rentas; Seulin, Saskia Carolina; Leyton, Vilma; Paranhos, Beatriz Aparecida Passos Bismara; Pasqualucci, Carlos Augusto; Muñoz, Daniel Romero; Osselton, Michael David; Yonamine, Mauricio
2012-04-01
Undoubtedly, whole blood and vitreous humor have been biological samples of great importance in forensic toxicology. The determination of opiates and their metabolites has been essential for better interpretation of toxicological findings. This report describes the application of experimental design and response surface methodology to optimize conditions for enzymatic hydrolysis of morphine-3-glucuronide and morphine-6-glucuronide. The analytes (free morphine, 6-acetylmorphine and codeine) were extracted from the samples using solid-phase extraction on mixed-mode cartridges, followed by derivatization to their trimethylsilyl derivatives. The extracts were analysed by gas chromatography-mass spectrometry with electron ionization and full scan mode. The method was validated for both specimens (whole blood and vitreous humor). A significant matrix effect was found by applying the F-test. Different recovery values were also found (82% on average for whole blood and 100% on average for vitreous humor). The calibration curves were linear for all analytes in the concentration range of 10-1,500 ng/mL. The limits of detection ranged from 2.0 to 5.0 ng/mL. The method was applied to a case in which a victim presented with a previous history of opiate use.
Design of optimal systolic arrays
Li, G.J.; Wah, B.W.
1985-01-01
Conventional design of systolic arrays is based on the mapping of an algorithm onto an interconnection of processing elements in a VLSI chip. This mapping is done in an ad hoc manner, and the resulting configuration usually represents a feasible but suboptimal design. In this paper, systolic arrays are characterized by three classes of parameters: the velocities of data flows, the spatial distributions of data, and the periods of computation. By relating these parameters in constraint equations that govern the correctness of the design, the design is formulated into an optimization problem. The size of the search space is a polynomial of the problem size, and a methodology to systematically search and reduce this space and to obtain the optimal design is proposed. Some examples of applying the method, including matrix multiplication, finite impulse response filtering, deconvolution, and triangular-matrix inversion, are given. 30 references.
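Because the search space in the formulation above is polynomial in the problem size, an optimal design can be found by systematic enumeration over the parameter classes (data-flow velocities, distributions, periods) subject to correctness constraints. The sketch below is a deliberately toy stand-in: the constraint equations and cost function are invented for illustration and are not Li and Wah's actual formulation.

```python
from itertools import product

def search_design(max_val=3):
    """Exhaustively search a small integer parameter space (data-flow
    velocities va, vb and computation period t) under toy constraint
    equations, minimizing a proxy for completion time."""
    best = None
    for va, vb, t in product(range(-max_val, max_val + 1),
                             range(-max_val, max_val + 1),
                             range(1, max_val + 1)):
        if va == vb:                 # equal velocities: data streams collide
            continue
        if (t * (va - vb)) % 2:      # toy timing-alignment constraint
            continue
        cost = t * (abs(va) + abs(vb) + 1)   # proxy for completion time
        if best is None or cost < best[0]:
            best = (cost, va, vb, t)
    return best

print(search_design())
```

The point is structural: once correctness is expressed as constraint equations over a polynomially sized space, optimal design reduces to a feasibility-filtered search rather than ad hoc mapping.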
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
Optimization methods for alternative energy system design
NASA Astrophysics Data System (ADS)
Reinhardt, Michael Henry
An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the designs of alternative energy systems are generally discrete (e.g., numbers of photovoltaic modules, thermal panels, layers of glazing in windows), the literature shows that the optimization methods used historically for design utilize continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, which reduce the overall UA of a 22 ft. electric shuttle bus from 488 to 202 Btu/hr-°F, can eliminate the need for fossil fuel heating systems when operating in the northeast United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints, such as being environmentally friendly, in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m² of deep-bed dryer at 0.4 m depth, a UTC array consisting of five 1.1 m² panels and a photovoltaic array consisting of one 0.25 m² panel produce the most dry coffee per dollar invested in the system. In general this study
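The abstract's central point, that discrete design variables demand integer programming rather than rounded linear-programming solutions, can be shown with a tiny brute-force integer program. All outputs and prices below are hypothetical illustration values, not data from the study.

```python
from itertools import product

def integer_opt(demand=10.0):
    """Brute-force integer program: choose counts of PV modules and thermal
    panels to meet a daily energy demand (kWh) at minimum cost.
    Unit outputs and prices are hypothetical."""
    pv_out, th_out = 3.0, 4.0         # kWh/day per unit
    pv_cost, th_cost = 200.0, 350.0   # dollars per unit
    best = None
    for n_pv, n_th in product(range(11), repeat=2):
        if n_pv * pv_out + n_th * th_out < demand:
            continue                  # infeasible: demand not met
        cost = n_pv * pv_cost + n_th * th_cost
        if best is None or cost < best[0]:
            best = (cost, n_pv, n_th)
    return best

print(integer_opt())   # (750.0, 2, 1)
```

The continuous relaxation would buy 10/3 ≈ 3.33 PV modules (≈ $667); rounding up to 4 modules costs $800, worse than the true integer optimum of $750 (2 PV modules plus 1 thermal panel), which is exactly the pitfall of treating discrete design variables as continuous.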
A free lunch in linearized experimental design?
NASA Astrophysics Data System (ADS)
Coles, D.; Curtis, A.
2009-12-01
The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function, because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, are generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns. That is, we see that the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all. Another outcome
Gandolfi, F; Malleret, L; Sergent, M; Doumenq, P
2015-08-01
The water framework directives (WFD 2000/60/EC and 2013/39/EU) require European countries to monitor the quality of their aquatic environment. Among the priority hazardous substances targeted by the WFD, short chain chlorinated paraffins C10-C13 (SCCPs) still represent an analytical challenge, because few laboratories are nowadays able to analyze them. Moreover, an annual average quality standard as low as 0.4 μg L(-1) was set for SCCPs in surface water. Therefore, to test for compliance, the implementation of sensitive and reliable analysis methods for SCCPs in water is required. The aim of this work was to address this issue by evaluating automated solid phase micro-extraction (SPME) combined on line with gas chromatography-electron capture negative ionization mass spectrometry (GC/ECNI-MS). Fiber polymer, extraction mode, ionic strength, extraction temperature and time were the most significant thermodynamic and kinetic parameters studied. To determine suitable working ranges for the factors, the extraction conditions were first studied using a classical one-factor-at-a-time approach. Then a mixed-level factorial 3×2(3) design was performed in order to identify the most influential parameters and to estimate potential interaction effects between them. The most influential factors, i.e. extraction temperature and duration, were optimized using a second experimental design in order to maximize the chromatographic response. At the close of the study, a method involving headspace SPME (HS-SPME) coupled to GC/ECNI-MS is proposed. The optimum extraction conditions were sample temperature 90 °C, extraction time 80 min, with the 100 μm PDMS fiber and desorption at 250 °C for 2 min. A linear response from 0.2 ng mL(-1) to 10 ng mL(-1) with r(2) = 0.99 and limits of detection and quantification of 4 pg mL(-1) and 120 pg mL(-1), respectively, in MilliQ water were achieved. The method proved to be applicable to different types of waters and shows key advantages, such
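A mixed-level factorial 3×2^3 design like the one above simply crosses every level of one 3-level factor with every combination of three 2-level factors, giving 24 runs. The sketch below generates such a run table; the factor names and levels are illustrative stand-ins, not the paper's exact settings.

```python
from itertools import product

# Mixed-level full factorial 3 x 2^3: one 3-level factor (e.g., fiber polymer)
# crossed with three 2-level factors. Names/levels are hypothetical examples.
levels = {
    "fiber":  ["PDMS", "PA", "PDMS/DVB"],
    "mode":   ["direct", "headspace"],
    "salt":   ["none", "NaCl"],
    "temp_C": [40, 90],
}
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs))   # 3 * 2 * 2 * 2 = 24 runs
```

Because the design is a full factorial, every two-factor interaction is estimable, which is what lets the study flag extraction temperature and duration as the factors worth refining in the second design.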
Gandolfi, F; Malleret, L; Sergent, M; Doumenq, P
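The mixed-level 3×2³ factorial screening described above simply enumerates every combination of factor levels. A minimal sketch of that enumeration, with factor names and levels assumed for illustration rather than taken from the paper:

```python
from itertools import product

def mixed_factorial(levels):
    """Enumerate every run of a mixed-level full factorial design.

    `levels` maps factor name -> list of coded or named levels.
    Returns one dict per experimental run.
    """
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

# Hypothetical 3x2^3 screening of SPME factors (names and levels assumed):
design = mixed_factorial({
    "fiber":       ["PDMS", "PA", "PDMS/DVB"],  # one 3-level factor
    "mode":        ["headspace", "direct"],     # three 2-level factors
    "ionic_str":   [-1, +1],
    "temperature": [-1, +1],
})
print(len(design))  # 3 * 2^3 = 24 runs
```

Main effects and interaction effects are then estimated by contrasting the measured responses across these 24 runs.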
A Tutorial on Adaptive Design Optimization
Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.
2013-01-01
Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275
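The core computation behind adaptive design optimization can be illustrated by a grid-based step that picks the design maximizing the expected information gain (mutual information between the model indicator and the predicted outcome). The toy retention models and design grid below are assumptions for illustration, not the paper's actual models:

```python
import math

def ado_step(designs, models, prior):
    """One grid-based ADO step for a binary outcome (illustrative sketch).

    `models[m](d)` returns P(y=1 | design d, model m); `prior[m]` is the
    current model probability. A design's utility is the mutual information
    between the model indicator and the outcome.
    """
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)

    best, best_u = None, -1.0
    for d in designs:
        p1 = sum(prior[m] * models[m](d) for m in models)  # marginal P(y=1)
        marginal = entropy([p1, 1 - p1])
        conditional = sum(prior[m] * entropy([models[m](d), 1 - models[m](d)])
                          for m in models)
        u = marginal - conditional                         # information gain
        if u > best_u:
            best, best_u = d, u
    return best, best_u

# Two toy forgetting models (assumed forms): power-law vs. exponential decay;
# the "design" is the retention interval at which to test.
models = {"power": lambda t: (1 + t) ** -0.5,
          "exp":   lambda t: math.exp(-0.2 * t)}
best_t, gain = ado_step(designs=range(1, 21), models=models,
                        prior={"power": 0.5, "exp": 0.5})
```

In a full ADO loop, the observed outcome at `best_t` would update the model probabilities by Bayes' rule before the next step.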
Acoustic design by topology optimization
NASA Astrophysics Data System (ADS)
Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole
2008-11-01
Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling, or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. Reductions of up to 10 dB for a single barrier and almost 30 dB when using two barriers are achieved compared to conventional sound barriers.
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
Research on optimization-based design
NASA Astrophysics Data System (ADS)
Balling, R. J.; Parkinson, A. R.; Free, J. C.
1989-04-01
Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.
NASA Astrophysics Data System (ADS)
Appiah, Williams Agyei; Park, Joonam; Song, Seonghyun; Byun, Seoungwoo; Ryou, Myung-Hyun; Lee, Yong Min
2016-07-01
LiNi0.6Co0.2Mn0.2O2 cathodes of different thicknesses and porosities are prepared and tested, in order to optimize the design of lithium-ion cells. A mathematical model for simulating multiple types of particles with different contact resistances in a single electrode is adopted to study the effects of the different cathode thicknesses and porosities on lithium-ion transport using the nonlinear least squares technique. The model is used to optimize the design of LiNi0.6Co0.2Mn0.2O2/graphite lithium-ion cells by employing it to generate a number of Ragone plots. The cells are optimized for cathode porosity and thickness, while the anode porosity, anode-to-cathode capacity ratio, thickness and porosity of separator, and electrolyte salt concentration are held constant. Optimization is performed for discharge times ranging from 10 h to 5 min. Using the Levenberg-Marquardt method as a fitting technique, accounting for multiple particles with different contact resistances, and employing a rate-dependent solid-phase diffusion coefficient results in there being good agreement between the simulated and experimentally determined discharge curves. The optimized parameters obtained from this study should serve as a guide for the battery industry as well as for researchers for determining the optimal cell design for different applications.
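The abstract above names the Levenberg-Marquardt method as the fitting technique. A minimal, self-contained sketch of the algorithm (damped Gauss-Newton with a numeric Jacobian) on an assumed two-parameter exponential model, not the authors' electrochemical model:

```python
import math

def levenberg_marquardt(f, p0, t, y, n_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt fit of a 2-parameter model (sketch).

    `f(p, ti)` is the model prediction. The damped 2x2 normal equations
    (J^T J + lam * diag(J^T J)) delta = -J^T r are solved by Cramer's rule,
    with a forward-difference numeric Jacobian.
    """
    p = list(p0)
    sse = lambda q: sum((f(q, ti) - yi) ** 2 for ti, yi in zip(t, y))
    for _ in range(n_iter):
        r = [f(p, ti) - yi for ti, yi in zip(t, y)]
        h = 1e-6
        J = [[(f([p[0] + h, p[1]], ti) - f(p, ti)) / h,
              (f([p[0], p[1] + h], ti) - f(p, ti)) / h] for ti in t]
        A = [[sum(Ji[a] * Ji[b] for Ji in J) for b in range(2)] for a in range(2)]
        g = [sum(Ji[a] * ri for Ji, ri in zip(J, r)) for a in range(2)]
        A[0][0] *= 1 + lam
        A[1][1] *= 1 + lam
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        if det == 0:
            break
        d0 = (-g[0] * A[1][1] + g[1] * A[0][1]) / det
        d1 = (-A[0][0] * g[1] + A[1][0] * g[0]) / det
        trial = [p[0] + d0, p[1] + d1]
        if sse(trial) < sse(p):
            p, lam = trial, lam * 0.5   # accept the step, relax damping
        else:
            lam *= 10.0                 # reject the step, increase damping
    return p

# Fit y = a * exp(-b * t) to noise-free synthetic "discharge-like" data:
ts = list(range(10))
ys = [2.0 * math.exp(-0.3 * ti) for ti in ts]
a, b = levenberg_marquardt(lambda p, ti: p[0] * math.exp(-p[1] * ti),
                           [1.0, 0.1], ts, ys)
```

The damping parameter interpolates between gradient descent (large `lam`) and Gauss-Newton (small `lam`), which is what makes the method robust far from the optimum.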
Optimization of confocal scanning laser ophthalmoscope design
Dhalla, Al-Hafeez; Kelly, Michael P.; Farsiu, Sina; Izatt, Joseph A.
2013-01-01
Confocal scanning laser ophthalmoscopy (cSLO) enables high-resolution and high-contrast imaging of the retina by employing spatial filtering for scattered light rejection. However, to obtain optimized image quality, one must design the cSLO around scanner technology limitations and minimize the effects of ocular aberrations and imaging artifacts. We describe a cSLO design methodology resulting in a simple, relatively inexpensive, and compact lens-based cSLO design optimized to balance resolution and throughput for a 20-deg field of view (FOV) with minimal imaging artifacts. We tested the imaging capabilities of our cSLO design with an experimental setup from which we obtained fast and high signal-to-noise ratio (SNR) retinal images. At lower FOVs, we were able to visualize parafoveal cone photoreceptors and nerve fiber bundles even without the use of adaptive optics. Through an experiment comparing our optimized cSLO design to a commercial cSLO system, we show that our design demonstrates a significant improvement in both image quality and resolution. PMID:23864013
Design Of Theoretically Optimal Thermoacoustic Cooling Device
NASA Astrophysics Data System (ADS)
Tisovský, Tomáš; Vít, Tomáš
2016-03-01
The aim of this article is to design a theoretically optimal thermoacoustic cooling device. The opening chapter gives the reader a brief introduction to thermoacoustics, specializing in the thermoacoustic principle in the refrigerator regime. The subsequent part of the article explains the principle on which thermoacoustics is simulated in DeltaEC. The executed numerical simulations are listed, and the resulting thermoacoustic cooling device design is presented along with its main operating characteristics. In conclusion, recommendations for future experimental work are given and the results are discussed.
Computational Optimization of a Natural Laminar Flow Experimental Wing Glove
NASA Technical Reports Server (NTRS)
Hartshorn, Fletcher
2012-01-01
Computational optimization of a natural laminar flow experimental wing glove mounted on a business jet is presented and discussed. The process of designing a laminar flow wing glove starts with creating an optimized two-dimensional airfoil and then lofting it into a three-dimensional wing glove section. The airfoil design process does not consider three-dimensional flow effects such as cross flow due to wing sweep, as well as engine and body interference. Therefore, once an initial glove geometry is created from the airfoil, the three-dimensional wing glove has to be optimized to ensure that the desired extent of laminar flow is maintained over the entire glove. TRANAIR, a non-linear full potential solver with a coupled boundary layer code, was used as the main tool in the design and optimization process of the three-dimensional glove shape. The optimization process uses the Class-Shape-Transformation method to perturb the geometry, with geometric constraints that allow for a 2-in clearance from the main wing. The three-dimensional glove shape was optimized with the objective of having a spanwise uniform pressure distribution that matches the optimized two-dimensional pressure distribution as closely as possible. Results show that with the appropriate inputs, the optimizer is able to match the two-dimensional pressure distributions practically across the entire span of the wing glove. This gives the experiment a much higher probability of achieving a large extent of natural laminar flow in flight.
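The Class-Shape-Transformation (CST) parameterization mentioned above represents a surface as a class function multiplied by a Bernstein-polynomial shape function whose weights the optimizer perturbs. A sketch under assumed exponents and weights, not the glove's actual geometry:

```python
from math import comb

def cst(x, weights, n1=0.5, n2=1.0):
    """Class-Shape-Transformation surface value at chord fraction x in [0, 1].

    Class function x^n1 * (1-x)^n2 (round nose, sharp trailing edge for the
    assumed exponents) times a Bernstein-polynomial shape function whose
    coefficients are the design variables.
    """
    n = len(weights) - 1
    shape = sum(w * comb(n, i) * x ** i * (1 - x) ** (n - i)
                for i, w in enumerate(weights))
    return x ** n1 * (1 - x) ** n2 * shape

# Perturbing one Bernstein weight (as an optimizer would) changes the surface
# smoothly while keeping the leading and trailing edges fixed:
base = [cst(i / 10, [0.2, 0.2, 0.2]) for i in range(11)]
bump = [cst(i / 10, [0.2, 0.3, 0.2]) for i in range(11)]
```

Because each Bernstein basis function is smooth and local in influence, such perturbations keep the geometry well behaved under the optimizer's geometric constraints.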
NASA Astrophysics Data System (ADS)
Chamkouri, Narges; Niazi, Ali; Zare-Shahabadi, Vali
2016-03-01
A novel pH optical sensor was prepared by immobilizing an azo dye called Janus Green B on the triacetylcellulose membrane. Condition of the dye solution used in the immobilization step, including concentration of the dye, pH, and duration were considered and optimized using the Box-Behnken design. The proposed sensor showed good behavior and precision (RSD < 5%) in the pH range of 2.0-10.0. Advantages of this optical sensor include on-line applicability, no leakage, long-term stability (more than 6 months), fast response time (less than 1 min), high selectivity and sensitivity as well as good reversibility and reproducibility.
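A Box-Behnken design, as used above for the dye-solution conditions, places each pair of factors at its high/low settings while holding all other factors at their midpoints, plus replicated center runs. A sketch in coded units, with generic factor indices rather than the study's actual settings:

```python
from itertools import combinations, product

def box_behnken(k, center_runs=3):
    """Box-Behnken design in coded units (-1, 0, +1) for k factors (sketch).

    Each pair of factors takes the four (+/-1, +/-1) combinations while the
    remaining factors sit at their midpoints; replicated center runs are
    appended for pure-error estimation.
    """
    runs = []
    for i, j in combinations(range(k), 2):
        for lo_hi in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = lo_hi
            runs.append(run)
    runs += [[0] * k for _ in range(center_runs)]
    return runs

design = box_behnken(3)
print(len(design))  # 3 pairs * 4 combinations + 3 center points = 15 runs
```

The avoidance of simultaneous extreme settings (no corner points) is what makes Box-Behnken designs attractive when extreme factor combinations are impractical, e.g., very high dye concentration at extreme pH.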
Abdulra'uf, Lukman Bola; Sirhan, Ala Yahya; Tan, Guan Huat
2015-01-01
Sample preparation has been identified as the most important step in analytical chemistry and has been tagged as the bottleneck of analytical methodology. The current trend is aimed at developing cost-effective, miniaturized, simplified, and environmentally friendly sample preparation techniques. The fundamentals and applications of multivariate statistical techniques for the optimization of microextraction sample preparation and chromatographic analysis of pesticide residues are described in this review. The use of Plackett-Burman, Doehlert matrix, and Box-Behnken designs is discussed. As observed in this review, a number of analytical chemists have combined chemometrics and microextraction techniques, which has helped to streamline sample preparation and improve sample throughput. PMID:26525235
Optimal design of airlift fermenters
Moresi, M.
1981-11-01
In this article, a model of a draft-tube airlift fermenter (ALF), based on perfect back-mixing of the liquid and plug flow of the gas bubbles, has been developed to optimize the design and operation of fermentation units at different working capacities. With reference to a whey fermentation by yeasts, the economic optimization has led to a slim ALF with an aspect ratio of about 15. As far as power expended per unit of oxygen transferred is concerned, the responses of the model are highly influenced by kLa. However, a safer use of the model has been suggested in order to assess the feasibility of the fermentation process under study.
A new optimization based approach to experimental combination chemotherapy.
Pereira, F L; Pedreira, C E; de Sousa, J B
1995-01-01
A new approach to the design of optimal multiple-drug experimental cancer chemotherapy is presented. Once an adequate model is specified, an optimization procedure is used to achieve an optimal compromise between post-treatment tumor size and toxic effects on healthy tissues. In our approach we consider a model including cancer cell population growth and pharmacokinetic dynamics. These elements of the model are essential to allow less empirical relationships between multiple-drug delivery policies and their effects on cancer and normal cells. The desired multiple-drug dosage schedule is computed by minimizing a customizable cost function subject to dynamic constraints expressed by the model. However, this additional dynamic richness increases the complexity of the problem, which, in general, cannot be solved in closed form. Therefore, we propose an iterative optimization algorithm of the projected-gradient type, where the Maximum Principle of Pontryagin is used to select the optimal control policy.
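The projected-gradient iteration referred to above alternates a gradient step with a projection back onto the feasible dose set. A minimal sketch on an assumed toy cost (linear tumor-kill benefit versus quadratic toxicity), not the paper's pharmacokinetic model:

```python
def projected_gradient(grad, project, u0, step=0.1, n_iter=200):
    """Projected-gradient iteration: u <- P(u - step * grad(u)).

    `project` maps an arbitrary dose schedule onto the feasible set; for box
    constraints it is simple per-period clipping.
    """
    u = list(u0)
    for _ in range(n_iter):
        g = grad(u)
        u = project([ui - step * gi for ui, gi in zip(u, g)])
    return u

# Assumed toy trade-off per treatment period: cost(u) = -u + 2u^2
# (linear benefit, quadratic toxicity), with doses confined to [0, 1].
def grad(u):
    return [-1.0 + 4.0 * ui for ui in u]      # derivative of -u + 2u^2

def project(u):
    return [min(1.0, max(0.0, ui)) for ui in u]  # clip onto the dose box

doses = projected_gradient(grad, project, u0=[0.0] * 5)
```

For this separable cost the unconstrained minimizer u = 1/4 lies inside the box, so the iteration converges to a uniform schedule of 0.25 per period; with active constraints, the projection is what keeps the schedule admissible.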
Graphical Models for Quasi-Experimental Designs
ERIC Educational Resources Information Center
Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan
2016-01-01
Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structures are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order reliability analysis and second-order reliability analysis, followed by a simulation technique that
Szerkus, O; Jacyna, J; Wiczling, P; Gibas, A; Sieczkowski, M; Siluk, D; Matuszewski, M; Kaliszan, R; Markuszewski, M J
2016-09-01
Fluoroquinolones are considered the gold standard for the prevention of bacterial infections after transrectal ultrasound-guided prostate biopsy. However, recent studies reported that fluoroquinolone-resistant bacterial strains are responsible for a gradually increasing number of infections after transrectal prostate biopsy. In daily clinical practice, antibacterial efficacy is evaluated only in vitro, by measuring the reaction of bacteria with an antimicrobial agent in culture media (i.e., calculation of the minimal inhibitory concentration). Such an approach, however, has no relation to the characteristics of the treated tissue and might be highly misleading. Thus, the objective of this study was to develop, using a Design of Experiments approach, a reliable, specific, and sensitive ultra-high performance liquid chromatography-diode array detection method for the quantitative analysis of levofloxacin in plasma and prostate tissue samples obtained from patients undergoing prostate biopsy. Moreover, a correlation study between concentrations observed in plasma samples and prostatic tissue samples was performed, resulting in better understanding, evaluation, and optimization of the fluoroquinolone-based antimicrobial prophylaxis during transrectal ultrasound-guided prostate biopsy. A Box-Behnken design was employed to optimize the chromatographic conditions of the isocratic elution program in order to obtain the desired retention time, peak symmetry, and resolution of the levofloxacin and ciprofloxacin (internal standard) peaks. A fractional factorial design 2⁴⁻¹ with four center points was used for screening of significant factors affecting levofloxacin extraction from the prostatic tissue. Due to the limited number of tissue samples, the prostatic sample preparation procedure was further optimized using a central composite design. The Design of Experiments approach was also utilized for evaluation of parameter robustness. The method was found linear over the range of 0.030-10 μg/mL for human
An optimal structural design algorithm using optimality criteria
NASA Technical Reports Server (NTRS)
Taylor, J. E.; Rossow, M. P.
1976-01-01
An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.
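A classic instance of a stress/energy-based optimality criterion is the fully stressed design iteration, in which each member is resized by its stress ratio. The sketch below uses an assumed statically determinate toy problem (member forces independent of areas), not the paper's truss formulation:

```python
def fully_stressed_resize(areas, stresses, allowable, n_iter=30):
    """Stress-ratio resizing, a classic optimality-criteria iteration (sketch).

    Each member area is scaled by (stress / allowable) until every member is
    at or below the allowable stress; `stresses(areas)` reanalyzes the truss.
    """
    a = list(areas)
    for _ in range(n_iter):
        s = stresses(a)
        a = [ai * si / allowable for ai, si in zip(a, s)]
    return a

# Assumed two-bar determinate structure: member forces are fixed by the load,
# so stress = force / area and the iteration converges immediately.
forces = [1000.0, 500.0]

def stresses(a):
    return [f / ai for f, ai in zip(forces, a)]

areas = fully_stressed_resize([1.0, 1.0], stresses, allowable=250.0)
print(areas)  # each member sized so that force / area equals the allowable
```

In statically indeterminate structures the forces redistribute with the areas, so the reanalysis-resize loop must iterate, which is exactly where optimality-criteria methods gain their efficiency over general mathematical programming.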
Experimental Validation of an Integrated Controls-Structures Design Methodology
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.
1996-01-01
The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.
NASA Astrophysics Data System (ADS)
Zhao, Hongxia; Icoz, Tunc; Jaluria, Yogesh; Knight, Doyle
2003-11-01
The Data Driven Design Optimization Methodology (DDDOM) incorporates experiment and simulation synergistically to achieve better designs in less time than conventional methods. It is developed on the basis of advanced experimental technology (e.g., rapid prototyping) and computational technology (e.g., parallel processing, grid computing). The DDDOM comprises six elements: User Interface, Controller, Optimizer, Experiment, Surrogate Model, and Simulation. The DDDOM Controller is the central element: it initiates experiment and simulation and monitors and coordinates their progress. The software system and its user interface are written in Perl/Tk. The DDDOM is applied to a cooling problem for electronic equipment. In this problem, two-dimensional mixed convection heat transfer over two isothermal protruding heating elements (simulating electronic components) located at the bottom surface of a horizontal channel is considered. Air is the coolant fluid. The bottom plate is assumed to be insulated, and the top plate is kept at the ambient temperature of the incoming air flow. The heat sources are held at a fixed temperature. The flow conditions are defined by the mean Reynolds number (Re), Grashof number (Gr), and Prandtl number (Pr). The objective is to find the optimum heat source locations for fixed Re, Gr, and Pr such that the pressure drop is minimized. The optimization loop is closed using a simulation code and the optimizer CFSQP.
Rational Experimental Design for Electrical Resistivity Imaging
NASA Astrophysics Data System (ADS)
Mitchell, V.; Pidlisecky, A.; Knight, R.
2008-12-01
Over the past several decades advances in the acquisition and processing of electrical resistivity data, through multi-channel acquisition systems and new inversion algorithms, have greatly increased the value of these data to near-surface environmental and hydrological problems. There has, however, been relatively little advancement in the design of actual surveys. Data acquisition still typically involves using a small number of traditional arrays (e.g. Wenner, Schlumberger) despite a demonstrated improvement in data quality from the use of non-standard arrays. While optimized experimental design has been widely studied in applied mathematics and the physical and biological sciences, it is rarely implemented for non-linear problems, such as electrical resistivity imaging (ERI). We focus specifically on using ERI in the field for monitoring changes in the subsurface electrical resistivity structure. For this application we seek an experimental design method that can be used in the field to modify the data acquisition scheme (spatial and temporal sampling) based on prior knowledge of the site and/or knowledge gained during the imaging experiment. Some recent studies have investigated optimized design of electrical resistivity surveys by linearizing the problem or with computationally-intensive search algorithms. We propose a method for rational experimental design based on the concept of informed imaging, the use of prior information regarding subsurface properties and processes to develop problem-specific data acquisition and inversion schemes. Specifically, we use realistic subsurface resistivity models to aid in choosing source configurations that maximize the information content of our data. Our approach is based on first assessing the current density within a region of interest, in order to provide sufficient energy to the region of interest to overcome a noise threshold, and then evaluating the direction of current vectors, in order to maximize the
Animal husbandry and experimental design.
Nevalainen, Timo
2014-01-01
If the scientist needs to contact the animal facility after a study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in variation. Factors causing bias or changes in variation are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend/working-day cycle, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles which may be affected by husbandry procedures. Other issues to be considered are the consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, the olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which, if ignored, can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including its capabilities for monitoring the biological and physicochemical environment.
Optimizing experimental parameters for tracking of diffusing particles
NASA Astrophysics Data System (ADS)
Vestergaard, Christian L.
2016-08-01
We describe how a single-particle tracking experiment should be designed in order for its recorded trajectories to contain the most information about a tracked particle's diffusion coefficient. The precision of estimators for the diffusion coefficient is affected by motion blur, limited photon statistics, and the length of recorded time series. We demonstrate for a particle undergoing free diffusion that precision is negligibly affected by motion blur in typical experiments, while optimizing photon counts and the number of recorded frames is the key to precision. Building on these results, we describe for a wide range of experimental scenarios how to choose experimental parameters in order to optimize the precision. Generally, one should choose quantity over quality: experiments should be designed to maximize the number of frames recorded in a time series, even if this means lower information content in individual frames.
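The "quantity over quality" conclusion is easy to reproduce for free diffusion: the sketch below simulates tracks and shows the estimator's spread shrinking with the number of recorded frames. Motion blur and localization noise, which the paper analyzes, are omitted here:

```python
import math
import random
import statistics

def estimate_D(n_frames, dt=0.01, D=1.0, rng=None):
    """Estimate a 1-D diffusion coefficient from one simulated track.

    Frame-to-frame displacements are Gaussian with variance 2*D*dt, so the
    mean squared displacement divided by 2*dt estimates D.
    """
    rng = rng or random
    steps = [rng.gauss(0.0, math.sqrt(2 * D * dt)) for _ in range(n_frames)]
    return sum(s * s for s in steps) / (2 * dt * n_frames)

rng = random.Random(0)  # fixed seed for a reproducible illustration
short = [estimate_D(50, rng=rng) for _ in range(200)]   # few frames per track
long_ = [estimate_D(500, rng=rng) for _ in range(200)]  # many frames per track
# More recorded frames -> tighter estimates of D:
print(statistics.stdev(short), statistics.stdev(long_))
```

The spread of the estimator scales roughly as D·sqrt(2/N) for N frames, so a tenfold increase in track length shrinks the uncertainty by about a factor of three, matching the paper's advice to maximize the number of recorded frames.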
Optimal design of compact spur gear reductions
NASA Technical Reports Server (NTRS)
Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.
1992-01-01
The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.
Quasi-Experimental Designs for Causal Inference
ERIC Educational Resources Information Center
Kim, Yongnam; Steiner, Peter
2016-01-01
When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…
Hashemi, Payman; Rahmani, Zohreh
2006-02-28
Homocystine was, for the first time, chemically linked to a highly cross-linked agarose support (Novarose) to be employed as a chelating adsorbent for preconcentration and AAS determination of nickel in table salt and baking soda. Nickel is quantitatively adsorbed on a small column packed with 0.25 ml of the adsorbent, in a pH range of 5.5-6.5, and simply eluted with 5 ml of a 1 mol l(-1) hydrochloric acid solution. A factorial design was used for optimization of the effects of five different variables on the recovery of nickel. The results indicated that the factors of flow rate and column length, and the interactions between pH and sample volume, are significant. Under the optimized conditions, the column could tolerate salt concentrations up to 0.5 mol l(-1) and sample volumes beyond 500 ml. Matrix ions of Mg(2+) and Ca(2+), at a concentration of 200 mg l(-1), and potentially interfering ions of Cd(2+), Cu(2+), Zn(2+) and Mn(2+), at a concentration of 10 mg l(-1), did not have a significant effect on the analyte's signal. Preconcentration factors up to 100 and a detection limit of 0.49 µg l(-1), corresponding to an enrichment volume of 500 ml, were obtained for the determination of the analyte by flame AAS. Application of the method to the determination of natural and spiked nickel in table salt and baking soda solutions resulted in quantitative recoveries. Direct ETAAS determination of nickel in the same samples was not possible because of the high background observed. PMID:18970514
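The two-level factorial screening described above can be generated mechanically. A minimal sketch follows: the factor names loosely follow the abstract (the actual low/high levels are not given there), and the ±1 coding and main-effect formula are the standard textbook treatment.

```python
from itertools import product

# Factor names loosely follow the abstract; real low/high levels are assumptions.
factors = ["pH", "flow_rate", "column_length", "eluent_volume", "sample_volume"]

# Full two-level factorial: 2**5 = 32 runs, coded -1 (low) / +1 (high).
design = list(product([-1, 1], repeat=len(factors)))

def main_effect(design, responses, col):
    """Mean response at the high level minus mean response at the low level."""
    hi = [r for run, r in zip(design, responses) if run[col] == 1]
    lo = [r for run, r in zip(design, responses) if run[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(len(design))  # 32 runs
```

Comparing the estimated main effects (and interaction contrasts built the same way from products of columns) against replicate noise is what identifies the significant factors, as the authors report for flow rate and column length.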
Experimental validation of a topology optimized acoustic cavity.
Christiansen, Rasmus E; Sigmund, Ole; Fernandez-Grande, Efren
2015-12-01
This paper presents the experimental validation of an acoustic cavity designed using topology optimization with the goal of minimizing the sound pressure locally for monochromatic excitation. The presented results show good agreement between simulations and measurements. The effect of damping, errors in the production of the cavity, and variations in operating frequency is discussed and the importance of taking these factors into account in the modeling process is highlighted. PMID:26723304
Kiviharju, K; Leisola, M; Eerikäinen, T
2004-11-01
Streptomyces peucetius var. caesius is an aerobic bacterium that produces doxorubicin as a secondary metabolite. A mixture design was applied for the screening of suitable complex medium components in the cultivation of S. peucetius var. caesius N47, which is an epsilon-rhodomycinone-accumulating mutant strain. epsilon-Rhodomycinone is a non-glycosylated precursor of doxorubicin. The best growth results were obtained with soy peptone and beef extract. A central composite face-centered (CCF) experimental design was constructed for the investigation of pH, temperature and dissolved oxygen (DO) effects on the cultivation growth phase. Another CCF was applied to the production phase to investigate the effects of aeration, pH, temperature and stirring rate on epsilon-rhodomycinone production. An increase in cultivation temperature increased both the cell growth and glucose consumption rates. The best epsilon-rhodomycinone productivities were obtained at temperatures around 30 degrees C. DO control increased all growth phase responses, but aeration in the production phase coupled with a pH decrease resulted in rapid epsilon-rhodomycinone decay in the medium. In non-aerated production phases a pH change resulted in better productivity than in experiments without a pH change. A pH increase with a temperature decrease seemed most beneficial for productivity. This implies that dynamic control strategies in batch production of epsilon-rhodomycinone could increase the overall process productivity.
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.
Design optimization method for Francis turbine
NASA Astrophysics Data System (ADS)
Kawajiri, H.; Enomoto, Y.; Kurosawa, S.
2014-03-01
This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out using a NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher-specific-speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile, and the second step was multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for the development of hydro turbines.
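The abstract names PSO as the optimizer but gives no algorithmic detail. A minimal global-best PSO loop is sketched below, with a cheap toy objective standing in for a CFD evaluation of a blade parameterization; swarm size, inertia and acceleration coefficients are generic textbook values, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm minimizer (generic textbook variant)."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)          # keep particles inside the design space
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest, pval.min()

# Toy objective standing in for a CFD evaluation of control-point positions
best, loss = pso(lambda p: float(np.sum((p - 0.3) ** 2)), [(-1, 1)] * 4)
```

In the paper's setting each call to `f` would be a full CFD run, which is why swarm size and iteration count matter far more than they do for this toy function.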
NASA Astrophysics Data System (ADS)
Bermejo-Barrera, P.; Moreda-Piñeiro, A.; Muñiz-Naveiro, O.; Gómez-Fernández, A. M. J.; Bermejo-Barrera, A.
2000-08-01
A Plackett-Burman 2^7 × 3/32 design for seven factors (nitric acid concentration, hydrochloric acid concentration, hydrogen peroxide concentration, acid solution volume, particle size, microwave power, and exposure time to microwave energy) was carried out in order to find the significant variables affecting the acid leaching of metals from mussel after a pseudo-digestion procedure by microwave energy. Nitric acid concentration, hydrochloric acid concentration or hydrogen peroxide, and exposure time to microwave energy were the most significant variables, and a 2^3 + star central composite design was used for their optimization. Nitric and hydrochloric acid concentrations between 4.1 and 5.3 M, and between 2.8 and 3.8 M, respectively, were found to be optimum for many elements (Ca, Cd, Cr, Cu, Fe, Mg, Mn, Pb and Zn), yielding the acid leaching process for times in the 1.2-2.2 min range. However, As was quantitatively leached with hydrochloric acid concentrations between 4.8 and 5.3 M and an exposure time of 2.0 min, while Co and Se were extracted using nitric acid (1.0 and 5.0 M, respectively) and hydrogen peroxide (5.0 M) solution and an exposure time of 2.0 min. Finally, Hg was extracted using a hydrochloric acid/hydrogen peroxide solution at 3.5:2.0 M, also for an optimum time of microwave radiation of 1.75 min. Trace metals were determined using flame atomic absorption spectrometry, electrothermal atomic absorption spectrometry and cold vapor atomic absorption spectrometry. The methods were finally applied to several reference materials (DORM-1, DOLT-1 and TORT-1), achieving good accuracy.
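Plackett-Burman screening matrices like the one above can be built mechanically. The sketch below constructs the generic 8-run design for up to seven two-level factors from its published generator row; it is the textbook construction, not the specific fractional layout used in this paper.

```python
import numpy as np

# 8-run Plackett-Burman design for up to seven two-level factors,
# built from the published N=8 generator row (Plackett & Burman, 1946).
gen = np.array([1, 1, 1, -1, 1, -1, -1])
rows = [np.roll(gen, i) for i in range(7)]   # cyclic shifts of the generator
rows.append(-np.ones(7, dtype=int))          # final run with all factors low
X = np.vstack(rows)

# Columns are balanced and pairwise orthogonal, so all seven main effects
# can be screened independently in only eight runs.
assert np.allclose(X.T @ X, 8 * np.eye(7))
```

Orthogonality is what lets a handful of runs flag the dominant factors (here acid concentrations and exposure time) before a central composite design refines them.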
NASA Astrophysics Data System (ADS)
Zweibaum, Nicolas
The development of advanced nuclear reactor technology requires understanding of complex, integrated systems that exhibit novel phenomenology under normal and accident conditions. The advent of passive safety systems and enhanced modular construction methods requires the development and use of new frameworks to predict the behavior of advanced nuclear reactors, both from a safety standpoint and from an environmental impact perspective. This dissertation introduces such frameworks for scaling of integral effects tests for natural circulation in fluoride-salt-cooled, high-temperature reactors (FHRs) to validate evaluation models (EMs) for system behavior; subsequent reliability assessment of passive, natural-circulation-driven decay heat removal systems, using these validated models; evaluation of life cycle carbon dioxide emissions as a key environmental impact metric; and recommendations for further work to apply these frameworks in the development and optimization of advanced nuclear reactor designs. In this study, the developed frameworks are applied to the analysis of the Mark 1 pebble-bed FHR (Mk1 PB-FHR) under current investigation at the University of California, Berkeley (UCB). (Abstract shortened by UMI.)
Network inference via adaptive optimal design
2012-01-01
Background Current research in network reverse engineering for genetic or metabolic networks very often does not include a proper experimental and/or input design. In this paper we address this issue in more detail and suggest a method that includes an iterative design of experiments based on the most recent data that become available. The presented approach allows a reliable reconstruction of the network and addresses an important issue, i.e., the analysis and the propagation of uncertainties as they exist in both the data and in our own knowledge. These two types of uncertainties have their immediate ramifications for the uncertainties in the parameter estimates and, hence, are taken into account from the very beginning of our experimental design. Findings The method is demonstrated for two small networks that include a genetic network for mRNA synthesis and degradation and an oscillatory network describing a molecular network underlying adenosine 3'-5' cyclic monophosphate (cAMP) as observed in populations of Dictyostelium cells. In both cases a substantial reduction in parameter uncertainty was observed. Extension to larger scale networks is possible but needs a more rigorous parameter estimation algorithm that includes sparsity as a constraint in the optimization procedure. Conclusion We conclude that a careful experiment design very often (but not always) pays off in terms of reliability in the inferred network topology. For large scale networks a better parameter estimation algorithm is required that includes sparsity as an additional constraint. These algorithms are available in the literature and can also be used in an adaptive optimal design setting as demonstrated in this paper. PMID:22999252
Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort
Jeschek, Markus; Gerngross, Daniel; Panke, Sven
2016-01-01
Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways. PMID:27029461
A Bayesian experimental design approach to structural health monitoring
Farrar, Charles; Flynn, Eric; Todd, Michael
2010-01-01
Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
Program Aids Analysis And Optimization Of Design
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1994-01-01
NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
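The D- and E-optimality criteria compared in this abstract both reduce to scalar functions of the Fisher information matrix. A minimal sketch follows, using a simple quadratic regression model and made-up sampling schedules (not the paper's examples) to show how spreading observation times improves both criteria.

```python
import numpy as np

def fisher_info(times):
    """FIM for the linear model y = a + b*t + c*t**2 with unit noise: S^T S."""
    S = np.column_stack([np.ones_like(times), times, times**2])
    return S.T @ S

def d_criterion(F):
    """D-optimality: (maximize) the determinant of the Fisher information."""
    return np.linalg.det(F)

def e_criterion(F):
    """E-optimality: (maximize) the smallest eigenvalue of the Fisher information."""
    return np.linalg.eigvalsh(F).min()

# Two made-up schedules of five observation times on [0, 1]
clustered = np.array([0.40, 0.45, 0.50, 0.55, 0.60])
spread = np.array([0.00, 0.25, 0.50, 0.75, 1.00])

# Spreading the sampling times improves both criteria for this model
print(d_criterion(fisher_info(spread)), d_criterion(fisher_info(clustered)))
```

By the Cramér-Rao bound, larger Fisher information translates into smaller asymptotic standard errors, which is exactly the comparison the authors carry out for D-, E- and their new SE-optimal designs.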
Integrated multidisciplinary design optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.
Vehicle systems design optimization study
Gilmour, J. L.
1980-04-01
The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current production internal combustion engine vehicles. It is possible to achieve this goal and also provide passenger and cargo space comparable to a selected current production sub-compact car, either in a unique new design or by utilizing the production vehicle as a base. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages - one at the front under the hood and a second at the rear under the cargo area - in order to achieve the desired weight distribution. The weight distribution criteria require the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.
Control design for the SERC experimental testbeds
NASA Technical Reports Server (NTRS)
Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric
1992-01-01
Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.
Flat-plate photovoltaic array design optimization
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1980-01-01
An analysis is presented which integrates the results of specific studies in the areas of photovoltaic structural design optimization, optimization of array series/parallel circuit design, thermal design optimization, and optimization of environmental protection features. The analysis is based on minimizing the total photovoltaic system life-cycle energy cost including repair and replacement of failed cells and modules. This approach is shown to be a useful technique for array optimization, particularly when time-dependent parameters such as array degradation and maintenance are involved.
Kuentz, Martin; Röthlisberger, Dieter
2002-04-01
The aim of this study is to use texture analysis as a non-destructive test for hard gelatin capsules filled with liquid formulations, to investigate mechanical changes upon storage. A suitable amount of water in the formulations is determined to obtain the best possible compatibility with the gelatin shell. This quantity of water to be added to a formulation is called the balanced amount of water (BAW). Texture profiling was conducted on capsules filled with hydrophilic polymer mixtures and with formulations based on amphiphilic masses with a high HLB value. The first model mixture consisted of polyethylene glycol 400 and polyvinylpyrrolidone K17 with water, and the second type consisted of caprylocaproyl macrogol glycerides (Labrasol) with colloidal silica (Aerosil 200) and water. The liquid-fill capsules were investigated by measuring changes in mass and stiffness after storage under confined conditions in aluminium foils. Capsule stiffness was also investigated as a parameter in a response surface analysis to identify the BAW. Polyvinylpyrrolidone did not show a great influence on the BAW, in the range of 10-12% (w/w), for the first model mixture. Capsules with the less hydrophilic Labrasol formulations, however, kept their initial stiffness after storage best with only half of that amount, i.e. 5-6% (w/w) of water in the compositions. From this study it can be concluded that texture profiling in the framework of an experimental design helps to find hydrophilic or amphiphilic formulations that are compatible with gelatin capsules. Short-term stability tests are meaningful if capsule embrittlement or softening is due to water equilibration or another migration process that takes place rapidly. Long-term stability tests will always be needed for a final statement of compatibility between a formulation and hard gelatin capsules.
Topology Optimization for Architected Materials Design
NASA Astrophysics Data System (ADS)
Osanov, Mikhail; Guest, James K.
2016-07-01
Advanced manufacturing processes provide a tremendous opportunity to fabricate materials with precisely defined architectures. To fully leverage these capabilities, however, materials architectures must be optimally designed according to the target application, base material used, and specifics of the fabrication process. Computational topology optimization offers a systematic, mathematically driven framework for navigating this new design challenge. The design problem is posed and solved formally as an optimization problem with unit cell and upscaling mechanics embedded within this formulation. This article briefly reviews the key requirements to apply topology optimization to materials architecture design and discusses several fundamental findings related to optimization of elastic, thermal, and fluidic properties in periodic materials. Emerging areas related to topology optimization for manufacturability and manufacturing variations, nonlinear mechanics, and multiscale design are also discussed.
Design optimization of a portable, micro-hydrokinetic turbine
NASA Astrophysics Data System (ADS)
Schleicher, W. Chris
Marine and hydrokinetic (MHK) technology is a growing field that encompasses many different types of turbomachinery that operate on the kinetic energy of water. Micro-hydrokinetics is a subset of MHK technology comprising units designed to produce less than 100 kW of power. A propeller-type hydrokinetic turbine is investigated as a solution for a portable micro-hydrokinetic turbine with the needs of the United States Marine Corps in mind, as well as future commercial applications. This dissertation investigates using a response surface optimization methodology to create optimal turbine blade designs under many operating conditions. The field of hydrokinetics is introduced. The finite volume method is used to solve the Reynolds-Averaged Navier-Stokes equations with the k-ω Shear Stress Transport model for different propeller-type hydrokinetic turbines. The adaptive response surface optimization methodology is introduced as related to hydrokinetic turbines, and is benchmarked with complex algebraic functions. The optimization method is further studied to characterize the effect of the size of the experimental design on its ability to find optimum conditions. It was found that a large deviation between experimental design points was preferable. Different propeller hydrokinetic turbines were designed and compared with other forms of turbomachinery. It was found that the rapid simulations usually under-predicted performance compared to the refined simulations, and for some other designs drastically over-predicted it. The optimization method was used to optimize a modular pump-turbine, verifying that the optimization works for other hydro turbine designs.
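Response surface optimization of the kind this dissertation describes can be sketched compactly: sample the design space, fit a quadratic surface by least squares, and solve for its stationary point. Everything below is illustrative; the cheap analytic function merely stands in for an expensive CFD evaluation, and the sampling plan is a plain random design rather than the author's adaptive one.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_sim(x):
    """Stand-in for an expensive CFD evaluation (invented test function)."""
    return (x[0] - 1.2) ** 2 + 2.0 * (x[1] + 0.4) ** 2

# Sample a small experimental design over a 2-D parameter space
pts = rng.uniform(-2.0, 2.0, (15, 2))
y = np.array([expensive_sim(p) for p in pts])

# Fit a full quadratic response surface by least squares:
# y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
c = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad = 0
H = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
x_opt = np.linalg.solve(H, -np.array([c[1], c[2]]))
print(x_opt)  # ≈ [1.2, -0.4], the true optimum of the test function
```

The finding that widely spaced design points are preferable makes sense in this framework: clustered points make the least-squares system ill-conditioned and the fitted surface unreliable away from the cluster.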
Design optimization of a torpedo shell structure
NASA Astrophysics Data System (ADS)
Yu, De-Hai; Song, Bao-Wei; Li, Jia-Wang; Yang, Shi-Xing
2008-03-01
An optimized methodology to design a more robust torpedo shell is proposed. The method has taken into account reliability requirements and controllable and uncontrollable factors such as geometry, load, material properties, manufacturing processes, installation, etc. as well as human and environmental factors. The result is a more realistic shell design. Our reliability optimization design model was developed based on sensitivity analysis. Details of the design model are given in this paper. An example of a torpedo shell design based on this model is given and demonstrates that the method produces designs that are more effective and reliable than traditional torpedo shell designs. This method can be used for other torpedo system designs.
Experimental Design and Some Threats to Experimental Validity: A Primer
ERIC Educational Resources Information Center
Skidmore, Susan
2008-01-01
Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…
Optimal Designs of Staggered Dean Vortex Micromixers
Chen, Jyh Jian; Chen, Chun Huei; Shie, Shian Ruei
2011-01-01
A novel parallel laminar micromixer with a two-dimensional staggered Dean vortex structure is optimized and fabricated in our study. Dean vortices induced by centrifugal forces in curved rectangular channels cause fluids to produce secondary flows. The split-and-recombination (SAR) structures of the flow channels and the impinging effects result in the reduction of the diffusion distance of the two fluids. Three different designs of a curved channel micromixer are introduced to evaluate the mixing performance of the designed micromixer. Mixing performances are demonstrated by means of a pH indicator using an optical microscope and fluorescent particles via a confocal microscope at different flow rates corresponding to Reynolds numbers (Re) ranging from 0.5 to 50. The comparison between the experimental data and numerical results shows very reasonable agreement. At a Re of 50, the mixing length at the sixth segment, corresponding to a downstream distance of 21.0 mm, can be achieved in a distance 4 times shorter than when the Re equals 1. An optimization of this micromixer is performed with two geometric parameters: the angle between the lines from the center to two intersections of two consecutive curved channels, θ, and the angle between two lines of the centers of three consecutive curved channels, ϕ. It can be found that the maximal mixing index is related to the maximal value of the sum of θ and ϕ, which is equal to 139.82°. PMID:21747691
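The mixing index used to score designs like this one is typically computed from the spread of normalized concentration samples across a channel cross-section. Below is a minimal sketch of one common intensity-of-segregation style definition; this exact formula is an assumption, since the abstract does not spell out the paper's definition.

```python
import numpy as np

def mixing_index(c):
    """1 for perfectly mixed, 0 for fully segregated 50/50 streams.
    c holds normalized concentration samples across the channel cross-section."""
    sigma_max = 0.5            # std of an unmixed 50/50 split with c in {0, 1}
    return 1.0 - np.std(c) / sigma_max

print(mixing_index(np.array([0.0, 0.0, 1.0, 1.0])))   # 0.0: unmixed
print(mixing_index(np.array([0.5, 0.5, 0.5, 0.5])))   # 1.0: fully mixed
```

Evaluating this index on cross-sections at each segment is what turns a fluorescence or pH-indicator image into the scalar objective that the geometric parameters θ and ϕ are optimized against.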
Optimality of a Fully Stressed Design
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
1998-01-01
For a truss, a fully stressed state is reached when all of its members are utilized to their full strength capacity. Historically, engineers considered such a design optimum. But recently this optimality has been questioned, especially since the weight of the structure is not explicitly used in fully stressed design calculations. This paper examines the optimality of the fully stressed design (FSD) with analytical and graphical illustrations. Solutions for a set of examples obtained by using the FSD method and optimization methods numerically confirm the optimality of the FSD. The FSD, which can be obtained with a small amount of calculation, can be extended to displacement constraints and to nontruss-type structures.
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results for the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performances, can be implemented for further optimization.
Web-based tools for finding optimal designs in biomedical studies
Wong, Weng Kee
2013-01-01
Experimental costs are rising, and optimal design ideas are increasingly applied in many disciplines. However, the theory for constructing optimal designs can be esoteric and its implementation can be difficult. To help practitioners have easier access to optimal designs and better appreciate design issues, we present a web site at http://optimal-design.biostat.ucla.edu/optimal/ capable of generating different types of tailor-made optimal designs for popular models in the biological sciences. This site also evaluates various efficiencies of a user-specified design and so enables practitioners to appreciate the robustness properties of a design before implementation. PMID:23806678
Experimental eavesdropping based on optimal quantum cloning.
Bartkiewicz, Karol; Lemr, Karel; Cernoch, Antonín; Soubusta, Jan; Miranowicz, Adam
2013-04-26
The security of quantum cryptography is guaranteed by the no-cloning theorem, which implies that an eavesdropper copying transmitted qubits in unknown states causes their disturbance. Nevertheless, in real cryptographic systems some level of disturbance has to be allowed to cover, e.g., transmission losses. An eavesdropper can attack such systems by replacing a noisy channel with a better one and by performing approximate cloning of transmitted qubits, which disturbs them but only below the noise level assumed by the legitimate users. We experimentally demonstrate such symmetric individual eavesdropping on the quantum key distribution protocols of Bennett and Brassard (BB84) and the trine-state spherical code of Renes (R04) with two-level probes prepared using a recently developed photonic multifunctional quantum cloner [Lemr et al., Phys. Rev. A 85, 050307(R) (2012)]. We demonstrate that our optimal cloning device, with its high success rate, makes the eavesdropping possible by hiding it in the usual transmission losses. We believe that this experiment can stimulate the quest for other operational applications of quantum cloning. PMID:23679725
Design optimization studies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.
1993-01-01
The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.
Optimization, an Important Stage of Engineering Design
ERIC Educational Resources Information Center
Kelley, Todd R.
2010-01-01
A number of leaders in technology education have indicated that a major difference between the technological design process and the engineering design process is analysis and optimization. The analysis stage of the engineering design process is when mathematical models and scientific principles are employed to help the designer predict design…
Multidisciplinary design optimization of optomechanical devices
NASA Astrophysics Data System (ADS)
Williams, Antonio St. Clair Lloyd
2000-11-01
The current process for designing optomechanical devices typically involves independent design optimization within each discipline. For instance, an optics engineer would optimize the optics of the device for image quality using ray-tracing software. The structural engineer would optimize the design to minimize deformation using the finite element method. Independently optimizing the optics and structures of optomechanical systems negates the possibility of exploiting interdisciplinary interactions. This can lead to increased product development time and cost. Multidisciplinary Design Optimization (MDO) techniques have been in development over the last decade and have been applied primarily to aerospace problems. The goal of MDO is to take advantage of the interactions between disciplines as well as to improve the product development time. The application of MDO formulations to the design of optomechanical systems has not been achieved thus far. The aim of this study is to evaluate and develop MDO formulations for optomechanical devices that may be used to reduce the product development time and cost. In addition, the feasibility of obtaining a more global optimum design using these multidisciplinary optimization techniques is investigated. Several MDO formulations were evaluated during this study and compared to the current design optimization process. The formulations evaluated were the Multidisciplinary Design Feasible (MDF), the Sequenced Individual Discipline Feasible (SDO-IDF), and the Sequenced Multidisciplinary Design Feasible (SDO-MDF). The current optimization process is called Independent Design Optimization (IDO). For the examples examined, the results showed that the IDO formulation optimizes each discipline but does not guarantee a multidisciplinary optimum for coupled problems. The SDO-MDF formulation was found to be the least efficient of the formulations examined, while the SDO-IDF showed the most promise in terms of efficiency.
Computational design optimization for microfluidic magnetophoresis
Plouffe, Brian D.; Lewis, Laura H.; Murthy, Shashi K.
2011-01-01
Current macro- and microfluidic approaches for the isolation of mammalian cells are limited in both efficiency and purity. In order to design a robust platform for the enumeration of a target cell population, high collection efficiencies are required. Additionally, the ability to isolate pure populations with minimal biological perturbation and efficient off-chip recovery will enable subcellular analyses of these cells for applications in personalized medicine. Here, a rational design approach for a simple and efficient device that isolates target cell populations via magnetic tagging is presented. In this work, two magnetophoretic microfluidic device designs are described, with optimized dimensions and operating conditions determined from a force balance equation that considers the two dominant and opposing driving forces exerted on a magnetic-particle-tagged cell, namely, magnetic and viscous drag. Quantitative design criteria for an electromagnetic field displacement-based approach are presented, wherein target cells labeled with commercial magnetic microparticles flowing in a central sample stream are shifted laterally into a collection stream. Furthermore, the final device design is constrained to fit on a standard rectangular glass coverslip (60 (L)×24 (W)×0.15 (H) mm3) to accommodate small sample volumes and point-of-care design considerations. The anticipated performance of the device is examined via a parametric analysis of several key variables within the model. It is observed that minimal currents (<500 mA) are required to generate magnetic fields sufficient to separate cells from sample streams flowing at rates as high as 7 ml/h, comparable to the performance of state-of-the-art magnet-activated cell sorting systems currently used in clinical settings. Experimental validation of the presented model illustrates that a device designed according to the derived rational optimization can effectively isolate (∼100%) a magnetic-particle-tagged cell
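The force balance the abstract describes, a lateral magnetic force opposed by Stokes viscous drag, can be sketched in a few lines. All numbers below are assumed order-of-magnitude placeholders for illustration, not the device parameters from the paper:

```python
import math

# Back-of-the-envelope magnetophoresis force balance.
# Every value here is an assumed placeholder, not data from the study.
eta = 1.0e-3        # viscosity of water, Pa*s
r_bead = 2.5e-6     # hydrodynamic radius of the tagged cell complex, m (assumed)
F_mag = 5.0e-12     # lateral magnetic force on the tagged cell, N (assumed)

# Stokes drag: F_drag = 6*pi*eta*r*v. At terminal velocity the magnetic
# force balances drag, giving the lateral deflection velocity:
v_lateral = F_mag / (6 * math.pi * eta * r_bead)

# Time to cross a 100-um-wide sample stream into the collection stream:
w_stream = 100e-6   # stream width, m (assumed)
t_cross = w_stream / v_lateral
```

With these placeholder values the deflection velocity is on the order of 0.1 mm/s, so crossing the stream takes about a second; scaling the force or radius shifts the required channel length and flow rate accordingly.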
The Experimental Design Ability Test (EDAT)
ERIC Educational Resources Information Center
Sirum, Karen; Humburg, Jennifer
2011-01-01
Higher education goals include helping students develop evidence based reasoning skills; therefore, scientific thinking skills such as those required to understand the design of a basic experiment are important. The Experimental Design Ability Test (EDAT) measures students' understanding of the criteria for good experimental design through their…
Optimized design of fiber architecture for pultruded beams
Lopez-Anido, R.; GangaRao, H.V.S.; Bendidi, R.; Al-Megdad, M.
1996-11-01
The goal of this paper is to present the design optimization of a reinforced plastic (RP) wide flange (WF) pultruded beam. A commercially available open section, WF 12x12x1/2, was selected to optimize the stiffness-to-weight ratio and to manufacture through the pultrusion process. Two WF beams with optimized fiber architectures that include bi-directional fabrics were pultruded and tested in bending. A simple analytical model that predicts stiffness properties of pultruded sections based on properties of constituents is introduced. Analytical-experimental correlations are presented. The optimized WF sections increased in bending stiffness by about 40% over the existing unidirectional WF sections.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Stress-strain analysis and optimal design of aircraft structures
NASA Astrophysics Data System (ADS)
Liakhovenko, I. A.
The papers contained in this volume present results of theoretical and experimental research related to the stress-strain analysis and optimal design of aircraft structures. Topics discussed include a study of the origin of residual stresses and strains in the transparencies of supersonic aircraft, methodology for studying the fracture of aircraft structures in static tests, and the stability of a multispan panel under combined loading. The discussion also covers optimization of the stiffness and mass characteristics of lifting surface structures modeled by an elastic beam, a study of the strength of a closed system of wings, and a method for the optimal design of a large-aspect-ratio wing.
Cold Climates Heat Pump Design Optimization
Abdelaziz, Omar; Shen, Bo
2012-01-01
Heat pumps provide an efficient heating method; however, they suffer from severe capacity and performance degradation at low ambient temperatures. This has deterred market penetration in cold climates. There is a continuing effort to find an efficient air-source cold climate heat pump that maintains acceptable capacity and performance at low ambient temperatures. Systematic optimization techniques provide a reliable approach for the design of such systems. This paper presents a step-by-step approach for the design optimization of cold climate heat pumps. We first start by describing the optimization problem: objective function, constraints, and design space. Then we illustrate how to perform this design optimization using an open-source, publicly available optimization toolbox. The response of the heat pump design was evaluated using a validated component-based vapor compression model. This model was treated as a black box model within the optimization framework. Optimum designs for different system configurations are presented. These optimum results were further analyzed to understand the performance tradeoffs and selection criteria. The paper ends with a discussion of the use of systematic optimization for cold climate heat pump design.
Magnetic design optimization using variable metrics
Davey, K.R.
1995-11-01
The optimal design of a magnet assembly for a magnetically levitated train is approached using a three-step process. First, the key parameters within the objective performance index are computed over the variation range of the problem. Second, the performance index is fitted to a smooth polynomial involving products of the powers of all variables. Third, a constrained optimization algorithm is employed to predict the optimal choice of the variables. An assessment of the integrity of the optimization program is obtained by comparing the final optimized solution with that predicted by the field analysis in the final configuration. Additional field analysis is recommended around the final solution to fine-tune it.
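The three-step surrogate workflow above (sample the performance index, fit a polynomial in products of powers of the variables, then optimize the fit and cross-check against the full analysis) can be sketched as follows. The `field_analysis` function here is an assumed stand-in for the expensive field solver, not the magnet model from the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Stand-in for the expensive field analysis (assumed quadratic form,
# purely for illustration).
def field_analysis(x, y):
    return (x - 1.0)**2 + (y + 0.5)**2 + 2.0

# Step 1: evaluate the performance index over the variables' ranges.
X = rng.uniform(-2, 2, size=(50, 2))
F = np.array([field_analysis(x, y) for x, y in X])

# Step 2: fit a smooth polynomial in products of powers of the variables.
def basis(x, y):
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

coef, *_ = np.linalg.lstsq(basis(X[:, 0], X[:, 1]), F, rcond=None)

def surrogate(p):
    x, y = p
    return (basis(np.atleast_1d(x), np.atleast_1d(y)) @ coef)[0]

# Step 3: constrained optimization on the cheap polynomial surrogate.
res = minimize(surrogate, x0=[0.0, 0.0], bounds=[(-2, 2), (-2, 2)])

# Integrity check, as in the paper's workflow: re-run the field analysis
# at the surrogate's optimum and compare with the surrogate's prediction.
f_true = field_analysis(*res.x)
```

Because the stand-in objective is itself quadratic, the quadratic surrogate reproduces it almost exactly and the cross-check agrees; on a real field model the gap between `f_true` and the surrogate value is what motivates the recommended extra analysis near the solution.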
Optimization design of electromagnetic shielding composites
NASA Astrophysics Data System (ADS)
Qu, Zhaoming; Wang, Qingguo; Qin, Siliang; Hu, Xiaofeng
2013-03-01
A physical model of the effective electromagnetic parameters of composites, together with prediction formulas for their shielding effectiveness and reflectivity, was derived based on micromechanics, the variational principle, and electromagnetic wave transmission theory. The multi-objective optimization design of multilayer composites was carried out using a genetic algorithm. The optimized results indicate that the material parameter proportions giving the greatest absorption can be obtained while a minimum shielding effectiveness is still satisfied in a given frequency band. The validity of the optimization design model was verified, and the scheme has theoretical value and practical significance for the design of high-efficiency shielding composites.
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Overall, optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Optimality criteria solution strategies in multiple constraint design optimization
NASA Technical Reports Server (NTRS)
Levy, R.; Parzynski, W.
1981-01-01
Procedures and solution strategies are described for solving the conventional structural optimization problem using the Lagrange multiplier technique. The multipliers, obtained through solution of an auxiliary nonlinear optimization problem, lead to optimality criteria that determine the design variables. It is shown that this procedure is essentially equivalent to an alternative formulation using a dual-method Lagrangian function objective. Although the mathematical formulations are straightforward, successful application and computational efficiency depend upon the execution strategies. Strategies examined, with application examples, include selection of active constraints, move limits, line search procedures, and side constraint boundaries.
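The Lagrange-multiplier route to an optimality criterion can be illustrated on a toy sizing problem with a single displacement-type constraint. The flexibility coefficients and limit below are invented for illustration; stationarity of the Lagrangian yields a closed-form resizing rule, which is then cross-checked against a general-purpose constrained optimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Toy optimality-criteria problem (hypothetical numbers): minimize total
# member area sum(A) subject to one displacement constraint sum(c_i/A_i) <= d_max.
c = np.array([1.0, 4.0, 9.0])   # member flexibility coefficients (assumed)
d_max = 6.0                      # displacement limit (assumed)

# Stationarity of the Lagrangian  L = sum(A) + lam * (sum(c/A) - d_max)
# gives A_i = sqrt(lam * c_i); enforcing the active constraint fixes lam,
# leaving the optimality criterion:
A_oc = np.sqrt(c) * np.sqrt(c).sum() / d_max

# Cross-check with a general-purpose constrained optimizer (SLSQP).
res = minimize(lambda A: A.sum(), x0=np.ones(3),
               constraints=[{"type": "ineq",
                             "fun": lambda A: d_max - np.sum(c / A)}],
               bounds=[(1e-6, None)] * 3, method="SLSQP")
A_num = res.x
```

The closed-form criterion and the numerical optimizer agree, which mirrors the equivalence the abstract notes between the optimality-criteria formulation and the dual-method Lagrangian objective.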
GCFR shielding design and supporting experimental programs
Perkins, R.G.; Hamilton, C.J.; Bartine, D.
1980-05-01
The shielding for the conceptual design of the gas-cooled fast breeder reactor (GCFR) is described, and the component exposure design criteria which determine the shield design are presented. The experimental programs for validating the GCFR shielding design methods and data (which have been in existence since 1976) are also discussed.
Interaction prediction optimization in multidisciplinary design optimization problems.
Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei
2014-01-01
The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, it is hard for CO to converge when the coupled dimension is high. Furthermore, the discipline objectives cannot be considered in each discipline optimization problem. In this paper, one large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used to control subsystems and coordinate the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, IPO has a system level and a subsystem level. The interaction design variables (including shared design variables and linking design variables) are operated at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of the design variables are transported between the system level and the subsystem level. The compatibility constraints are replaced with enhanced compatibility constraints to reduce the dimension of the design variables in the compatibility constraints. Two examples are presented to show the potential application of IPO to MDO.
Optimal design of thermally coupled distillation columns
Duennebier, G.; Pantelides, C.C.
1999-01-01
This paper considers the optimal design of thermally coupled distillation columns and dividing wall columns using detailed column models and mathematical optimization. The column model used is capable of describing both conventional and thermally coupled columns, which allows comparisons of different structural alternatives to be made. Possible savings in both operating and capital costs of up to 30% are illustrated using two case studies.
Turbomachinery Airfoil Design Optimization Using Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan (Technical Monitor)
2002-01-01
An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine and compared to earlier methods. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
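Differential Evolution itself is available off the shelf. A minimal sketch on the multimodal Rastrigin test function, a standard stand-in with many local optima rather than the paper's Navier-Stokes-coupled airfoil problem, looks like:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin function: highly multimodal, a common stress test for global
# optimizers such as Differential Evolution (global minimum f(0, 0) = 0).
def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2
result = differential_evolution(rastrigin, bounds, seed=1,
                                tol=1e-8, polish=True)
```

Gradient-based methods started inside one of the many local basins stall at a nonzero local minimum here; the population-based DE search escapes them and `polish=True` then refines the best member with a local optimizer.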
Optimal design of artificial reefs for sturgeon
NASA Astrophysics Data System (ADS)
Yarbrough, Cody; Cotel, Aline; Kleinheksel, Abby
2015-11-01
The Detroit River, part of a busy corridor between Lakes Huron and Erie, was extensively modified to create deep shipping channels, resulting in a loss of spawning habitat for lake sturgeon and other native fish (Caswell et al. 2004, Bennion and Manny 2011). Under the U.S.-Canada Great Lakes Water Quality Agreement, there are remediation plans to construct fish spawning reefs to help with historic habitat losses and degraded fish populations, specifically sturgeon. To determine optimal reef design, experimental work has been undertaken. Different sizes and shapes of reefs are tested for a given set of physical conditions, such as flow depth and flow velocity, matching the relevant dimensionless parameters dominating the flow physics. The physical conditions are matched with the natural conditions encountered in the Detroit River. Using Particle Image Velocimetry, Acoustic Doppler Velocimetry and dye studies, flow structures, vorticity and velocity gradients at selected locations have been identified and quantified to allow comparison with field observations and numerical model results. Preliminary results are helping identify the design features to be implemented in the next phase of reef construction. Sponsored by NOAA.
OSHA and Experimental Safety Design.
ERIC Educational Resources Information Center
Sichak, Stephen, Jr.
1983-01-01
Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…
Multidisciplinary design optimization using genetic algorithms
NASA Technical Reports Server (NTRS)
Unal, Resit
1994-01-01
Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles, since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information. Therefore, design problems which include discrete variables cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA are attractive since they use only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
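The selection-crossover-mutation loop described above can be sketched in a few dozen lines on the toy OneMax problem (maximize the number of ones in a bitstring). This is an illustrative stand-in, not the launch-vehicle encoding used in the cited studies:

```python
import random

random.seed(0)

# Minimal genetic algorithm for OneMax: evolve 20-bit strings toward all
# ones. Discrete design variables (engine counts, materials) would be
# encoded similarly. All parameter values are assumed for illustration.
N_BITS, POP, GENS, P_MUT = 20, 40, 60, 0.02

def fitness(ind):                      # objective: count of ones
    return sum(ind)

def select(pop):                       # fitness-proportional (roulette) pick
    total = sum(fitness(i) for i in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for ind in pop:
        acc += fitness(ind)
        if acc >= r:
            return ind
    return pop[-1]

def crossover(a, b):                   # single-point crossover
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(ind):                       # independent bit-flip mutation
    return [bit ^ 1 if random.random() < P_MUT else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
```

Note that only `fitness` values drive the search; no gradients appear anywhere, which is exactly why the approach extends to integer-valued design variables.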
Applications of Chemiluminescence in the Teaching of Experimental Design
ERIC Educational Resources Information Center
Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan
2015-01-01
This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…
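A two-level full factorial plan of the kind such a session is built on is easy to generate. The factor names and coded levels below are hypothetical placeholders, not the variables from the paper:

```python
from itertools import product

# Two-level full factorial design in coded units (-1 = low, +1 = high).
# Factor names are invented placeholders for illustration.
factors = {
    "luminol_conc":  (-1, +1),
    "oxidant_conc":  (-1, +1),
    "catalyst_conc": (-1, +1),
}
design = [dict(zip(factors, levels))
          for levels in product(*factors.values())]
# 2**3 = 8 runs cover every combination of the three factors, so main
# effects and interactions can all be estimated from one session.
```

The balance of the plan (each factor spends exactly half its runs at each level) is what lets effects be estimated independently, in contrast to one-factor-at-a-time searching.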
Experimental design for single point diamond turning of silicon optics
Krulewich, D.A.
1996-06-16
The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.
Optimal design of reverse osmosis module networks
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.; Clements, D.J.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost for the process units, and the operating costs associated with energy consumption and maintenance. The optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that the optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
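The shape of that objective, permeate revenue minus energy and capital costs, can be sketched with a one-variable toy model. Every coefficient below is invented for illustration and does not come from the study:

```python
from scipy.optimize import minimize_scalar

# Toy annual-profit objective with the same structure as in the study:
# revenue from permeate minus energy and capital costs, as a function of
# feed pressure. All coefficients are assumed placeholders.
def annual_profit(p_bar):
    permeate = 40.0 * p_bar           # linearized flux model (assumed)
    revenue = 2.0 * permeate          # income from permeate sold
    energy = 3.0 * p_bar              # pumping cost rises with pressure
    capital = 0.5 * p_bar**2          # high-pressure hardware cost
    return revenue - energy - capital

# Maximize profit by minimizing its negative over an operating range.
res = minimize_scalar(lambda p: -annual_profit(p), bounds=(10.0, 90.0),
                      method="bounded")
p_opt = res.x
```

The interior optimum reflects the trade-off the abstract identifies: more pressure means more permeate and revenue, until quadratic capital and energy penalties overtake it.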
Vehicle systems design optimization study
NASA Technical Reports Server (NTRS)
Gilmour, J. L.
1980-01-01
The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current internal combustion engine vehicles. The necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criteria require the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given vehicle size.
Torsional ultrasonic transducer computational design optimization.
Melchor, J; Rus, G
2014-09-01
A torsional piezoelectric ultrasonic sensor design is proposed in this paper and computationally tested and optimized to measure shear stiffness properties of soft tissue. These are correlated with a number of pathologies like tumors, hepatic lesions and others. The reason is that, whereas compressibility is predominantly governed by the fluid phase of the tissue, the shear stiffness is dependent on the stroma micro-architecture, which is directly affected by those pathologies. However, diagnostic tools to quantify them are currently not well developed. The first contribution is a new typology of design adapted to quasifluids. A second contribution is the procedure for design optimization, for which an analytical estimate of the Robust Probability Of Detection, called RPOD, is presented for use as optimality criteria. The RPOD is formulated probabilistically to maximize the probability of detecting the least possible pathology while minimizing the effect of noise. The resulting optimal transducer has a resonance frequency of 28 kHz.
Global Design Optimization for Aerodynamics and Rocket Propulsion Components
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)
2000-01-01
Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design
Antimicrobial Peptides Design by Evolutionary Multiobjective Optimization
Maccari, Giuseppe; Di Luca, Mariagrazia; Nifosí, Riccardo; Cardarelli, Francesco; Signore, Giovanni; Boccardi, Claudia; Bifone, Angelo
2013-01-01
Antimicrobial peptides (AMPs) are an abundant and wide class of molecules produced by many tissues and cell types in a variety of mammalian, plant, and animal species. Linear alpha-helical antimicrobial peptides are among the most widespread membrane-disruptive AMPs in nature, representing a particularly successful structural arrangement in innate defense. Recently, AMPs have received increasing attention as potential therapeutic agents, owing to their broad activity spectrum and their reduced tendency to induce resistance. The introduction of non-natural amino acids will be a key requisite in order to counter host resistance and increase compound lifetime. In this work, the possibility of designing novel AMP sequences with non-natural amino acids was achieved through a flexible computational approach, based on chemophysical profiles of peptide sequences. Quantitative structure-activity relationship (QSAR) descriptors were employed to code each peptide and train two statistical models in order to account for structural and functional properties of alpha-helical amphipathic AMPs. These models were then used as fitness functions for a multi-objective evolutionary algorithm, together with a set of constraints for the design of a series of candidate AMPs. Two ab initio designed peptides were synthesized and experimentally validated for antimicrobial activity, together with a series of control peptides. Furthermore, a well-known Cecropin-Mellitin alpha helical antimicrobial hybrid (CM18) was optimized by shortening its amino acid sequence while maintaining its activity, and a peptide with non-natural amino acids was designed and tested, demonstrating the higher activity achievable with artificial residues. PMID:24039565
Stress constraints in optimality criteria design
NASA Technical Reports Server (NTRS)
Levy, R.
1982-01-01
Procedures described emphasize the processing of stress constraints within optimality criteria designs for low structural weight with stress and compliance constraints. Prescreening criteria are used to partition stress constraints into either potentially active primary sets or passive secondary sets that require minimal processing. Side constraint boundaries for passive constraints are derived by projections from design histories to modify conventional stress-ratio boundaries. Other procedures described apply partial structural modification reanalysis to design variable groups to correct stress constraint violations of infeasible designs. Sample problem results show effective design convergence and, in particular, advantages for reanalysis in obtaining lower feasible design weights.
Autonomous entropy-based intelligent experimental design
NASA Astrophysics Data System (ADS)
Malakar, Nabin Kumar
2011-07-01
The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units toward a common goal.
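The inquiry engine's selection rule, choosing the experiment whose predicted-outcome distribution has maximum entropy, can be sketched as follows. The forward model `predict`, the candidate list, and the discrete-outcome treatment are illustrative assumptions, not the thesis's actual implementation:

```python
import math
from collections import Counter

def outcome_entropy(samples):
    """Shannon entropy (in nats) of a discrete set of predicted outcomes."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def select_experiment(candidate_experiments, posterior_samples, predict):
    """Pick the candidate experiment whose distribution of predicted outcomes
    (over posterior parameter samples) has maximum entropy, i.e. maximum
    expected information gain. `predict(theta, x)` is an assumed
    user-supplied forward model."""
    return max(candidate_experiments,
               key=lambda x: outcome_entropy(
                   [predict(t, x) for t in posterior_samples]))
```

If every posterior sample predicts the same outcome for an experiment, that experiment has zero entropy and teaches nothing; the rule favors experiments whose outcome is most uncertain.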
Designing High Quality Research in Special Education: Group Experimental Designs.
ERIC Educational Resources Information Center
Gersten, Russell; Lloyd, John Wills; Baker, Scott
This paper, a result of a series of meetings of researchers, discusses critical issues related to the conduct of high-quality intervention research in special education using experimental and quasi-experimental designs that compare outcomes for different groups of students. It stresses the need to balance design components that satisfy laboratory…
Information optimal compressive sensing: static measurement design.
Ashok, Amit; Huang, Liang-Chih; Neifeld, Mark A
2013-05-01
The compressive sensing paradigm exploits the inherent sparsity/compressibility of signals to reduce the number of measurements required for reliable reconstruction/recovery. In many applications additional prior information beyond signal sparsity, such as structure in sparsity, is available, and current efforts are mainly limited to exploiting that information exclusively in the signal reconstruction problem. In this work, we describe an information-theoretic framework that incorporates the additional prior information as well as appropriate measurement constraints in the design of compressive measurements. Using a Gaussian binomial mixture prior we design and analyze the performance of optimized projections relative to random projections under two specific design constraints and different operating measurement signal-to-noise ratio (SNR) regimes. We find that the information-optimized designs yield significant, in some cases nearly an order of magnitude, improvements in the reconstruction performance with respect to the random projections. These improvements are especially notable in the low measurement SNR regime where the energy-efficient design of optimized projections is most advantageous. In such cases, the optimized projection design departs significantly from random projections in terms of their incoherence with the representation basis. In fact, we find that maximizing the incoherence of projections with the representation basis is not necessarily optimal in the presence of additional prior information and finite measurement noise/error. We also apply the information-optimized projections to the compressive image formation problem for natural scenes, and the improved visual quality of reconstructed images with respect to random projections and other compressive measurement designs affirms the overall effectiveness of the information-theoretic design framework.
Es'haghi, Zarrin; Ebrahimi, Mahmoud; Hosseini, Mohammad-Saeid
2011-05-27
A novel design of solid phase microextraction fiber containing carbon nanotube reinforced sol-gel, protected by a polypropylene hollow fiber (HF-SPME), was developed for pre-concentration and determination of BTEX in environmental waste water and human hair samples. The method was validated, and satisfactory results with high pre-concentration factors were obtained. In the present study an orthogonal array experimental design (OAD) procedure with an OA16 (4^4) matrix was applied to study the effect of four factors influencing the HF-SPME method efficiency: stirring speed, volume of adsorption organic solvent, and extraction and desorption time of the sample solution; the effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance (ANOVA) was employed for estimating the main significant factors and their percentage contributions to extraction. Calibration curves were plotted using ten spiking levels of BTEX in the concentration range of 0.02-30,000 ng/mL, with correlation coefficients (r) of 0.989-0.9991 for the analytes. Under the optimized extraction conditions, the method showed good linearity (0.3-20,000 ng/L), repeatability, low limits of detection (0.49-0.7 ng/L), and excellent pre-concentration factors (185-1872). The optimized conditions were then applied to the analysis of BTEX compounds in the real samples.
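The abstract identifies the design only as an OA16 (4^4) matrix: 16 runs, four factors, four levels each, with every pair of factor levels appearing equally often. One standard construction of such an array over the finite field GF(4) is sketched below; the construction and the orthogonality check are generic textbook material, not taken from the paper:

```python
# Construct an OA16(4^4) orthogonal array over GF(4). Addition in GF(4) is
# XOR on 2-bit field elements; multiplication by the element '2' follows the
# GF(4) table for modulus x^2 + x + 1. Columns: A, B, A+B, A+2B.
GF4_TIMES_2 = [0, 2, 3, 1]  # 2*b in GF(4) for b = 0..3

def oa16_4_4():
    """16 runs x 4 four-level factors, pairwise orthogonal columns."""
    rows = []
    for a in range(4):
        for b in range(4):
            rows.append((a, b, a ^ b, a ^ GF4_TIMES_2[b]))
    return rows

def is_orthogonal(rows, i, j):
    """Every (level_i, level_j) pair must appear equally often."""
    from collections import Counter
    counts = Counter((r[i], r[j]) for r in rows)
    return len(counts) == 16 and set(counts.values()) == {1}
```

Each run of the array assigns one level to each of the four factors (here: stirring speed, solvent volume, extraction time, desorption time), so main effects can be estimated from only 16 experiments instead of the 256 of a full factorial.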
Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs
NASA Technical Reports Server (NTRS)
Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.
1998-01-01
This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
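A common way to realize a D-optimal design, though not necessarily the algorithm used in the study, is a greedy exchange that grows the design one run at a time, always adding the candidate point that most increases det(X^T X). A minimal sketch for a one-variable quadratic response surface model, with a small ridge term (an illustrative device) to keep the criterion well defined while the design is still rank-deficient:

```python
def info_matrix(rows, eps=0.0):
    """X^T X for a list of model-term rows, with optional ridge eps."""
    k = len(rows[0])
    return [[sum(r[i] * r[j] for r in rows) + (eps if i == j else 0.0)
             for j in range(k)] for i in range(k)]

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        if abs(m[p][c]) < 1e-12:
            return 0.0
        if p != c:
            m[c], m[p] = m[p], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for j in range(c, n):
                m[r][j] -= f * m[c][j]
    return d

def greedy_d_optimal(candidates, model, n_runs, eps=1e-6):
    """Greedily grow an n_runs-point design maximizing det(X^T X).
    `model(x)` maps a candidate point to its row of model terms."""
    design = []
    for _ in range(n_runs):
        best = max(candidates, key=lambda x: det(
            info_matrix([model(c) for c in design + [x]], eps)))
        design.append(best)
    return design
```

For the quadratic model [1, x, x^2] on a five-point grid, the greedy rule recovers the classical three-point D-optimal design at the two endpoints and the center.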
Regression analysis as a design optimization tool
NASA Technical Reports Server (NTRS)
Perley, R.
1984-01-01
The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.
Experimental Design for Vector Output Systems
Banks, H.T.; Rehm, K.L.
2013-01-01
We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655
Design of optimized piezoelectric HDD-sliders
NASA Astrophysics Data System (ADS)
Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.
2010-04-01
As storage data density in hard-disk drives (HDDs) increases for constant or miniaturizing sizes, precision positioning of HDD heads becomes a more relevant issue to ensure enormous amounts of data to be properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirement of high-density tracks per inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this matter, by using VCMs to coarsely move the HDD head while piezoelectric actuators provides fine and fast positioning. Thus, the aim of this work is to apply topology optimization method (TOM) to design novel piezoelectric HDD heads, by finding optimal placement of base-plate and piezoelectric material to high precision positioning HDD heads. Topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness and resonance phenomena avoidance. The requirements are achieved by applying formulations to maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and show results to confirm the feasibility of this approach.
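The "rational approximation of material properties" mentioned above is commonly known as RAMP interpolation, which maps a pseudo-density to an effective stiffness between 'void' and 'filled'. A sketch under assumed values of the penalization parameter `q` and the void stiffness `e_min` (the paper's specific settings are not given in the abstract):

```python
def ramp_modulus(rho, e0, e_min=1e-9, q=3.0):
    """RAMP (Rational Approximation of Material Properties) interpolation:
    pseudo-density rho in [0, 1] maps to a stiffness between 'void' (e_min)
    and 'filled' (e0). q > 0 penalizes intermediate densities so the
    optimizer is driven toward crisp 0/1 layouts."""
    return e_min + (rho / (1.0 + q * (1.0 - rho))) * (e0 - e_min)
```

The penalization makes intermediate densities structurally inefficient: at rho = 0.5 with q = 3 the material delivers only 20% of the full stiffness, so half-dense regions are a poor bargain and the optimized topology converges toward distinct solid and void regions.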
Multifidelity Analysis and Optimization for Supersonic Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory
2010-01-01
Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including aerodynamic tools for supersonic aircraft configurations, a systematic way to manage model uncertainty, and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework and include four analysis routines to estimate the lift and drag of a supersonic airfoil, as well as a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.
Dynamic optimization and adaptive controller design
NASA Astrophysics Data System (ADS)
Inamdar, S. R.
2010-10-01
In this work I present a new type of controller, an adaptive tracking controller that employs dynamic optimization to determine the current value of the controller action for temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, comprising the mass and heat balance equations, and then add cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for cooling action is specified as a steady state and a design specification. Later we make a correction in the dynamics, where the material balance is manipulated to use feed concentration as a system parameter as an adaptive control measure, in order to avoid actuator saturation for the main control loop. The analysis leading to the design of a dynamic-optimization-based parameter adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation to form an adaptive control measure.
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
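The signal-to-noise ratios used in Robust Design have standard Taguchi forms; which form applies depends on whether the response should be as large or as small as possible, a design choice not specified in the abstract. A sketch of the two common cases:

```python
import math

def sn_larger_the_better(values):
    """Taguchi signal-to-noise ratio (dB) when larger responses are better:
    -10 log10( mean(1/y^2) )."""
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in values) / len(values))

def sn_smaller_the_better(values):
    """Taguchi signal-to-noise ratio (dB) when smaller responses are better:
    -10 log10( mean(y^2) )."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))
```

For each row of the orthogonal array, the response is measured under replicated noise conditions and the S/N ratio is computed; the parameter levels that maximize the S/N ratio are the most robust to the noise factors.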
Optimal Experiment Design for Thermal Characterization of Functionally Graded Materials
NASA Technical Reports Server (NTRS)
Cole, Kevin D.
2003-01-01
The purpose of the project was to investigate methods to accurately verify that designed materials meet thermal specifications. The project involved heat transfer calculations and optimization studies; no laboratory experiments were performed. One part of the research involved study of materials in which conduction heat transfer predominates. Results include techniques to choose among several experimental designs, and protocols for determining the optimum experimental conditions for determination of thermal properties. Metal foam materials were also studied in which both conduction and radiation heat transfer are present. Results of this work include procedures to optimize the design of experiments to accurately measure both conductive and radiative thermal properties. Detailed results in the form of three journal papers have been appended to this report.
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
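One simple merit function of the kind described, rewarding both a low predicted objective value and distance from existing samples (so the next expensive evaluation also improves the approximation), might be sketched as follows. The distance-based approximation-quality term is an illustrative choice, not the paper's formulation:

```python
def merit(x, surrogate, sampled_points, rho):
    """Merit = surrogate prediction minus a bonus for being far from
    existing samples. Minimizing it balances exploiting the cheap
    surrogate against improving it where data are sparse; rho >= 0
    trades the two goals off."""
    dist = min(abs(x - s) for s in sampled_points)
    return surrogate(x) - rho * dist

def next_iterate(candidates, surrogate, sampled_points, rho):
    """Choose the next expensive evaluation point by minimizing the merit."""
    return min(candidates, key=lambda x: merit(x, surrogate, sampled_points, rho))
```

With rho = 0 the rule purely exploits the surrogate's minimizer; as rho grows it increasingly favors unexplored regions, which refines the approximation used in later iterations.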
Application of Optimal Designs to Item Calibration
Lu, Hung-Yi
2014-01-01
In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the rate of item attrition is extremely high, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient. PMID:25188318
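Optimal calibration designs for the two-parameter logistic (2PL) model are built on the item information function, which tells a design which examinee abilities are most informative about an item; a minimal sketch of that building block (the study's specific calibration design criterion is not reproduced here):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response: discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed at ability theta: a^2 * P * (1 - P).
    It peaks at theta = b, where the response is least predictable."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)
```

A calibration design then allocates examinees (or selects among available examinees, as in sequential schemes) at abilities where the information about the item's parameters is high, rather than spreading them arbitrarily.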
Evaluation of Frameworks for HSCT Design Optimization
NASA Technical Reports Server (NTRS)
Krishnan, Ramki
1998-01-01
This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.
Drought Adaptation Mechanisms Should Guide Experimental Design.
Gilbert, Matthew E; Medina, Viviana
2016-08-01
The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism. PMID:27090148
Fatigue reliability based optimal design of planar compliant micropositioning stages.
Wang, Qiliang; Zhang, Xianmin
2015-10-01
Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach. PMID:26520994
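The stress-strength interference method used above has a closed form when stress and strength are modeled as independent normal random variables; a sketch under that normality assumption (the paper's actual distributions are not stated in the abstract):

```python
import math

def reliability_normal(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference with independent normal stress and
    strength: R = P(strength > stress) = Phi(beta), where beta is the
    mean safety margin divided by its standard deviation."""
    beta = (mu_strength - mu_stress) / math.sqrt(sd_strength ** 2
                                                 + sd_stress ** 2)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
```

A reliability-based optimizer would then treat R (or the reliability index beta) as a constraint while minimizing mass or maximizing stroke, instead of relying on a deterministic safety factor.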
Mathematical Optimization for Engineering Design Problems
NASA Astrophysics Data System (ADS)
Dandurand, Brian C.
Applications in engineering design and the material sciences motivate the development of optimization theory in a manner that additionally draws from other branches of mathematics including the functional, complex, and numerical analyses. The first contribution, motivated by an automotive design application, extends multiobjective optimization theory under the assumption that the problem information is not available in its entirety to a single decision maker as traditionally assumed in the multiobjective optimization literature. Rather, the problem information and the design control are distributed among different decision makers. This requirement appears in the design of an automotive system whose subsystem components themselves correspond to highly involved design subproblems each of whose performance is measured by multiple criteria. This leads to a system/subsystem interaction requiring a coordination whose algorithmic foundation is developed and rigorously examined mathematically. The second contribution develops and analyzes a parameter estimation approach motivated from a time domain modeling problem in the material sciences. In addition to drawing from the theory of least-squares optimization and numerical analysis, the development of a mathematical foundation for comparing a baseline parameter estimation approach with an alternative parameter estimation approach relies on theory from both the functional and complex analyses. The application of the developed theory and algorithms associated with both contributions is also discussed.
Yakima Hatchery Experimental Design : Annual Progress Report.
Busack, Craig; Knudsen, Curtis; Marshall, Anne
1991-08-01
This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
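The canonical GA loop the article refers to (a population of candidate designs, selection by figure of merit, crossover, mutation) can be sketched compactly. The merit function, bounds, and GA parameters below are illustrative stand-ins, not the WASP instrument model:

```python
import math
import random

def genetic_algorithm(fitness, bounds, pop_size=40, generations=60,
                      crossover_rate=0.9, mutation_rate=0.1, seed=0):
    """Canonical GA: tournament selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)

    def tournament(pop):
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)
            if rng.random() < crossover_rate:   # uniform crossover
                child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
            else:
                child = list(p1)
            for i, (lo, hi) in enumerate(bounds):   # clipped Gaussian mutation
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Stand-in "figure of merit" with many local optima (global peak near x = y = 1):
def merit(x):
    return -sum((xi - 1.0) ** 2 for xi in x) + 0.1 * sum(math.cos(5 * xi) for xi in x)

best = genetic_algorithm(merit, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Because selection acts only on the population ranking, the same loop works for any design whose performance can be quantified, which is the precondition the authors state.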
Branch target buffer design and optimization
NASA Technical Reports Server (NTRS)
Perleberg, Chris H.; Smith, Alan J.
1993-01-01
Consideration is given to two major issues in the design of branch target buffers (BTBs), with the goal of achieving maximum performance for a given number of bits allocated to the BTB design. The first issue is BTB management; the second is what information to keep in the BTB. A number of solutions to these problems are reviewed, and various optimizations in the design of BTBs are discussed. Design target miss ratios for BTBs are developed, making it possible to estimate the performance of BTBs for real workloads.
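The two design issues reviewed above (what information to keep in the BTB, and how to manage it) can be illustrated with a minimal direct-mapped BTB model that stores only each branch's last taken target and reports a miss ratio. The addresses below are made up, and real designs add prediction state bits and associativity:

```python
class BranchTargetBuffer:
    """Minimal direct-mapped BTB keeping only each branch's last taken target.
    A hit means the fetch unit can redirect immediately; real designs add
    prediction state bits, associativity, and a replacement policy."""
    def __init__(self, n_entries):
        self.n = n_entries
        self.tags = [None] * n_entries     # branch instruction addresses
        self.targets = [None] * n_entries  # predicted target addresses
        self.lookups = 0
        self.hits = 0

    def predict(self, branch_addr):
        self.lookups += 1
        idx = branch_addr % self.n
        if self.tags[idx] == branch_addr:
            self.hits += 1
            return self.targets[idx]
        return None   # miss: fetch falls through until the branch resolves

    def update(self, branch_addr, target_addr):
        idx = branch_addr % self.n
        self.tags[idx] = branch_addr
        self.targets[idx] = target_addr

    def miss_ratio(self):
        return 1.0 - self.hits / self.lookups

# A loop branch at address 0x40, taken 100 times to target 0x10:
btb = BranchTargetBuffer(16)
for _ in range(100):
    btb.predict(0x40)
    btb.update(0x40, 0x10)
```

Running such a model over recorded branch traces is one way to estimate the design-target miss ratios the abstract mentions.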
Integrated structural-aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Kao, P. J.; Grossman, B.; Polen, D.; Sobieszczanski-Sobieski, J.
1988-01-01
This paper focuses on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration, with emphasis on the major difficulty associated with multidisciplinary design optimization processes, their enormous computational costs. Methods are presented for reducing this computational burden through the development of efficient methods for cross-sensitivity calculations and the implementation of approximate optimization procedures. Utilizing a modular sensitivity analysis approach, it is shown that the sensitivities can be computed without the expensive calculation of the derivatives of the aerodynamic influence coefficient matrix, and the derivatives of the structural flexibility matrix. The same process is used to efficiently evaluate the sensitivities of the wing divergence constraint, which should be particularly useful, not only in problems of complete integrated aircraft design, but also in aeroelastic tailoring applications.
Global optimization of bilinear engineering design models
Grossmann, I.; Quesada, I.
1994-12-31
Recently Quesada and Grossmann have proposed a global optimization algorithm for solving NLP problems involving linear fractional and bilinear terms. This model has been motivated by a number of applications in process design. The proposed method relies on the derivation of a convex NLP underestimator problem that is used within a spatial branch and bound search. This paper explores the use of alternative bounding approximations for constructing the underestimator problem. These are applied in the global optimization of problems arising in different engineering areas and for which different relaxations are proposed depending on the mathematical structure of the models. These relaxations include linear and nonlinear underestimator problems. Reformulations that generate additional estimator functions are also employed. Examples from process design, structural design, portfolio investment and layout design are presented.
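Convex underestimators for bilinear terms of the kind referred to above are typically built from McCormick envelopes. A minimal sketch for a single term w = x*y on a box (the bounds below are arbitrary examples, not from the paper):

```python
def mccormick_envelope(xl, xu, yl, yu):
    """Convex relaxation of the bilinear term w = x*y on [xl, xu] x [yl, yu]:
    the underestimator is the max of two planes and the overestimator the min
    of two planes, so in the relaxed NLP w is trapped between them."""
    def under(x, y):
        return max(xl * y + yl * x - xl * yl,
                   xu * y + yu * x - xu * yu)
    def over(x, y):
        return min(xu * y + yl * x - xu * yl,
                   xl * y + yu * x - xl * yu)
    return under, over

# Example box: x in [0, 2], y in [1, 3].
under, over = mccormick_envelope(0.0, 2.0, 1.0, 3.0)
```

In a spatial branch-and-bound search, the box shrinks as branching proceeds, and the envelope tightens onto the true bilinear surface.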
Multidisciplinary Concurrent Design Optimization via the Internet
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand
2001-01-01
A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.
Experimental Testing of Dynamically Optimized Photoelectron Beams
NASA Astrophysics Data System (ADS)
Rosenzweig, J. B.; Cook, A. M.; Dunning, M.; England, R. J.; Musumeci, P.; Bellaveglia, M.; Boscolo, M.; Catani, L.; Cianchi, A.; Di Pirro, G.; Ferrario, M.; Fillipetto, D.; Gatti, G.; Palumbo, L.; Serafini, L.; Vicario, C.; Jones, S.
2006-11-01
We discuss the design of and initial results from an experiment in space-charge-dominated beam dynamics which explores a new regime of high-brightness electron beam generation at the SPARC photoinjector. The scheme under study employs the tendency of intense electron beams to rearrange to produce uniform density, giving a nearly ideal beam from the viewpoint of space-charge-induced emittance. The experiments are aimed at testing the marriage of this idea with a related concept, emittance compensation. We show that this new photoinjector operating regime may be the preferred method of obtaining the highest-brightness beams with lower energy spread. We discuss the design of the experiment, including the development of a novel time-dependent, aerogel-based imaging system. This system has been installed at SPARC, and first evidence of nearly uniformly filled ellipsoidal charge distributions has been recorded.
Design Optimization of Structural Health Monitoring Systems
Flynn, Eric B.
2014-03-06
Sensor networks drive decisions. Approach: design networks to minimize the expected total cost (in a statistical sense, i.e., Bayes risk) associated with making wrong decisions and with installing, maintaining, and running the sensor network itself. Search for optimal solutions using a Monte-Carlo-sampling-adapted genetic algorithm. Applications include structural health monitoring and surveillance.
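The objective described above, expected cost of wrong decisions plus the cost of the network itself, can be sketched with a Monte Carlo risk estimate. The damage model, costs, and candidate sensor sites below are invented for illustration, and the brute-force subset search stands in for the genetic algorithm:

```python
import itertools
import random

def bayes_risk(sensor_positions, n_trials=2000, detect_radius=1.5,
               cost_per_sensor=1.0, cost_missed_damage=20.0, seed=0):
    """Monte Carlo estimate of expected total cost: the cost of the network
    itself plus the cost of failing to detect damage at a random location."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_trials):
        damage = rng.uniform(0.0, 10.0)   # damage site on a 10 m structure
        if not any(abs(damage - s) <= detect_radius for s in sensor_positions):
            misses += 1
    return (cost_per_sensor * len(sensor_positions)
            + cost_missed_damage * misses / n_trials)

# Brute-force search over subsets of candidate sites (a GA would search the
# same space when the candidate set is too large to enumerate):
candidates = [1.0, 3.0, 5.0, 7.0, 9.0]
best = min((subset for r in range(len(candidates) + 1)
            for subset in itertools.combinations(candidates, r)),
           key=bayes_risk)
```

With these costs the full five-sensor layout covers the whole structure and wins; raising the per-sensor cost makes sparser layouts optimal, which is exactly the trade-off the risk functional encodes.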
MDO can help resolve the designer's dilemma. [multidisciplinary design optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.
1991-01-01
Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from advances in computer software and hardware. Expected benefits from this methodology are a rational, mathematically consistent approach to hypersonic aircraft design, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.
Using experimental design to define boundary manikins.
Bertilsson, Erik; Högberg, Dan; Hanson, Lars
2012-01-01
When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition of that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. Literature sources differ in how many manikins should be defined, and in what way. A field similar to the boundary case method is experimental design, in which relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibility of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adapted to be used together with a confidence region and its axes. The results of the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups of manikins depends heavily on the number of key measurements, but also on the type of experimental design chosen. PMID:22317428
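The combination described, factorial design points placed along the axes of a confidence region, can be sketched for two correlated key measurements. The anthropometric numbers below are illustrative, not taken from the paper:

```python
import math

def boundary_manikins(mean, cov, r=1.96):
    """Boundary cases for two correlated key measurements: a 2^2 factorial
    plus axial points, all placed on the boundary of the r-sigma confidence
    ellipse. The 2x2 covariance is eigen-decomposed in closed form."""
    (a, b), (_, d) = cov
    tr, det = a + d, a * d - b * b
    root = math.sqrt(tr * tr / 4.0 - det)
    lam = [tr / 2.0 + root, tr / 2.0 - root]          # eigenvalues
    theta = 0.5 * math.atan2(2.0 * b, a - d)          # principal-axis angle
    vec = [(math.cos(theta), math.sin(theta)),
           (-math.sin(theta), math.cos(theta))]       # unit eigenvectors
    half = [(r * math.sqrt(l) * vx, r * math.sqrt(l) * vy)
            for l, (vx, vy) in zip(lam, vec)]         # ellipse semi-axes
    pts = []
    c = 1.0 / math.sqrt(2.0)   # scales factorial corners onto the ellipse
    for s1, s2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
        pts.append((mean[0] + c * (s1 * half[0][0] + s2 * half[1][0]),
                    mean[1] + c * (s1 * half[0][1] + s2 * half[1][1])))
    for s in (-1, 1):          # axial points on each principal axis
        for hx, hy in half:
            pts.append((mean[0] + s * hx, mean[1] + s * hy))
    return pts

# Illustrative stature (mm) / weight (kg) statistics with correlation 0.5:
manikins = boundary_manikins(mean=(1755.0, 78.0),
                             cov=[[45.0 ** 2, 0.5 * 45.0 * 12.0],
                                  [0.5 * 45.0 * 12.0, 12.0 ** 2]])
```

Each returned manikin sits exactly on the confidence boundary, and the group size grows with both the number of key measurements and the chosen design, mirroring the paper's conclusion.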
Design Oriented Structural Modeling for Airplane Conceptual Design Optimization
NASA Technical Reports Server (NTRS)
Livne, Eli
1999-01-01
The main goal of the research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally, in conceptual design, airframe weight is estimated with statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology of the airplanes in those weight databases. If any new structural technology is to be pursued, or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since response to changes in geometry is essential in conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.
Optimization and Inverse Design of Pump Impeller
NASA Astrophysics Data System (ADS)
Miyauchi, S.; Zhu, B.; Luo, X.; Piao, B.; Matsumoto, H.; Sano, M.; Kassai, N.
2012-11-01
For pump impellers, the meridional flow channel and the blade-to-blade flow channel, which are relatively independent of each other but greatly affect performance, are designed in parallel: optimization design is used for the former and inverse design for the latter. To verify this new design method, a mixed-flow impeller was made. Next, we use Tani's inverse design method for the blade loading of the inverse design. It is flexible enough to change the deceleration rate freely and greatly, and it can express in a unified way the rear blade loadings of various methods, including those of NACA, Zangeneh, and Stratford. We controlled the deceleration rate through the shape parameter m, whose value turned out to be almost the same as Tani's recommended value for laminar airfoils.
Application of heuristic optimization in aircraft design
NASA Astrophysics Data System (ADS)
Hu, Zhenning
Genetic algorithms and related heuristic optimization strategies are introduced and their applications in aircraft design are developed. Generally speaking, genetic algorithms belong to the non-deterministic direct search methods, which are most powerful in finding optimum or near-optimum solutions of very complex systems where little a priori knowledge is available. Therefore they have wide application in aerospace systems. Two major aircraft optimal design projects are illustrated in this dissertation. The first is the application of material optimization of aligned fiber laminate composites in the presence of stress concentrations. After a large number of tests on laminates with different layers, genetic algorithms find an alignment pattern in a certain range for the Boeing Co. specified material. The second project is the application of piezoelectric actuator placement on a generic tail skin to reduce the 2nd-mode vibration caused by buffet, which is part of a Boeing project to control the buffet effect on aircraft. In this project, genetic algorithms are closely involved with vibration analysis and finite element analysis. Actuator optimization strategies are first tested on theoretical beam models to gain experience, and then applied to the generic tail model. Genetic algorithms achieve great success in optimizing up to 888 actuator parameters on the tail skins.
Design criteria for optimal photosynthetic energy conversion
NASA Astrophysics Data System (ADS)
Fingerhut, Benjamin P.; Zinth, Wolfgang; de Vivie-Riedle, Regina
2008-12-01
Photochemical solar energy conversion is considered an alternative source of clean energy. Photosynthetic reaction centers, optimized during evolution, serve as prototypes for future light-converting nano-machines. We introduce a reaction scheme for global optimization and simulate the ultrafast charge separation in photochemical energy conversion. Multiple molecular charge carriers are involved in this process and are linked by Marcus-type electron transfer. In combination with evolutionary algorithms, we unravel the biological strategies for high quantum efficiency in photosynthetic reaction centers and extend these concepts to the design of artificial photochemical devices for energy conversion.
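The Marcus-type transfer rates linking the charge carriers can be written down directly. This sketch shows the standard non-adiabatic Marcus expression and the design rule an optimizer tends to rediscover, namely that the rate peaks in the activationless regime where the reorganization energy matches the driving force; the coupling and energetics below are illustrative numbers, not fitted reaction-center parameters:

```python
import math

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_rate(coupling_eV, dG_eV, lam_eV, T=300.0):
    """Non-adiabatic Marcus electron-transfer rate (1/s):
    k = (2*pi/hbar)|V|^2 (4*pi*lam*kB*T)^(-1/2) exp(-(dG+lam)^2/(4*lam*kB*T))."""
    kt = KB * T
    franck_condon = (math.exp(-(dG_eV + lam_eV) ** 2 / (4.0 * lam_eV * kt))
                     / math.sqrt(4.0 * math.pi * lam_eV * kt))
    return (2.0 * math.pi / HBAR) * coupling_eV ** 2 * franck_condon

# For a fixed driving force dG, the rate is largest when the reorganization
# energy lam matches -dG (the activationless regime):
dG = -0.25  # eV
rates = {lam: marcus_rate(1e-3, dG, lam) for lam in (0.1, 0.25, 0.5)}
```

A global optimizer over a chain of such transfer steps is then free to tune each dG and lam, which is the kind of search an evolutionary algorithm performs over the full reaction scheme.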
Aircraft family design using enhanced collaborative optimization
NASA Astrophysics Data System (ADS)
Roth, Brian Douglas
Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component
Multidisciplinary Design Optimization on Conceptual Design of Aero-engine
NASA Astrophysics Data System (ADS)
Zhang, Xiao-bo; Wang, Zhan-xue; Zhou, Li; Liu, Zeng-wen
2016-06-01
In order to obtain better integrated performance of an aero-engine during the conceptual design stage, multiple disciplines such as aerodynamics, structure, weight, and aircraft mission are required. Unfortunately, the couplings between these disciplines make the problem difficult to model or solve by conventional methods. MDO (Multidisciplinary Design Optimization) methodology, which deals well with couplings between disciplines, is adopted to solve this coupled problem. Approximation, optimization, coordination, and modeling methods for the MDO framework are analyzed in depth. To obtain a more efficient MDO framework, an improved CSSO (Concurrent Subspace Optimization) strategy based on DOE (Design of Experiments) and RSM (Response Surface Model) methods is proposed in this paper, and an improved DE (Differential Evolution) algorithm is recommended to solve the system-level and discipline-level optimization problems in the MDO framework. The improved CSSO strategy and DE algorithm are evaluated on a numerical test problem. The results show that the efficiency of the improved methods proposed in this paper is significantly increased. The coupled problem of VCE (Variable Cycle Engine) conceptual design is solved with the improved CSSO strategy, and the design parameters it yields are better than the original ones. The integrated performance of the VCE is significantly improved.
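The abstract does not specify its DE improvements, but the baseline DE/rand/1/bin scheme they build on is compact enough to sketch. The sphere objective here is a placeholder for the system- and discipline-level problems:

```python
import random

def differential_evolution(f, bounds, pop_size=30, generations=100,
                           F=0.8, CR=0.9, seed=1):
    """DE/rand/1/bin: mutate with a scaled difference of two random members,
    apply binomial crossover, keep the trial vector only if it improves."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = list(pop[i])
            jrand = rng.randrange(dim)   # guarantee at least one mutated gene
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(bounds[j][1], max(bounds[j][0], v))
            f_trial = f(trial)
            if f_trial <= cost[i]:       # greedy selection
                pop[i], cost[i] = trial, f_trial
    i_best = min(range(pop_size), key=lambda i: cost[i])
    return pop[i_best], cost[i_best]

# Sphere test problem: minimum 0 at the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5.0, 5.0)] * 3)
```

Within a CSSO framework, each subspace would call a solver like this on its own variables while RSM surrogates stand in for the other disciplines.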
Ponslet, E.R.; Eldred, M.S.
1996-05-17
An analytical and experimental study is conducted to investigate the effect of isolator locations on the effectiveness of vibration isolation systems. The study uses isolators with fixed properties and evaluates potential improvements to the isolation system that can be achieved by optimizing isolator locations. Because the available locations for the isolators are discrete in this application, a Genetic Algorithm (GA) is used as the optimization method. The system is modeled in MATLAB™ and coupled with the GA available in the DAKOTA optimization toolkit under development at Sandia National Laboratories. Design constraints dictated by hardware and experimental limitations are implemented through penalty function techniques. A series of GA runs reveal difficulties in the search on this heavily constrained, multimodal, discrete problem. However, the GA runs provide a variety of optimized designs with predicted performance from 30 to 70 times better than a baseline configuration. An alternate approach is also tested on this problem: it uses continuous optimization, followed by rounding of the solution to neighboring discrete configurations. Results show that this approach leads to either infeasible or poor designs. Finally, a number of optimized designs obtained from the GA searches are tested in the laboratory and compared to the baseline design. These experimental results show a 7 to 46 times improvement in vibration isolation from the baseline configuration.
Robust Design Optimization via Failure Domain Bounding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2007-01-01
This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
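For a single linear constraint the geometry described above is easy to picture: the largest hyper-sphere around the nominal design that avoids the failure domain is simply the distance to the constraint plane, and with a Gaussian uncertainty model that distance yields a violation probability. This toy sketch covers only that special case (one linear constraint, isotropic Gaussian), not the general optimization-based bounding of the paper:

```python
import math

def feasible_sphere_radius(a, b, p0):
    """Radius of the largest sphere around nominal parameters p0 contained in
    the feasible set of one linear constraint a.p + b <= 0: the distance from
    p0 to the constraint plane (0 if the nominal is already infeasible)."""
    g0 = sum(ai * pi for ai, pi in zip(a, p0)) + b
    norm = math.sqrt(sum(ai * ai for ai in a))
    return max(0.0, -g0 / norm)

def violation_probability(a, b, p0, sigma):
    """For p ~ N(p0, sigma^2 I), P(a.p + b > 0) = Phi(-d/sigma), where d is
    the distance above; exact for one linear constraint, and a building block
    for upper bounds when several constraints are present."""
    d = feasible_sphere_radius(a, b, p0)
    return 0.5 * math.erfc(d / (sigma * math.sqrt(2.0)))

# Nominal design (1, 1) with the constraint p1 + p2 <= 3:
a, b, p0 = (1.0, 1.0), -3.0, (1.0, 1.0)
r = feasible_sphere_radius(a, b, p0)   # any uncertainty ball of radius <= r is safe
p_viol = violation_probability(a, b, p0, sigma=0.25)
```

A deterministic uncertainty set contained in the sphere of radius r certifies feasibility outright; a probabilistic model instead gets the tail bound, which is the dichotomy the abstract draws.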
Generalized mathematical models in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, Panos Y.; Rao, J. R. Jagannatha
1989-01-01
The theory of optimality conditions of extremal problems can be extended to problems continuously deformed by an input vector. The connection between the sensitivity, well-posedness, stability, and approximation of optimization problems is steadily emerging. The authors believe that the important realization here is that the underlying basis of all such work is still the study of point-to-set maps and of small perturbations; what was previously identified as relating only to solution procedures is now being extended to the study of modeling itself in its own right. Many important studies related to the theoretical issues of parametric programming and large deformation in nonlinear programming have been reported in the last few years, and the challenge now seems to be in devising effective computational tools for solving these generalized design optimization models.
Optimal design of a space power system
NASA Technical Reports Server (NTRS)
Chun, Young W.; Braun, James F.
1990-01-01
The aerospace industry, like many other industries, regularly applies optimization techniques to develop designs which reduce cost, maximize performance, and minimize weight. The desire to minimize weight is of particular importance in space-related products since the costs of launch are directly related to payload weight, and launch vehicle capabilities often limit the allowable weight of a component or system. With these concerns in mind, this paper presents the optimization of a space-based power generation system for minimum mass. The goal of this work is to demonstrate the use of optimization techniques on a realistic and practical engineering system. The power system described uses thermoelectric devices to convert heat into electricity. The heat source for the system is a nuclear reactor. Waste heat is rejected from the system to space by a radiator.
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area has been titled morphing as an independent variable, and formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
Direct optimization method for reentry trajectory design
NASA Astrophysics Data System (ADS)
Jallade, S.; Huber, P.; Potti, J.; Dutruel-Lecohier, G.
The software package called 'Reentry and Atmospheric Transfer Trajectory' (RATT) was developed under ESA contract for the design of atmospheric trajectories. It includes four programs: TOP (Trajectory OPtimization), which optimizes reentry and aeroassisted transfer trajectories; 6FD and 3FD (6 and 3 degrees of freedom Flight Dynamics), which are devoted to the simulation of the trajectory; and SCA (Sensitivity and Covariance Analysis), which performs covariance analysis on a given trajectory with respect to different uncertainties and error sources. TOP provides the optimum guidance law of a three-degree-of-freedom reentry or aeroassisted transfer (AAOT) trajectory. Deorbit and reorbit impulses (if necessary) can be taken into account in the optimization. A wide choice of cost functions is available to the user, such as the integrated heat flux, the sum of the velocity impulses, or a linear combination of both, for trajectory and vehicle design. The crossrange and the downrange can be maximized during the reentry trajectory. Path constraints are available on the load factor, the heat flux and the dynamic pressure. Results on these proposed options are presented. TOPPHY is the part of the TOP software corresponding to the definition and the computation of the optimization problem physics. TOPPHY can interface with several optimizers with dynamic solvers: TOPOP and TROPIC using direct collocation methods, and PROMIS using a direct multiple shooting method. TOPOP was developed in the frame of this contract; it uses Hermite polynomials for the collocation method and the NPSOL optimizer from the NAG library. Both TROPIC and PROMIS were developed by the DLR (Deutsche Forschungsanstalt fuer Luft und Raumfahrt) and use the SLSQP optimizer. For the resolution of the dynamic equations, TROPIC uses a collocation method with splines and PROMIS uses a multiple shooting method with finite differences. The three different optimizers including dynamics were tested on the reentry trajectory of the
Optimizing Trial Designs for Targeted Therapies
Beckman, Robert A.; Burman, Carl-Fredrik; König, Franz; Stallard, Nigel; Posch, Martin
2016-01-01
An important objective in the development of targeted therapies is to identify the populations in which the treatment under consideration has a positive benefit-risk balance. We consider pivotal clinical trials, where the efficacy of a treatment is tested in an overall population and/or in a pre-specified subpopulation. Based on a decision-theoretic framework, we derive optimized trial designs by maximizing utility functions. Features to be optimized include the sample size and the population in which the trial is performed (the full population or the targeted subgroup only), as well as the underlying multiple test procedure. The approach accounts for prior knowledge of the efficacy of the drug in the considered populations using a two-dimensional prior distribution. The considered utility functions account for the costs of the clinical trial as well as the expected benefit when demonstrating efficacy in the different subpopulations. We model utility functions from a sponsor's as well as from a public health perspective, the latter reflecting civil interests. Examples of optimized trial designs obtained by numerical optimization are presented for both perspectives. PMID:27684573
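The utility-maximizing construction can be made concrete for the simplest case: choosing only the sample size of a single full-population trial under a sponsor-style utility. The effect size, costs, and gain figures below are invented, and the paper's actual optimization also chooses the trial population and the multiple-testing procedure:

```python
import math

def power(n_per_arm, effect, sd, z_alpha=1.959963984540054):
    """Power of a one-sided two-sample z-test at alpha = 0.025 (normal approx.)."""
    z = effect / (sd * math.sqrt(2.0 / n_per_arm)) - z_alpha
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def expected_utility(n_per_arm, effect, sd, gain=100e6, cost_per_patient=50e3):
    """Sponsor utility: expected gain if efficacy is demonstrated, minus trial cost."""
    return gain * power(n_per_arm, effect, sd) - cost_per_patient * 2 * n_per_arm

# Sweep the design variable (per-arm sample size) and keep the utility maximizer:
best_n = max(range(10, 1001), key=lambda n: expected_utility(n, effect=0.3, sd=1.0))
```

The optimum sits where the marginal gain in power no longer pays for additional patients; a public-health utility would swap the gain term for expected patient benefit and typically favors a different sample size.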
Simulation as an Aid to Experimental Design.
ERIC Educational Resources Information Center
Frazer, Jack W.; And Others
1983-01-01
Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…
Optimized design of LED plant lamp
NASA Astrophysics Data System (ADS)
Chen, Jian-sheng; Cai, Ruhai; Zhao, Yunyun; Zhao, Fuli; Yang, Bowen
2014-12-01
In order to fabricate an optimized LED plant lamp we carried out an exploration of the optical spectrum. Based on the mechanism of the photosynthesis process in higher plants and on spectral analysis, we present an optical design for the LED plant lamp. Furthermore, we built two kinds of prototype LED plant lamps which are suitable for the photosynthesis of higher green vegetables. Based on simulations of the lamp box for different alignments of the plants, we carried out growing experiments on green vegetables and obtained the optimized light illumination as well as the spectral profile. The results show that only blue and red light are efficient for green leaf vegetables. Our work should be helpful for LED plant lamp design and manufacture.
Optimized design for an electrothermal microactuator
NASA Astrophysics Data System (ADS)
Cǎlimǎnescu, Ioan; Stan, Liviu-Constantin; Popa, Viorica
2015-02-01
In micromechanical structures, electrothermal actuators are known to be capable of providing larger force and reasonable tip deflection compared to electrostatic ones. Many studies have been devoted to the analysis of flexure actuators. One of the most popular electrothermal actuators is the 'U-shaped' actuator. The device is composed of two suspended beams with variable cross sections joined at the free end, which constrains the tip to move in an arcing motion while current is passed through the actuator. The goal of this research is to determine via FEA the best-fitted geometry of the microactuator (the optimization input parameters) in order to drive some of the output parameters, such as thermal strain or total deformation, to their maximum values. The software used to generate the CAD geometry was SolidWorks 2010, and all the FEA analysis was conducted with ANSYS 13. The optimized model has smaller values of the geometric input parameters, that is, a more compact geometry; the maximum temperature reached a smaller value for the optimized model; the calculated heat flux is 13% larger for the optimized model, and likewise for the Joule heat (26%), total deformation (1.2%) and thermal strain (8%). By simply optimizing the design, the microactuator became more compact and more efficient.
Slot design of optimized electromagnetic pump
Leboucher, L. (Institut de Mecanique); Villani, D.
1993-11-01
Electromagnetic pumps are used for the transportation of liquid metals such as the cooling sodium of fast-breeder nuclear reactors. The design of this induction machine is close to that of a tubular linear induction motor. A non-uniform slot distribution is used to optimize the electromagnetic pump. This geometry is tested with a finite-element code, and its performance is compared with the regular slot distribution of industrial prototypes.
Optimal Design of Non-equilibrium Experiments for Genetic Network Interrogation
Adoteye, Kaska; Banks, H.T.; Flores, Kevin B.
2014-01-01
Many experimental systems in biology, especially synthetic gene networks, are amenable to perturbations that are controlled by the experimenter. We developed an optimal design algorithm that calculates optimal observation times in conjunction with optimal experimental perturbations in order to maximize the amount of information gained from longitudinal data derived from such experiments. We applied the algorithm to a validated model of a synthetic Brome Mosaic Virus (BMV) gene network and found that optimizing experimental perturbations may substantially decrease uncertainty in estimating BMV model parameters. PMID:25558126
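For intuition, the information-maximising idea can be sketched on a toy one-parameter decay model y(t) = exp(-k t): the Fisher information contributed by an observation at time t is the squared sensitivity of the model output to k. This is an illustrative sketch only, not the authors' algorithm (which handles full longitudinal designs and experimental perturbations):

```python
import math

def sensitivity(t, k):
    # d/dk of y(t) = exp(-k*t): how strongly an observation at time t informs k
    return -t * math.exp(-k * t)

def fisher_info(times, k, sigma=1.0):
    # Scalar Fisher information about k under i.i.d. Gaussian noise
    return sum(sensitivity(t, k) ** 2 for t in times) / sigma ** 2

# pick the single most informative observation time from a candidate grid
candidates = [i / 10 for i in range(1, 51)]
t_star = max(candidates, key=lambda t: fisher_info([t], k=2.0))
# for this model the information t^2 * exp(-2kt) peaks at t = 1/k = 0.5
```

Maximising over whole observation schedules, jointly with perturbation settings, turns this one-line search into the combinatorial design problem the abstract addresses.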
General purpose optimization software for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1990-01-01
The author has developed several general-purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial-grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.
Optimal design of a tidal turbine
NASA Astrophysics Data System (ADS)
Kueny, J. L.; Lalande, T.; Herou, J. J.; Terme, L.
2012-11-01
An optimal design procedure has been applied to improve the design of an open-center tidal turbine. Custom software developed in C++ generates the geometry adapted to the specific constraints imposed on this machine. Automatic scripts based on the AUTOGRID, IGG, FINE/TURBO and CFView software of the NUMECA CFD suite are used to evaluate all the candidate geometries. This package is coupled with the optimization software EASY, which is based on an evolutionary strategy complemented by an artificial neural network. A new technique is proposed to guarantee the robustness of the mesh over the whole range of the design parameters. A significant improvement over the initial geometry was obtained. To limit the total CPU time needed for the optimization process, the geometry of the tidal turbine was treated as axisymmetric, with a uniform upstream velocity. A more complete model (12 M nodes) was then built to analyze the effects related to the sea-bed boundary layer, the proximity of the sea surface, the presence of a large triangular basement supporting the turbine, and a possible incidence of the upstream velocity.
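The optimization loop above couples CFD evaluations to an evolutionary strategy; stripped of the CFD chain and the neural-network surrogate, the core of such a strategy is compact. Below is a minimal (1+1)-evolution strategy on a stand-in objective (the sphere function); the settings and objective are illustrative and are not taken from the EASY software:

```python
import random

def evolve(fitness, x0, sigma=0.5, generations=300, seed=1):
    # (1+1)-evolution strategy: mutate, keep the mutant only if it improves,
    # and slowly anneal the mutation step size
    rng = random.Random(seed)
    x = list(x0)
    fx = fitness(x)
    for _ in range(generations):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = fitness(cand)
        if fc < fx:
            x, fx = cand, fc
        sigma *= 0.99
    return x, fx

# stand-in for the expensive CFD objective: minimum at (1, 2)
sphere = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
best, f_best = evolve(sphere, [5.0, 5.0])
```

In the real design loop each `fitness` call is a full meshing-plus-CFD run, which is exactly why surrogate models are added on top of the raw strategy.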
Analysis and design optimization of flexible pavement
Mamlouk, M.S.; Zaniewski, J.P.; He, W.
2000-04-01
A project-level optimization approach was developed to minimize total pavement cost within an analysis period. Using this approach, the designer is able to select the optimum initial pavement thickness, overlay thickness, and overlay timing. The model in this approach is capable of predicting both pavement performance and condition in terms of roughness, fatigue cracking, and rutting. The developed model combines the American Association of State Highway and Transportation Officials (AASHTO) design procedure and the mechanistic multilayer elastic solution. The Optimization for Pavement Analysis (OPA) computer program was developed using the prescribed approach. The OPA program incorporates the AASHTO equations, the multilayer elastic system ELSYM5 model, and the nonlinear dynamic programming optimization technique. The program is PC-based and can run in either a Windows 3.1 or a Windows 95 environment. Using the OPA program, a typical pavement section was analyzed under different traffic volumes and material properties. The optimum design strategy that produces the minimum total pavement cost in each case was determined. The initial construction cost, overlay cost, highway user cost, and total pavement cost were also calculated. The methodology developed during this research should lead to more cost-effective pavements for agencies adopting the recommended analysis methods.
Taguchi method of experimental design in materials education
NASA Technical Reports Server (NTRS)
Weiser, Martin W.
1993-01-01
Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
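The flavour of the method can be shown with the smallest orthogonal array, L4(2^3), which estimates the main effects of three two-level factors from only four trials. This is a generic classroom illustration; the factor assignments and synthetic responses are made up:

```python
# L4 orthogonal array: 4 trials cover the main effects of 3 two-level factors.
# Columns are pairwise orthogonal, so effects can be estimated independently.
L4 = [(-1, -1, -1), (-1, +1, +1), (+1, -1, +1), (+1, +1, -1)]

def main_effects(responses):
    # Effect of a factor = mean response at its high level minus at its low level
    effects = []
    for f in range(3):
        hi = [y for row, y in zip(L4, responses) if row[f] == +1]
        lo = [y for row, y in zip(L4, responses) if row[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# synthetic responses from y = 10 + 3*A + 1*B (factor C inert, no noise)
ys = [10 + 3 * a + b for a, b, c in L4]
effects = main_effects(ys)   # recovered effects: A = 6, B = 2, C = 0
```

Note the limitation stated in the abstract: with only four runs there are no degrees of freedom left to estimate interactions such as A×B.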
Pareto Optimal Design for Synthetic Biology.
Patanè, Andrea; Santoro, Andrea; Costanza, Jole; Carapezza, Giovanni; Nicosia, Giuseppe
2015-08-01
Recent advances in synthetic biology call for robust, flexible and efficient in silico optimization methodologies. We present a Pareto design approach for the bi-level optimization problem associated with the overproduction of specific metabolites in Escherichia coli. Our method efficiently explores the high-dimensional genetic manipulation space, finding a number of trade-offs between synthetic and biological objectives, hence furnishing deeper biological insight into the addressed problem and important results for industrial purposes. We demonstrate the computational capabilities of our Pareto-oriented approach by comparing it with state-of-the-art heuristics on the overproduction problems of (i) 1,4-butanediol, (ii) myristoyl-CoA, (iii) malonyl-CoA, (iv) acetate and (v) succinate. We show that our algorithms are able to gracefully adapt and scale to more complex models and more biologically relevant simulations of the allowed genetic manipulations. The results obtained for 1,4-butanediol overproduction significantly outperform those previously reported, in terms of both the 1,4-butanediol to biomass formation ratio and the knockout costs: the overproduction percentage is +662.7%, from 1.425 mmol h⁻¹ gDW⁻¹ (wild type) to 10.869 mmol h⁻¹ gDW⁻¹, with a knockout cost of 6. The Pareto-optimal designs we found in the fatty acid optimizations strictly dominate those obtained by the other methodologies, e.g., biomass and myristoyl-CoA exportation improvements of +21.43% (0.17 h⁻¹) and +5.19% (1.62 mmol h⁻¹ gDW⁻¹), respectively, while the CPU time required by our heuristic approach is more than halved. Finally, we implement pathway-oriented sensitivity analysis, epsilon-dominance analysis and robustness analysis to enhance our biological understanding of the problem and to improve the capabilities of the optimization algorithm.
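The nondominated filtering at the heart of any Pareto approach is easy to sketch. Here each candidate design is scored by a (production, knockout cost) pair, with production to be maximised and cost minimised; the numbers are toy values loosely echoing the abstract, not the paper's data:

```python
def dominates(a, b):
    # a = (production, cost): at least as productive, at most as costly, not equal
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def pareto_front(designs):
    # keep every design that no other design dominates
    return [d for d in designs if not any(dominates(e, d) for e in designs)]

# (production in mmol/h/gDW, knockout cost) -- illustrative values
designs = [(1.4, 0), (10.9, 6), (8.0, 6), (6.5, 3)]
front = pareto_front(designs)   # (8.0, 6) is dominated by (10.9, 6)
```

A bi-level method differs from this sketch in how candidates are generated (an inner flux-balance problem per genetic design), but the trade-off filtering step is the same.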
Conceptual design of Fusion Experimental Reactor
NASA Astrophysics Data System (ADS)
Seki, Yasushi; Takatsu, Hideyuki; Iida, Hiromasa
1991-08-01
Safety analysis and evaluation have been made for the FER (Fusion Experimental Reactor) as well as for the ITER (International Thermonuclear Experimental Reactor) which are basically the same in terms of safety. This report describes the results obtained in fiscal years 1988 - 1990, in addition to a summary of the results obtained prior to 1988. The report shows the philosophy of the safety design, safety analysis and evaluation for each of the operation conditions, namely, normal operation, repair and maintenance, and accident. Considerations for safety regulations and standards are also added.
Transonic rotor tip design using numerical optimization
NASA Technical Reports Server (NTRS)
Tauber, Michael E.; Langhi, Ronald G.
1985-01-01
The aerodynamic design procedure for a new blade tip suitable for operation at transonic speeds is illustrated. For the first time, three-dimensional numerical optimization was applied to rotor tip design, using a recent derivative of the ROT22 code, program R22OPT. Program R22OPT utilized an efficient quasi-Newton optimization algorithm. Multiple design objectives were specified. The delocalization of the shock wave was to be eliminated in forward flight for an advance ratio of 0.41 and a tip Mach number of 0.92 at psi = 90 deg. Simultaneously, it was sought to reduce torque requirements while maintaining effective restoring pitching moments. Only the outer 10 percent of the blade span was modified, and the blade area was not to be reduced by more than 3 percent. The goal was to combine the advantages of both sweptback and sweptforward blade tips. A planform that featured inboard sweepback was combined with a sweptforward tip and a taper ratio of 0.5. Initially, the ROT22 code was used to find by trial and error a planform geometry that met the design goals. This configuration had an inboard section with a leading-edge sweep of 20 deg and a tip section swept forward at 25 deg; in addition, the airfoils were modified.
Optimal branching designs in respiratory systems
NASA Astrophysics Data System (ADS)
Park, Keunhwan; Kim, Wonjung; Kim, Ho-Young
2015-11-01
In nature, the size of the flow channels systematically decreases with multiple generations of branching, and a mother branch is ultimately divided into numerous terminal daughters. One important feature of branching designs is an increase in the total cross-sectional area with each generation, which provides more time and area for mass transfer at the terminal branches. However, the expansion of the total cross-sectional area can be costly due to the maintenance of redundant branches or the additional viscous resistance. Accordingly, we expect to find optimal designs in natural branching systems. Here we present two examples of branching designs in respiratory systems: fish gills and human lung airways. Fish gills consist of filaments with well-ordered lamellar structures. By developing a mathematical model of the oxygen transfer rate as a function of the dimensions of fish gills, we demonstrate that the interlamellar distance has been optimized to maximize the oxygen transfer rate. Using the same framework, we examine the diameter reduction ratio in human lung airways, which branch by dichotomy with a systematic reduction of their diameters. Our mathematical model for oxygen transport in the airways enables us to unveil the design principle of human lung airways.
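The underlying optimisation is one-dimensional: a trade-off curve for transfer rate as a function of a spacing parameter, maximised over an interval. A golden-section search on a made-up unimodal trade-off function (not the paper's gill model) illustrates the computation:

```python
import math

def golden_max(f, a, b, tol=1e-6):
    # Golden-section search for the maximum of a unimodal function on [a, b]
    g = (math.sqrt(5) - 1) / 2
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) > f(d):
            b = d          # maximum lies in [a, d]
        else:
            a = c          # maximum lies in [c, b]
    return (a + b) / 2

# toy trade-off: per-channel transport grows with spacing s, extraction
# efficiency decays with it; s**2 * exp(-s) peaks at s = 2 (arbitrary units)
rate = lambda s: s ** 2 * math.exp(-s)
s_star = golden_max(rate, 0.1, 10.0)
```

Any concrete biological model replaces `rate` with a mechanistic expression; the search over the spacing parameter is unchanged.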
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. We discuss a solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2); it consists of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by deriving inductively selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution
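The sensitivity to starting points noted above is easy to demonstrate: plain gradient descent on a double-well objective lands in different local minima depending on where it starts, which is why multi-start (or the symbolic specialization proposed here) is needed. A self-contained toy example, unrelated to the structural-design domain:

```python
def f(x):
    # double-well objective: local minimum near x = 1.35, global near x = -1.47
    return x ** 4 - 4 * x ** 2 + x

def grad(x):
    return 4 * x ** 3 - 8 * x + 1

def descend(x, lr=0.01, steps=5000):
    # plain gradient descent; converges to whichever basin x starts in
    for _ in range(steps):
        x -= lr * grad(x)
    return x

minima = [descend(x0) for x0 in (-2.0, 0.5, 2.0)]
best = min(minima, key=f)   # multi-start keeps the best local solution found
```

Two of the three starts end up in the shallower right-hand basin; only the start at -2.0 reaches the global minimum, illustrating the quality variation the abstract describes.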
Design search and optimization in aerospace engineering.
Keane, A J; Scanlan, J P
2007-10-15
In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams. PMID:17519198
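The probability-of-improvement acquisition used above has a closed form when the surrogate's prediction at a candidate design is Gaussian. A single-objective sketch (the candidate names and numbers are invented, and the multi-objective formulation in the paper is more elaborate):

```python
import math

def prob_improvement(mu, sigma, best):
    # P(Y < best) under Y ~ N(mu, sigma^2), for a minimisation problem
    if sigma == 0.0:
        return 1.0 if mu < best else 0.0
    z = (best - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# surrogate (mean, std) at three candidate designs; best observed value = 10.0
candidates = {"a": (9.0, 1.0), "b": (9.5, 3.0), "c": (11.0, 0.5)}
pick = max(candidates, key=lambda k: prob_improvement(*candidates[k], best=10.0))
```

The acquisition trades off predicted quality against surrogate uncertainty, so cheap surrogate evaluations, not CFD runs, decide where the next expensive analysis is spent.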
Optimizing Advanced Power System Designs Under Uncertainty
Rubin, E.S.; Diwekar; Frey, H.C.
1996-12-31
This paper describes recent developments in ongoing research to develop and demonstrate advanced computer-based methods for dealing with uncertainties that are critical to the design of advanced coal-based power systems. Recent developments include new deterministic and stochastic methods for simulation, optimization, and synthesis of advanced process designs. Results are presented illustrating the use of these new modeling tools for the design and analysis of several advanced systems of current interest to the U.S. Department of Energy, including the technologies of integrated gasification combined cycle (IGCC), advanced pressurized fluid combustion (PFBC), and the externally fired combined cycle (EFCC) process. The new methods developed in this research can be applied generally to any chemical or energy conversion process to reduce the technological risks associated with uncertainties in process performance and cost.
Optimally designing games for behavioural research.
Rafferty, Anna N; Zaharia, Matei; Griffiths, Thomas L
2014-07-01
Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision.
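The framework models a player as acting in a Markov decision process; the standard value-iteration computation on which such models rest can be sketched directly. This is a generic textbook routine on an invented two-state game, not the authors' inference code:

```python
def value_iteration(P, R, gamma=0.9, iters=200):
    # P[s][a] = list of (probability, next_state); R[s][a] = expected reward
    V = [0.0] * len(P)
    for _ in range(iters):
        V = [max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                 for a in range(len(P[s])))
             for s in range(len(P))]
    return V

# two-state toy: in state 0 the player can wait (no reward) or answer,
# which pays 1 and moves to the absorbing state 1
P = [[[(1.0, 0)], [(1.0, 1)]], [[(1.0, 1)]]]
R = [[0.0, 1.0], [0.0]]
V = value_iteration(P, R)   # V[0] -> 1.0, V[1] -> 0.0
```

In the paper's setting the rewards depend on the player's (unknown) cognitive parameters, and observed action choices are inverted to estimate those parameters.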
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps; (2) the UTCHEM reservoir simulator to perform the numerical simulations; and (3) an economic model that automatically imports the simulated production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables, and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
A Framework to Design and Optimize Chemical Flooding Processes
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2006-08-31
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps; (2) the UTCHEM reservoir simulator to perform the numerical simulations; and (3) an economic model that automatically imports the simulated production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data; the modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables, and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
Involving students in experimental design: three approaches.
McNeal, A P; Silverthorn, D U; Stratton, D B
1998-12-01
Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.
Bioinspiration: applying mechanical design to experimental biology.
Flammang, Brooke E; Porter, Marianne E
2011-07-01
The production of bioinspired and biomimetic constructs has fostered much collaboration between biologists and engineers, although the extent of biological accuracy employed in the designs produced has not always been a priority. Even the exact definitions of "bioinspired" and "biomimetic" differ among biologists, engineers, and industrial designers, leading to confusion regarding the level of integration and replication of biological principles and physiology. By any name, biologically-inspired mechanical constructs have become an increasingly important research tool in experimental biology, offering the opportunity to focus research by creating model organisms that can be easily manipulated to fill a desired parameter space of structural and functional repertoires. Innovative researchers with both biological and engineering backgrounds have found ways to use bioinspired models to explore the biomechanics of organisms from all kingdoms to answer a variety of different questions. Bringing together these biologists and engineers will hopefully result in an open discourse of techniques and fruitful collaborations for experimental and industrial endeavors.
Multidisciplinary design optimization for sonic boom mitigation
NASA Astrophysics Data System (ADS)
Ozcer, Isik A.
product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal while keeping the drag penalty within feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, and non-linear span-wise twist and camber distribution. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and the wings are optimized to demonstrate sonic boom mitigation for a real aircraft. The final results predict a 42% reduction in bow shock strength, 17% reduction in peak Δp, 22% reduction in pressure impulse, 10% reduction in footprint size, 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.
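Quadratic response-surface modeling of the kind used above reduces to least-squares fitting of a polynomial to the DoE runs. A one-variable sketch using the normal equations (illustrative only; the aircraft study fits multi-variable surfaces to CFD results):

```python
def fit_quadratic(xs, ys):
    # Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations
    X = [[1.0, x, x * x] for x in xs]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system A*beta = b
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            fac = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= fac * A[col][c]
            b[r] -= fac * b[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta

# noise-free runs from y = 1 + 2x + 3x^2 at five design points
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [1 + 2 * x + 3 * x * x for x in xs]
beta = fit_quadratic(xs, ys)
# the fitted surface's stationary point sits at x = -beta[1] / (2 * beta[2])
```

Once the cheap surface is fitted, the optimizer searches it instead of re-running the expensive simulation at every iterate.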
ODIN: Optimal design integration system. [reusable launch vehicle design
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.
1975-01-01
The report provides a summary of the Optimal Design Integration (ODIN) System as it exists at Langley Research Center. A discussion of the ODIN System, the executive program and the data base concepts are presented. Two examples illustrate the capabilities of the system which have been exploited. Appended to the report are a summary of abstracts for the ODIN library programs and a description of the use of the executive program in linking the library programs.
An Optimal Pulse System Design by Multichannel Sensors Fusion.
Wang, Dimin; Zhang, David; Lu, Guangming
2016-03-01
Pulse diagnosis, recognized as an important branch of traditional Chinese medicine (TCM), has a long history in health diagnosis. Certain features of the pulse are known to be related to physiological status and have been identified as biomarkers. In recent years, electronic equipment has been designed to obtain the valuable information contained in the pulse. A single-point pulse acquisition platform has the benefits of low cost and flexibility, but is time-consuming to operate and not standardized in pulse location. A pulse system with a single type of sensor is easy to implement, but is limited in extracting sufficient pulse information. This paper proposes a novel system with an optimal design specialized for pulse diagnosis. We combine a pressure sensor with a photoelectric sensor array to form a multichannel sensor-fusion structure. Then, the optimal pulse signal processing methods and sensor fusion strategy are introduced for feature extraction. Finally, the developed optimal pulse system and methods are tested on a pulse database acquired from healthy subjects and from patients known to have diabetes. The experimental results indicate that the classification accuracy is increased significantly under the optimal design and demonstrate that the developed pulse system with multichannel sensor fusion is more effective than previous pulse acquisition platforms. PMID:25608317
Tuning Complex Computer Codes to Data and Optimal Designs
NASA Astrophysics Data System (ADS)
Park, Jeong Soo
Modern scientific researchers often use complex computer simulation codes for theoretical investigations. We model the response of computer simulation code as the realization of a stochastic process. This approach, design and analysis of computer experiments (DACE), provides a statistical basis for analysing computer data, for designing experiments for efficient prediction and for comparing computer-encoded theory to experiments. An objective of research in a large class of dynamic systems is to determine any unknown coefficients in a theory. The coefficients can be determined by "tuning" the computer model to the real data so that the tuned code gives a good match to the real experimental data. Three design strategies for computer experiments are considered: data-adaptive sequential A-optimal design, maximum entropy design and optimal Latin-hypercube design. The following "code tuning" methodologies are proposed: nonlinear least squares, joint MLE, "separated" joint MLE and a Bayesian method. The performance of these methods has been studied in several toy models. In the application to nuclear fusion devices, a cheaper emulator of the simulation code (BALDUR) has been constructed, and the transport coefficients were estimated from data of two tokamaks (ASDEX and PDX). Tuning complex computer codes to data using some statistical estimation methods and a cheap emulator of the code along with careful designs of computer experiments, with applications to nuclear fusion devices, is the topic of this thesis.
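Of the three design strategies mentioned, the Latin-hypercube design is the simplest to sketch: each of the n equal-width bins along every input dimension receives exactly one sample point. A minimal implementation (generic, not the thesis code, which additionally optimizes the hypercube):

```python
import random

def latin_hypercube(n, d, seed=0):
    # n points in [0, 1)^d, stratified in every dimension: along each axis,
    # each of the n equal-width bins contains exactly one point
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)               # random assignment of points to bins
        cols.append([(cell + rng.random()) / n for cell in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = latin_hypercube(10, 3)
```

Such space-filling designs are what make a cheap emulator of an expensive code (like BALDUR here) accurate from few runs.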
Design, optimization, and control of tensegrity structures
NASA Astrophysics Data System (ADS)
Masic, Milenko
The contributions of this dissertation may be divided into four categories. The first category involves developing a systematic form-finding method for general and symmetric tensegrity structures. As an extension of the available results, different shape constraints are incorporated in the problem. Methods for treatment of these constraints are considered and proposed. A systematic formulation of the form-finding problem for symmetric tensegrity structures is introduced, and it uses the symmetry to reduce both the number of equations and the number of variables in the problem. The equilibrium analysis of modular tensegrities exploits their peculiar symmetry. The tensegrity similarity transformation completes the contributions in the area of enabling tools for tensegrity form-finding. The second group of contributions develops the methods for optimal mass-to-stiffness-ratio design of tensegrity structures. This technique represents the state-of-the-art for the static design of tensegrity structures. It is an extension of the results available for the topology optimization of truss structures. Besides guaranteeing that the final design satisfies the tensegrity paradigm, the problem constrains the structure from different modes of failure, which makes it very general. The open-loop control of the shape of modular tensegrities is the third contribution of the dissertation. This analytical result offers a closed form solution for the control of the reconfiguration of modular structures. Applications range from the deployment and stowing of large-scale space structures to the locomotion-inducing control for biologically inspired structures. The control algorithm is applicable regardless of the size of the structures, and it represents a very general result for a large class of tensegrities. Controlled deployments of large-scale tensegrity plates and tensegrity towers are shown as examples that demonstrate the full potential of this reconfiguration strategy. The last
Inter occasion variability in individual optimal design.
Kristoffersson, Anders N; Friberg, Lena E; Nyberg, Joakim
2015-12-01
Inter occasion variability (IOV) is of importance to consider in the development of a design where individual pharmacokinetic or pharmacodynamic parameters are of interest. IOV may adversely affect the precision of maximum a posteriori (MAP) estimated individual parameters, yet the influence of including IOV in optimal design for estimation of individual parameters has not been investigated. In this work two methods of including IOV in the maximum a posteriori Fisher information matrix (FIM_MAP) are evaluated: (i) MAP_occ, where the IOV is included as a fixed effect deviation per occasion and individual, and (ii) POP_occ, where the IOV is included as an occasion random effect. Sparse sampling schedules were designed for two test models and compared to a scenario where IOV is ignored, either by omitting known IOV (Omit) or by mimicking a situation where unknown IOV has inflated the IIV (Inflate). Accounting for IOV in the FIM_MAP markedly affected the designs compared to ignoring IOV and, as evaluated by stochastic simulation and estimation, resulted in superior precision of the individual parameters. In addition, MAP_occ and POP_occ accurately predicted precision and shrinkage. For the investigated designs, the MAP_occ method was on average slightly superior to POP_occ and was less computationally intensive. PMID:26452548
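A minimal numeric sketch of the MAP information idea in the abstract above (not the authors' FIM_MAP implementation): for a one-parameter exponential elimination model, the MAP Fisher information adds the population prior's information to the data information, FIM_MAP = J'R⁻¹J + Ω⁻¹, so candidate sparse sampling schedules can be ranked before any data are collected. All model constants below are invented for illustration.

```python
import numpy as np

def map_fim(times, k, dose=100.0, sigma=0.5, omega=0.1):
    """MAP Fisher information for the elimination rate k in
    y(t) = dose * exp(-k * t) with additive residual error (sd sigma)
    and a population prior with variance omega."""
    grad = -dose * times * np.exp(-k * times)          # dy/dk at each time
    data_info = float((grad ** 2).sum()) / sigma ** 2  # J' R^-1 J (scalar)
    prior_info = 1.0 / omega                           # Omega^-1 (prior)
    return data_info + prior_info

# Rank a two-sample sparse schedule against a richer reference schedule.
sparse = np.array([1.0, 8.0])
rich = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
```

Higher information means better expected precision of the individual estimate; the richer schedule unsurprisingly dominates, and the prior term 1/omega is the floor that MAP estimation retains even with no samples.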
Multi-Disciplinary Design Optimization Using WAVE
NASA Technical Reports Server (NTRS)
Irwin, Keith
2000-01-01
develop an associative control structure (framework) in the UG WAVE environment enabling multi-disciplinary design of turbine propulsion systems. The capabilities of WAVE were evaluated to assess its use as a rapid optimization and productivity tool. This project also identified future WAVE product enhancements that will make the tool still more beneficial for product development.
Optimal design of a hybridization scheme with a fuel cell using genetic optimization
NASA Astrophysics Data System (ADS)
Rodriguez, Marco A.
The fuel cell is one of the most dependable "green power" technologies, readily available for immediate application. It enables direct conversion of hydrogen and other gases into electric energy without polluting the environment. However, efficient power generation is a strictly steady-state process that cannot operate in a dynamic environment. Consequently, a fuel cell becomes practical only within a specially designed hybridization scheme capable of power storage and power management. The resultant technology can be utilized to its full potential only when both the fuel cell element and the entire hybridization scheme are optimally designed. Design optimization in engineering is among the most complex computational tasks due to the multidimensionality, nonlinearity, discontinuity, and constraints of the underlying optimization problem. This research aims at the optimal utilization of fuel cell technology through the use of genetic optimization and advanced computing. This study implements genetic optimization in the definition of optimum hybridization rules for a PEM fuel cell/supercapacitor power system. PEM fuel cells exhibit high energy density but are not intended for pulsating power-draw applications. They work better in steady-state operation and are thus often hybridized. In a hybrid system, the fuel cell provides power during steady-state operation while capacitors or batteries augment the power of the fuel cell during power surges. Capacitors and batteries can also be recharged when the motor is acting as a generator. Making analogies to driving cycles, three hybrid-system operating modes are investigated: 'Flat' mode, 'Uphill' mode, and 'Downhill' mode. In the process of discovering the switching rules for these three modes, we also generate a model of a 30 W PEM fuel cell. This study also proposes the optimum design of a 30 W PEM fuel cell. The PEM fuel cell model and the hybridization switching rules are postulated
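The genetic-optimization idea described above can be sketched as follows. This is a toy illustration under invented assumptions (a single-threshold switching rule, an arbitrary demand profile, and a made-up fitness that rewards steady fuel-cell operation), not the dissertation's actual model:

```python
import random

DEMAND = [20, 22, 35, 60, 25, 21, 55, 30]  # toy power-demand profile [W]
FC_MAX = 30.0                              # fuel cell steady rating [W]

def fitness(threshold):
    """Penalize fuel-cell load swings plus supercapacitor usage.
    Demand below the threshold goes to the fuel cell; surges above it
    are served by the supercapacitor."""
    penalty = 0.0
    for p in DEMAND:
        fc = min(p, threshold)
        penalty += abs(fc - FC_MAX)            # deviation from steady point
        if p > threshold:
            penalty += 0.1 * (p - threshold)   # supercapacitor usage cost
    return -penalty

def evolve(pop_size=20, gens=40, seed=1):
    """Simple GA: rank, keep the top half, breed by averaging crossover
    plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(10, 60) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [0.5 * (rng.choice(parents) + rng.choice(parents))
                    + rng.gauss(0, 1.0)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

With this demand profile the fitness peaks near the fuel cell's 30 W rating, and the GA converges there; a real hybridization study would evolve a full rule set per driving mode rather than one scalar.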
Design optimization of functionally graded dental implant.
Hedia, H S; Mahmoud, Nemat-Alla
2004-01-01
The continuous increase in the human life span and the growing confidence in using artificial materials inside the human body necessitate the introduction of more effective prosthesis and implant materials. However, no artificial implant has biomechanical properties equivalent to the original tissue. Recently, titanium and bioceramic materials such as hydroxyapatite have been extensively used as fabrication materials for dental implants due to their high compatibility with hard tissue and living bone. Titanium has reasonable stiffness and strength, while hydroxyapatite has low stiffness, low strength, and a high ability to reach full integration with living bone. To obtain good dental implantation of the biomaterial, full integration of the implant with living bone should be achieved. Minimum stresses in the implant and the bone must also be achieved to increase the life of the implant and prevent bone resorption. Therefore, the aim of the current investigation is to design an implant made from a functionally graded material (FGM) that achieves the above advantages. The finite element method and an optimization technique are used to reach the required implant design. The optimal materials for the FGM dental implant are found to be hydroxyapatite/titanium. The investigations have shown that the maximum stress in the bone for the hydroxyapatite/titanium FGM implant is reduced by about 22% and 28% compared to currently used titanium and stainless steel dental implants, respectively.
Experimental reversion of the optimal quantum cloning and flipping processes
Sciarrino, Fabio; Secondi, Veronica; De Martini, Francesco
2006-04-15
The quantum cloner machine maps an unknown arbitrary input qubit into two optimal clones and one optimal flipped qubit. By combining linear and nonlinear optical methods we experimentally implement a scheme that, after the cloning transformation, restores the original input qubit in one of the output channels, by using local measurements, classical communication, and feedforward. This nonlocal method demonstrates how the information on the input qubit can be restored after the cloning process. The realization of the reversion process is expected to find useful applications in the field of modern multipartite quantum cryptography.
Fuel optimal control of an experimental multi-mode system
NASA Technical Reports Server (NTRS)
Redmond, J.; Mayer, J. L.; Silverberg, L.
1992-01-01
In this paper, the dynamic characteristics associated with the fuel optimal control of a harmonic oscillator are utilized in the development of a near fuel optimal feedback control strategy for spacecraft vibration suppression. In this scheme, single level thrust actuators are governed by recursive computations of the standard deviations of displacement and velocity at the actuator's locations. The algorithm was tested on an experimental structure possessing a significant number of flexible body modes. The structure's response to both single and multiple mode excitation is presented.
Optimal design of a touch trigger probe
NASA Astrophysics Data System (ADS)
Li, Rui-Jun; Xiang, Meng; Fan, Kuang-Chao; Zhou, Hao; Feng, Jian
2015-02-01
A tungsten stylus with a ruby ball tip was screwed into a floating plate, which was supported by four leaf springs. The displacement of the tip caused by the contact force in 3D could be transferred into the tilt or vertical displacement of a plane mirror mounted on the floating plate. A quadrant photo detector (QPD) based two dimensional angle sensor was used to detect the tilt or the vertical displacement of the plane mirror. The structural parameters of the probe are optimized for equal sensitivity and equal stiffness in a displacement range of +/-5 μm, and a restricted horizontal size of less than 40 mm. Simulation results indicated that the stiffness was less than 0.6 mN/μm and equal in 3D. Experimental results indicated that the probe could be used to achieve a resolution of 1 nm.
Optimality criteria: A basis for multidisciplinary design optimization
NASA Astrophysics Data System (ADS)
Venkayya, V. B.
1989-01-01
This paper presents a generalization of what is frequently referred to in the literature as the optimality criteria approach in structural optimization. This generalization includes a unified presentation of the optimality conditions, the Lagrangian multipliers, and the resizing and scaling algorithms in terms of the sensitivity derivatives of the constraint and objective functions. The by-product of this generalization is the derivation of a set of simple nondimensional parameters which provides significant insight into the behavior of the structure as well as the optimization algorithm. A number of important issues, such as, active and passive variables, constraints and three types of linking are discussed in the context of the present derivation of the optimality criteria approach. The formulation as presented in this paper brings multidisciplinary optimization within the purview of this extremely efficient optimality criteria approach.
Fourier transform spectrometer optimal design considerations
NASA Astrophysics Data System (ADS)
Macoy, Norman H.
1999-10-01
The systems engineering aspects of evolving and developing the optimal design for Fourier transform interferometers are presented in this paper. A Fourier transform spectrometer (FTS) is a versatile electro-optical sensor for remote sensing, hyperspectral imaging, and laboratory chemical kinetics. Principal features include broad spectral coverage, high spectral resolution (Fellgett advantage), and high throughput (Jacquinot advantage). Due to its versatility across various requirements (e.g., resolution, bandwidth, and aperture), the sensor architecture contains an N-dimensional parametric trade matrix that needs to be readily assessed. Specifically considered are the logical steps utilized to flow down primary (customer) requirements and specifications to secondary (derived) requirements. Configurational aspects, generic trades, and parametric selections are emphasized for non-imagers as well as for imaging FTS. With an appropriately designed robust sensor, the noise equivalent spectral radiance (NEΔN) performance will be largely dictated by the scene and the instrument background flux, not by noise terms associated with interferogram encoding and signal handling. The mathematical formalism of interferometric error source types and photon-limited design expressions is presented. The composition of these expressions is examined from the points of view of optical band limiting and some useful trade rules parametrically relating scan time and S/N to spectral resolution. For a well designed and executed interferometer, typical performance data are presented in terms of modulation index, calibrated radiometric atmospheric spectral signatures, and atmospheric spectral signatures for two spectral resolutions.
CFD based draft tube hydraulic design optimization
NASA Astrophysics Data System (ADS)
McNabb, J.; Devals, C.; Kyriacou, S. A.; Murry, N.; Mullins, B. F.
2014-03-01
The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis, using a
Design of an optimal snow observation network to estimate snowpack
NASA Astrophysics Data System (ADS)
Juan Collados Lara, Antonio; Pardo-Iguzquiza, Eulogio; Pulido-Velazquez, David
2016-04-01
Snow is an important water resource in many river basins that must be taken into account in hydrological modeling. Although the snow cover area can nowadays be estimated from satellite data, the snowpack thickness must be estimated from experimental data by using an interpolation procedure or hydrological models that approximate snow accumulation and melt processes. The experimental data consist of hand probes and snow samples collected at a given number of locations that constitute the monitoring network. Assuming that there is an existing monitoring network, its optimization may imply selecting an optimal network as a subset of the existing network (reduction of the existing network when there are no funds to maintain it in full) or augmenting the existing network by one or more stations (the optimal augmentation problem). In this work we propose a multicriterion approach for the optimal design of a snow network. The criteria include the estimation variance from a regression kriging approach for estimating the thickness of the snowpack (using ground and satellite data), the total snow volume, and accessibility. We have also proposed a procedure to analyze the sensitivity of the results to the non-snow data deduced from the satellite information. We intend to minimize the uncertainties in snowpack estimation. The methodology has been applied to the estimation of the snow cover area and the design of the optimal snow observation network in the Sierra Nevada mountain range in southern Spain. Acknowledgments: This research has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the ERHIN program and NASA DAAC for the data provided for this study.
Compact low field magnetic resonance imaging magnet: Design and optimization
NASA Astrophysics Data System (ADS)
Sciandrone, M.; Placidi, G.; Testa, L.; Sotgiu, A.
2000-03-01
Magnetic resonance imaging (MRI) is performed with a very large instrument that allows the patient to be inserted into a region of uniform magnetic field. The field is generated either by an electromagnet (resistive or superconductive) or by a permanent magnet. Electromagnets are designed as air-cored solenoids of cylindrical symmetry, with an inner bore of 80-100 cm in diameter. In clinical analysis of peripheral regions of the body (legs, arms, foot, knee, etc.) it would be better to adopt much less expensive magnets, leaving the most expensive instruments to applications that require the insertion of the patient into the magnet (head, thorax, abdomen, etc.). These "dedicated" apparatuses could be smaller and based on resistive magnets that are manufactured and operated at very low cost, particularly if they utilize an iron yoke to reduce power requirements. In order to obtain good field uniformity without the use of a set of shimming coils, we propose both a particular construction of a dedicated magnet, using four independently controlled pairs of coils, and an optimization-based strategy for computing, a posteriori, the optimal current values. The optimization phase can be viewed as a low-cost shimming procedure for obtaining the desired magnetic field configuration. Experimental measurements confirming the effectiveness of the proposed approach (construction and optimization) are also reported. In particular, it is shown that the proposed optimization-based strategy achieves good uniformity of the magnetic field over about one fourth of the magnet length and about one half of its bore. On the basis of these good experimental results, the dedicated magnet can be used for MRI of peripheral regions of the body and for animal experimentation at very low cost.
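The a-posteriori current-optimization idea above can be sketched as a linear least-squares problem (this is not the authors' code): with four independently controlled coil pairs, the axial field is linear in the currents, B(z) = Σⱼ aⱼ(z) Iⱼ, so the currents that best approximate a uniform target field follow directly. The per-ampere field profiles aⱼ below are invented Gaussians standing in for measured or computed coil responses.

```python
import numpy as np

z = np.linspace(-0.5, 0.5, 41)                       # axial positions [m]
centers = [-0.3, -0.1, 0.1, 0.3]                     # coil-pair centers [m]
A = np.column_stack([np.exp(-((z - c) / 0.25) ** 2)  # toy field-per-ampere
                     for c in centers])              # profile of each pair
b_target = np.full_like(z, 0.1)                      # desired uniform 0.1 T

# Optimal currents minimizing the field deviation in the least-squares sense.
I_opt, *_ = np.linalg.lstsq(A, b_target, rcond=None)
residual = A @ I_opt - b_target                      # remaining non-uniformity
```

In a real magnet the columns of A would come from field measurements at unit current in each pair, which is what makes the shimming "a posteriori" and low-cost.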
Arabi, Maryam; Ostovan, Abbas; Ghaedi, Mehrorang; Purkait, Mihir K
2016-07-01
This study discusses a novel and simple method for the preparation of magnetic dummy molecularly imprinted nanoparticles (MDMINPs). First, Fe3O4 magnetic nanoparticles (MNPs) were synthesized as the magnetic component. Subsequently, MDMINPs were constructed via the sol-gel strategy using APTMS as the functional monomer, urethane as a dummy template to avoid residual template, and TEOS as the cross-linker. The prepared MDMINPs were used for the pre-concentration of acrylamide from potato chips. Quantification was carried out by high performance liquid chromatography with UV detection (HPLC-UV). The impact of influential variables such as pH, amount of sorbent, sonication time and eluent volume was investigated and optimized using a central composite design. The particles had excellent magnetic properties and high selectivity toward the target molecule. Under optimized conditions, the recovery ranged from 94.0% to 98.0% with a detection limit of 0.35 µg kg(-1). PMID:27154710
Development of prilling process for biodegradable microspheres through experimental designs.
Fabien, Violet; Minh-Quan, Le; Michelle, Sergent; Guillaume, Bastiat; Van-Thanh, Tran; Marie-Claire, Venier-Julienne
2016-02-10
The prilling process proposes a microparticle formulation easily transferable to the pharmaceutical production, leading to monodispersed and highly controllable microspheres. PLGA microspheres were used for carrying an encapsulated protein and adhered stem cells on its surface, proposing a tool for regeneration therapy against injured tissue. This work focused on the development of the production of PLGA microspheres by the prilling process without toxic solvent. The required production quality needed a complete optimization of the process. Seventeen parameters were studied through experimental designs and led to an acceptable production. The key parameters and mechanisms of formation were highlighted. PMID:26656302
On the proper study design applicable to experimental balneology
NASA Astrophysics Data System (ADS)
Varga, Csaba
2016-08-01
The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies apply the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm would be highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.
A Statistical Approach to Optimizing Concrete Mixture Design
Alghamdi, Saeid A.
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
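The workflow in this abstract, a coded 3³ full factorial followed by a polynomial regression for compressive strength, can be sketched as below. The response values are simulated from an invented relation, not the paper's measurements, so only the mechanics (design generation and least-squares fit) carry over.

```python
import itertools
import numpy as np

# Full 3^3 factorial in coded levels (-1, 0, +1): 27 runs.
levels = [-1.0, 0.0, 1.0]
X = np.array(list(itertools.product(levels, repeat=3)))
w, c, f = X.T   # coded w/cm ratio, cementitious content, fine/total ratio

# Simulated strengths from an invented true model plus noise (MPa).
rng = np.random.default_rng(0)
strength = 40 - 6 * w + 3 * c + 1.5 * f - 2 * w * c + rng.normal(0, 0.5, 27)

# Polynomial model: intercept, main effects, and one interaction term.
D = np.column_stack([np.ones(27), w, c, f, w * c])
beta, *_ = np.linalg.lstsq(D, strength, rcond=None)
```

The fitted coefficients recover the simulated effects closely; in the real study the same fit would be followed by ANOVA and optimization over the coded factor space.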
Optimizing Monitoring Designs under Alternative Objectives
Gastelum, Jason A.; Porter, Ellen A.
2014-12-31
This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage, the second more conservative objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes into account uncertainty by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.
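The search strategy described above can be sketched as follows; this is a hedged toy illustration, not the NRAP tool. It selects K monitoring locations out of N candidates by simulated annealing with swap mutations, minimizing the expected time to first leak detection over a set of realizations. The detection-time table is invented random data standing in for simulated leakage realizations.

```python
import math
import random

rng = random.Random(0)
N, R, K = 12, 8, 3
# detect[r][i]: (invented) time at which a sensor at candidate location i
# would detect leakage realization r.
detect = [[rng.uniform(1, 50) for _ in range(N)] for _ in range(R)]

def expected_detection_time(design):
    """Objective: mean over realizations of the earliest detection time."""
    return sum(min(row[i] for i in design) for row in detect) / R

def anneal(steps=2000, temp0=5.0):
    design = set(rng.sample(range(N), K))
    cost = expected_detection_time(design)
    best, best_cost = set(design), cost
    for step in range(steps):
        temp = temp0 * (1.0 - step / steps) + 1e-9   # linear cooling
        # Mutate: swap one selected location for an unselected one.
        cand = set(design)
        cand.remove(rng.choice(sorted(cand)))
        cand.add(rng.choice([i for i in range(N) if i not in cand]))
        new_cost = expected_detection_time(cand)
        # Metropolis acceptance: always take improvements, sometimes worse.
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
            design, cost = cand, new_cost
            if cost < best_cost:
                best, best_cost = set(design), cost
    return best, best_cost

best_design, best_time = anneal()
```

The paper's more conservative objective would replace the mean with the maximum over realizations inside `expected_detection_time`, leaving the annealing loop unchanged.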
Optimal Ground Source Heat Pump System Design
Ozbek, Metin; Yavuzturk, Cy; Pinder, George
2015-04-15
Despite the facts that GSHPs first gained popularity as early as the 1940s and that they can achieve 30 to 60 percent energy savings and carbon emission reductions relative to conventional HVAC systems, the use of geothermal energy in the U.S. has been less than 1 percent of total energy consumption. The key barriers preventing this technically mature technology from reaching its full commercial potential have been its high installation cost and limited consumer knowledge of and trust in GSHP systems to deliver the technology cost-effectively in the market place. Led by ENVIRON, with support from the University of Hartford and the University of Vermont, the team developed and tested a software-based decision-making tool ('OptGSHP') for the least-cost design of ground-source heat pump ('GSHP') systems. OptGSHP combines state-of-the-art optimization algorithms with GSHP-specific HVAC and groundwater flow and heat transport simulation. The particular strength of OptGSHP is in integrating heat transport due to groundwater flow into the design, for which most GSHP designs do not take credit and are therefore overdesigned.
Damage localization using experimental modal parameters and topology optimization
NASA Astrophysics Data System (ADS)
Niemann, Hanno; Morlier, Joseph; Shahdin, Amir; Gourinat, Yves
2010-04-01
This work focuses on the development of a damage detection and localization tool using the topology optimization feature of MSC.Nastran. The approach is based on the correlation between a local stiffness loss and the change in modal parameters due to damage in structures. The loss in stiffness is accounted for by the topology optimization approach, which updates undamaged numerical models towards similar models with embedded damage. Hereby, only a mass penalization and the changes in experimentally obtained modal parameters are used as objectives. The theoretical background for the implementation of this method is derived and programmed in a Nastran input file, and the general feasibility of the approach is validated numerically as well as experimentally by updating a model of an experimentally tested composite laminate specimen. The damage was introduced to the specimen by controlled low-energy impacts, and high-quality vibration tests were conducted on the specimen for different levels of damage. These supervised experiments allow the numerical diagnosis tool to be tested by comparing its results with both NDT techniques and the results of previous work (concerning shifts in modal parameters due to damage). Good results have been achieved for the localization of the damage by topology optimization.
Optimal design of a piezoelectric coupled beam for power harvesting
NASA Astrophysics Data System (ADS)
Wang, Quan; Wu, Nan
2012-08-01
An optimal design of a beam structure coupled with a piezoelectric patch is developed to achieve high efficiency of power transformation from a mechanical input to an electrical output power for the application of power harvesting. The power harvesting is realized by generating an electric voltage from the electromechanical coupling effect by the piezoelectric patch when the host beam is subjected to a dynamic loading. The electric power is then collected by a capacitor through an alternating current (AC)/direct current (DC) converter circuit (diode bridge circuit). To describe the power-harvesting process, a numerical model is developed to calculate the output voltage from the piezoelectric patch, the charge on the capacitor and the power-harvesting efficiency of the piezoelectric coupled vibrating beam. In addition, an experimental study is conducted to measure the generated voltage on the piezoelectric patch to verify the proposed numerical model. The effects of the excitation angular frequency and the patch size and location on the power-harvesting efficiency are discussed for the optimal design. The research provides a guideline for optimal designs of power-harvesting devices made of piezoelectric beam structures.
Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells
Chiaro, Christopher R; May, Tobias
2014-01-01
and have limited preliminary data from several pilot experiments. Cell growth and DNA sequence data indicate that we have identified a cell clone that exhibits several suitable characteristics, although further study is required to identify a more optimal cell clone. Conclusions: The experimental approach is based on a quantum biological model of basis-dependent selection describing a novel mechanism of adaptive mutation. This project is currently inactive due to lack of funding. However, consistent with the objective of early reports, we describe a proposed study that has not produced publishable results but is worthy of report because of its hypothesis, experimental design, and protocols. We outline the project's rationale and experimental design, with their strengths and weaknesses, to stimulate discussion and analysis and lay the foundation for future studies in this field. PMID:25491410
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
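The key property exploited above, that the kriging standard error depends only on sample locations and the covariance model, never on the measured values, can be illustrated with ordinary kriging (a simpler cousin of the universal kriging the paper uses). The exponential covariance and all coordinates below are invented; the point is that candidate designs can be ranked without collecting any data.

```python
import numpy as np

def cov(h, sill=1.0, rang=30.0):
    """Invented isotropic exponential covariance model."""
    return sill * np.exp(-h / rang)

def kriging_variance(samples, target):
    """Ordinary kriging estimation variance at `target` given only the
    sample coordinates (n x 2) and the covariance model."""
    d = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    n = len(samples)
    # OK system: covariances bordered by the unbiasedness constraint.
    K = np.ones((n + 1, n + 1))
    K[-1, -1] = 0.0
    K[:n, :n] = cov(d)
    k = np.ones(n + 1)
    k[:n] = cov(np.linalg.norm(samples - target, axis=1))
    w = np.linalg.solve(K, k)          # weights plus Lagrange multiplier
    return cov(0.0) - w @ k

# Rank two candidate augmentations of a three-well network.
wells = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0]])
target = np.array([20.0, 20.0])
far_design = np.vstack([wells, [100.0, 100.0]])
near_design = np.vstack([wells, [20.0, 25.0]])
```

Placing the extra sample near the target drives the estimation variance down far more than a remote sample does, which is exactly the index the paper's design procedure minimizes over the whole domain.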
Space tourism optimized reusable spaceplane design
NASA Astrophysics Data System (ADS)
Penn, Jay P.; Lindley, Charles A.
1997-01-01
Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg) or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown.
Three Program Architecture for Design Optimization
NASA Technical Reports Server (NTRS)
Miura, Hirokazu; Olson, Lawrence E. (Technical Monitor)
1998-01-01
In this presentation, I would like to review a historical perspective on the program architectures used to build design optimization capabilities based on mathematical programming and other numerical search techniques. It is rather straightforward to classify the program architectures into three categories as shown above. However, the relative importance of each of the three approaches has not been static; it has changed dynamically as the capabilities of available computational resources have increased. For example, we once considered that the direct coupling architecture would never be used for practical problems, but the availability of computer systems such as multi-processor machines has changed that assessment. In this presentation, I would like to review the roles of the three architectures from historical as well as current and future perspectives. There may also be some possibility for the emergence of hybrid architectures. I hope to provide some seeds for active discussion of where we are heading in this very dynamic environment for high-speed computing and communication.
Optimal design of robot accuracy compensators
Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)
1993-12-01
The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
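The additive joint correction described above follows the damped least-squares update dq = (J^T W J + damping^2 I)^(-1) J^T W dx, which stays finite even at singular configurations; the weight matrix W plays the role of the performance-index weights that trade position against orientation accuracy. A sketch (the Jacobians and numbers below are illustrative, not from the paper):

```python
import numpy as np

def dls_correction(J, dx, damping=0.05, W=None):
    """Damped least-squares joint correction dq minimizing
    ||J dq - dx||_W^2 + damping^2 ||dq||^2.

    The damping term keeps the solve well-posed when J is singular,
    so corrections can be computed at singular configurations too."""
    m, n = J.shape
    W = np.eye(m) if W is None else W
    A = J.T @ W @ J + damping ** 2 * np.eye(n)  # normal equations
    return np.linalg.solve(A, J.T @ W @ dx)
```

With near-zero damping and a well-conditioned Jacobian this reduces to the ordinary least-squares correction; increasing the damping trades tracking accuracy for robustness near singularity zones, matching the trade-off discussed in the abstract.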
Set membership experimental design for biological systems
2012-01-01
Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our
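A toy illustration of the bounded-error candidate-ranking idea described above, assuming a one-parameter exponential-decay model whose output is monotone in the parameter (so the set-based envelope reduces to interval endpoints; all numbers are hypothetical):

```python
import numpy as np

def output_envelope(k_box, x0, times):
    """Bounds on x(t) = x0*exp(-k*t) for k inside the box [k_lo, k_hi].

    The model is monotone in k, so evaluating the two endpoints gives
    the exact envelope; a toy stand-in for interval propagation."""
    k_lo, k_hi = k_box
    t = np.asarray(times, float)
    return x0 * np.exp(-k_hi * t), x0 * np.exp(-k_lo * t)

def best_candidate(k_box, x0, times):
    """Rank candidate sampling times by predicted envelope width: the
    widest range is where a bounded-error measurement can prune the
    consistent parameter set the most."""
    lo, hi = output_envelope(k_box, x0, times)
    return times[int(np.argmax(hi - lo))]
```

The real framework propagates full parameter sets through nonlinear dynamics with interval analysis, but the ranking principle, measure where the consistent outputs disagree most, is the same.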
Metamaterial structure design optimization: A study of the cylindrical cloak
NASA Astrophysics Data System (ADS)
Paul, Jason V.
Previously, Transformational Optics (TO) has been used as a foundation for designing cylindrical cloaks. The TO method uses a coordinate transform to dictate an anisotropic material parameter gradient in a cylinder coating that guides waves around the cylinder to reduce the Radar Cross Section (RCS). The problem is that the material parameters required for the TO cloak are not physically realizable and thus must be approximated. This problem is compounded by the fact that any approximation deviates from the ideal design and will allow fields to penetrate the cloak layer and interact with the object to be cloaked. Since the TO method does not account for this interaction, approximating the ideal TO parameters is doomed to suboptimal results. However, through the use of a Green's function, an optimized isotropic cloaked cylinder can be designed in which all of the physics are accounted for. If the contribution due to the scatterer is 0, then the observer, regardless of position, will only observe the contribution due to the source and thus the object is cloaked from observation. The contribution due to the scatterer is then used as a cost functional with an optimization algorithm to find the optimal parameters of an isotropic cloaked cylinder. Although the material parameters in this design method can be fulfilled by any material, metamaterials are used to study their viability and assumptions in this application. This process culminates in the design, fabrication and measurements of a cloaked cylinder made of metamaterials that operate outside of their resonant bands. We show bistatic RCS reduction for nearly every angle along with monostatic RCS reduction for nearly every frequency in the range of 5-15 GHz. Most importantly, the experimental results validate the use of a Green's function based design approach and the implementation of metamaterials for normally incident energy.
Design and global optimization of high-efficiency thermophotovoltaic systems.
Bermel, Peter; Ghebrebrhan, Michael; Chan, Walker; Yeng, Yi Xiang; Araghchini, Mohammad; Hamam, Rafif; Marton, Christopher H; Jensen, Klavs F; Soljačić, Marin; Joannopoulos, John D; Johnson, Steven G; Celanovic, Ivan
2010-09-13
Despite their great promise, small experimental thermophotovoltaic (TPV) systems at 1000 K generally exhibit extremely low power conversion efficiencies (approximately 1%), due to heat losses such as thermal emission of undesirable mid-wavelength infrared radiation. Photonic crystals (PhC) have the potential to strongly suppress such losses. However, PhC-based designs present a set of non-convex optimization problems requiring efficient objective function evaluation and global optimization algorithms. Both are applied to two example systems: improved micro-TPV generators and solar thermal TPV systems. Micro-TPV reactors experience up to a 27-fold increase in their efficiency and power output; solar thermal TPV systems see an even greater 45-fold increase in their efficiency (exceeding the Shockley-Queisser limit for a single-junction photovoltaic cell).
Optimal screening designs for biomedical technology
Torney, D.C.; Bruno, W.J.; Knill, E.
1997-10-01
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Screening a large number of different types of molecules to isolate a few with desirable properties is essential in biomedical technology. For example, trying to find a particular gene in the Human genome could be akin to looking for a needle in a haystack. Fortunately, testing of mixtures, or pools, of molecules allows the desirable ones to be identified, using a number of experiments proportional only to the logarithm of the total number of types of molecules. We show how to capitalize upon this potential by using optimized pooling schemes, or designs. We propose efficient non-adaptive pooling designs, such as "random sets" designs and modified "row and column" designs. Our results have been applied in the pooling and unique-sequence screening of clone libraries used in the Human Genome Project and in the mapping of Human chromosome 16. This required the use of liquid-transferring robots and manifolds for the largest clone libraries. Finally, we developed an efficient technique for finding the posterior probability that each molecule has the desirable property, given the pool assay results. This technique works well, in practice, even if there are substantial rates of errors in the pool assay data. Both our methods and our results are relevant to a broad spectrum of research in modern biology.
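A minimal "random sets" pooling sketch with the error-free decoding rule (an item can be positive only if every pool containing it tested positive); the pool counts are arbitrary illustrations, not the designs from the report:

```python
import random

def random_pooling_design(n_items, n_pools, pools_per_item, seed=0):
    """Assign each item to a random set of pools: a "random sets"
    non-adaptive design sketch. The number of pools needed grows only
    logarithmically with the number of items."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_pools), pools_per_item))
            for _ in range(n_items)]

def decode(design, positive_pools):
    """Error-free decoding: any item with a negative pool is ruled out;
    the survivors are the candidate positives."""
    return [i for i, pools in enumerate(design)
            if all(p in positive_pools for p in pools)]
```

The report's posterior-probability decoder generalizes this rule to noisy assays; here a single contradicting negative pool is enough to eliminate an item.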
Design and optimization of membrane-type acoustic metamaterials
NASA Astrophysics Data System (ADS)
Blevins, Matthew Grant
One of the most common problems in noise control is the attenuation of low frequency noise. Typical solutions require barriers with high density and/or thickness. Membrane-type acoustic metamaterials are a novel type of engineered material capable of high low-frequency transmission loss despite their small thickness and light weight. These materials are ideally suited to applications with strict size and weight limitations such as aircraft, automobiles, and buildings. The transmission loss profile can be manipulated by changing the micro-level substructure, stacking multiple unit cells, or by creating multi-celled arrays. To date, analysis has focused primarily on experimental studies in plane-wave tubes and numerical modeling using finite element methods. These methods are inefficient when used for applications that require iterative changes to the structure of the material. To facilitate design and optimization of membrane-type acoustic metamaterials, computationally efficient dynamic models based on the impedance-mobility approach are proposed. Models of a single unit cell in a waveguide and in a baffle, a double layer of unit cells in a waveguide, and an array of unit cells in a baffle are studied. The accuracy of the models and the validity of assumptions used are verified using a finite element method. The remarkable computational efficiency of the impedance-mobility models compared to finite element methods enables implementation in design tools based on a graphical user interface and in optimization schemes. Genetic algorithms are used to optimize the unit cell design for a variety of noise reduction goals, including maximizing transmission loss for broadband, narrow-band, and tonal noise sources. The tools for design and optimization created in this work will enable rapid implementation of membrane-type acoustic metamaterials to solve real-world noise control problems.
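The genetic-algorithm step can be sketched generically: in the thesis the fitness would be transmission loss computed from the impedance-mobility model, while here a simple analytic function stands in, and the population size, operators, and rates are illustrative choices:

```python
import random

def genetic_optimize(fitness, bounds, pop=30, gens=60, mut=0.1, seed=0):
    """Minimal real-coded genetic algorithm: truncation selection,
    midpoint crossover, Gaussian mutation, and two-elite survival.
    Maximizes `fitness` over box-bounded design variables."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness, reverse=True)
        nxt = scored[:2]                              # elitism
        while len(nxt) < pop:
            a, b = rng.sample(scored[:pop // 2], 2)   # select from top half
            child = [(x + y) / 2 for x, y in zip(a, b)]
            child = [min(max(x + rng.gauss(0, mut * (hi - lo)), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)
```

Swapping the stand-in fitness for a fast forward model is exactly why the thesis emphasizes the computational efficiency of the impedance-mobility approach: a GA evaluates the model thousands of times.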
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives providing a systematic foundation for advancements in structural design.
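The sequential strategy rests on fitting a low-order response surface to the finite-element outputs and locating its stationary point. A two-factor least-squares sketch (the quadratic test behavior is hypothetical; the real responses come from the FEA code):

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2
    + b11*x1^2 + b22*x2^2 from computational-experiment runs."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    """Solve grad = 0 for the fitted surface (candidate optimum)."""
    _, b1, b2, b12, b11, b22 = coef
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(H, -np.array([b1, b2]))
```

Unlike a one-factor-at-a-time sweep, the cross term b12 in the fitted model captures exactly the variable interactions the paper says are otherwise undetected.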
Experimental Design for the LATOR Mission
NASA Technical Reports Server (NTRS)
Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.
2004-01-01
This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10^8 in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G^2) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.
Collimator design for experimental minibeam radiation therapy
Babcock, Kerry; Sidhu, Narinder; Kundapur, Vijayananda; Ali, Kaiser
2011-04-15
Purpose: To design and optimize a minibeam collimator for minibeam radiation therapy studies using a 250 kVp x-ray machine as a simulated synchrotron source. Methods: A Philips RT250 orthovoltage x-ray machine was modeled using the EGSnrc/BEAMnrc Monte Carlo software. The resulting machine model was coupled to a model of a minibeam collimator with a beam aperture of 1 mm. Interaperture spacing and collimator thickness were varied to produce a minibeam with the desired peak-to-valley ratio. Results: Proper design of a minibeam collimator with Monte Carlo methods requires detailed knowledge of the x-ray source setup. For a cathode-ray tube source, the beam spot size, target angle, and source shielding all determine the final valley-to-peak dose ratio. Conclusions: A minibeam collimator setup was created, which can deliver a 30 Gy peak dose minibeam radiation therapy treatment at depths less than 1 cm with a valley-to-peak dose ratio on the order of 23%.
Experimental Optimization of a Free-to-Rotate Wing for Small UAS
NASA Technical Reports Server (NTRS)
Logan, Michael J.; DeLoach, Richard; Copeland, Tiwana; Vo, Steven
2014-01-01
This paper discusses an experimental investigation conducted to optimize a free-to-rotate wing for use on a small unmanned aircraft system (UAS). Although free-to-rotate wings have been used for decades on various small UAS and small manned aircraft, little is known about how to optimize these unusual wings for a specific application. The paper discusses some of the design rationale of the basic wing. In addition, three main parameters were selected for "optimization", wing camber, wing pivot location, and wing center of gravity (c.g.) location. A small apparatus was constructed to enable some simple experimental analysis of these parameters. A design-of-experiment series of tests were first conducted to discern which of the main optimization parameters were most likely to have the greatest impact on the outputs of interest, namely, some measure of "stability", some measure of the lift being generated at the neutral position, and how quickly the wing "recovers" from an upset. A second set of tests were conducted to develop a response-surface numerical representation of these outputs as functions of the three primary inputs. The response surface numerical representations are then used to develop an "optimum" within the trade space investigated. The results of the optimization are then tested experimentally to validate the predictions.
Chip Design Process Optimization Based on Design Quality Assessment
NASA Astrophysics Data System (ADS)
Häusler, Stefan; Blaschke, Jana; Sebeke, Christian; Rosenstiel, Wolfgang; Hahn, Axel
2010-06-01
Nowadays, the managing of product development projects is increasingly challenging. Especially the IC design of ASICs with both analog and digital components (mixed-signal design) is becoming more and more complex, while the time-to-market window narrows at the same time. Still, high quality standards must be fulfilled. Projects and their status are becoming less transparent due to this complexity. This makes the planning and execution of projects rather difficult. Therefore, there is a need for efficient project control. A main challenge is the objective evaluation of the current development status. Are all requirements successfully verified? Are all intermediate goals achieved? Companies often develop special solutions that are not reusable in other projects. This makes the quality measurement process itself less efficient and produces too much overhead. The method proposed in this paper is a contribution to solve these issues. It is applied at a German design house for analog mixed-signal IC design. This paper presents the results of a case study and introduces an optimized project scheduling on the basis of quality assessment results.
Experimental design principles for isotopically instationary 13C labeling experiments.
Nöh, Katharina; Wiechert, Wolfgang
2006-06-01
13C metabolic flux analysis (MFA) is a well-established tool in Metabolic Engineering that has found numerous applications in recent years. However, one strong limitation of the current method is the requirement of an (at least approximate) isotopic stationary state at sampling time. This requirement leads to a lower limit in principle on the duration of a 13C labeling experiment. A new methodological development is based on repeated sampling during the instationary transient of the 13C labeling dynamics. The statistical and computational treatment of such instationary experiments is completely new terrain. The computational effort is very high because large differential equations have to be solved and, moreover, the intracellular pool sizes play a significant role. For this reason, the present contribution works out principles and strategies for the experimental design of instationary experiments based on a simple example network. The potential of isotopically instationary experiments is thereby investigated in detail. Various statistical results on instationary flux identifiability are presented and possible pitfalls of experimental design are discussed. Finally, a framework for almost optimal experimental design of isotopically instationary experiments is proposed which provides a practical guideline for the analysis of large-scale networks.
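The design question (when to sample during the isotopic transient) can be illustrated on a single-pool toy model: the flux sensitivity of the labeling curve peaks at a finite time, and that peak marks the informative sampling window. Pool sizes, the flux value, and the Euler scheme are illustrative simplifications, not the paper's network:

```python
import numpy as np

def labeling_curve(v, pools, x_in=1.0, t_end=10.0, dt=0.01):
    """Fractional 13C enrichment of a linear chain of metabolite pools
    fed fully labeled substrate: dx_i/dt = (v / V_i) * (x_{i-1} - x_i).
    Forward-Euler integration; returns times and enrichments."""
    n = len(pools)
    x = np.zeros(n)
    ts = np.arange(0.0, t_end, dt)
    out = np.empty((len(ts), n))
    for j in range(len(ts)):
        out[j] = x
        inflow = np.concatenate(([x_in], x[:-1]))
        x = x + dt * (v / np.asarray(pools)) * (inflow - x)
    return ts, out

def most_informative_time(v, pools, dv=1e-4):
    """Finite-difference flux sensitivity of the terminal pool's
    labeling; its maximum is the best single sampling time for
    estimating the flux v (pool sizes held fixed)."""
    ts, lo = labeling_curve(v - dv, pools)
    _, hi = labeling_curve(v + dv, pools)
    sens = np.abs(hi[:, -1] - lo[:, -1]) / (2 * dv)
    return ts[int(np.argmax(sens))]
```

For a single pool, x(t) = 1 - exp(-v t / V), so the sensitivity (t/V) exp(-v t / V) peaks at t = V / v; sampling much earlier or later, or only at isotopic steady state, loses flux information, which is the paper's core argument for instationary designs.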
Application of numerical optimization to rotor aerodynamic design
NASA Technical Reports Server (NTRS)
Pleasants, W. A., III; Wiggins, T. J.
1984-01-01
Based on initial results obtained from the performance optimization code, a number of observations can be made regarding the utility of optimization codes in supporting design of rotors for improved performance. (1) The primary objective of improving the productivity and responsiveness of current design methods can be met. (2) The use of optimization allows the designer to consider a wider range of design variables in a greatly compressed time period. (3) Optimization requires the user to carefully define his problem to avoid unproductive use of computer resources. (4) Optimization will increase the burden on the analyst to validate designs and to improve the accuracy of analysis methods. (5) Direct calculation of finite difference derivatives by the optimizer was not prohibitive for this application but was expensive. Approximate analysis in some form would be considered to improve program response time. (6) Program development is not complete and will continue to evolve to integrate new analysis methods, design problems, and alternate optimizer options.
Technological issues and experimental design of gene association studies.
Distefano, Johanna K; Taverna, Darin M
2011-01-01
Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.
Polo, Maria; Garcia-Jares, Carmen; Llompart, Maria; Cela, Rafael
2007-08-01
A solid-phase microextraction method (SPME) followed by gas chromatography with micro electron capture detection for determining trace levels of nitro musk fragrances in residual waters was optimized. Four nitro musks, musk xylene, musk moskene, musk tibetene and musk ketone, were selected for the optimization of the method. Factors affecting the extraction process were studied using a multivariate approach. Two extraction modes (direct SPME and headspace SPME) were tried at different extraction temperatures using two fiber coatings [Carboxen-polydimethylsiloxane (CAR/PDMS) and polydimethylsiloxane-divinylbenzene (PDMS/DVB)] selected among five commercial tested fibers. Sample agitation and the salting-out effect were also factors studied. The main effects and interactions between the factors were studied for all the target compounds. An extraction temperature of 100 degrees C and sampling the headspace over the sample, using either CAR/PDMS or PDMS/DVB as fiber coatings, were found to be the experimental conditions that led to a more effective extraction. High sensitivity, with detection limits in the low nanogram per liter range, and good linearity and repeatability were achieved for all nitro musks. Since the method proposed performed well for real samples, it was applied to different water samples, including wastewater and sewage, in which some of the target compounds (musk xylene and musk ketone) were detected and quantified.
Experimental design: computer simulation for improving the precision of an experiment.
van Wilgenburg, Henk; Zillesen, Piet G van Schaick; Krulichova, Iva
2004-06-01
An interactive computer-assisted learning program, ExpDesign, that has been developed for simulating animal experiments, is introduced. The program guides students through the steps for designing animal experiments and estimating optimal sample sizes. Principles are introduced for controlling variation, establishing the experimental unit, selecting randomised block and factorial experimental designs, and applying the appropriate statistical analysis. Sample Power is a supporting tool that visualises the process of estimating the sample size. The aim of developing the ExpDesign program has been to make biomedical research workers more familiar with some basic principles of experimental design and statistics and to facilitate discussions with statisticians.
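The sample-size step that ExpDesign teaches can be reproduced with the standard normal-approximation formula n = 2 * (z_{1-alpha/2} + z_power)^2 * (sd / delta)^2 per group; this is the textbook calculation, not the program's own code:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Per-group sample size for detecting a mean difference `delta`
    between two groups with common standard deviation `sd`, using the
    normal approximation to the two-sample comparison of means."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2)
```

Halving the detectable difference quadruples the required group size, which is the kind of trade-off the Sample Power tool is described as visualizing. (The exact t-based calculation gives slightly larger n for small groups.)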
Design and optimization of a brachytherapy robot
NASA Astrophysics Data System (ADS)
Meltsner, Michael A.
Trans-rectal ultrasound guided (TRUS) low dose rate (LDR) interstitial brachytherapy has become a popular procedure for the treatment of prostate cancer, the most common type of non-skin cancer among men. The current TRUS technique of LDR implantation may result in less than ideal coverage of the tumor with increased risk of negative response such as rectal toxicity and urinary retention. This technique is limited by the skill of the physician performing the implant, the accuracy of needle localization, and the inherent weaknesses of the procedure itself. The treatment may require 100 or more sources and 25 needles, compounding the inaccuracy of the needle localization procedure. A robot designed for prostate brachytherapy may increase the accuracy of needle placement while minimizing the effect of physician technique in the TRUS procedure. Furthermore, a robot may improve associated toxicities by utilizing angled insertions and freeing implantations from constraints applied by the 0.5 cm-spaced template used in the TRUS method. Within our group, Lin et al. have designed a new type of LDR source. The "directional" source is a seed designed to be partially shielded. Thus, a directional, or anisotropic, source does not emit radiation in all directions. The source can be oriented to irradiate cancerous tissues while sparing normal ones. This type of source necessitates a new, highly accurate method for localization in 6 degrees of freedom. A robot is the best way to accomplish this task accurately. The following presentation of work describes the invention and optimization of a new prostate brachytherapy robot that fulfills these goals. Furthermore, some research has been dedicated to the use of the robot to perform needle insertion tasks (brachytherapy, biopsy, RF ablation, etc.) in nearly any other soft tissue in the body. This can be accomplished with the robot combined with automatic, magnetic tracking.
Optimization of minoxidil microemulsions using fractional factorial design approach.
Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned
2016-01-01
The objective of this study was to apply fractional factorial and multi-response optimization designs using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as an oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables, Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4), on MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with a high determination coefficient. Using the desirability function, an optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 and a high level of X3. PMID:25318551
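The two ingredients above, a two-level fractional factorial design and a desirability combination of responses, are easy to sketch. The generator X4 = X1*X2*X3 and the linear larger-the-better desirability shape are standard textbook choices, not necessarily those used in the study:

```python
from itertools import product

def frac_factorial_2_4_1():
    """2^(4-1) fractional factorial with generator X4 = X1*X2*X3:
    8 runs instead of 16 for four two-level factors."""
    return [(x1, x2, x3, x1 * x2 * x3)
            for x1, x2, x3 in product((-1, 1), repeat=3)]

def desirability_max(y, lo, hi):
    """Larger-the-better desirability d in [0, 1]: 0 below `lo`,
    1 above `hi`, linear in between."""
    return 0.0 if y <= lo else 1.0 if y >= hi else (y - lo) / (hi - lo)

def overall_desirability(ds):
    """Geometric mean combines per-response desirabilities, so any
    single unacceptable response (d = 0) zeroes the overall score."""
    p = 1.0
    for d in ds:
        p *= d
    return p ** (1.0 / len(ds))
```

Maximizing `overall_desirability` over the fitted regression equations is what selects the reported optimum (low X1, X2, X4 and high X3) in a multi-response setting.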
Design time optimization for hardware watermarking protection of HDL designs.
Castillo, E; Morales, D P; García, A; Parrilla, L; Todorovich, E; Meyer-Baese, U
2015-01-01
HDL-level design offers important advantages for the application of watermarking to IP cores, but its complexity also requires tools automating these watermarking algorithms. A new tool for signature distribution through combinational logic is proposed in this work. IPP@HDL, a previously proposed high-level watermarking technique, has been employed for evaluating the tool. IPP@HDL relies on spreading the bits of a digital signature at the HDL design level using combinational logic included within the original system. The development of this new tool for signature distribution has not only extended and eased the applicability of this IPP technique, but also improved the signature hosting process itself. Three algorithms were studied in order to develop this automated tool. The selection of a cost function determines the best hosting solutions in terms of area and performance penalties on the IP core to protect. A 1D-DWT core and MD5 and SHA1 digital signatures were used to illustrate the benefits of the new tool and its optimization of the extraction logic resources. Among the proposed algorithms, the alternative based on simulated annealing reduces the additional resources while maintaining an acceptable computation time, also saving designer effort and time. PMID:25861681
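The simulated-annealing alternative mentioned in the abstract can be sketched generically (illustrative Python; the bit-placement model and spread-based cost below are stand-ins for discussion, not IPP@HDL's actual area/delay cost function):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=2000, seed=0):
    """Generic simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-dC/T)."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / max(t, 1e-12)):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy stand-in: place 8 signature bits among 32 hypothetical logic
# positions; the cost penalizes spread, a crude proxy for extra routing.
def cost(placement):
    return max(placement) - min(placement)

def neighbor(placement, rng):
    p = list(placement)
    p[rng.randrange(len(p))] = rng.randrange(32)
    return tuple(p)

x0 = tuple(range(0, 32, 4))  # 8 bits initially spread across the fabric
best, best_cost = simulated_annealing(cost, neighbor, x0)
```

The real tool would replace `cost` with synthesis-reported area and delay penalties for each candidate hosting.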
Assay optimization: a statistical design of experiments approach.
Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B
2007-03-01
With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.
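A minimal instance of the statistically designed experiments the article discusses is a two-level full factorial with main-effect estimation (Python sketch; the three assay factors and the simulated response model are hypothetical):

```python
import itertools
import numpy as np

# Hypothetical 2^3 full factorial for three assay factors (e.g. enzyme,
# substrate, and additive level), coded -1/+1.
levels = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

def simulated_signal(x):
    a, b, c = x
    # Assumed noise-free response: strong X1 effect plus an X1*X2 interaction.
    return 100 + 20 * a + 5 * b - 2 * c + 8 * a * b

y = np.array([simulated_signal(x) for x in levels])

# Main effect of factor j = mean(y at +1) - mean(y at -1); with a balanced
# design, the other factors and interactions average out.
effects = {f"X{j+1}": y[levels[:, j] == 1].mean() - y[levels[:, j] == -1].mean()
           for j in range(3)}
```

With eight runs, all three main effects (and, with extra contrasts, the interactions) are identified at once instead of one factor at a time.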
Parker, G.G.; Eisler, G.R.; Feddema, J.T.
1994-09-01
Procedures for trajectory planning and control of flexible link robots are becoming increasingly important to satisfy performance requirements of hazardous waste removal efforts. It has been shown that utilizing link flexibility in designing open loop joint commands can result in improved performance as opposed to damping vibration throughout a trajectory. The efficient use of link compliance is exploited in this work. Specifically, experimental verification of minimum time, straight line tracking using a two-link planar flexible robot is presented. A numerical optimization process, using an experimentally verified modal model, is used for obtaining minimum time joint torque and angle histories. The optimal joint states are used as commands to the proportional-derivative servo actuated joints. These commands are precompensated for the nonnegligible joint servo actuator dynamics. Using the precompensated joint commands, the optimal joint angles are tracked with such fidelity that the tip tracking error is less than 2.5 cm.
Integrated topology and shape optimization in structural design
NASA Technical Reports Server (NTRS)
Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.
1990-01-01
Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.
Optimal structural design of the Airborne Infrared Imager
NASA Astrophysics Data System (ADS)
Doyle, Keith B.; Cerrati, Vincent J.; Forman, Steven E.; Sultana, John A.
1995-09-01
The Airborne Infrared Imager (AIRI) is a dual-band IR sensor designed to study air defense issues while wing-mounted in a pod. The sensor consists of an optical bench attached to a two-axis inertially stabilized gimbal structure in elevation and azimuth. The gimbal assembly operates within an 18-inch diameter globe while meeting strict pointing and tracking requirements. Design conditions for the assembly include operational and nonoperational inertial, thermal, and dynamic loads. Primary design efforts centered on limiting the line-of-sight jitter of the optical system to 50 μrad under the operating environment. An MSC/NASTRAN finite element model was developed for structural response predictions and correlated to experimental data. Design changes were aided by MSC/NASTRAN's optimization routine with the goal of maximizing the fundamental frequency of the gimbal assembly. The final structural design resulted in a first natural frequency of 79 Hz using a titanium azimuthal gimbal, a stainless steel elevation gimbal, and an aluminum optical bench, which met the design and performance requirements.
Computer aided optimal design of space reflectors and radiation concentrators
NASA Astrophysics Data System (ADS)
Saprykin, Oleg A.; Spirochkin, Yuriy K.; Kinelev, Vladimir G.; Sulimov, Valeriy D.
1998-06-01
The goal of space radiation receiver design is the achievement of maximal reflecting properties under technological and financial restrictions. Optimal design problems of this type are characterized by nonconvex, nondifferentiable objective functions. A numerical technique for optimal design of such structures and applied software, REFLEX, currently under development, are proposed.
Optimizing Adhesive Design by Understanding Compliance.
King, Daniel R; Crosby, Alfred J
2015-12-23
Adhesives have long been designed around a trade-off between adhesive strength and releasability. Geckos are of interest because they are the largest organisms which are able to climb utilizing adhesive toepads, yet can controllably release from surfaces and perform this action over and over again. Attempting to replicate the hierarchical, nanoscopic features which cover their toepads has been the primary focus of the adhesives field until recently. A new approach based on a scaling relation which states that reversible adhesive force capacity scales with (A/C)^(1/2), where A is the area of contact and C is the compliance of the adhesive, has enabled the creation of high strength, reversible adhesives without requiring high aspect ratio, fibrillar features. Here we introduce an equation to calculate the compliance of adhesives, and utilize this equation to predict the shear adhesive force capacity of the adhesive based on the material components and geometric properties. Using this equation, we have investigated important geometric parameters which control force capacity and have shown that by controlling adhesive shape, adhesive force capacity can be increased by over 50% without varying pad size. Furthermore, we have demonstrated that compliance of the adhesive far from the interface still influences shear adhesive force capacity. Utilizing this equation will allow for the production of adhesives which are optimized for specific applications in commercial and industrial settings. PMID:26618537
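The scaling relation in the abstract can be illustrated directly (Python sketch in arbitrary units; the proportionality constant is omitted, so only ratios of capacities are meaningful):

```python
import math

def relative_force_capacity(area, compliance):
    """Scaling relation from the abstract: F ~ (A/C)**0.5.
    Units are arbitrary; only relative comparisons are meaningful here."""
    return math.sqrt(area / compliance)

f1 = relative_force_capacity(1.0, 1.0)
f2 = relative_force_capacity(2.0, 1.0)   # doubling contact area
f3 = relative_force_capacity(1.0, 0.5)   # halving compliance instead
```

Doubling the area or halving the compliance each raises capacity by the same factor of √2, which is why stiff but releasable pads can match fibrillar designs.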
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
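The conservative treatment of an uncertain linear constraint under box uncertainty, in the spirit of the robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski cited above, can be sketched as follows (illustrative Python; the coefficients are made up):

```python
import numpy as np

def robust_feasible(x, a_nom, a_dev, b):
    """Worst-case check of a^T x <= b when each coefficient a_i ranges in
    [a_nom_i - a_dev_i, a_nom_i + a_dev_i] (box uncertainty): the worst
    case adds a_dev_i * |x_i| to each term."""
    worst = a_nom @ x + a_dev @ np.abs(x)
    return bool(worst <= b)

x = np.array([1.0, 2.0])
a_nom = np.array([1.0, 1.0])
a_dev = np.array([0.1, 0.2])
# Nominal constraint value is 3.0; worst case is 3.0 + 0.1*1 + 0.2*2 = 3.5.
```

Imposing the worst-case form for every constraint turns the uncertain program into a deterministic one that is still linear after splitting |x_i| into positive and negative parts.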
Optimal Design of a Center Support Quadruple Mass Gyroscope (CSQMG).
Zhang, Tian; Zhou, Bin; Yin, Peng; Chen, Zhiyong; Zhang, Rong
2016-04-28
This paper reports a more complete description of the design process of the Center Support Quadruple Mass Gyroscope (CSQMG), a gyro expected to provide breakthrough performance for flat structures. The operation of the CSQMG is based on four lumped masses in a circumferentially symmetric distribution, oscillating in anti-phase motion and providing differential signal extraction. With its 4-fold symmetrical axes pattern, the CSQMG achieves an operation mode similar to that of Hemispherical Resonant Gyroscopes (HRGs). Compared to the conventional flat design, four Y-shaped coupling beams are used in this new pattern in order to adjust the mode distribution and enhance the synchronization mechanism of the operation modes. To obtain the optimal design of the CSQMG, an optimization flow is developed with a comprehensive treatment of operation mode coordination, pseudo mode inhibition, and elimination of lumped-mass twisting motion. The experimental characterization of the CSQMG was performed at room temperature; the center operation frequency is 6.8 kHz after tuning. Experiments show an Allan variance stability of 0.12°/h (at 100 s) and a white noise level of about 0.72°/h/√Hz, which means that the CSQMG possesses great potential to achieve navigation-grade performance.
Chen, Hua; Ye, Chenyu
2014-01-01
Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design, and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perceptions into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impressions. A total of 75 patients participated, 42 in Experiment 1 and 33 in Experiment 2. Twenty-seven representative color samples were designed in Experiment 1, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the 'central point', and three color attributes were optimized to maximize the patients' satisfaction. The experimental results show that the proposed method can yield an optimal solution for the color design of a counseling room. PMID:24594683
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
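The core entropy criterion, choosing the candidate experiment whose predicted outcome distribution has maximal Shannon entropy, can be sketched as follows (illustrative Python; the candidate experiments and predicted probabilities are hypothetical):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete outcome distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Rows: candidate experiments; columns: outcome probabilities predicted
# by the current probable set of models.
predicted = np.array([
    [0.97, 0.03],   # outcome nearly certain -> uninformative experiment
    [0.50, 0.50],   # maximally uncertain -> most informative experiment
    [0.80, 0.20],
])
entropies = [shannon_entropy(row) for row in predicted]
best_experiment = int(np.argmax(entropies))
```

Nested entropy sampling replaces this brute-force scan over rows with a sample of experiments and a rising entropy threshold, which is where the efficiency gain comes from.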
A new optimal sliding mode controller design using scalar sign function.
Singla, Mithun; Shieh, Leang-San; Song, Gangbing; Xie, Linbo; Zhang, Yongpeng
2014-03-01
This paper presents a new optimal sliding mode controller using the scalar sign function method. A smooth, continuous-time scalar sign function is used to replace the discontinuous switching function in the design of a sliding mode controller. The proposed sliding mode controller is designed using an optimal Linear Quadratic Regulator (LQR) approach. The sliding surface of the system is designed using stable eigenvectors and the scalar sign function. Controller simulations are compared with another existing optimal sliding mode controller. To test the effectiveness of the proposed controller, the controller is implemented on an aluminum beam with piezoceramic sensor and actuator for vibration control. This paper includes the control design and stability analysis of the new optimal sliding mode controller, followed by simulation and experimental results. The simulation and experimental results show that the proposed approach is very effective.
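One common smooth, continuous replacement for the discontinuous switching function can be sketched as follows (illustrative Python; the paper's exact scalar sign function, sliding surface, and gains may differ):

```python
def scalar_sign(s, eps=0.05):
    """Smooth approximation of sign(s): s / (|s| + eps).
    Continuous at s = 0, approaches ±1 for |s| >> eps; this suppresses
    the chattering caused by a hard sign() switching term."""
    return s / (abs(s) + eps)

def reaching_control(s, k=5.0):
    """Sliding-mode reaching term u = -k * sgn(s), with the smooth sign
    substituted for the discontinuous one."""
    return -k * scalar_sign(s)
```

In the paper the sliding surface itself is built from stable eigenvectors and the gains come from an LQR design; here only the smoothing idea is shown.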
A new interval optimization method considering tolerance design
NASA Astrophysics Data System (ADS)
Jiang, C.; Xie, H. C.; Zhang, Z. G.; Han, X.
2015-12-01
This study considers the design variable uncertainty in the actual manufacturing process for a product or structure and proposes a new interval optimization method based on tolerance design, which can provide not only an optimal design but also the allowable maximal manufacturing errors that the design can bear. The design variables' manufacturing errors are depicted using the interval method, and an interval optimization model for the structure is constructed. A dimensionless design tolerance index is defined to describe the overall uncertainty of all design variables, and by combining the nominal objective function, a deterministic two-objective optimization model is built. The possibility degree of interval is used to represent the reliability of the constraints under uncertainty, through which the model is transformed to a deterministic optimization problem. Three numerical examples are investigated to verify the effectiveness of the present method.
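A standard definition of the possibility degree used to convert an interval constraint into a deterministic one can be sketched as follows (illustrative Python; the paper's exact formulation may differ):

```python
def possibility_degree(a_low, a_high, b):
    """Possibility degree that an interval quantity [a_low, a_high]
    satisfies a <= b: 0 if the whole interval violates the bound,
    1 if the whole interval satisfies it, linear in between."""
    if a_high <= a_low:               # degenerate (point) interval
        return 1.0 if a_low <= b else 0.0
    return min(max((b - a_low) / (a_high - a_low), 0.0), 1.0)
```

Requiring the possibility degree of each constraint to exceed a chosen reliability level turns the interval optimization model into an ordinary deterministic problem.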
Two-stage microbial community experimental design.
Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis
2013-12-01
Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities.
Sadeghi, Susan; Rad, Fatemeh Alavi; Moghaddam, Ali Zeraatkar
2014-12-01
In this work, poly(methyl methacrylate) grafted Tragacanth gum modified Fe3O4 magnetic nanoparticles (P(MMA)-g-TG-MNs) were developed for the selective removal of Cr(VI) species from aqueous solutions in the presence of Cr(III). The sorbent was characterized by Fourier transform infrared (FTIR) spectroscopy, transmission electron microscopy (TEM), vibrating sample magnetometry (VSM), and thermogravimetric analysis (TGA). A screening study of the operational variables was performed using a two-level full factorial design, and the significant variables were identified by analysis of variance (ANOVA) at the 95% confidence level. Central composite design (CCD) was then employed for statistical modeling and analysis of the effects and interactions of the significant variables on the Cr(VI) uptake process by the developed sorbent. The predicted optimal conditions were a pH of 5.5, a contact time of 3.4 h, and a 3.0 g L⁻¹ dose. The Langmuir, Freundlich, and Temkin isotherm models were used to describe the equilibrium sorption of Cr(VI) by the absorbent, and the Langmuir isotherm showed the best concordance as an equilibrium model. The adsorption process followed a pseudo-second-order kinetic model. Thermodynamic investigations showed that the biosorption process was spontaneous and exothermic.
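The pseudo-second-order kinetic model mentioned above is commonly fitted through its linearized form t/q(t) = 1/(k2·qe²) + t/qe (Python sketch on synthetic data; the qe and k2 values are assumed for illustration, not the study's):

```python
import numpy as np

# Pseudo-second-order kinetics: dq/dt = k2*(qe - q)**2, whose integrated
# form is q(t) = k2*qe**2*t / (1 + k2*qe*t).
qe_true, k2_true = 20.0, 0.01          # assumed values, arbitrary units
t = np.array([5.0, 15.0, 30.0, 60.0, 120.0, 240.0])
q = k2_true * qe_true**2 * t / (1.0 + k2_true * qe_true * t)

# Linearized form: t/q = 1/(k2*qe**2) + t/qe -> slope = 1/qe.
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
```

A good straight-line fit of t/q against t is the usual evidence quoted for pseudo-second-order behaviour, with qe and k2 read off the slope and intercept.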
[Design and experimentation of marine optical buoy].
Yang, Yue-Zhong; Sun, Zhao-Hua; Cao, Wen-Xi; Li, Cai; Zhao, Jun; Zhou, Wen; Lu, Gui-Xin; Ke, Tian-Cun; Guo, Chao-Ying
2009-02-01
Marine optical buoys are of great value for the calibration and validation of ocean color remote sensing, scientific observation, coastal environment monitoring, etc. A marine optical buoy system was designed which consists of a main buoy and a slave buoy. The system can synchronously measure the distribution of irradiance and radiance above the sea surface, in the layer near the sea surface, and in the euphotic zone, while also acquiring other parameters such as the spectral absorption and scattering coefficients of the water column and the velocity and direction of the wind. The buoy was positioned by GPS. A low-power integrated PC104 computer was used as the control core to collect data automatically. Data and commands were transmitted in real time via CDMA/GPRS wireless networks or maritime satellite. Coastal marine experimentation demonstrated that the buoy has small pitch and roll rates in high sea state conditions and thus can meet the needs of underwater radiometric measurements, that the data collection and remote transmission are reliable, and that the auto-operated anti-biofouling devices can keep the optical sensors working effectively for a period of several months.
Neural network optimization, components, and design selection
NASA Astrophysics Data System (ADS)
Weller, Scott W.
1990-07-01
Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and noncontrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include database manipulation and the solving of routing and classification types of optimization problems. Neural Networks are constructed from neurons, which in electronics or software attempt to model, but are not constrained by, the real thing, i.e., neurons in our gray matter. Neurons are simple processing units connected to many other neurons over pathways which modify the incoming signals. A single synthetic neuron typically sums its weighted inputs, runs this sum through a non-linear function, and produces an output. In the brain, neurons are connected in a complex topology; in hardware/software the topology is typically much simpler, with neurons lying side by side, forming layers of neurons which connect to the layer of neurons which receive their outputs. This simplistic model is much easier to construct than the real thing, and yet can solve real problems. The information in a network, or its "memory", is completely contained in the weights on the connections from one neuron to another. Establishing these weights is called "training" the network. Some networks are trained by design -- once constructed no further learning takes place. Other types of networks require iterative training once wired up, but are not trainable once taught. Still other types of networks can continue to learn after initial construction. The main benefit of using Neural Networks is their ability to work with conflicting or incomplete ("fuzzy") data sets. This ability and its usefulness will become evident in the following
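The single synthetic neuron described above, a weighted sum passed through a non-linear function, takes only a few lines (illustrative Python; the sigmoid is one common choice of non-linearity, and the weights are arbitrary):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One synthetic neuron: sum the weighted inputs, add a bias, and
    pass the result through a non-linear (here sigmoid) function."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Two inputs, two weights: s = 2.0*1.0 + (-1.0)*0.5 = 1.5
out = neuron([1.0, 0.5], [2.0, -1.0])
```

Layering such units and adjusting the weights from examples is exactly the "training" the passage describes; the weights are the network's entire memory.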
NASA Astrophysics Data System (ADS)
Khare, Prateek; Kumar, Arvind
2012-12-01
In the present paper, phenol removal from wastewater was investigated using an agri-based adsorbent, Terminalia chebula-activated carbon (TCAC), produced by carbonization of Terminalia chebula (TC) in an air-controlled atmosphere at 600 °C for 4 h. The surface area of TCAC was measured as 364 m²/g using the BET method. The surface characteristics of TCAC were analyzed based on the point of zero charge. The effects of parameters such as TCAC dosage, pH, initial phenol concentration, contact time, and temperature on the sorption of phenol by TCAC were investigated using the conventional method and Taguchi experimental design. The total adsorption capacity for phenol was obtained as 36.77 mg/g using the Langmuir model at a temperature of 30 °C and pH = 5.5. The maximum removal of phenol (294.86 mg/g) was obtained using Taguchi's method. The equilibrium study of phenol on TCAC showed that the experimental data fitted well to the R-P model. The results also showed that the kinetic data followed more closely the pseudo-first-order model. The thermodynamic study showed that the adsorption of phenol on TCAC was spontaneous and exothermic in nature.
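A Langmuir capacity like the one quoted above is typically recovered from equilibrium data via the linearized isotherm C/q = C/qm + 1/(qm·K) (Python sketch on synthetic data; qm is set to the reported 36.77 mg/g, but K and the concentration points are assumed):

```python
import numpy as np

# Synthetic equilibrium data from an assumed Langmuir isotherm
# q = qm*K*C / (1 + K*C).
qm_true, K_true = 36.77, 0.12                      # mg/g, L/mg (K assumed)
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])  # equilibrium conc., mg/L
q = qm_true * K_true * C / (1.0 + K_true * C)        # uptake, mg/g

# Linearized form: C/q = (1/qm)*C + 1/(qm*K) -> slope = 1/qm.
slope, intercept = np.polyfit(C, C / q, 1)
qm_fit = 1.0 / slope
K_fit = 1.0 / (intercept * qm_fit)
```

Plotting C/q against C and checking linearity is the usual test of Langmuir behaviour; the monolayer capacity qm comes from the slope.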
INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN
The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...
A multiple objective optimization approach to aircraft control systems design
NASA Technical Reports Server (NTRS)
Tabak, D.; Schy, A. A.; Johnson, K. G.; Giesy, D. P.
1979-01-01
The design of an aircraft lateral control system, subject to several performance criteria and constraints, is considered. While in the previous studies of the same model a single criterion optimization, with other performance requirements expressed as constraints, has been pursued, the current approach involves a multiple criteria optimization. In particular, a Pareto optimal solution is sought.
Post-Optimality Analysis In Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.
1993-01-01
This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
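The post-optimality idea, using the constraint multiplier at the optimum to predict the change in the optimal objective without reoptimizing, can be shown on a toy problem (illustrative Python; the problem below is made up for clarity, not the aircraft or launch vehicle model):

```python
# Toy problem: minimize f(x) = x**2 subject to x >= b, with b > 0.
# Closed form: x* = b, f* = b**2, and the KKT multiplier lambda = 2*b
# equals df*/db, the first-order sensitivity of the optimum to the bound.
def solve(b):
    x_opt = max(b, 0.0)          # constraint active when b > 0
    lam = 2.0 * x_opt            # multiplier for the constraint x >= b
    return x_opt**2, lam

b = 1.0
f_star, lam = solve(b)

db = 0.01                        # perturb the constraint bound
f_pred = f_star + lam * db       # first-order post-optimality estimate
f_true, _ = solve(b + db)        # what full re-optimization would give
```

The prediction error is second order in db, which is why the paper's first-order estimates stay within a few percent over practical perturbation ranges.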
A design optimization process for Space Station Freedom
NASA Technical Reports Server (NTRS)
Chamberlain, Robert G.; Fox, George; Duquette, William H.
1990-01-01
The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.
Bayesian experimental design of a multichannel interferometer for Wendelstein 7-X
NASA Astrophysics Data System (ADS)
Dreier, H.; Dinklage, A.; Fischer, R.; Hirsch, M.; Kornejew, P.
2008-10-01
Bayesian experimental design (BED) is a framework for the optimization of diagnostics based on probability theory. In this work it is applied to the design of a multichannel interferometer at the Wendelstein 7-X stellarator experiment. BED offers the possibility to compare diverse designs quantitatively, which will be shown for beam-line designs resulting from different plasma configurations. The applicability of this method is discussed with respect to its computational effort.
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and
Safarkhani, Maryam; Moerbeek, Mirjam
2015-09-30
It is plausible to assume that the treatment effect in a longitudinal study will vary over time. It can become either stronger or weaker as time goes on. Here, we extend previous work on optimal designs for discrete-time survival analysis to trials with the treatment effect varying over time. In discrete-time survival analysis, subjects are measured in discrete time intervals, while they may experience the event at any point in time. We focus on studies where the width of time intervals is fixed beforehand, meaning that subjects are measured more often when the study duration increases. The optimal design is defined as the optimal combination of the number of subjects, the number of measurements for each subject, and the optimal proportion of subjects assigned to the experimental condition. We study optimal designs for different optimality criteria and linear cost functions. We illustrate the methodology of finding optimal designs using a clinical trial that studies the effect of an outpatient mental health program on reducing substance abuse among patients with severe mental illness. We observe that optimal designs depend to some extent on the rate at which group differences vary across time intervals and the direction of these changes over time. We conclude that an optimal design based on the assumption of a constant treatment effect is not likely to be efficient if the treatment effect varies across time. PMID:26179808
Fatigue design of a cellular phone folder using regression model-based multi-objective optimization
NASA Astrophysics Data System (ADS)
Kim, Young Gyun; Lee, Jongsoo
2016-08-01
In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.
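The two-objective Pareto screening that NSGA-II performs can be illustrated with a minimal non-dominated filter. This is a sketch of the dominance test that underlies the algorithm, not the article's actual NSGA-II implementation, and the objective values below are hypothetical (both objectives are assumed to be minimized):

```python
def dominates(a, b):
    """True if objective vector a dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    preserving input order."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (thickness index, inverse fatigue-endurance) pairs
designs = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (2.0, 6.0), (4.0, 4.0)]
front = pareto_front(designs)   # -> [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0)]
```

The last two designs are dominated: (2.0, 6.0) by (1.0, 5.0), and (4.0, 4.0) by (3.0, 3.0). NSGA-II repeats this sorting over successive fronts while evolving the population.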
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385
Topology and boundary shape optimization as an integrated design tool
NASA Technical Reports Server (NTRS)
Bendsoe, Martin Philip; Rodrigues, Helder Carrico
1990-01-01
The optimal topology of a two dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via a common FEM mesh generator and CAD-type input-output facilities.
Multidisciplinary aircraft conceptual design optimization considering fidelity uncertainties
NASA Astrophysics Data System (ADS)
Neufeld, Daniel
Aircraft conceptual design traditionally utilizes simplified analysis methods and empirical equations to establish the basic layout of new aircraft. Applying optimization methods to aircraft conceptual design may yield solutions that are found to violate constraints when more sophisticated analysis methods are introduced. The designer's confidence that proposed conceptual designs will meet their performance targets is limited when conventional optimization approaches are utilized. Therefore, there is a need for an optimization approach that takes into account the uncertainties that arise when traditional analysis methods are used in aircraft conceptual design optimization. This research introduces a new aircraft conceptual design optimization approach that utilizes the concept of Reliability Based Design Optimization (RBDO). RyeMDO, a framework for multi-objective, multidisciplinary RBDO was developed for this purpose. The performance and effectiveness of the RBDO-MDO approaches implemented in RyeMDO were evaluated to identify the most promising approaches for aircraft conceptual design optimization. Additionally, an approach for quantifying the errors introduced by approximate analysis methods was developed. The approach leverages available historical data to quantify the uncertainties introduced by approximate analysis methods in two engineering case studies: the conceptual design optimization of an aircraft wing box structure and the conceptual design optimization of a commercial aircraft. The case studies were solved with several of the most promising RBDO-MDO integrated approaches. The proposed approach yields more conservative solutions and estimates the risk associated with each solution, enabling designers to reduce the likelihood that conceptual aircraft designs will fail to meet objectives later in the design process.
A study of commuter airplane design optimization
NASA Technical Reports Server (NTRS)
Roskam, J.; Wyatt, R. D.; Griswold, D. A.; Hammer, J. L.
1977-01-01
Problems of commuter airplane configuration design were studied to effect a minimization of direct operating costs. Factors considered were the minimization of fuselage drag, methods of wing design, and the estimated drag of an airplane submerged in a propeller slipstream; all design criteria were studied under a set of fixed performance, mission, and stability constraints. Configuration design data were assembled for application by a computerized design methodology program similar to the NASA-Ames General Aviation Synthesis Program.
Control structure interaction/optimized design
NASA Technical Reports Server (NTRS)
Mclaren, Mark; Purvis, Chris
1994-01-01
The objective of this study is to apply the integrated design methodology to the mature GOES-1 spacecraft design, and to assess the possible advantages to be gained using this approach over the conventional sequential design approach used for the current design. In the process, the development of this technology into a tool that can be utilized for future near-term spacecraft designs is emphasized.
Web Based Learning Support for Experimental Design in Molecular Biology.
ERIC Educational Resources Information Center
Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob
An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…
Design and Optimization of Composite Gyroscope Momentum Wheel Rings
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
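The mass-versus-angular-momentum trade described above can be sketched with a thin-ring idealization (an assumption for illustration only; the report performs a full anisotropic stress analysis, and the material numbers below are rough order-of-magnitude values, not the report's data):

```python
from math import sqrt

def momentum_per_mass(radius, sigma_allow, density):
    """Thin-ring sketch: hoop stress in a spinning ring is
    sigma = density * (omega * radius)^2, so the allowable stress caps
    the rim speed at v_max = sqrt(sigma_allow / density). Angular
    momentum per unit mass is then H/m = radius * v_max, which scales
    with the material's specific strength sqrt(sigma / rho)."""
    v_max = sqrt(sigma_allow / density)
    return radius * v_max

# Rough order-of-magnitude material values (assumed, not the report's data)
h_steel = momentum_per_mass(0.2, 600e6, 7800)    # ~55 N*m*s per kg
h_cfrp = momentum_per_mass(0.2, 1500e6, 1600)    # ~194 N*m*s per kg
```

Even this crude model shows why the graphite/epoxy design dominates on a per-mass basis: stored momentum at the stress limit grows with specific strength, where polymer matrix composites far exceed metals.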
An uncertain multidisciplinary design optimization method using interval convex models
NASA Astrophysics Data System (ADS)
Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong
2013-06-01
This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
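A common interval-number transformation consistent with the description above maps an interval-valued objective to a deterministic midpoint/radius pair. This is an assumed, generic scheme for illustration; the article's exact formulas (and its satisfaction-degree constraint handling) are not reproduced in the abstract:

```python
def interval_objectives(f_lo, f_hi, x):
    """Transform an interval-valued objective [f_lo(x), f_hi(x)] into
    two deterministic objectives: the interval midpoint (nominal
    performance) and the interval radius (sensitivity to the
    uncertain-but-bounded parameters). Both are then minimized,
    e.g. by a multi-objective GA such as NSGA-II."""
    lo, hi = f_lo(x), f_hi(x)
    return 0.5 * (lo + hi), 0.5 * (hi - lo)

# Hypothetical uncertain objective: a response with a +/-10% bounded factor
mid, rad = interval_objectives(lambda x: 0.9 * x ** 2,
                               lambda x: 1.1 * x ** 2, 2.0)
# mid = 4.0 (nominal), rad = 0.4 (uncertainty width)
```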
Design optimization of a magnetorheological brake in powered knee orthosis
NASA Astrophysics Data System (ADS)
Ma, Hao; Liao, Wei-Hsin
2015-04-01
Magneto-rheological (MR) fluids have been utilized in devices like orthoses and prostheses to generate controllable braking torque. In this paper, a flat-shape rotary MR brake is designed for a powered knee orthosis to provide adjustable resistance. A multiple-disk structure with an interior inner coil is adopted in the MR brake configuration. In order to increase the maximal magnetic flux, a novel internal structure design with a smooth transition surface is proposed. Based on this design, a parameterized model of the MR brake is built for geometrical optimization. Multiple factors are considered in the optimization objective: braking torque, weight, and, particularly, average power consumption. The optimization is then performed with Finite Element Analysis (FEA), and the optimal design is obtained from among the Pareto-optimal set considering the trade-offs in design objectives.
Optimal shielding design for minimum materials cost or mass
Woolley, Robert D.
2015-12-02
The mathematical underpinnings of cost optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
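The optimality conditions invoked here take their standard calculus-of-variations form. In generic notation (the shielding-specific cost functional is not given in the abstract, so symbols below are placeholders: L the running cost, y the state, u the material-deployment control, lambda the costate):

```latex
\frac{\partial L}{\partial y} - \frac{d}{dx}\!\left(\frac{\partial L}{\partial y'}\right) = 0,
\qquad
H(x, y, \lambda, u) = L + \lambda^{\mathsf{T}} f(x, y, u),
\qquad
u^{\ast} = \arg\min_{u}\, H,
```

i.e. the Euler–Lagrange stationarity condition for small admissible variations, and Pontryagin's requirement that an optimal control minimize the Hamiltonian pointwise.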
Gearbox design for uncertain load requirements using active robust optimization
NASA Astrophysics Data System (ADS)
Salomon, Shaul; Avigad, Gideon; Purshouse, Robin C.; Fleming, Peter J.
2016-04-01
Design and optimization of gear transmissions have been intensively studied, but surprisingly the robustness of the resulting optimal design to uncertain loads has never been considered. Active Robust (AR) optimization is a methodology to design products that attain robustness to uncertain or changing environmental conditions through adaptation. In this study the AR methodology is utilized to optimize the number of transmissions, as well as their gearing ratios, for an uncertain load demand. The problem is formulated as a bi-objective optimization problem where the objectives are to satisfy the load demand in the most energy efficient manner and to minimize production cost. The results show that this approach can find a set of robust designs, revealing a trade-off between energy efficiency and production cost. This can serve as a useful decision-making tool for the gearbox design process, as well as for other applications.
Optimal input design for aircraft instrumentation systematic error estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1991-01-01
A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.
NASA Astrophysics Data System (ADS)
Orszulik, Ryan R.; Shan, Jinjun
2012-12-01
A genetic algorithm is implemented to identify the transfer function of an experimental system consisting of a flexible manipulator with a collocated piezoelectric sensor/actuator pair. A multi-mode positive position feedback controller is then designed based upon the identified transfer function. To this end, the same iteratively implemented genetic algorithm is used to optimize all controller parameters by minimization of the closed loop H∞-norm. The designed controller is then applied for vibration suppression on the experimental system.
Optimizing spacecraft design - optimization engine development : progress and plans
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim
2003-01-01
At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.
Three-dimensional subsonic diffuser design optimization and analysis
NASA Astrophysics Data System (ADS)
Zhang, Wei-Li
A novel methodology is developed to integrate state-of-the-art CFD analysis, the Non-uniform Rational B-Spline technique (NURBS) and optimization theory to reduce total pressure distortion and sustain or improve total pressure recovery within a curved three dimensional subsonic diffuser. Diffusing S-shaped ducts are representative of curved subsonic diffusers and are characterized by the S-shaped curvature of the duct's centerline and their increasing cross-sectional area. For aircraft inlet applications the measure of duct aerodynamic performance is the ability to decelerate the flow to the desired velocity while maintaining high total pressure recovery and flow near-uniformity. Reduced total pressure recovery lowers propulsion efficiency, whereas nonuniform flow conditions at the engine face lower engine stall and surge limits. Three degrees of freedom are employed as the number of independent design variables. The change of the surface shape is assumed to be Gaussian. The design variables are the location of the flow separation, the width and height of the Gaussian change. The General Aerodynamic Simulation Program (GASP) with the Baldwin-Lomax turbulence model is employed for the flow field prediction and proved to give good agreement with the experimental results for the baseline diffuser geometry. With the automatic change of the design variables, the configuration of the diffuser surface shape is able to be changed while keeping the entrance and exit of the diffuser unchanged in order to meet the specification of the engine and inlet. A trade study was performed which analyzed more than 10 configurations of the modified diffuser. Surface static pressure, surface flow visualization, and exit plane total pressure and transverse velocity data were acquired. The aerodynamic performance of each configuration was assessed by calculating total pressure recovery and spatial distortion elements. The automated design optimization is performed with a gradient
Design and optimization of microstructured optical fiber sensors
NASA Astrophysics Data System (ADS)
Jewart, Charles Milford
2011-12-01
The integration of sensor networks into large civil and mechanical structures is becoming an important engineering practice to ensure the structural health of important infrastructure and power generation facilities. The temperature, pressure, and internal stress distribution within the structures are key parameters to monitor the structural health of a system. Optical fiber sensors are one of the most common sensing elements used in the structural health monitoring due to their compact size, low cost, electrical immunity, and multiplexing ability. In this dissertation, the design and optimization of air-hole microstructured optical fibers for use as application specific sensors is presented. Air hole matrices are used to design fiber cores with a large birefringence; while air hole arrays within the fiber cladding are studied and optimized to engineer unique geometries that can give desired sensitivity and directionality of the fiber sensors. A pure silica core microstructured photonic crystal fiber was designed for hydrostatic pressure sensing. The impact of the surrounding air-holes to the propagation mode profiles and indices were studied and improved. To improve directionality and sensitivity of fiber sensors, air holes in the fiber cladding were implemented and optimized in the design of the fiber. Finite element analysis simulations were performed to elicit the correlation between air-hole configuration and the fiber sensor's performance and impact of the fiber's opto-mechanic properties. To measure pressure and stress at high temperature, an ultrafast laser was used to inscribe type II gratings in two-hole microstructured optical fibers and suspended core fibers. The fiber Bragg grating resonance wavelength shift and peak splitting were studied as a function of external pressure, bending, and lateral compression. Fiber sensors in two-hole fibers show stable and reproducible operation above 800°C. Fiber grating sensor in suspended core fibers exhibits high
A Bayesian A-optimal and model robust design criterion.
Zhou, Xiaojie; Joseph, Lawrence; Wolfson, David B; Bélisle, Patrick
2003-12-01
Suppose that the true model underlying a set of data is one of a finite set of candidate models, and that parameter estimation for this model is of primary interest. With this goal, optimal design must depend on a loss function across all possible models. A common method that accounts for model uncertainty is to average the loss over all models; this is the basis of what is known as Läuter's criterion. We generalize Läuter's criterion and show that it can be placed in a Bayesian decision theoretic framework, by extending the definition of Bayesian A-optimality. We use this generalized A-optimality to find optimal design points in an environmental safety setting. In estimating the smallest detectable trace limit in a water contamination problem, we obtain optimal designs that are quite different from those suggested by standard A-optimality.
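The model-averaged A-optimality criterion described above can be sketched numerically. This is a simplified illustration under an assumed linear-versus-quadratic candidate pair with equal prior weights, not the authors' water-contamination setting:

```python
import numpy as np

def a_score(X):
    """A-optimality loss for one candidate model with design matrix X:
    trace of (X^T X)^{-1}, i.e. the summed variance of the parameter
    estimates. Smaller is better."""
    return np.trace(np.linalg.inv(X.T @ X))

def averaged_a_score(t, models, weights):
    """Lauter-style criterion: average the A-optimality loss over all
    candidate models, weighted by prior model probabilities."""
    return sum(w * a_score(build(t)) for build, w in zip(models, weights))

# Hypothetical candidate models: straight line vs. quadratic in t
linear = lambda t: np.column_stack([np.ones_like(t), t])
quadratic = lambda t: np.column_stack([np.ones_like(t), t, t ** 2])

spread = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
clustered = np.array([0.90, 0.95, 1.00, 1.05, 1.10])

score_spread = averaged_a_score(spread, [linear, quadratic], [0.5, 0.5])
score_clustered = averaged_a_score(clustered, [linear, quadratic], [0.5, 0.5])
```

Spreading the design points lowers the averaged loss for both models, since clustered points make each information matrix nearly singular; a search over candidate designs would minimize this averaged score.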
Synthetic Gene Design Using Codon Optimization On-Line (COOL).
Yu, Kai; Ang, Kok Siong; Lee, Dong-Yup
2017-01-01
Codon optimization has been widely used for designing native or synthetic genes to enhance their expression in heterologous host organisms. We recently developed Codon Optimization On-Line (COOL) which is a web-based tool to provide multi-objective codon optimization functionality for synthetic gene design. COOL provides a simple and flexible interface for customizing codon optimization based on several design parameters such as individual codon usage, codon pairing, and codon adaptation index. User-defined sequences can also be compared against the COOL optimized ones to show the extent by which the user's sequences can be evaluated and further improved. The utility of COOL is demonstrated via a case study where the codon optimized sequence of an invertase enzyme is generated for the enhanced expression in E. coli. PMID:27671929
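The codon adaptation index (CAI) listed among COOL's design parameters is the geometric mean of per-codon relative-adaptiveness weights. A minimal sketch follows; the weight table is hypothetical, not measured E. coli usage:

```python
from math import prod

# Hypothetical relative-adaptiveness weights for a host organism: the
# most-used synonymous codon gets w = 1.0, rarer synonyms get w < 1.
# (Illustrative numbers only, not a real codon-usage table.)
W = {"AAA": 1.00, "AAG": 0.25, "GAA": 1.00, "GAG": 0.31}

def cai(cds, w=W):
    """Codon Adaptation Index of a coding sequence: the geometric mean
    of the weights of its codons (length must be a multiple of 3)."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return prod(w[c] for c in codons) ** (1.0 / len(codons))

cai("AAAGAA")   # all preferred codons -> 1.0
cai("AAGGAG")   # rare synonyms -> sqrt(0.25 * 0.31), about 0.278
```

A codon optimizer raises this score (among other objectives) by swapping rare synonymous codons for the host's preferred ones without changing the encoded protein.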
Optimal experiment design for quantum state tomography: Fair, precise, and minimal tomography
Nunn, J.; Smith, B. J.; Puentes, G.; Walmsley, I. A.; Lundeen, J. S.
2010-04-15
Given an experimental setup and a fixed number of measurements, how should one take data to optimally reconstruct the state of a quantum system? The problem of optimal experiment design (OED) for quantum state tomography was first broached by Kosut et al.[R. Kosut, I. Walmsley, and H. Rabitz, e-print arXiv:quant-ph/0411093 (2004)]. Here we provide efficient numerical algorithms for finding the optimal design, and analytic results for the case of 'minimal tomography'. We also introduce the average OED, which is independent of the state to be reconstructed, and the optimal design for tomography (ODT), which minimizes tomographic bias. Monte Carlo simulations confirm the utility of our results for qubits. Finally, we adapt our approach to deal with constrained techniques such as maximum-likelihood estimation. We find that these are less amenable to optimization than cruder reconstruction methods, such as linear inversion.
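The "cruder reconstruction methods, such as linear inversion" can be made concrete for a single qubit: the density matrix is read off directly from measured Pauli expectation values. This is a textbook sketch, not the authors' optimized-design procedure; note that with noisy data nothing forces the result to be a physical (positive semidefinite) state, which is what maximum-likelihood estimation enforces:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def linear_inversion(rx, ry, rz):
    """Reconstruct a qubit density matrix from the three measured
    Pauli expectation values <X>, <Y>, <Z> via linear inversion:
    rho = (I + <X> X + <Y> Y + <Z> Z) / 2."""
    return 0.5 * (I2 + rx * SX + ry * SY + rz * SZ)

rho = linear_inversion(0.0, 0.0, 1.0)   # recovers |0><0|
```

Because the map from expectation values to the state is linear, the optimization question reduces to choosing which measurement settings (and how many shots of each) minimize the reconstruction variance or bias.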
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
Multidisciplinary design optimization of mechatronic vehicles with active suspensions
NASA Astrophysics Data System (ADS)
He, Yuping; McPhee, John
2005-05-01
A multidisciplinary optimization method is applied to the design of mechatronic vehicles with active suspensions. The method is implemented in a GA-A'GEM-MATLAB simulation environment in such a way that the linear mechanical vehicle model is designed in a multibody dynamics software package, i.e. A'GEM, the controllers and estimators are constructed using the linear quadratic Gaussian (LQG) method and a Kalman filter algorithm in Matlab, and the combined mechanical and control model is then optimized simultaneously using a genetic algorithm (GA). The design variables include passive parameters and control parameters. In the numerical optimizations, both random and deterministic road inputs and both perfect measurement of full state variables and estimated limited state variables are considered. Optimization results show that the active suspension systems based on the multidisciplinary optimization method have better overall performance than those derived using conventional design methods with the LQG algorithm.
Factorial Design to Optimize Biosurfactant Production by Yarrowia lipolytica
Fontes, Gizele Cardoso; Fonseca Amaral, Priscilla Filomena; Nele, Marcio; Zarur Coelho, Maria Alice
2010-01-01
In order to improve biosurfactant production by Yarrowia lipolytica IMUFRJ 50682, a factorial design was carried out. A 2⁴ full factorial design was used to investigate the effects of nitrogen sources (urea, ammonium sulfate, yeast extract, and peptone) on maximum variation of surface tension (ΔST) and emulsification index (EI). The best results (67.7% of EI and 20.9 mN m⁻¹ of ΔST) were obtained in a medium composed of 10 g l⁻¹ of ammonium sulfate and 0.5 g l⁻¹ of yeast extract. Then, the effects of carbon sources (glycerol, hexadecane, olive oil, and glucose) were evaluated. The most favorable medium for biosurfactant production was composed of both glucose (4% w/v) and glycerol (2% w/v), which provided an EI of 81.3% and a ΔST of 19.5 mN m⁻¹. The experimental design optimization enhanced EI by 110.7% and ΔST by 108.1% in relation to the standard process. PMID:20368788
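A full two-level factorial design over four factors simply enumerates every low/high combination, 16 runs in total. A minimal sketch (coded ±1 levels only; the actual concentrations from the study are not reproduced here):

```python
from itertools import product

# The four nitrogen-source factors screened in the study; levels are coded
# values only (-1 = low, +1 = high), not the actual concentrations.
factors = ["urea", "ammonium_sulfate", "yeast_extract", "peptone"]

# Full 2^4 design: every low/high combination of the four factors, 16 runs.
design = [dict(zip(factors, levels)) for levels in product((-1, 1), repeat=4)]
```

Each factor column is balanced (eight low runs, eight high runs), which is what lets main effects be estimated independently.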
Experimental qualification of a code for optimizing gamma irradiation facilities
NASA Astrophysics Data System (ADS)
Mosse, D. C.; Leizier, J. J. M.; Keraron, Y.; Lallemant, T. F.; Perdriau, P. D. M.
Dose computation codes are a prerequisite for the design of gamma irradiation facilities. Code quality is a basic factor in the achievement of sound economic and technical performance by the facility. This paper covers the validation of a code by reference dosimetry experiments. Developed by the "Société Générale pour les Techniques Nouvelles" (SGN), a supplier of irradiation facilities and member of the CEA Group, the code is currently used by that company. (ERHART, KERARON, 1986) Experimental data were obtained under conditions representative of those prevailing in the gamma irradiation of foodstuffs. Irradiation was performed in POSEIDON, a Cobalt 60 cell of ORIS-I. Several Cobalt 60 rods of known activity are arranged in a planar array typical of industrial irradiation facilities. Pallet density is uniform, ranging from 0 (air) to 0.6. Reference dosimetry measurements were performed by the "Laboratoire de Métrologie des Rayonnements Ionisants" (LMRI) of the "Bureau National de Métrologie" (BNM). The procedure is based on the positioning of more than 300 ESR/alanine dosemeters throughout the various target volumes used. The reference quantity was the absorbed dose in water. The code was validated by a comparison of experimental and computed data. It has proved to be an effective tool for the design of facilities meeting the specific requirements applicable to foodstuff irradiation, which are often difficult to meet.
Optimal Design of Aortic Leaflet Prosthesis
NASA Technical Reports Server (NTRS)
Ghista, Dhanjoo N.; Reul, Helmut; Ray, Gautam; Chandran, K. B.
1978-01-01
The design criteria for an optimum prosthetic-aortic leaflet valve are a smooth washout in the valve cusps, minimal leaflet stress, minimal transmembrane pressure for the valve to open, an adequate lifetime (for a given blood-compatible leaflet material's fatigue data). A rigorous design analysis is presented to obtain the prosthetic tri-leaflet aortic valve leaflet's optimum design parameters. Four alternative optimum leaflet geometries are obtained to satisfy the criteria of a smooth washout and minimal leaflet stress. The leaflet thicknesses of these four optimum designs are determined by satisfying the two remaining design criteria for minimal transmembrane opening pressure and adequate fatigue lifetime, which are formulated in terms of the elastic and fatigue properties of the selected leaflet material - Avcothane-51 (of the Avco-Everett Co. of Massachusetts). Prosthetic valves are fabricated on the basis of the optimum analysis and the resulting detailed engineering drawings of the designs are also presented in the paper.
Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adapted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.
Origami Optimization: Role of Symmetry in Accelerating Design
NASA Astrophysics Data System (ADS)
Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Durstock, Michael; Reich, Gregory; Joo, James; Vaia, Richard
Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. Design optimization tools have recently been developed to predict optimal fold patterns with mechanics-based metrics, such as the maximal energy storage, auxetic response and actuation. Origami actuator design problems possess inherent symmetries associated with the grid, mechanical boundary conditions and the objective function, which are often exploited to reduce the design space and computational cost of optimization. However, enforcing symmetry eliminates the prediction of potentially better performing asymmetric designs, which are more likely to exist given the discrete nature of fold line optimization. To better understand this effect, actuator design problems with different combinations of rotation and reflection symmetries were optimized while varying the number of folds allowed in the final design. In each case, the optimal origami patterns transitioned between symmetric and asymmetric solutions depending on the number of folds available for the design, with fewer symmetries present as more fold lines were allowed. This study investigates the interplay of symmetry and discrete vs continuous optimization in origami actuators and provides insight into how the symmetries of the reference grid regulate the performance landscape. This work was supported by the Air Force Office of Scientific Research.
Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E
2011-03-31
An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment is assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment
Optimal design of spatial distribution networks
NASA Astrophysics Data System (ADS)
Gastner, Michael T.; Newman, M. E. J.
2006-07-01
We consider the problem of constructing facilities such as hospitals, airports, or malls in a country with a nonuniform population density, such that the average distance from a person’s home to the nearest facility is minimized. We review some previous approximate treatments of this problem that indicate that the optimal distribution of facilities should have a density that increases with population density, but does so slower than linearly, as the two-thirds power. We confirm this result numerically for the particular case of the United States with recent population data using two independent methods, one a straightforward regression analysis, the other based on density-dependent map projections. We also consider strategies for linking the facilities to form a spatial network, such as a network of flights between airports, so that the combined cost of maintenance of and travel on the network is minimized. We show specific examples of such optimal networks for the case of the United States.
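The two-thirds power law above implies a simple allocation rule: if facilities are shared among regions in proportion to population raised to the 2/3 power, a region with eight times the population receives only four times the facilities. A minimal sketch (a fractional allocation over discrete regions, not the authors' continuous-density analysis):

```python
def facility_allocation(populations, n_facilities):
    """Share n_facilities among regions in proportion to population^(2/3)."""
    weights = [p ** (2.0 / 3.0) for p in populations]
    total = sum(weights)
    return [n_facilities * w / total for w in weights]

# A region with 8x the population gets only 4x the facilities (8^(2/3) = 4).
shares = facility_allocation([8.0, 1.0], n_facilities=5)
```

Sublinear scaling is the key point: doubling population does not double the optimal facility count, because facilities in dense regions each serve more people over shorter distances.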
Conceptual design report, CEBAF basic experimental equipment
1990-04-13
The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Guerdal, Z.
1992-01-01
An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program, Automated Design Synthesis (ADS), is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.
New approaches to the design optimization of hydrofoils
NASA Astrophysics Data System (ADS)
Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas
2015-11-01
Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency, which is measured as a finite-time-averaged approximation of the infinite-time-averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time-averaged approximation of the infinite-time-averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.
Experimental Stream Facility: Design and Research
The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...
D-optimal design applied to binding saturation curves of an enkephalin analog in rat brain
Verotta, D.; Petrillo, P.; La Regina, A.; Rocchetti, M.; Tavani, A.
1988-01-01
The D-optimal design, a minimal sample design that minimizes the volume of the joint confidence region for the parameters, was used to evaluate binding parameters in a saturation curve with a view to reducing the number of experimental points without losing accuracy in binding parameter estimates. Binding saturation experiments were performed in rat brain crude membrane preparations with the opioid μ-selective ligand [³H]-(D-Ala², MePhe⁴, Gly-ol⁵)enkephalin (DAGO), using a sequential procedure. The first experiment consisted of a wide-range saturation curve, which confirmed that [³H]-DAGO binds only one class of specific sites and non-specific sites, and gave information on the experimental range and a first estimate of binding affinity (Ka), capacity (Bmax) and non-specific constant (k). On this basis the D-optimal design was computed and sequential experiments were performed, each covering a wide-range traditional saturation curve, the D-optimal design, and a splitting of the D-optimal design with the addition of 2 points (±15% of the central point). No appreciable differences were obtained with these designs in parameter estimates and their accuracy. Thus, sequential experiments based on D-optimal design seem a valid method for accurate determination of binding parameters, using far fewer points with no loss in parameter estimation accuracy.
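A D-optimal design maximizes the determinant of the information matrix FᵀF, where each row of F holds the partial derivatives of the model response with respect to the parameters at one design point. A minimal sketch for a one-site-plus-nonspecific binding model, B(x) = Bmax·x/(Kd + x) + k·x (the parameter values and candidate grid below are illustrative, not those of the study):

```python
from itertools import combinations

import numpy as np

def jacobian_row(x, bmax, kd, k):
    # Partial derivatives of B(x) = bmax*x/(kd+x) + k*x w.r.t. (bmax, kd, k)
    return [x / (kd + x), -bmax * x / (kd + x) ** 2, x]

def d_criterion(xs, bmax=100.0, kd=1.0, k=0.5):
    """det(F^T F) for a candidate set of concentrations xs."""
    F = np.array([jacobian_row(x, bmax, kd, k) for x in xs])
    return np.linalg.det(F.T @ F)

# Exhaustive search for the best 3-point design over a log-spaced grid of
# candidate ligand concentrations (illustrative range).
candidates = np.geomspace(0.01, 100.0, 25)
best = max(combinations(candidates, 3), key=d_criterion)
```

Note the design depends on a prior parameter guess, which is why the study proceeds sequentially: a wide-range pilot curve supplies the initial estimates that the D-optimal computation needs.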
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
A study of commuter airplane design optimization
NASA Technical Reports Server (NTRS)
Keppel, B. V.; Eysink, H.; Hammer, J.; Hawley, K.; Meredith, P.; Roskam, J.
1978-01-01
The usability of the general aviation synthesis program (GASP) was enhanced by the development of separate computer subroutines which can be added as a package to this assembly of computerized design methods or used as a separate subroutine program to compute the dynamic longitudinal, lateral-directional stability characteristics for a given airplane. Currently available analysis methods were evaluated to ascertain those most appropriate for the design functions which the GASP computerized design program performs. Methods for providing proper constraint and/or analysis functions for GASP were developed as well as the appropriate subroutines.
Optimizing Balanced Incomplete Block Designs for Educational Assessments
ERIC Educational Resources Information Center
van der Linden, Wim J.; Veldkamp, Bernard P.; Carlson, James E.
2004-01-01
A popular design in large-scale educational assessments as well as any other type of survey is the balanced incomplete block design. The design is based on an item pool split into a set of blocks of items that are assigned to sets of "assessment booklets." This article shows how the problem of calculating an optimal balanced incomplete block…
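The defining property of a balanced incomplete block design is that every pair of items appears together in the same number of blocks. A minimal sketch checking this for a classical example, 7 items arranged in 7 blocks of size 3 (purely illustrative, not tied to any particular assessment):

```python
from collections import Counter
from itertools import combinations

# A classical balanced incomplete block design: 7 items in 7 blocks of size 3
# (the Fano plane). Every pair of items shares exactly one block (lambda = 1).
blocks = [
    (1, 2, 3), (1, 4, 5), (1, 6, 7),
    (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6),
]

# Count how often each unordered pair of items co-occurs in a block.
pair_counts = Counter(
    pair for b in blocks for pair in combinations(sorted(b), 2)
)
```

In the assessment setting, items play the role of item blocks and blocks the role of booklets; the balance property is what makes item-pair statistics estimable from the booklet data.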
Yang, Xiaoyan; Patel, Sulabh; Sheng, Ye; Pal, Dhananjay; Mitra, Ashim K
2014-06-01
The aim of this investigation was to develop hydrocortisone butyrate (HB)-loaded poly(D,L-lactic-co-glycolic acid) (PLGA) nanoparticles (NP) with ideal encapsulation efficiency (EE), particle size, and drug loading (DL) using the emulsion solvent evaporation technique together with various statistical experimental design modules. Experimental designs were used to investigate specific effects of independent variables during preparation of HB-loaded PLGA NP and corresponding responses in optimizing the formulation. A Plackett-Burman design for independent variables was first conducted to prescreen various formulation and process variables during the development of NP. Selected primary variables were further optimized by central composite design. This process leads to an optimum formulation with desired EE, particle size, and DL. Contour plots and response surface curves display visual diagrammatic relationships between the experimental responses and input variables. The concentration of PLGA, drug, and polyvinyl alcohol and sonication time were the critical factors influencing the responses analyzed. Optimized formulation showed EE of 90.6%, particle size of 164.3 nm, and DL of 64.35%. This study demonstrates that statistical experimental design methodology can optimize the formulation and process variables to achieve favorable responses for HB-loaded NP.
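A Plackett-Burman screening design of the kind used above can be built by cyclically shifting a generator row and appending a row of low levels; the classic 12-run design screens up to 11 factors with mutually orthogonal columns. A sketch of that textbook construction (not the specific design matrix used in the study):

```python
import numpy as np

# Classical 12-run Plackett-Burman construction: cycle the standard
# generator row to produce 11 runs, then append a row of all -1s.
generator = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

rows = [generator[-i:] + generator[:-i] for i in range(11)]
design = np.array(rows + [[-1] * 11])   # shape (12 runs, 11 factors)
```

Orthogonality (the columns satisfy XᵀX = 12·I) is what allows 11 main effects to be screened independently from only 12 runs, at the cost of confounding them with interactions.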
Optimization and experimental verification of coplanar interdigital electroadhesives
NASA Astrophysics Data System (ADS)
Guo, J.; Bamber, T.; Chamberlain, M.; Justham, L.; Jackson, M.
2016-10-01
A simplified and novel theoretical model for coplanar interdigital electroadhesives has been presented in this paper. The model has been verified based on a mechatronic and reconfigurable testing platform, and a repeatable testing procedure. The theoretical results have shown that, for interdigital electroadhesive pads to achieve the maximum electroadhesive forces on non-conductive substrates, there is an optimum ratio of electrode width to space between electrodes (width/space), approximately 1.8. On conductive substrates, however, the width/space ratio should be as large as possible. The 2D electrostatic simulation results have shown that the optimum ratio is significantly affected by the existence of the air gap and substrate thickness variation. A novel analysis of the force between the electroadhesive pad and the substrate has highlighted that it is inappropriate to derive normal forces by dividing the measured shear forces by the friction coefficients. In addition, the electroadhesive forces obtained over a 5-day period in an ambient environment have highlighted the importance of controlling the environment when testing the pads to validate the models. Using this reliable experimental platform and procedure, the measured results have validated the theoretical ones. These results provide useful insights for the investigation of environmentally stable and optimized electroadhesives.
Integrated structure/control law design by multilevel optimization
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.; Schmidt, David K.
1989-01-01
A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.
Junker, Astrid; Muraya, Moses M.; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Klukas, Christian; Melchinger, Albrecht E.; Meyer, Rhonda C.; Riewe, David; Altmann, Thomas
2015-01-01
Detailed and standardized protocols for plant cultivation in environmentally controlled conditions are an essential prerequisite to conduct reproducible experiments with precisely defined treatments. Setting up appropriate and well defined experimental procedures is thus crucial for the generation of solid evidence and indispensable for successful plant research. Non-invasive and high throughput (HT) phenotyping technologies offer the opportunity to monitor and quantify performance dynamics of several hundreds of plants at a time. Compared to small scale plant cultivations, HT systems have much higher demands, from a conceptual and a logistic point of view, on experimental design, as well as the actual plant cultivation conditions, and the image analysis and statistical methods for data evaluation. Furthermore, cultivation conditions need to be designed that elicit plant performance characteristics corresponding to those under natural conditions. This manuscript describes critical steps in the optimization of procedures for HT plant phenotyping systems. Starting with the model plant Arabidopsis, HT-compatible methods were tested, and optimized with regard to growth substrate, soil coverage, watering regime, experimental design (considering environmental inhomogeneities) in automated plant cultivation and imaging systems. As revealed by metabolite profiling, plant movement did not affect the plants' physiological status. Based on these results, procedures for maize HT cultivation and monitoring were established. Variation of maize vegetative growth in the HT phenotyping system did match well with that observed in the field. The presented results outline important issues to be considered in the design of HT phenotyping experiments for model and crop plants. It thereby provides guidelines for the setup of HT experimental procedures, which are required for the generation of reliable and reproducible data of phenotypic variation for a broad range of applications. PMID
SEMICONDUCTOR DEVICES: Optimization of grid design for solar cells
NASA Astrophysics Data System (ADS)
Wen, Liu; Yueqiang, Li; Jianjun, Chen; Yanling, Chen; Xiaodong, Wang; Fuhua, Yang
2010-01-01
By theoretical simulation of two grid patterns that are often used in concentrator solar cells, we give a detailed and comprehensive analysis of the influence of the metal grid dimension and various losses directly associated with it during optimization of grid design. Furthermore, we also perform the simulation under different concentrator factors, making the optimization of the front contact grid for solar cells complete.
Starting designs for the computer optimization of optical coatings
NASA Astrophysics Data System (ADS)
Baumeister, Philip
1995-08-01
Several generic starting designs are used for the computer optimization of multilayer optical coatings. The first is a stack of many thin layers. Another, which is applicable to the needle-layer optimization method, is at least one thick layer. Examples include the following: a metallic reflector, a dark mirror, and total internal reflection with a prescribed differential phase shift.
Post-optimality analysis in aerospace vehicle design
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.
1993-01-01
This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realizable by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
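The first-order post-optimality idea can be seen in one line: at a constrained optimum, the Lagrange multiplier of an active constraint equals the derivative of the optimal objective with respect to that constraint's bound, so small perturbations can be predicted without re-optimizing. A toy sketch (a one-variable problem, not the aircraft design model from the paper):

```python
# Minimize f(x) = x^2 subject to x >= b. For b > 0 the constraint is active,
# the optimum is f*(b) = b^2, and the Lagrange multiplier lambda = 2b equals
# the first-order sensitivity d f*/d b.

def f_star(b):
    return b * b              # optimal objective with the constraint active

b = 3.0
lam = 2.0 * b                 # multiplier at the optimum

db = 1e-4                     # small perturbation of the constraint bound
predicted = f_star(b) + lam * db   # first-order post-optimality estimate
actual = f_star(b + db)            # true re-optimized value
```

The prediction error is second order in the perturbation, which is why the paper's estimates stay within a few percent over practical parameter ranges.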
The Implications of "Contamination" for Experimental Design in Education
ERIC Educational Resources Information Center
Rhoads, Christopher H.
2011-01-01
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Development of the Biological Experimental Design Concept Inventory (BEDCI)
ERIC Educational Resources Information Center
Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur
2014-01-01
Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…
Precision of Sensitivity in the Design Optimization of Indeterminate Structures
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.
2006-01-01
Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.
Optimal experiment design for model selection in biochemical networks
2014-01-01
Background Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. Conclusions We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
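As a rough sketch of the design-selection idea in the abstract above, the following replaces the paper's k-nearest-neighbour estimator with a simple shared-bin histogram estimate of the Jensen-Shannon divergence between predictive samples; the two exponential "models", the candidate designs, and all constants are hypothetical illustrations, not the paper's networks.

```python
import math
import random

def js_divergence(samples_p, samples_q, bins=30):
    """Histogram estimate of the Jensen-Shannon divergence (nats).

    A simplified stand-in for the paper's k-nearest-neighbour
    estimator: both sample sets are binned on a shared grid and
    JS = 0.5*KL(P||M) + 0.5*KL(Q||M), with M the equal mixture.
    """
    lo = min(min(samples_p), min(samples_q))
    hi = max(max(samples_p), max(samples_q))
    width = (hi - lo) / bins or 1.0

    def hist(samples):
        h = [0.0] * bins
        for x in samples:
            i = min(int((x - lo) / width), bins - 1)
            h[i] += 1.0 / len(samples)
        return h

    p, q = hist(samples_p), hist(samples_q)
    js = 0.0
    for pi, qi in zip(p, q):
        mi = 0.5 * (pi + qi)
        if pi > 0:
            js += 0.5 * pi * math.log(pi / mi)
        if qi > 0:
            js += 0.5 * qi * math.log(qi / mi)
    return js

# Two hypothetical competing models: each maps a candidate design
# (here a single measurement time t) to a predictive distribution.
def model_a(t, rng):
    return rng.gauss(math.exp(-0.5 * t), 0.05)

def model_b(t, rng):
    return rng.gauss(math.exp(-1.5 * t), 0.05)

rng = random.Random(0)
designs = [0.1, 0.5, 1.0, 2.0, 4.0]
scores = {t: js_divergence([model_a(t, rng) for _ in range(2000)],
                           [model_b(t, rng) for _ in range(2000)])
          for t in designs}
best = max(scores, key=scores.get)  # design with most discriminating predictions
```

Designs where the two predictive densities barely overlap score near the ln 2 upper bound of the JS divergence, so the selected experiment is the one most likely to discriminate the models.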
Optimality criteria design and stress constraint processing
NASA Technical Reports Server (NTRS)
Levy, R.
1982-01-01
Methods for pre-screening stress constraints into either primary or side-constraint categories are reviewed; a projection method, which is developed from prior cycle stress resultant history, is introduced as an additional screening parameter. Stress resultant projections are also employed to modify the traditional stress-ratio, side-constraint boundary. A special application of structural modification reanalysis is applied to the critical stress constraints to provide feasible designs that are preferable to those obtained by conventional scaling. Sample problem executions show relatively short run times and fewer design cycle iterations to achieve low structural weights; those attained are comparable to the minimum values developed elsewhere.
Minimum weight design of structures via optimality criteria
NASA Technical Reports Server (NTRS)
Kiusalaas, J.
1972-01-01
The state of the art of automated structural design through the use of optimality criteria, with emphasis on aerospace applications, is reviewed. Constraints on stresses, displacements, and buckling strengths under static loading, as well as lower bound limits on natural frequencies and flutter speeds, are presented. It is presumed that the reader is experienced in finite element methods of analysis, but is not familiar with optimal design techniques.
A new method of optimal design for a two-dimensional diffuser by using dynamic programming
NASA Technical Reports Server (NTRS)
Gu, Chuangang; Zhang, Moujin; Chen, XI; Miao, Yongmiao
1991-01-01
A new method for predicting the optimal velocity distribution on the wall of a two-dimensional diffuser is presented. The method uses dynamic programming to solve the optimal control problem with inequality constraints on the state variables. The physical model of the optimization is designed to prevent separation of the boundary layer while approaching the maximum pressure ratio in a diffuser of a specified length. The computational results are in fair agreement with the experimental ones. The optimal velocity distribution on the diffuser wall is one in which the flow decelerates quickly at first and then smoothly, remaining near separation but always protected from it. The optimal velocity distribution can be used to design the contour of the diffuser.
Irradiation Design for an Experimental Murine Model
Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.
2010-12-07
In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still a lot of unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV Linac. This rodent model is focused on the research of the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used in order to evaluate the accuracy of the calculated dose using a commercial planning system. Challenges in this murine model are discussed.
Autism genetics: Methodological issues and experimental design.
Sacco, Roberto; Lintas, Carla; Persico, Antonio M
2015-10-01
Autism is a complex neuropsychiatric disorder of developmental origin, where multiple genetic and environmental factors likely interact resulting in a clinical continuum between "affected" and "unaffected" individuals in the general population. During the last two decades, relevant progress has been made in identifying chromosomal regions and genes in linkage or association with autism, but no single gene has emerged as a major cause of disease in a large number of patients. The purpose of this paper is to discuss specific methodological issues and experimental strategies in autism genetic research, based on fourteen years of experience in patient recruitment and association studies of autism spectrum disorder in Italy.
Optimizing Organization Design for the Future.
ERIC Educational Resources Information Center
Creth, Sheila
2000-01-01
Discussion of planning organization design within the higher education environment stresses the goal of integrating structure and process to maintain stability while increasing organizational flexibility. Considers organization culture, organization structure and processes, networked organizations, a networked organization in action, and personal…
Integrated design optimization research and development in an industrial environment
NASA Technical Reports Server (NTRS)
Kumar, V.; German, Marjorie D.; Lee, S.-J.
1989-01-01
An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.
Application of optimization techniques to vehicle design: A review
NASA Technical Reports Server (NTRS)
Prasad, B.; Magee, C. L.
1984-01-01
The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This limitation of the analysis will continue to be a major factor restricting the application of optimization to vehicle design.
Rapid Modeling, Assembly and Simulation in Design Optimization
NASA Technical Reports Server (NTRS)
Housner, Jerry
1997-01-01
A new capability for design is reviewed. This capability provides for rapid assembly of detail finite element models early in the design process where costs are most effectively impacted. This creates an engineering environment which enables comprehensive analysis and design optimization early in the design process. Graphical interactive computing makes it possible for the engineer to interact with the design while performing comprehensive design studies. This rapid assembly capability is enabled by the use of Interface Technology, to couple independently created models which can be archived and made accessible to the designer. Results are presented to demonstrate the capability.
Design of ophthalmic lens by using optimized aspheric surface coefficients
NASA Astrophysics Data System (ADS)
Chang, Ming-Wen; Sun, Wen-Shing; Tien, Chuen-Lin
1998-09-01
Coddington's equations can be used to eliminate the oblique astigmatic error in the design of ophthalmic lenses with spherical or other conicoidal surfaces, but it is difficult to obtain satisfactory results when designing nonconic aspheric ophthalmic lenses. In this paper we present an efficient approach based on optimization of the aspheric coefficients, which enables the design program to minimize the aberrations. Higher-order coefficients of aspheric surfaces can easily produce inflection points, which increase the difficulty of manufacturing; we addressed this problem by treating it as one of the optimization constraints. Nonconic aspheric designs also yield spectacle lenses that are considerably thinner and flatter than spherical or other conicoidal designs. Damped least squares methods are used in our design. Aspheric myopic ophthalmic lenses, aspheric hypermetropic lenses and cataract lenses were designed, and comparisons of the design examples' results are given.
Experimentally determined spectral optimization for dedicated breast computed tomography
Prionas, Nicolas D.; Huang, Shih-Ying; Boone, John M.
2011-02-15
Purpose: The current study aimed to experimentally identify the optimal technique factors (x-ray tube potential and added filtration material/thickness) to maximize soft-tissue contrast, microcalcification contrast, and iodine contrast enhancement using cadaveric breast specimens imaged with dedicated breast computed tomography (bCT). Secondarily, the study aimed to evaluate the accuracy of phantom materials as tissue surrogates and to characterize the change in accuracy with varying bCT technique factors. Methods: A cadaveric breast specimen was acquired under appropriate approval and scanned using a prototype bCT scanner. Inserted into the specimen were cylindrical inserts of polyethylene, water, iodine contrast medium (iodixanol, 2.5 mg/ml), and calcium hydroxyapatite (100 mg/ml). Six x-ray tube potentials (50, 60, 70, 80, 90, and 100 kVp) and three different filters (0.2 mm Cu, 1.5 mm Al, and 0.2 mm Sn) were tested. For each set of technique factors, the intensity (linear attenuation coefficient) and noise were measured within six regions of interest (ROIs): Glandular tissue, adipose tissue, polyethylene, water, iodine contrast medium, and calcium hydroxyapatite. Dose-normalized contrast to noise ratio (CNRD) was measured for pairwise comparisons among the six ROIs. Regression models were used to estimate the effect of tube potential and added filtration on intensity, noise, and CNRD. Results: Iodine contrast enhancement was maximized using 60 kVp and 0.2 mm Cu. Microcalcification contrast and soft-tissue contrast were maximized at 60 kVp. The 0.2 mm Cu filter achieved significantly higher CNRD for iodine contrast enhancement than the other two filters (p=0.01), but microcalcification contrast and soft-tissue contrast were similar using the copper and aluminum filters. The average percent difference in linear attenuation coefficient, across all tube potentials, for polyethylene versus adipose tissue was 1.8%, 1.7%, and 1.3% for 0.2 mm Cu, 1.5 mm Al, and 0.2 mm
Optimization of hydraulic machinery by exploiting previous successful designs
NASA Astrophysics Data System (ADS)
Kyriacou, S. A.; Weissenberger, S.; Grafenberger, P.; Giannakoglou, K. C.
2010-08-01
A design-optimization method for hydraulic machinery is proposed. Optimal designs are obtained using the appropriate CFD evaluation software driven by an evolutionary algorithm which is also assisted by artificial neural networks used as surrogate evaluation models or metamodels. As shown in a previous IAHR paper by the same authors, such an optimization method substantially reduces the CPU cost, since the metamodels can discard numerous non-promising candidate solutions generated during the evolution, at almost negligible CPU cost, without evaluating them by means of the costly CFD tool. The present paper extends the optimization method of the previous paper by making it capable of accommodating and exploiting useful information archived during previous relevant successful designs. So, instead of parameterizing the geometry of the hydraulic machine components, which inevitably leads to many design variables, enough to slow down the design procedure, in the proposed method all new designs are expressed as weighted combinations of the archived ones. The archived designs act as the design space bases. The role of the optimization algorithms is to find the set (or sets, where there is more than one objective and the Pareto front of non-dominated solutions is sought) of weight values corresponding to the hydraulic machine configuration(s) with optimal performance. Since the number of weights is much less than the number of design variables of the conventional shape parameterization, the design space dimension reduces and the CPU cost of the metamodel-assisted evolutionary algorithm is much lower. The design of a Francis runner is used to demonstrate the capabilities of the proposed method.
Li, Yin; Liu, Zhiqiang; Cui, Fengjie; Liu, Zhisheng; Zhao, Hui
2007-11-01
The objective of this study was to use statistically based experimental designs for the optimization of xylanase production from Alternaria mali ND-16. Ten components in the medium were screened for nutritional requirements. Three nutritional components, including NH(4)Cl, urea, and MgSO(4), were identified to significantly affect the xylanase production by using the Plackett-Burman experimental design. These three major components were subsequently optimized using the Doehlert experimental design. By using response surface methodology and canonical analysis, the optimal concentrations for xylanase production were: NH(4)Cl 11.34 g L(-1), urea 1.26 g L(-1), and MgSO(4) 0.98 g L(-1). Under these optimal conditions, the xylanase activity from A. mali ND-16 reached 30.35 U mL(-1). Verification of the optimization showed that xylanase production of 31.26 U mL(-1) was achieved. PMID:17846761
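The Plackett-Burman screening step described in the abstract above can be sketched as follows. The 12-run design below is the standard construction from a cyclic generator row; the synthetic response and its two "active factors" are illustrative stand-ins, not the paper's actual medium components or data.

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors.

    Built from the standard generator row by cyclic right-shifts,
    plus a final row with every factor at its low (-1) setting.
    """
    gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
    rows = [gen[-i:] + gen[:-i] if i else list(gen) for i in range(11)]
    rows.append([-1] * 11)
    return rows

def main_effects(design, response):
    """Effect of factor j = mean(y at +1) - mean(y at -1)."""
    n = len(design)
    return [sum(row[j] * y for row, y in zip(design, response)) / (n / 2)
            for j in range(len(design[0]))]

design = plackett_burman_12()

# Hypothetical screening response: only factors 0 and 3 are active,
# so their effects should dominate and the rest should be ~0.
response = [10 + 4 * row[0] - 2.5 * row[3] for row in design]
effects = main_effects(design, response)
```

Because the columns are balanced and mutually orthogonal, each main effect is estimated independently; ranking |effects| identifies the factors worth carrying into a follow-up (e.g. Doehlert or response-surface) design.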
Efficient optimal design of smooth optical freeform surfaces using ray targeting
NASA Astrophysics Data System (ADS)
Wu, Rengmao; Wang, Huihui; Liu, Peng; Zhang, Yaqin; Zheng, Zhenrong; Li, Haifeng; Liu, Xu
2013-07-01
An optimization design method is proposed for generating smooth freeform reflective and refractive surfaces. In this method, two optimization steps are employed for ray targeting. The first step aims to ensure the shape of the target illumination, and the second step is employed to further improve the irradiance uniformity. These two steps can provide significant savings of time because the time consuming Monte Carlo raytracing is not used during the optimization process. Both smooth freeform reflective surfaces and smooth freeform refractive surfaces can be designed, and the target illumination could be achieved just by controlling the positions of several hundred predefined rays on the target plane with these two steps. The simulation results and the experimental tests show that this optimization design method is robust and efficient.
Information measures in nonlinear experimental design
NASA Technical Reports Server (NTRS)
Niple, E.; Shaw, J. H.
1980-01-01
Some different approaches to the problem of designing experiments which estimate the parameters of nonlinear models are discussed. The assumption in these approaches that the information in a set of data can be represented by a scalar is criticized, and the nonscalar discrimination information is proposed as the proper measure to use. The two-step decay example in Box and Lucas (1959) is used to illustrate the main points of the discussion.
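The Box and Lucas (1959) two-step decay example mentioned above is classically analysed with the scalar D-optimality criterion that the abstract argues against; for contrast, that scalar criterion can be sketched as below. The nominal parameter values (0.7, 0.2) are the usual illustrative choice and the grid of candidate times is an assumption.

```python
import math
from itertools import combinations

# Box-Lucas two-step decay model:
#   eta(t) = th1/(th1 - th2) * (exp(-th2*t) - exp(-th1*t))
# evaluated at assumed nominal parameter values.
TH1, TH2 = 0.7, 0.2

def sensitivities(t, h=1e-6):
    """Central-difference partials of eta w.r.t. th1 and th2."""
    def eta(th1, th2):
        return th1 / (th1 - th2) * (math.exp(-th2 * t) - math.exp(-th1 * t))
    d1 = (eta(TH1 + h, TH2) - eta(TH1 - h, TH2)) / (2 * h)
    d2 = (eta(TH1, TH2 + h) - eta(TH1, TH2 - h)) / (2 * h)
    return d1, d2

def d_criterion(times):
    """det(J^T J) for a two-point design: the scalar D-optimality measure."""
    (a1, a2), (b1, b2) = (sensitivities(t) for t in times)
    # for a 2x2 Jacobian, det(J^T J) = det(J)^2
    return (a1 * b2 - a2 * b1) ** 2

grid = [0.25 * k for k in range(1, 41)]  # candidate times 0.25 .. 10
best = max(combinations(grid, 2), key=d_criterion)
```

The search collapses all the information about the design into one number, det(J^T J), which is exactly the reduction the abstract criticises when competing nonlinear models must be discriminated rather than a single model estimated.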
[Design and optimization of a centrifugal pump for CPCR].
Pei, J; Tan, X; Chen, K; Li, X
2000-06-01
Requirements for an optimal centrifugal pump, the vital component in the equipment for cardiopulmonary cerebral resuscitation (CPCR), have been presented. The performance of the Sarns centrifugal pump (Sarns, Inc./3M, Ann Arbor, MI, U.S.A.) was tested. A preliminarily optimized model for CPCR was designed according to the requirements of CPCR and to the comparison and analysis of several clinically available centrifugal pumps. Preliminary tests using the centrifugal pump made in our laboratory (Type CPCR-I) have confirmed the design and the optimization.
Optimal brushless DC motor design using genetic algorithms
NASA Astrophysics Data System (ADS)
Rahideh, A.; Korakianitis, T.; Ruiz, P.; Keeble, T.; Rothman, M. T.
2010-11-01
This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface mounted magnets using a genetic algorithm. Characteristics of the motor are expressed as functions of motor geometries. The objective function is a combination of losses, volume and cost to be minimized simultaneously. Electrical and mechanical requirements (i.e. voltage, torque and speed) and other limitations (e.g. upper and lower limits of the motor geometries) are cast into constraints of the optimization problem. One sample case is used to illustrate the design and optimization technique.
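A minimal sketch of the kind of genetic-algorithm design loop the abstract above describes, with a toy loss/volume/cost objective and a quadratic torque penalty standing in for the paper's motor models; every function, bound and constant here is hypothetical, chosen only so the loop is runnable.

```python
import random

# Toy surrogate for the paper's combined objective: losses, volume
# and cost as hypothetical functions of two geometry variables
# (rotor radius r, stack length l), with a minimum-torque
# requirement handled as a quadratic penalty.
def objective(r, l):
    loss = 0.4 / (r * l)            # losses shrink with machine size
    volume = 3.1416 * r * r * l     # material volume
    cost = 2.0 * volume             # cost proportional to volume
    torque = 500.0 * r * r * l      # crude torque model
    penalty = max(0.0, 1.0 - torque) ** 2 * 100.0  # require torque >= 1
    return loss + volume + cost + penalty

BOUNDS = [(0.02, 0.2), (0.05, 0.5)]  # illustrative geometry limits

def genetic_minimise(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: objective(*ind))
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # crossover
            for j, (lo, hi) in enumerate(BOUNDS):        # mutation
                child[j] += rng.gauss(0, 0.02 * (hi - lo))
                child[j] = min(max(child[j], lo), hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: objective(*ind))

best = genetic_minimise()
```

Casting the voltage/torque/speed requirements as penalties or clamped bounds, as here, is one common way of turning the constrained motor-design problem into the unconstrained form a plain genetic algorithm can search.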
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important to both maintain near-optimal performance levels at off-design operating conditions, and, ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks ) deals with methodology for solving multiple-objective Optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The
Optimal design of Purcell's three-link swimmer.
Giraldi, Laetitia; Martinon, Pierre; Zoppello, Marta
2015-02-01
In this paper we address the question of the optimal design for the Purcell three-link swimmer. More precisely, we investigate the best link length ratio which maximizes its displacement. The dynamics of the swimmer is expressed as an ordinary differential equation, using the resistive force theory. Among a set of optimal strategies of deformation (strokes), we provide an asymptotic estimate of the displacement for small deformations, from which we derive the optimal link ratio. Numerical simulations are in good agreement with this theoretical estimate and also cover larger amplitudes of deformation. Compared with the classical design of the Purcell swimmer, we observe a gain in displacement of roughly 60%. PMID:25768602
Application of clustering global optimization to thin film design problems.
Lemarchand, Fabien
2014-03-10
Refinement techniques usually calculate an optimized local solution, which is strongly dependent on the initial formula used for the thin film design. In the present study, a clustering global optimization method is used which can iteratively change this initial formula, thereby progressing further than in the case of local optimization techniques. A wide panel of local solutions is found using this procedure, resulting in a large range of optical thicknesses. The efficiency of this technique is illustrated by two thin film design problems, in particular an infrared antireflection coating, and a solar-selective absorber coating. PMID:24663856
Optimal design of a pilot OTEC power plant in Taiwan
Tseng, C.H.; Kao, K.Y.; Yang, J.C.
1991-12-01
In this paper, an optimal design concept has been utilized to find the best designs for a complex and large-scale ocean thermal energy conversion (OTEC) plant. The OTEC power plant under study is divided into three major subsystems: a power subsystem, a seawater pipe subsystem, and a containment subsystem. The design optimization model for the entire OTEC plant is integrated from these subsystems under the considerations of their own various design criteria and constraints. The mathematical formulations of this optimization model for the entire OTEC plant are described. The design variables, objective function, and constraints for a pilot plant, under the constraints of the technologies currently feasible in Taiwan, have been carefully examined and selected.
Formulation for Simultaneous Aerodynamic Analysis and Design Optimization
NASA Technical Reports Server (NTRS)
Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.
1993-01-01
An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
Finite element based electric motor design optimization
NASA Technical Reports Server (NTRS)
Campbell, C. Warren
1993-01-01
The purpose of this effort was to develop a finite element code for the analysis and design of permanent magnet electric motors. These motors would drive electromechanical actuators in advanced rocket engines. The actuators would control fuel valves and thrust vector control systems. Refurbishing the hydraulic systems of the Space Shuttle after each flight is costly and time consuming. Electromechanical actuators could replace hydraulics, improve system reliability, and reduce down time.
Finite element based electric motor design optimization
NASA Astrophysics Data System (ADS)
Campbell, C. Warren
1993-11-01
The purpose of this effort was to develop a finite element code for the analysis and design of permanent magnet electric motors. These motors would drive electromechanical actuators in advanced rocket engines. The actuators would control fuel valves and thrust vector control systems. Refurbishing the hydraulic systems of the Space Shuttle after each flight is costly and time consuming. Electromechanical actuators could replace hydraulics, improve system reliability, and reduce down time.
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
Optimization design of the precision optoelectronic tracking turntable frame
NASA Astrophysics Data System (ADS)
Li, Jie
2010-10-01
An opto-electronic scouting and tracking device is used to survey objects over a hemispherical airspace and to trace the trajectory of a moving object in real time. The precision turntable is an important part of the scouting device and is crucial to its technical specifications, such as tracking precision, scouting range, volume and mass. To achieve a small volume, light weight, high rigidity and high precision, the mechanical structure of the turntable was designed in this paper. Then, static and dynamic analyses of the precision turntable frame were performed using the finite element method. The static analysis results show that the strength and rigidity requirements of the tracking turntable frame were satisfied, with a large margin available for reduction. The structural design could therefore be optimized to reduce the frame's volume and moment of inertia. The optimization of the turntable frame was carried out by establishing an optimization mathematical model with the frame volume as the objective function to be minimized. The results indicate that the optimization was effective: the volume of the precision opto-electronic tracking turntable frame was reduced by 15%. The strength and rigidity of the frame were verified after optimization, and the results satisfied the design requirements. This work provides an important reference for improving opto-electronic scouting and tracking devices.
Kornelakis, Aris
2010-12-15
Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology intends to suggest the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the lifetime system's total net profit, which is calculated according to the method of the Net Present Value (NPV). The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the number of PV modules, the PV modules' tilt angle, the placement of the PV modules within the available installation area and the distribution of the PV modules among the DC/AC converters. (author)
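A sketch of the basic particle swarm optimiser that the approach above builds on, here maximising a toy one-variable stand-in for the NPV objective. The quadratic "NPV" model and all constants are hypothetical; the paper's formulation is multiobjective and has several more decision variables.

```python
import random

# Hypothetical economic objective: net present value of a PV
# installation as a concave function of the module tilt angle
# (degrees). A real NPV model would discount yearly cash flows.
def npv(tilt):
    return 1000.0 - 2.0 * (tilt - 30.0) ** 2

def pso_maximise(f, lo, hi, particles=20, iters=100, seed=2):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(particles)]
    v = [0.0] * particles
    pbest = list(x)                 # personal best positions
    gbest = max(x, key=f)           # global best position
    for _ in range(iters):
        for i in range(particles):
            # standard velocity update: inertia + cognitive + social
            v[i] = (0.7 * v[i]
                    + 1.5 * rng.random() * (pbest[i] - x[i])
                    + 1.5 * rng.random() * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)
            if f(x[i]) > f(pbest[i]):
                pbest[i] = x[i]
            if f(x[i]) > f(gbest):
                gbest = x[i]
    return gbest

best_tilt = pso_maximise(npv, 0.0, 90.0)
```

Extending this to the paper's setting means evaluating two objective functions per particle (profit and avoided emissions) and keeping an archive of non-dominated positions instead of a single `gbest`.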
Relation between experimental and non-experimental study designs. HB vaccines: a case study
Jefferson, T.; Demicheli, V.
1999-01-01
STUDY OBJECTIVE: To examine the relation between experimental and non- experimental study design in vaccinology. DESIGN: Assessment of each study design's capability of testing four aspects of vaccine performance, namely immunogenicity (the capacity to stimulate the immune system), duration of immunity conferred, incidence and seriousness of side effects, and number of infections prevented by vaccination. SETTING: Experimental and non-experimental studies on hepatitis B (HB) vaccines in the Cochrane Vaccines Field Database. RESULTS: Experimental and non-experimental vaccine study designs are frequently complementary but some aspects of vaccine quality can only be assessed by one of the types of study. More work needs to be done on the relation between study quality and its significance in terms of effect size. PMID:10326054
Fuel Injector Design Optimization for an Annular Scramjet Geometry
NASA Astrophysics Data System (ADS)
Steffen, Christopher J., Jr.
2003-01-01
A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
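The response-surface workflow described above can be sketched in a few lines: build a central composite design, fit a quadratic model by least squares, and then optimize over the fitted surface. For brevity this sketch uses two factors rather than the paper's four, and the simulated "response" is an assumed toy function, not CFD data:

```python
import numpy as np

# Face-centered central composite design (CCD) in coded units for 2 factors,
# followed by a quadratic response-surface fit. The response y is a toy
# function standing in for the computer experiments.

def ccd_face_centered():
    factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    center = [(0, 0)]
    return np.array(factorial + axial + center, dtype=float)

def quadratic_model_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

X = ccd_face_centered()
# Assumed toy response; a real study would run one simulation per design point.
y = 1 + 2*X[:, 0] - 3*X[:, 1] + 0.5*X[:, 0]**2 + X[:, 1]**2 + 0.25*X[:, 0]*X[:, 1]
coef, *_ = np.linalg.lstsq(quadratic_model_matrix(X), y, rcond=None)
```

With the fitted coefficients in hand, the optimizer searches the quadratic surface instead of re-running the expensive simulations.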
Global nonlinear optimization of spacecraft protective structures design
NASA Technical Reports Server (NTRS)
Mog, R. A.; Lovett, J. N., Jr.; Avans, S. L.
1990-01-01
The global optimization of protective structural designs for spacecraft subject to hypervelocity meteoroid and space debris impacts is presented. This nonlinear problem is first formulated for weight minimization of the space station core module configuration using the Nysmith impact predictor. Next, the equivalence and uniqueness of local and global optima is shown using properties of convexity. This analysis results in a new feasibility condition for this problem. Solution existence is then shown, followed by a comparison of optimization techniques. Finally, a sensitivity analysis is presented to determine the effects of variations in the systemic parameters on the optimal design. The results show that the global optimum of this problem is unique and may be reached by a number of methods, provided the feasibility condition is satisfied. Furthermore, module structural design thicknesses and weight increase with increasing projectile velocity and diameter, and decrease with increasing separation between bumper and wall for the Nysmith predictor.
Improved method for transonic airfoil design-by-optimization
NASA Technical Reports Server (NTRS)
Kennelly, R. A., Jr.
1983-01-01
An improved method for use of optimization techniques in transonic airfoil design is demonstrated. FLO6QNM incorporates a modified quasi-Newton optimization package, and is shown to be more reliable and efficient than the method developed previously at NASA-Ames, which used the COPES/CONMIN optimization program. The design codes are compared on a series of test cases with known solutions, and the effects of problem scaling, proximity of initial point to solution, and objective function precision are studied. In contrast to the older method, well-converged solutions are shown to be attainable in the context of engineering design using computational fluid dynamics tools, a new result. The improvements are due to better performance by the optimization routine and to the use of problem-adaptive finite difference step sizes for gradient evaluation.
On Optimal Input Design and Model Selection for Communication Channels
Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as Orthogonal Frequency Division Multiplexing (OFDM) systems.
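The FIR/impulse result can be seen in a few lines: with an impulse applied at the start of the observation interval, the noise-free output of an FIR channel directly reads out its taps. The channel coefficients below are arbitrary illustrative values:

```python
# Minimal illustration: for an FIR channel, an impulse input makes
# identification trivial, since each noise-free output sample equals one tap.

def fir_output(taps, u):
    """Convolve input u with a causal FIR channel (len(u) output samples)."""
    return [sum(taps[k] * u[n - k]
                for k in range(len(taps)) if 0 <= n - k < len(u))
            for n in range(len(u))]

taps = [0.9, 0.4, -0.2]          # unknown FIR channel h[0..2], assumed values
impulse = [1.0, 0.0, 0.0, 0.0]   # optimal input: impulse at the start
y = fir_output(taps, impulse)
estimated_taps = y[:len(taps)]   # output samples are exactly the taps
```

With noise present, the same structure makes the least-squares estimate well conditioned, which is the intuition behind the worst-case optimality claim.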
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
Role of Design Standards in Wind Plant Optimization (Presentation)
Veers, P.; Churchfield, M.; Lee, S.; Moon, J.; Larsen, G.
2013-10-01
When a turbine is optimized, it is done within the design constraints established by the objective criteria in the international design standards used to certify a design. Since these criteria are multifaceted, conducting the optimization is a challenging task, but it can be done. The optimization is facilitated by the fact that a standard turbine model is subjected to standard inflow conditions that are well characterized in the standard. Examples of applying these conditions to rotor optimization are examined. In other cases, an innovation may provide substantial improvement in one area but be challenged to impact all of the myriad design load cases. When a turbine is placed in a wind plant, the challenge is magnified. Typical design practice optimizes the turbine for stand-alone operation and then runs a check on the actual site conditions, including wakes from all nearby turbines. Thus, each turbine in a plant has unique inflow conditions. The possibility of creating objective and consistent inflow conditions for turbines within a plant, for use in optimizing both the turbine and the plant, is examined with examples taken from large-eddy simulation (LES).
Multilevel design optimization and the effect of epistemic uncertainty
NASA Astrophysics Data System (ADS)
Nesbit, Benjamin Edward
This work presents the state of the art in hierarchically decomposed multilevel optimization and expands it by incorporating evidence theory into the multilevel framework for the quantification of epistemic uncertainty. The novel method, Evidence-Based Multilevel Design Optimization, is then used to solve two analytical optimization problems and to explore the effect of the belief structure on the final solution. A methodology is presented to reduce the cost of evidence-based optimization through manipulation of the belief structure. In addition, a transport aircraft wing is solved with multilevel optimization without uncertainty. This complex, real-world optimization problem shows the capability of the decomposed multilevel framework to reduce the cost of solving computationally expensive problems with black-box analyses.
Scalar and Multivariate Approaches for Optimal Network Design in Antarctica
NASA Astrophysics Data System (ADS)
Hryniw, Natalia
Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes but also for maintaining representative records and for tuning atmospheric models. Here, scalar theory for optimal network design is expanded into a multivariate framework, to allow optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes the trace of the covariance matrix. Relative entropy is also used for multivariate optimization, as an information-theoretic approach to finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.
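The covariance trace approach can be illustrated with a greedy sketch: repeatedly observe the site whose Kalman-style update most reduces the trace of the field's error covariance. The exponential covariance below is a synthetic one-dimensional stand-in, not Antarctic data:

```python
import numpy as np

# Greedy site selection minimizing the trace of the posterior covariance.
# Each observation triggers a scalar Kalman update of the field covariance.

def greedy_sites(P, n_sites, obs_var=0.1):
    P = P.copy()
    chosen = []
    for _ in range(n_sites):
        # Trace reduction from observing site i with error variance obs_var.
        gains = (P**2).sum(axis=0) / (np.diag(P) + obs_var)
        i = int(np.argmax(gains))
        chosen.append(i)
        P = P - np.outer(P[:, i], P[i, :]) / (P[i, i] + obs_var)
    return chosen, P

# Synthetic prior: exponential covariance on a 1-D line of 20 grid points.
x = np.arange(20)
prior = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
sites, posterior = greedy_sites(prior, n_sites=3)
```

Greedy selection is only a heuristic for the full combinatorial siting problem, but it captures how "observe where the covariance payoff is largest" drives the network design.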
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based, menu-driven design environment for easy design optimization in general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including database, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software packages have been integrated into the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked-vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
An optimal trajectory design for debris deorbiting
NASA Astrophysics Data System (ADS)
Ouyang, Gaoxiang; Dong, Xin; Li, Xin; Zhang, Yang
2016-01-01
The problem of deorbiting space debris is studied in this paper. As a feasible measure, a disposable satellite would be launched, attach to a piece of debris, and deorbit it using a technology named the electrodynamic tether (EDT). In order to deorbit as many debris objects as possible, a suboptimal but feasible and efficient trajectory set has been designed that allows a deorbiter satellite to tour multiple small LEO objects in a single mission. A simulation showed that a 600 kg satellite is capable of deorbiting 6 debris objects in about 230 days.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the surety of reaching a global optimum. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as the low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require the simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
Computational design of an experimental laser-powered thruster
NASA Technical Reports Server (NTRS)
Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis
1988-01-01
An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers the two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 sec. The heat loading on the walls of the calculated thrusters was also estimated, and is comparable to the heat loading on a conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 sec due to the finite chemical reaction rate.
Superconducting Fault Current Limiter optimized design
NASA Astrophysics Data System (ADS)
Tixador, Pascal; Badel, Arnaud
2015-11-01
The SuperConducting Fault Current Limiter (SCFCL) appears to be one of the most promising SC applications for electrical grids. Despite its advantages and many successful field experiences, the SCFCL market has had difficulty taking off, even though the first orders for permanent operation in grids have been placed. The analytical design of resistive SCFCLs is discussed with the objective of reducing the quantity of SC conductor (length and cross-section) to be more cost-effective. For that, the SC conductor must have a high resistivity in the normal state, which can be achieved by using a high-resistivity alloy, such as Hastelloy®, for the shunt. One of the most severe constraints is that the SCFCL should operate safely for any fault, especially those with low prospective short-circuit currents. This constraint requires properly designing the thickness of the SC tape in order to limit the hot-spot temperature. Operation at 65 K appears very interesting, since it decreases the SC cost by at least a factor of 2 with simple LN2 cryogenics. Taking into account cost reductions in the near future, the SC conductor cost could be rather low, around half a dollar per kVA.
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a problem with multiple constrained objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced for each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
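The K-S transformation mentioned above aggregates several (weighted) objectives into one smooth, differentiable envelope function. A minimal sketch, with toy objective values in place of the paper's aerodynamic and sonic-boom objectives:

```python
import math

# Kreisselmeier-Steinhauser (K-S) envelope: a smooth upper bound on
# max(values) that tightens as the draw-down factor rho grows, making the
# aggregated problem solvable with gradient methods such as BFGS.

def ks(values, rho=50.0):
    """Smooth approximation to max(values); exact as rho -> infinity."""
    m = max(values)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

f1, f2 = 3.0, 2.5        # two weighted objective values (illustrative)
aggregate = ks([f1, f2])
```

Because `ks` always lies at or slightly above the true maximum, minimizing it drives down the worst objective while keeping the function differentiable everywhere.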
Parallel optimization algorithms and their implementation in VLSI design
NASA Technical Reports Server (NTRS)
Lee, G.; Feeley, J. J.
1991-01-01
Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.
Teaching Optimal Design of Experiments Using a Spreadsheet
ERIC Educational Resources Information Center
Goos, Peter; Leemans, Herlinde
2004-01-01
In this paper, we present an interactive teaching approach to introduce the concept of optimal design of experiments to students. Our approach is based on the use of spreadsheets. One advantage of this approach is that no complex mathematical theory is needed nor that any design construction algorithm has to be discussed at the introductory stage.…
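The spreadsheet exercise described above can be mimicked numerically: for a straight-line model y = b0 + b1*x on [-1, 1], students compare the D-criterion det(X'X) across candidate designs and discover that runs belong at the interval endpoints. The 4-run designs below are illustrative:

```python
import numpy as np

# D-criterion for a straight-line model: larger det(X'X) means smaller joint
# confidence region for the coefficients (D-optimality).

def d_criterion(xs):
    X = np.column_stack([np.ones(len(xs)), np.array(xs, dtype=float)])
    return np.linalg.det(X.T @ X)

endpoints = d_criterion([-1, -1, 1, 1])      # half the runs at each extreme
spread = d_criterion([-1, -1/3, 1/3, 1])     # equally spaced alternative
```

Tabulating `d_criterion` for a handful of candidate designs, exactly as one would in a spreadsheet, shows the endpoint design dominating without any construction algorithm.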
[COSMOS motion design optimization in the CT table].
Shang, Hong; Huang, Jian; Ren, Chao
2013-03-01
Through dynamic simulation of the CT table with COSMOS Motion, the table hinge and the motor force are analyzed, and the position of the table hinge is then optimized. This provides a basis for selecting the bearing and motor, while enhancing the design quality of the CT table and reducing the product design cost.
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
A PDE Sensitivity Equation Method for Optimal Aerodynamic Design
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1996-01-01
The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
IsoDesign: a software for optimizing the design of 13C-metabolic flux analysis experiments.
Millard, Pierre; Sokol, Serguei; Letisse, Fabien; Portais, Jean-Charles
2014-01-01
The growing demand for 13C-metabolic flux analysis (13C-MFA) in the field of metabolic engineering and systems biology is driving the need to rationalize expensive and time-consuming 13C-labeling experiments. Experimental design is a key step in improving both the number of fluxes that can be calculated from a set of isotopic data and the precision of flux values. We present IsoDesign, a software that enables these parameters to be maximized by optimizing the isotopic composition of the label input. It can be applied to 13C-MFA investigations using a broad panel of analytical tools (MS, MS/MS, 1H NMR, 13C NMR, etc.) individually or in combination. It includes a visualization module to intuitively select the optimal label input depending on the biological question to be addressed. Applications of IsoDesign are described, with an example of the entire 13C-MFA workflow from the experimental design to the flux map, including important practical considerations. IsoDesign makes the experimental design of 13C-MFA experiments more accessible to a wider biological community. IsoDesign is distributed under an open source license at http://metasys.insa-toulouse.fr/software/isodes/
Optimization of Designs for Nanotube-based Scanning Probes
NASA Technical Reports Server (NTRS)
Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.
Optimal Bayesian Adaptive Design for Test-Item Calibration.
van der Linden, Wim J; Ren, Hao
2015-06-01
An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
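A minimal sketch of the D-optimality criterion in this setting, assuming a 2PL item response model and using point estimates in place of the posterior distributions the paper samples via MCMC; the item parameters and examinee ability below are invented for illustration:

```python
import numpy as np

# D-optimal adaptive assignment sketch: route each examinee to the field-test
# item whose accumulated parameter-information matrix gains the most
# determinant. Assumes a 2PL model with item parameters (a, b).

def info_2pl(theta, a, b):
    """Fisher information of one response w.r.t. item parameters (a, b)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    u = theta - b
    return p * (1 - p) * np.array([[u * u, -a * u], [-a * u, a * a]])

def assign(theta, items, accumulated):
    """Index of the item maximizing the D-criterion after this examinee."""
    gains = [np.linalg.det(accumulated[i] + info_2pl(theta, *items[i]))
             for i in range(len(items))]
    return int(np.argmax(gains))

items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.5)]   # (a, b) per field-test item
acc = [np.eye(2) * 1e-3 for _ in items]          # small prior information
choice = assign(theta=1.4, items=items, accumulated=acc)
```

The actual design replaces the fixed `theta` and item parameters with draws from their posteriors, but the determinant-gain comparison is the same.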
Optimal Design of Pipeline Based on the Shortest Path
NASA Astrophysics Data System (ADS)
Chu, Fei-xue; Chen, Shi-yi
Design and operation of a long-distance pipeline are complex engineering tasks. Even a small improvement in the design of a pipeline system can lead to substantial savings in capital. In this paper, graph theory was used to analyze the problem of pipeline optimal design. The candidate pump station locations were taken as the vertexes, and the total cost of the pipeline system between two vertexes corresponded to the edge weight. An algorithm recursively calling the Dijkstra algorithm was designed and analyzed to obtain the N shortest paths. The optimal process program and the quasi-optimal process programs were obtained at the same time, which can be used in decision-making. The algorithm was tested on a real example. The result showed that it can meet the needs of real applications.
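The Dijkstra building block that the authors call repeatedly can be sketched as follows; the graph below is a small invented example, with vertices standing for candidate pump-station sites and edge weights for the total segment cost between two sites:

```python
import heapq

# Cheapest route through a cost graph via Dijkstra's algorithm.
# graph: {u: {v: weight}} with non-negative weights.

def dijkstra(graph, source, target):
    """Return (cost, path) of the cheapest route from source to target."""
    heap = [(0.0, source, [source])]
    seen = set()
    while heap:
        cost, u, path = heapq.heappop(heap)
        if u == target:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, {}).items():
            if v not in seen:
                heapq.heappush(heap, (cost + w, v, path + [v]))
    return float("inf"), []

graph = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
cost, path = dijkstra(graph, "A", "D")
```

An N-shortest-paths scheme (e.g. Yen-style) re-runs this routine on modified copies of the graph to collect the quasi-optimal programs alongside the optimum.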
NASA Astrophysics Data System (ADS)
Schoettle, U. M.; Hillesheimer, M.
1991-08-01
An iterative multistep procedure for performance optimization of launch vehicles is described, which is being developed to support trade-off and sensitivity studies. Two major steps involved in the automated technique are the optimum trajectory shaping employing approximate control models and the vehicle design. Both aspects are discussed in this paper. Simulation examples are presented, first to demonstrate the approach taken for flight path optimization; second, to verify the coupled trajectory and design optimization procedure; and finally, to assess the impact of different mission requirements on an airbreathing Saenger-type vehicle.
Effects of experimental design on calibration curve precision in routine analysis.
Pimentel, M F; Neto, B de B; Saldanha, T C; Araújo, M C
1998-01-01
A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence-interval plots for a calibration curve and provides information about the number of standard solutions, the concentration levels, and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data.
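What such a program compares can be seen from the textbook expression for the confidence band of a straight-line calibration: the band half-width at concentration x scales with sqrt(1/n + (x - xbar)^2 / Sxx), so designs with standards pushed toward the range limits (D-optimized) give narrower bands. A small sketch with illustrative concentration values:

```python
import numpy as np

# Relative confidence-band width for a straight-line calibration at
# concentration x, up to the common factor t * s (omitted here since it is
# identical for designs with the same number of standards).

def band_factor(xs, x):
    xs = np.asarray(xs, dtype=float)
    sxx = np.sum((xs - xs.mean()) ** 2)
    return np.sqrt(1.0 / len(xs) + (x - xs.mean()) ** 2 / sxx)

d_optimized = [0, 0, 0, 10, 10, 10]   # standards at the range extremes
equispaced = [0, 2, 4, 6, 8, 10]      # routine equally spaced standards

w_opt = band_factor(d_optimized, x=10)   # narrower band at the range edge
w_eq = band_factor(equispaced, x=10)
```

Plotting `band_factor` over the whole concentration range reproduces the kind of confidence-interval comparison plot the program generates.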
Optimal design of composite hip implants using NASA technology
NASA Technical Reports Server (NTRS)
Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.
1993-01-01
Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The original NASA in-house codes were developed for the optimization of aerospace structures. The adapted code, called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters which substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.
NASA Technical Reports Server (NTRS)
Lung, Shun-fat; Pak, Chan-gi
2008-01-01
Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes match the target data and the mass matrix is orthogonalized.
Experimental optimal single qubit purification in an NMR quantum information processor.
Hou, Shi-Yao; Sheng, Yu-Bo; Feng, Guan-Ru; Long, Gui-Lu
2014-10-31
High quality single qubits are the building blocks in quantum information processing. But they are vulnerable to environmental noise. To overcome noise, purification techniques, which generate qubits with higher purities from qubits with lower purities, have been proposed. Purification has attracted much interest and has been widely studied. However, the full experimental demonstration of an optimal single qubit purification protocol proposed by Cirac, Ekert and Macchiavello [Phys. Rev. Lett. 82, 4344 (1999), the CEM protocol] more than a decade and a half ago still remains an experimental challenge, as it requires more complicated networks and a higher level of control precision. In this work, we design an experimental scheme that realizes the CEM protocol with explicit symmetrization of the wave functions. The purification scheme was successfully implemented in a nuclear magnetic resonance quantum information processor. The experiment fully demonstrated the purification protocol, and showed that it is an effective way of protecting qubits against errors and decoherence.
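The symmetrization at the heart of the CEM protocol can be illustrated numerically for the two-copy case: projecting ρ⊗ρ onto the symmetric subspace and tracing out one qubit lengthens the Bloch vector from r to 4r/(3+r²). A sketch of that map only, not of the NMR implementation:

```python
import numpy as np

r_in = 0.5                                # initial Bloch-vector length
sz = np.diag([1.0, -1.0])
rho = (np.eye(2) + r_in * sz) / 2         # mixed single-qubit state

# Projector onto the symmetric subspace of two qubits: P = (I + SWAP)/2
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
P = (np.eye(4) + SWAP) / 2

M = P @ np.kron(rho, rho) @ P             # project two copies, renormalize
M /= np.trace(M)

# Trace out the second qubit to obtain the purified single-qubit state
sigma = np.einsum('ikjk->ij', M.reshape(2, 2, 2, 2))
r_out = np.trace(sigma @ sz).real

print(r_out)   # 4r/(3 + r^2) = 0.6153..., larger than the input r = 0.5
```

The output Bloch length exceeds the input for any 0 < r < 1, which is the purification effect the experiment demonstrates for general input states.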
Experimental designs for testing differences in survival among salmonid populations
Hoffmann, A.; Busack, C.; Knudsen, C.
1995-03-01
The Yakima Fisheries Project (YFP) is a supplementation plan for enhancing salmon runs in the Yakima River basin. It is presumed that inadequate spawning and rearing habitat is a limiting factor in the population abundance of spring chinook salmon. Therefore, the supplementation effort for spring chinook salmon is focused on introducing hatchery-raised smolts into the basin to compensate for the lack of spawning habitat. However, based on empirical evidence in the Yakima basin, hatchery-reared salmon have survived poorly compared to wild salmon. The YFP has therefore proposed to alter the optimal conventional treatment (OCT), which is the state-of-the-art hatchery rearing method, to a new innovative treatment (NIT). The NIT is intended to produce hatchery fish that mimic wild fish and thereby enhance their survival over that of OCT fish. A limited application of the NIT (LNIT) has also been proposed to reduce the cost of applying the new treatment while retaining the benefits of increased survival. This research was conducted to test whether the uncertainty of the experimental design was within the limits specified by the Planning Status Report (PSR).
Aerodynamic design optimization by using a continuous adjoint method
NASA Astrophysics Data System (ADS)
Luo, JiaQi; Xiong, JunTao; Liu, Feng
2014-07-01
This paper presents the fundamentals of a continuous adjoint method and the applications of this method to the aerodynamic design optimization of both external and internal flows. General formulation of the continuous adjoint equations and the corresponding boundary conditions are derived. With the adjoint method, the complete gradient information needed in the design optimization can be obtained by solving the governing flow equations and the corresponding adjoint equations only once for each cost function, regardless of the number of design parameters. An inverse design of an airfoil is first performed to study the accuracy of the adjoint gradient and the effectiveness of the adjoint method as an inverse design method. The method is then used to perform a series of single and multiple point design optimization problems involving the drag reduction of airfoil, wing, and wing-body configurations, and the aerodynamic performance improvement of turbine and compressor blade rows. The results demonstrate that the continuous adjoint method can efficiently and significantly improve the aerodynamic performance of the design in a shape optimization problem.
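The claim that the gradient costs one flow solve plus one adjoint solve per cost function, independent of the number of design parameters, can be checked on a discrete stand-in for the flow equations (a toy linear system with made-up matrices, not an actual flow solver):

```python
import numpy as np

# State equation A(p) u = b plays the role of the flow equations;
# the cost is J = c^T u and p is a single design parameter.
rng = np.random.default_rng(0)
A0 = np.eye(4) * 4 + rng.standard_normal((4, 4)) * 0.1
A1 = rng.standard_normal((4, 4)) * 0.1      # dA/dp
b = rng.standard_normal(4)
c = rng.standard_normal(4)

def gradient_adjoint(p):
    A = A0 + p * A1
    u = np.linalg.solve(A, b)               # one state ("flow") solve
    lam = np.linalg.solve(A.T, c)           # one adjoint solve
    return -lam @ (A1 @ u)                  # dJ/dp = -lambda^T (dA/dp) u

def gradient_fd(p, h=1e-6):
    J = lambda q: c @ np.linalg.solve(A0 + q * A1, b)
    return (J(p + h) - J(p - h)) / (2 * h)

print(gradient_adjoint(0.3), gradient_fd(0.3))  # the two agree closely
```

With many design parameters, only the cheap final contraction -λᵀ(∂A/∂pᵢ)u is repeated per parameter; the two expensive solves are shared, which is the economy the adjoint method exploits.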
Optimized design and analysis of preclinical intervention studies in vivo
Laajala, Teemu D.; Jumppanen, Mikael; Huhtaniemi, Riikka; Fey, Vidal; Kaur, Amanpreet; Knuuttila, Matias; Aho, Eija; Oksala, Riikka; Westermarck, Jukka; Mäkelä, Sari; Poutanen, Matti; Aittokallio, Tero
2016-01-01
Recent reports have called into question the reproducibility, validity and translatability of preclinical animal studies due to limitations in their experimental design and statistical analysis. To this end, we implemented a matching-based modelling approach for optimal intervention group allocation, randomization and power calculations, which takes full account of the complex animal characteristics at baseline prior to interventions. In prostate cancer xenograft studies, the method effectively normalized the confounding baseline variability, and resulted in animal allocations which were supported by RNA-seq profiling of the individual tumours. The matching information increased the statistical power to detect true treatment effects at smaller sample sizes in two castration-resistant prostate cancer models, thereby saving both animal lives and research costs. The novel modelling approach and its open-source and web-based software implementations enable researchers to conduct adequately-powered and fully-blinded preclinical intervention studies, with the aim of accelerating the discovery of new therapeutic interventions. PMID:27480578
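The allocation idea can be sketched in a few lines: rank animals by a baseline covariate, pair adjacent animals, and randomize treatment assignment within each pair. This is a deliberately minimal stand-in for the paper's matching-based modelling, with made-up tumour volumes:

```python
import random

# Baseline tumour volumes (mm^3) for 12 hypothetical animals
baseline = {f"animal{i}": v for i, v in enumerate(
    [112, 345, 98, 210, 187, 430, 156, 301, 265, 120, 390, 240])}

# Sort by baseline, pair adjacent animals, then randomize
# treatment/control WITHIN each matched pair.
rng = random.Random(42)
ranked = sorted(baseline, key=baseline.get)
treated, control = [], []
for a, b in zip(ranked[0::2], ranked[1::2]):
    t, c = rng.sample([a, b], 2)
    treated.append(t)
    control.append(c)

mean = lambda g: sum(baseline[x] for x in g) / len(g)
print(f"treated mean {mean(treated):.1f}, control mean {mean(control):.1f}")
```

Because each pair differs by at most one adjacent gap, the group baseline means are guaranteed to be close, which is the source of the extra statistical power at small sample sizes.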
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
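A log-barrier iteration, a simpler relative of the primal-dual feasible-directions scheme described above, shows the key property for design work: every iterate is strictly feasible, so each one is a usable design. A toy problem, not the authors' algorithm:

```python
import numpy as np

# Minimize f(x) = |x - (2,1)|^2 subject to g(x) = |x|^2 - 1 <= 0.
target = np.array([2.0, 1.0])
g = lambda x: x @ x - 1.0

def phi(x, t):                       # barrier objective t*f(x) - ln(-g(x))
    return t * np.sum((x - target) ** 2) - np.log(-g(x))

x = np.zeros(2)                      # strictly feasible starting design
for t in (1.0, 10.0, 100.0, 1000.0): # barrier parameter schedule
    for _ in range(50):              # damped Newton on phi(., t)
        grad = 2 * t * (x - target) + 2 * x / (-g(x))
        hess = (2 * t * np.eye(2)
                + np.outer(2 * x, 2 * x) / g(x) ** 2
                + 2 * np.eye(2) / (-g(x)))
        d = np.linalg.solve(hess, grad)
        step = 1.0
        while g(x - step * d) >= 0 or phi(x - step * d, t) > phi(x, t):
            step /= 2                # backtrack: stay feasible, decrease phi
            if step < 1e-12:
                break
        x = x - step * d

print(x)   # tends to the constrained optimum (2,1)/sqrt(5)
```

The backtracking line search rejects any trial step that leaves the feasible region, mirroring the feasibility-preserving behavior the abstract highlights; the authors' method achieves the same property with two linear solves per iteration rather than a barrier.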
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2009-01-01
A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas reliability can be reduced somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
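The steep rise of weight as p approaches 1 follows from stress-strength interference when load and strength are modelled as normal distributions, as in the SDO method's uncertainty treatment. A sketch with illustrative numbers (not the 767-400 data):

```python
from statistics import NormalDist

# Load L ~ N(mu_L, s_L) and member strength S ~ N(mu_S, s_S);
# reliability p = P(S > L). Hypothetical values in kN:
mu_L, s_L, s_S = 100.0, 10.0, 8.0

def required_strength(p):
    """Mean strength giving reliability p, i.e. P(S > L) = p."""
    z = NormalDist().inv_cdf(p)
    return mu_L + z * (s_L ** 2 + s_S ** 2) ** 0.5

for p in (0.5, 0.9, 0.99, 0.999999):
    print(f"p = {p}: required mean strength {required_strength(p):7.1f} kN")
```

At p = 0.5 the required mean strength equals the mean load (the center of the inverted-S), while the inverse normal CDF diverges as p approaches 1, so the required strength, and hence the weight it implies, grows without bound, just as the abstract describes.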
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)
2000-01-01
A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of the compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
The effect of code expanding optimizations on instruction cache design
NASA Technical Reports Server (NTRS)
Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.
1991-01-01
It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.
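The interaction between code expansion and cache size can be reproduced with a toy direct-mapped instruction-cache simulator. The trace is a hypothetical loop body before and after expansion, not one of the paper's benchmarks:

```python
# A tiny direct-mapped instruction cache: 16 lines of 4 words each.
LINE_WORDS, NUM_LINES = 4, 16

def misses(trace):
    tags = [None] * NUM_LINES          # one tag per direct-mapped set
    n = 0
    for addr in trace:                 # word-granularity instruction fetches
        line = addr // LINE_WORDS
        s = line % NUM_LINES
        if tags[s] != line:            # tag mismatch -> miss, fill the line
            tags[s] = line
            n += 1
    return n

def loop_trace(body_words, iterations=10):
    """Sequential fetches of a loop body, repeated."""
    return [w for _ in range(iterations) for w in range(body_words)]

compact  = loop_trace(32)    # 8 lines: fits -> compulsory misses only
expanded = loop_trace(128)   # 32 lines: conflict misses every iteration

print(misses(compact), misses(expanded))   # 8 vs 320
```

The compact body incurs only its 8 compulsory misses across all iterations, while the expanded body overflows the cache so that every line fetched in one iteration is evicted before the next, illustrating why expansion that helps a large cache can hurt a small one.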