Sample records for factorial design optimization

  1. Optimization of minoxidil microemulsions using fractional factorial design approach.

    PubMed

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs, using the desirability function approach, to the development of topical microemulsions. Minoxidil (MX) was used as a model drug and limonene as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants; propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4) on MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1 and X2 and increasing X3. Y3 was not affected by these variables, while Y4 increased with decreasing X1 and X2. Regression equations were obtained for the predicted values of responses Y1, Y2 and Y4; the predicted values matched the experimental values reasonably well, with high coefficients of determination. Using the desirability function, the optimized microemulsion showing the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 and a high level of X3.
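As a concrete illustration of the two-level fractional factorial idea used in this record, the sketch below builds a 2⁴⁻¹ half-fraction in pure Python. The factor names X1-X4 mirror the abstract, but the generator (X4 = X1·X2·X3) and the coded ±1 levels are generic assumptions, not the study's actual design matrix.

```python
from itertools import product

def fractional_factorial_2_4_1():
    """Generate the 8 runs of a 2^(4-1) fractional factorial design.

    The first three factors take all +/-1 combinations; the fourth is
    aliased via the defining relation I = X1*X2*X3*X4, so X4 = X1*X2*X3.
    This halves the 16 runs of the full 2^4 design at the cost of
    confounding X4 with the X1*X2*X3 interaction.
    """
    runs = []
    for x1, x2, x3 in product((-1, 1), repeat=3):
        x4 = x1 * x2 * x3          # generator: X4 = X1*X2*X3
        runs.append((x1, x2, x3, x4))
    return runs

design = fractional_factorial_2_4_1()
print(len(design))  # 8 runs instead of 16
# every run satisfies the defining relation X1*X2*X3*X4 = +1
print(all(x1 * x2 * x3 * x4 == 1 for x1, x2, x3, x4 in design))
```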

  2. A quality by design approach to optimization of emulsions for electrospinning using factorial and D-optimal designs.

    PubMed

    Badawi, Mariam A; El-Khordagui, Labiba K

    2014-07-16

    Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments as a risk-based proactive approach to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3³ factorial design to optimize emulsion composition for phase stability and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables, emulsifier blend composition, organic:aqueous phase ratio and polymer concentration, had a significant effect (p<0.05) on emulsion CQAs, the emulsifier blend composition exerting prominent main and interaction effects. Scanning electron microscopy (SEM) of emulsion-electrospun nanofibers (NFs) and desirability functions allowed modeling of emulsion CQAs to predict electrospinnable formulations. A QbD approach successfully built quality into electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with desired morphological characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.
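Several records in this set (this one and record 1 among them) optimize multiple responses via desirability functions. The sketch below implements a minimal Derringer-Suich-style "larger-is-better" desirability and its geometric-mean composite; the response values and bounds are purely hypothetical, not taken from the study.

```python
import math

def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """Derringer-Suich 'larger-is-better' desirability: 0 at or below
    y_min, 1 at or above y_max, and a power ramp in between."""
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return ((y - y_min) / (y_max - y_min)) ** weight

def overall_desirability(ds):
    """Composite desirability: geometric mean of the individual values.
    Any single response at 0 drives the composite to 0, which is the
    point of the construction."""
    if any(d == 0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# hypothetical responses already scaled so 0 is worst and 1 is best
d_viscosity = desirability_larger_is_better(0.8, 0.0, 1.0)
d_conductivity = desirability_larger_is_better(0.5, 0.0, 1.0)
print(round(overall_desirability([d_viscosity, d_conductivity]), 3))  # -> 0.632
```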

  3. Factorial experimental design intended for the optimization of the alumina purification conditions

    NASA Astrophysics Data System (ADS)

    Brahmi, Mounaouer; Ba, Mohamedou; Hidri, Yassine; Hassen, Abdennaceur

    2018-04-01

    The objective of this study was to use experimental design methodology to determine the optimal conditions for removing some of the impurities associated with alumina. Three alumina qualities of different origins were investigated under the same conditions. Full-factorial designs applied to the samples of different alumina qualities tracked the removal rates of sodium oxide, and a factorial experimental design was developed to describe the elimination of the sodium oxide associated with the alumina. Chemical analysis of the samples by XRF before treatment gave a first indication of the prevailing impurities, and showed that sodium oxide was present in the largest amount among all impurities. After applying the experimental design, analysis of the effects of the different factors and their interactions showed that better results required reducing the quantity of alumina investigated and increasing the stirring time for the first two samples, whereas the quantity of alumina had to be increased for the third sample. To expand and improve this research, all existing impurities should be taken into account, since the levels of some impurities were found to increase after treatment.

  4. Optimization of LDL targeted nanostructured lipid carriers of 5-FU by a full factorial design.

    PubMed

    Andalib, Sare; Varshosaz, Jaleh; Hassanzadeh, Farshid; Sadeghi, Hojjat

    2012-01-01

    Nanostructured lipid carriers (NLCs) are colloidal carrier systems composed of a mixture of solid and liquid lipids or oils, which leads to an imperfect matrix structure with a high capacity for loading water-soluble drugs. The aim of this study was to find the best proportions of liquid and solid lipids of different types to optimize the production of LDL-targeted NLCs carrying 5-FU by the emulsification-solvent evaporation method. The influence of lipid type (cholesterol or cholesteryl stearate, for targeting LDL receptors), oil type (oleic acid or octanol), and lipid and oil percentages on particle size, surface charge, drug loading efficiency, and percentage of drug released from the NLCs was studied by a full factorial design. The NLCs prepared with 54.5% cholesterol and 25% oleic acid showed optimum results, with a particle size of 105.8 nm, a relatively high zeta potential of -25 mV, a drug loading efficiency of 38% and a release efficiency of about 40%. Scanning electron microscopy of the nanoparticles confirmed the results of the dynamic light scattering method used to measure the particle size of the NLCs. Full factorial statistical design is a useful method for optimizing the production of nanostructured lipid carriers.

  5. Optimization of permeability for quality improvement by using factorial design

    NASA Astrophysics Data System (ADS)

    Said, Rahaini Mohd; Miswan, Nor Hamizah; Juan, Ng Shu; Hussin, Nor Hafizah; Ahmad, Aminah; Kamal, Mohamad Ridzuan Mohamad

    2017-05-01

    Sand casting is used worldwide in the metal casting industry, and green sand is the most commonly used sand mould type. Surface defects on cast products are one of the industry's main problems; defects related to green sand composition include blowholes, pinholes, shrinkage and porosity. Our objective was to optimize the composition of green sand in order to minimize the occurrence of defects. Sand specimens with different levels of the four factors (bentonite, green sand, coal dust and water) were designed and prepared to undergo permeability testing. A 2⁴ factorial experiment, with the four factors at different compositions, was conducted for a total of 16 runs, and the necessary models were developed from the experimental design. The model had a high coefficient of determination (R² = 0.9841), and the predicted values fitted the experimental data well. Using the Design-Expert software for the analysis, we identified bentonite and water as the main interaction effect in the experiments. The optimal green sand composition is 100 g silica sand, 21 g bentonite, 6.5 g water and 6 g coal dust, which gives a permeability number of 598.3 GP.
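For a 2⁴ full factorial like the one in this record, each main effect is simply the mean response at a factor's high level minus the mean at its low level. A minimal sketch follows; the response model (in which only the first factor truly matters) is invented for illustration and is not the study's permeability data.

```python
from itertools import product

def main_effects(design, y):
    """Main effect of each factor in a two-level design: mean response
    at the +1 level minus mean response at the -1 level."""
    k = len(design[0])
    effects = []
    for j in range(k):
        hi = [yi for run, yi in zip(design, y) if run[j] == 1]
        lo = [yi for run, yi in zip(design, y) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# full 2^4 design (16 runs) with a made-up noise-free response in which
# only factor 1 (say, bentonite) is active: y = 10 + 3*x1
design = list(product((-1, 1), repeat=4))
y = [10 + 3 * run[0] for run in design]
print(main_effects(design, y))  # -> [6.0, 0.0, 0.0, 0.0]
```

The active factor's effect is twice its regression coefficient (13 at the high level minus 7 at the low level), while inert factors average out to zero.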

  6. Optimization of a chondrogenic medium through the use of factorial design of experiments.

    PubMed

    Enochson, Lars; Brittberg, Mats; Lindahl, Anders

    2012-12-01

    The standard culture system for in vitro cartilage research is based on cells in a three-dimensional micromass culture and a defined medium containing the chondrogenic key growth factor, transforming growth factor (TGF)-β1. The aim of this study was to optimize the medium for chondrocyte micromass culture. Human chondrocytes were cultured in different media formulations, designed with a factorial design of experiments (DoE) approach and based on the standard medium for redifferentiation. The factors significant for the redifferentiation of the chondrocytes were determined and optimized in a two-step process through the use of response surface methodology. TGF-β1, dexamethasone, and glucose were significant factors for differentiating the chondrocytes. Compared to the standard medium, TGF-β1 was increased by 30%, dexamethasone reduced by 50%, and glucose increased by 22%. The potency of the optimized medium was validated in a comparative study against the standard medium. The optimized medium resulted in micromass cultures with increased expression of genes important for the articular chondrocyte phenotype and with increased glycosaminoglycan/DNA content. By optimizing the standard medium with the efficient DoE method, a new medium that gave better redifferentiation of articular chondrocytes was determined.

  7. Application of multi-factorial design of experiments to successfully optimize immunoassays for robust measurements of therapeutic proteins.

    PubMed

    Ray, Chad A; Patel, Vimal; Shih, Judy; Macaraeg, Chris; Wu, Yuling; Thway, Theingi; Ma, Mark; Lee, Jean W; Desilva, Binodh

    2009-02-20

    Developing a process that generates robust immunoassays that can be used to support studies with tight timelines is a common challenge for bioanalytical laboratories. Design of experiments (DOE) is a tool that has been used by many industries for the purpose of optimizing processes. The approach is capable of identifying critical factors and their interactions with a minimal number of experiments. The challenge in implementing this tool in the bioanalytical laboratory is to develop a user-friendly approach that scientists can understand and apply. We have successfully addressed these challenges by eliminating the screening design, introducing automation, and applying a simple mathematical approach for the output parameter. A modified central composite design (CCD) was applied to three ligand binding assays. The intra-plate factors selected were coating, detection antibody concentration, and streptavidin-HRP concentration. The inter-plate factors included incubation times for each step. The objective was to maximize the log signal-to-blank ratio (logS/B) of the low standard to the blank. The maximum desirable conditions were determined using JMP 7.0. To verify the validity of the predictions, the logS/B prediction was compared against the observed logS/B during pre-study validation experiments. The three assays were optimized using the multi-factorial DOE. The total error for all three methods was less than 20%, which indicated method robustness. DOE identified interactions in one of the methods. The model predictions for logS/B were within 25% of the observed pre-study validation values for all methods tested. The comparison between the CCD and hybrid screening design yielded comparable parameter estimates. The user-friendly design enables effective application of multi-factorial DOE to optimize ligand binding assays for therapeutic proteins. The approach allows for identification of interactions between factors, consistency in optimal parameter determination, and reduced method

  8. Optimization of thiamethoxam adsorption parameters using multi-walled carbon nanotubes by means of fractional factorial design.

    PubMed

    Panić, Sanja; Rakić, Dušan; Guzsvány, Valéria; Kiss, Erne; Boskovic, Goran; Kónya, Zoltán; Kukovecz, Ákos

    2015-12-01

    The aim of this work was to evaluate the significant factors affecting the thiamethoxam adsorption efficiency of oxidized multi-walled carbon nanotubes (MWCNTs) used as adsorbents. Five factors (initial solution concentration of thiamethoxam in water, temperature, solution pH, MWCNT weight and contact time) were investigated using a resolution V 2⁵⁻¹ fractional factorial design. The obtained linear model was statistically tested using analysis of variance (ANOVA), and analysis of residuals was used to investigate the model validity. The factors and their second-order interactions affecting thiamethoxam removal could be divided into three groups: very important, moderately important and insignificant. The initial solution concentration was found to be the parameter with the greatest influence on thiamethoxam adsorption from water. Optimization of the factor levels was carried out by minimizing those parameters which are usually critical in real life, the temperature (energy), contact time (money) and weight of MWCNTs (potential health hazard), in order to maximize the adsorbed amount of the pollutant. The maximal adsorbed thiamethoxam amounts in the real and optimized experiments indicate that, among the minimized parameters, the adsorption time makes the largest difference. The results of this study indicate that fractional factorial design is a very useful tool for screening a large number of parameters and reducing the number of adsorption experiments. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Optimization of the Neutrino Factory, revisited

    NASA Astrophysics Data System (ADS)

    Agarwalla, Sanjib K.; Huber, Patrick; Tang, Jian; Winter, Walter

    2011-01-01

    We perform the baseline and energy optimization of the Neutrino Factory including the latest simulation results on the magnetized iron detector (MIND). We also consider the impact of τ decays, generated by νμ → ντ or νe → ντ appearance, on the mass hierarchy, CP violation, and θ13 discovery reaches, which we find to be negligible for the considered detector. For the baseline-energy optimization for small sin²2θ13, we qualitatively recover the results with earlier simulations of the MIND detector. We find optimal baselines of about 2500 km to 5000 km for the CP violation measurement, where now values of Eμ as low as about 12 GeV may be possible. However, for large sin²2θ13, we demonstrate that the lower threshold and the backgrounds reconstructed at lower energies allow in fact for muon energies as low as 5 GeV at considerably shorter baselines, such as FNAL-Homestake. This implies that with the latest MIND analysis, low- and high-energy versions of the Neutrino Factory are just two different versions of the same experiment optimized for different parts of the parameter space. Apart from a green-field study of the updated detector performance, we discuss specific implementations for the two-baseline Neutrino Factory, where the considered detector sites are taken to be currently discussed underground laboratories. We find that reasonable setups can be found for the Neutrino Factory source in Asia, Europe, and North America, and that a triangular-shaped storage ring is possible in all cases based on geometrical arguments only.

  10. An Application of Fractional Factorial Designs to Study Drug Combinations

    PubMed Central

    Jaynes, Jessica; Ding, Xianting; Xu, Hongquan; Wong, Weng Kee; Ho, Chih-Ming

    2013-01-01

    Herpes simplex virus type 1 (HSV-1) is known to cause diseases of various severities. There is increasing interest in finding drug combinations to treat HSV-1, to reduce drug resistance and cytotoxicity. Drug combinations offer potentially higher efficacy and lower individual drug dosages. In this paper, we report a new application of fractional factorial designs to investigate a biological system with HSV-1 and six antiviral drugs, namely, Interferon-alpha, Interferon-beta, Interferon-gamma, Ribavirin, Acyclovir, and TNF-alpha. We show how the sequential use of two- and three-level fractional factorial designs can screen for important drugs and drug interactions, as well as determine potential optimal drug dosages through the use of contour plots. Our initial experiment using a two-level fractional factorial design suggests that there is model inadequacy and that drug dosages should be reduced. A follow-up experiment using a blocked three-level fractional factorial design indicates that TNF-alpha has little effect and that HSV-1 infection can be suppressed effectively by using the right combination of the other five antiviral drugs. These observations have practical implications for the understanding of antiviral drug mechanisms and can result in better design of antiviral drug therapy. PMID:22859316

  11. Novel Starch-PVA Polymer for Microparticle Preparation and Optimization Using Factorial Design Study

    PubMed Central

    Chattopadhyay, Helen; De, Amit Kumar; Datta, Sriparna

    2015-01-01

    The aim of our present work was to optimize the ratio of a novel polymer blend, starch-polyvinyl alcohol (PVA), for the controlled delivery of Ornidazole. Polymer-coated drug microparticles were prepared by an emulsion method. Optical microscopy, scanning electron microscopy, and atomic force microscopy revealed that the microparticles were smaller than 10 micrometers, with a smooth spherical shape. Fourier transform infrared spectroscopy showed the absence of drug-polymer interaction. A statistical 3² full factorial design was used to study the effect of different concentrations of starch and PVA on the drug release profile, and the three-dimensional plots gave an idea of the contribution of each factor to the release kinetics. Hence this novel starch-polyvinyl alcohol polymer can be utilized for controlled release of the drug from a targeted delivery device. PMID:27347511

  12. An examination of effect estimation in factorial and standardly-tailored designs

    PubMed Central

    Allore, Heather G; Murphy, Terrence E

    2012-01-01

    individual component effects from the family of factorial designs and this limitation for standardly-tailored designs. We use the phrase 'factorial designs' to describe full-factorial designs and their derivatives, including the fractional factorial, partial factorial, incomplete factorial and modified reciprocal designs. We suggest two potential directions for designing multicomponent interventions to facilitate unbiased estimates of individual interventional components. Results: Full factorial designs and their variants are the most common multicomponent trial design described in the literature and differ meaningfully from standardly-tailored designs. Factorial and standardly-tailored designs result in similar estimates of net effect with different levels of precision. Unbiased estimation of individual component effects from a standardly-tailored design will require new methodology. Limitations: Although clinically relevant in geriatrics, previous applications of standardly-tailored designs have not provided unbiased estimates of the effects of individual interventional components. Discussion: Future directions to estimate individual component effects from standardly-tailored designs include applying D-optimal designs and creating independent linear combinations of risk factors analogous to factor analysis. Conclusion: Methods are needed to extract unbiased estimates of the effects of individual interventional components from standardly-tailored designs. PMID:18375650

  13. Formulation optimization of gentamicin loaded Eudragit RS100 microspheres using factorial design study.

    PubMed

    Singh, Deependra; Saraf, Swarnlata; Dixit, Vinod Kumar; Saraf, Shailendra

    2008-04-01

    Gentamicin-Eudragit RS100 microspheres were prepared by a modified double emulsion method. A 3² full factorial experiment was designed to study the effects of the composition of the outer aqueous phase, in terms of the amount of glycerol (viscosity effect) and sodium chloride (osmotic pressure gradient effect), on the entrapment efficiency, percentage yield and microsphere size. Analysis of variance for the measured responses indicated that the test is significant (p<0.05). The contribution of sodium chloride concentration was found to be higher for entrapment efficiency and percentage yield, whereas glycerol produced a significant effect on the mean diameter of the microspheres. The microspheres were spherical particles in the size range of 33.24-60.43 μm. The in vitro release profile of the optimized formulation demonstrated sustained release for 24 h following Higuchi kinetics. Finally, drug bioactivity was found to remain intact after microencapsulation. Response surface graphs are presented to examine the effects of the independent variables on the responses studied. Thus, by formulation design, the important parameters affecting the characteristics of gentamicin-loaded Eudragit RS100 microspheres can be identified for controlled delivery with desirable characteristics in terms of maximum entrapment and yield.

  14. Statistical optimization of the growth factors for Chaetoceros neogracile using fractional factorial design and central composite design.

    PubMed

    Jeong, Sung-Eun; Park, Jae-Kweon; Kim, Jeong-Dong; Chang, In-Jeong; Hong, Seong-Joo; Kang, Sung-Ho; Lee, Choul-Gyun

    2008-12-01

    Statistical experimental designs involving (i) a fractional factorial design (FFD) and (ii) a central composite design (CCD) were applied to optimize the culture medium constituents for production of a unique antifreeze protein by the Antarctic microalga Chaetoceros neogracile. The results of the FFD suggested that NaCl, KCl, MgCl2, and Na2SiO3 were significant variables that highly influenced the growth rate and biomass production. The optimum culture medium for the production of an antifreeze protein from C. neogracile was found to be Kalle's artificial seawater, pH 7.0±0.5, consisting of 28.566 g/l NaCl, 3.887 g/l MgCl2, 1.787 g/l MgSO4, 1.308 g/l CaSO4, 0.832 g/l K2SO4, 0.124 g/l CaCO3, 0.103 g/l KBr, 0.0288 g/l SrSO4, and 0.0282 g/l H3BO3. The antifreeze activity significantly increased after cells were treated with cold shock (at -5°C) for 14 h. To the best of our knowledge, this is the first report demonstrating an antifreeze-like protein in C. neogracile.

  15. The International Design Study for the Neutrino Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, K.

    2008-02-21

    The International Design Study for a future Neutrino Factory and super-beam facility (the ISS) established the physics case for a high-precision programme of long-baseline neutrino-oscillation measurements. The ISS also identified baseline specifications for the Neutrino Factory accelerator complex and the neutrino detector systems. This paper summarises the objectives of the International Design Study for the Neutrino Factory (the IDS-NF). The IDS-NF will build on the work of the ISS to deliver a Reference Design Report for the Neutrino Factory by 2012/13 and an Interim Design Report by 2010/11.

  16. Design optimization of condenser microphone: a design of experiment perspective.

    PubMed

    Tan, Chee Wee; Miao, Jianmin

    2009-06-01

    A well-designed condenser microphone backplate is very important in the attainment of good frequency response characteristics (high sensitivity and a wide bandwidth with flat response) and low mechanical-thermal noise. To study the design optimization of the backplate, a 2⁶ factorial design with a single replicate, consisting of six backplate parameters and four responses, was undertaken on a comprehensive condenser microphone model developed by Zuckerwar. Through the elimination of insignificant parameters via normal probability plots of the effect estimates, the unreplicated factorial design can be projected into a replicated one, allowing an analysis of variance to be carried out on the factorial design. The air gap and slot have significant effects on the sensitivity, mechanical-thermal noise, and bandwidth, while the slot/hole location interaction has a major influence over the latter two responses. An organized and systematic approach to designing the backplate is summarized.
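The screening step this abstract describes (normal probability plots of effect estimates from an unreplicated two-level design) rests on computing a contrast-based estimate for every main effect and interaction and looking for the few that stand out. A generic sketch, with an invented response in which only two factors (standing in for "air gap" and "slot") and their interaction are active; the coefficients are illustrative, not the microphone model's:

```python
from itertools import combinations, product

def effect(column, y):
    """Contrast-based effect estimate for a +/-1 column in a two-level
    design: sum(c * y) / (N/2), i.e. mean(high) - mean(low)."""
    return sum(c * yi for c, yi in zip(column, y)) / (len(y) / 2)

# unreplicated 2^6 design, factors indexed F0..F5
design = list(product((-1, 1), repeat=6))
# hypothetical response: F0 and F1 are active, plus their interaction;
# everything else is inert (this is what a normal plot would reveal)
y = [5 + 2 * r[0] - 1.5 * r[1] + 0.8 * r[0] * r[1] for r in design]

labels, columns = [], []
for j in range(6):                       # main-effect columns
    labels.append(f"F{j}")
    columns.append([r[j] for r in design])
for i, j in combinations(range(6), 2):   # two-factor interaction columns
    labels.append(f"F{i}*F{j}")
    columns.append([r[i] * r[j] for r in design])

ranked = sorted(zip(labels, (effect(c, y) for c in columns)),
                key=lambda t: -abs(t[1]))
print(ranked[:3])  # the three active terms dominate; the rest are ~0
```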

  17. Implementing Clinical Research Using Factorial Designs: A Primer.

    PubMed

    Baker, Timothy B; Smith, Stevens S; Bolt, Daniel M; Loh, Wei-Yin; Mermelstein, Robin; Fiore, Michael C; Piper, Megan E; Collins, Linda M

    2017-07-01

    Factorial experiments have rarely been used in the development or evaluation of clinical interventions. However, factorial designs offer advantages over randomized controlled trial designs, the latter being much more frequently used in such research. Factorial designs are highly efficient (permitting evaluation of multiple intervention components with good statistical power) and present the opportunity to detect interactions amongst intervention components. Such advantages have led methodologists to advocate for the greater use of factorial designs in research on clinical interventions (Collins, Dziak, & Li, 2009). However, researchers considering the use of such designs in clinical research face a series of choices that have consequential implications for the interpretability and value of the experimental results. These choices include: whether to use a factorial design, selection of the number and type of factors to include, how to address the compatibility of the different factors included, whether and how to avoid confounds between the type and number of interventions a participant receives, and how to interpret interactions. The use of factorial designs in clinical intervention research poses choices that differ from those typically considered in randomized clinical trial designs. However, the great information yield of the former encourages clinical researchers' increased and careful execution of such designs. Copyright © 2017. Published by Elsevier Ltd.
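For the 2×2 case central to clinical factorial designs like those discussed above, the two main effects and the interaction can be read directly off the four cell means. A minimal sketch with hypothetical outcome rates (the cell values are invented for illustration):

```python
def two_by_two_effects(means):
    """Main effects and interaction from the cell means of a 2x2
    factorial. means[a][b] is the mean outcome with factor A at level a
    and factor B at level b (0 = absent, 1 = present)."""
    a_eff = (means[1][0] + means[1][1]) / 2 - (means[0][0] + means[0][1]) / 2
    b_eff = (means[0][1] + means[1][1]) / 2 - (means[0][0] + means[1][0]) / 2
    # interaction: does B's benefit change when A is also present?
    interaction = (means[1][1] - means[1][0]) - (means[0][1] - means[0][0])
    return a_eff, b_eff, interaction

# hypothetical success rates: A and B each help, and B helps more
# when A is also given (a positive, synergistic interaction)
cells = [[0.10, 0.20],   # no A: control, B alone
         [0.25, 0.45]]   # A alone, A + B
print(tuple(round(v, 3) for v in two_by_two_effects(cells)))  # -> (0.2, 0.15, 0.1)
```

A nonzero interaction is exactly the quantity a separate two-arm trial of each treatment could never estimate, which is the efficiency argument the abstract makes.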

  18. Factorial versus multi-arm multi-stage designs for clinical trials with multiple treatments.

    PubMed

    Jaki, Thomas; Vasileiou, Despina

    2017-02-20

    When several treatments are available for evaluation in a clinical trial, different design options are available. We compare multi-arm multi-stage with factorial designs; in particular, we consider a 2 × 2 factorial design, where groups of patients will take treatment A, treatment B, both or neither. We investigate the performance and characteristics of both types of designs under different scenarios and compare them using both theory and simulations. For the factorial designs, we construct appropriate test statistics to test the hypothesis of no treatment effect against the control group with overall control of the type I error. We study the effect of the choice of the allocation ratios on the critical value and sample size requirements for a target power. We also study how the possibility of an interaction between the two treatments A and B affects type I and type II errors when testing for significance of each of the treatment effects. We present both simulation results and a case study of an osteoarthritis clinical trial. We discover that in an optimal factorial design, in terms of minimising the associated critical value, the corresponding allocation ratios differ substantially from those of a balanced design. We also find evidence of potentially big losses in power in factorial designs for moderate deviations from the study design assumptions, and little gain compared with multi-arm multi-stage designs when the assumptions hold. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  19. Informing the Uninformed: Optimizing the Consent Message Using a Fractional Factorial Design

    PubMed Central

    Tait, Alan R.; Voepel-Lewis, Terri; Nair, Vijayan N.; Narisetty, Naveen N.; Fagerlin, Angela

    2013-01-01

    Objective: Research information should be presented in a manner that promotes understanding. However, many parents and research subjects have difficulty understanding and making informed decisions. This study was designed to examine the effect of different communication strategies on parental understanding of research information. Participants: 640 parents of children scheduled for elective surgery. Design: Observational study using a fractional factorial design. Setting: Large tertiary care children's hospital. Interventions: Parents were randomized to receive information about a hypothetical pain trial presented in one of 16 consent documents containing different combinations of 5 selected communication strategies (i.e., length, readability, processability [formatting], graphical display, and supplemental verbal disclosure). Main outcome measures: Parents were interviewed to determine their understanding of the study elements (e.g., protocol, alternatives, etc.) and their gist (main point) and verbatim (actual) understanding of the risks and benefits. Results: Main effects for understanding were found for processability, readability, message length, use of graphics, and verbal discussion. Consent documents with high processability, 8th grade reading level, and graphics resulted in significantly greater gist and verbatim understanding compared with forms without these attributes (mean difference, 95% CI = 0.57, 0.26-0.88, correct responses out of 7 and 0.54, 0.20-0.88 correct responses out of 4 for gist and verbatim, respectively). Conclusions: Results identified several communication strategy combinations that improved parents' understanding of research information. Adoption of these active strategies by investigators, clinicians, IRBs, and study sponsors represents a simple, practical, and inexpensive means to optimize the consent message and enhance parental, participant, and patient understanding. PMID:23700028

  20. Actinobacteria consortium as an efficient biotechnological tool for mixed polluted soil reclamation: Experimental factorial design for bioremediation process optimization.

    PubMed

    Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra

    2018-01-15

    The objective of the present work was to establish the optimal biological and physicochemical parameters for removing lindane and Cr(VI) simultaneously from soil, at high and/or low pollutant concentrations, using an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. The final aim was to treat real soils from the northwest of Argentina contaminated with Cr(VI) and/or lindane under the optimal biological and physicochemical conditions. After determining the optimal inoculum concentration (2 g kg⁻¹), an experimental design model with four factors (temperature, moisture, and initial concentrations of Cr(VI) and lindane) was employed to predict the system behavior during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature was different for each situation: for low initial concentrations of both pollutants it was 25°C; for low initial concentrations of Cr(VI) and high initial concentrations of lindane, 30°C; and for high initial concentrations of Cr(VI), 35°C. To confirm the model adequacy and the validity of the optimization procedure, experiments were performed on six real contaminated soil samples. The defined actinobacteria consortium reduced the contaminant concentrations in five of the six samples when working at laboratory scale under the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Sb2Te3 and Its Superlattices: Optimization by Statistical Design.

    PubMed

    Behera, Jitendra K; Zhou, Xilin; Ranjan, Alok; Simpson, Robert E

    2018-05-02

    The objective of this work is to demonstrate the usefulness of fractional factorial design for optimizing the crystal quality of chalcogenide van der Waals (vdW) crystals. We statistically analyze the growth parameters of highly c-axis-oriented Sb2Te3 crystals and Sb2Te3-GeTe phase-change vdW heterostructured superlattices. The statistical significance of the growth parameters of temperature, pressure, power, buffer materials, and buffer layer thickness was found by fractional factorial design and response surface analysis. Temperature, pressure, power, and their second-order interactions are the major factors that significantly influence the quality of the crystals. Additionally, using tungsten rather than molybdenum as a buffer layer significantly enhances the crystal quality. Fractional factorial design minimizes the number of experiments that are necessary to find the optimal growth conditions, resulting in an order of magnitude improvement in the crystal quality. We highlight that statistical design-of-experiments methods, which are more commonly used in product design, should be considered more broadly by those designing and optimizing materials.

  2. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete

    PubMed Central

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-01-01

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
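The run layout described above, a two-level factorial core augmented with axial mixtures at ±2 and replicated central mixtures, is a central composite arrangement. A minimal sketch of how such a design matrix can be generated (hypothetical helper name, shown for two factors rather than the study's five):

```python
from itertools import product

# Central composite layout: a two-level factorial core at -1/+1, axial points
# at +/-alpha on each axis, and replicated center points at the origin.
# Shown for 2 factors with alpha = 2; the study above used 5 factors.
def central_composite(n_factors, alpha=2.0, n_center=4):
    core = [tuple(float(v) for v in p) for p in product((-1, 1), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for sign in (-alpha, alpha):
            point = [0.0] * n_factors
            point[i] = sign
            axial.append(tuple(point))
    center = [tuple([0.0] * n_factors)] * n_center
    return core + axial + center

design = central_composite(2)
print(len(design))  # 4 factorial + 4 axial + 4 center = 12 runs
```

The axial points let curvature be estimated beyond the −1 to +1 factorial region, which is why the study could fit and then extend its statistical models.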

  3. Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.

    PubMed

    Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun

    2015-03-13

    In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A) on compressive strength, modulus of elasticity, as well as autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.

  4. Design, analysis and presentation of factorial randomised controlled trials

    PubMed Central

    Montgomery, Alan A; Peters, Tim J; Little, Paul

    2003-01-01

    Background The evaluation of more than one intervention in the same randomised controlled trial can be achieved using a parallel group design. However, this requires increased sample size and can be inefficient, especially if there is also interest in considering combinations of the interventions. An alternative may be a factorial trial, where for two interventions participants are allocated to receive neither intervention, one or the other, or both. Factorial trials require special considerations, however, particularly at the design and analysis stages. Discussion Using a 2 × 2 factorial trial as an example, we present a number of issues that should be considered when planning a factorial trial. The main design issue is that of sample size. Factorial trials are most often powered to detect the main effects of interventions, since adequate power to detect plausible interactions requires greatly increased sample sizes. The main analytical issues relate to the investigation of main effects and the interaction between the interventions in appropriate regression models. Presentation of results should reflect the analytical strategy with an emphasis on the principal research questions. We also give an example of how baseline and follow-up data should be presented. Lastly, we discuss the implications of the design, analytical and presentational issues covered. Summary Difficulty in interpreting the results of a factorial trial when an influential interaction is observed is the cost of the potential for efficient, simultaneous consideration of two or more interventions. Factorial trials can in principle be designed to have adequate power to detect realistic interactions, and in any case they are the only design that allows such effects to be investigated. PMID:14633287
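In a 2 × 2 factorial trial, the main effects and the interaction can be read directly off the four treatment-cell means; a minimal sketch with hypothetical numbers:

```python
# Estimate the two main effects and the interaction in a 2x2 factorial trial
# from the four treatment-cell means. All numbers are hypothetical.

def factorial_2x2_effects(cells):
    """cells maps (a, b) in {0,1}x{0,1} to the mean outcome in that cell."""
    m00, m01 = cells[(0, 0)], cells[(0, 1)]
    m10, m11 = cells[(1, 0)], cells[(1, 1)]
    main_a = (m10 + m11) / 2 - (m00 + m01) / 2   # effect of A, averaged over B
    main_b = (m01 + m11) / 2 - (m00 + m10) / 2   # effect of B, averaged over A
    interaction = (m11 - m01) - (m10 - m00)      # does B modify the effect of A?
    return main_a, main_b, interaction

# Purely additive hypothetical outcome: A adds 2, B adds 3, no synergy.
additive = {(0, 0): 10.0, (1, 0): 12.0, (0, 1): 13.0, (1, 1): 15.0}
print(factorial_2x2_effects(additive))  # (2.0, 3.0, 0.0)
```

A nonzero interaction term is exactly the situation the abstract warns about: the trial must then be interpreted via simple effects rather than the averaged main effects.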

  5. Teaching fractional factorial experiments via course delegate designed experiments.

    PubMed

    Coleman, S; Antony, J

    1999-01-01

    Industrial experiments are fundamental in enhancing the understanding and knowledge of a process and product behavior. Designed industrial experiments assist people in understanding, investigating, and improving their processes. The purpose of a designed experiment is to understand which factors might influence the process output and then to determine those factor settings that optimize the process output. Teaching "design of experiments" using textbook examples does not fully shed light on how to identify and formulate the problem, identify factors, and determine the performance of the physical experiment. Presented here is an example of how to teach fractional factorial experiments in a course on designed experiments. Also presented is a practical, hands-on experiment that has been found to be extremely successful in instilling confidence and motivation in course delegates. The experiment provides a great stimulus to the delegates for the application of experimental design in their own work environment.

  6. The Positive Emotions after Acute Coronary Events behavioral health intervention: Design, rationale, and preliminary feasibility of a factorial design study.

    PubMed

    Huffman, Jeffery C; Albanese, Ariana M; Campbell, Kirsti A; Celano, Christopher M; Millstein, Rachel A; Mastromauro, Carol A; Healy, Brian C; Chung, Wei-Jean; Januzzi, James L; Collins, Linda M; Park, Elyse R

    2017-04-01

    Positive psychological constructs, such as optimism, are associated with greater participation in cardiac health behaviors and improved cardiac outcomes. Positive psychology interventions, which target psychological well-being, may represent a promising approach to improving health behaviors in high-risk cardiac patients. However, no study has assessed whether a positive psychology intervention can promote physical activity following an acute coronary syndrome. In this article we will describe the methods of a novel factorial design study to aid the development of a positive psychology-based intervention for acute coronary syndrome patients and aim to provide preliminary feasibility data on study implementation. The Positive Emotions after Acute Coronary Events III study is an optimization study (planned N = 128), subsumed within a larger multiphase optimization strategy iterative treatment development project. The goal of Positive Emotions after Acute Coronary Events III is to identify the ideal components of a positive psychology-based intervention to improve post-acute coronary syndrome physical activity. Using a 2 × 2 × 2 factorial design, Positive Emotions after Acute Coronary Events III aims to: (1) evaluate the relative merits of using positive psychology exercises alone or combined with motivational interviewing, (2) assess whether weekly or daily positive psychology exercise completion is optimal, and (3) determine the utility of booster sessions. The study's primary outcome measure is moderate-to-vigorous physical activity at 16 weeks, measured via accelerometer. Secondary outcome measures include psychological, functional, and adherence-related behavioral outcomes, along with metrics of feasibility and acceptability. For the primary study outcome, we will use a mixed-effects model with a random intercept (to account for repeated measures) to assess the main effects of each component (inclusion of motivational interviewing in the exercises

  7. Interim Design Report for the International Design Study for a Neutrino Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choubey, S.; Gandhi, R.; Goswami, S.

    2011-10-01

    The starting point for the International Design Study for the Neutrino Factory (the IDS-NF) was the output of the earlier International Scoping Study for a future Neutrino Factory and super-beam facility (the ISS). The accelerator facility described in section 2 incorporates the improvements that have been derived from the substantial amount of work carried out within the Accelerator Working Group. Highlights of these improvements include: (1) Initial concepts for the implementation of the proton driver at each of the three example sites, CERN, FNAL, and RAL; (2) Detailed studies of the energy deposition in the target area; (3) A reduction in the length of the muon beam phase-rotation and bunching systems; (4) Detailed analyses of the impact of the risk that stray magnetic field in the accelerating cavities in the ionization cooling channel will reduce the maximum operating gradient. Several alternative ionization-cooling lattices have been developed as fallback options to mitigate this technical risk; (5) Studies of particle loss in the muon front-end and the development of strategies to mitigate the deleterious effects of such losses; (6) The development of more complete designs for the muon linac and re-circulating linacs; (7) The development of a design for the muon FFAG that incorporates insertions for injection and extraction; and (8) Detailed studies of diagnostics in the decay ring. Other sub-systems have undergone a more 'incremental' evolution; an indication that the design of the Neutrino Factory has achieved a degree of maturity. The design of the neutrino detectors described in section 3 has been optimized and the Detector Working Group has made substantial improvements to the simulation and analysis of the Magnetized Iron Neutrino Detector (MIND) resulting in an improvement in the overall neutrino-detection efficiency and a reduction in the neutrino-energy threshold. In addition, initial consideration of the engineering of the MIND has

  8. Design of Experiments with Multiple Independent Variables: A Resource Management Perspective on Complete and Reduced Factorial Designs

    PubMed Central

    Collins, Linda M.; Dziak, John J.; Li, Runze

    2009-01-01

    An investigator who plans to conduct experiments with multiple independent variables must decide whether to use a complete or reduced factorial design. This article advocates a resource management perspective on making this decision, in which the investigator seeks a strategic balance between service to scientific objectives and economy. Considerations in making design decisions include whether research questions are framed as main effects or simple effects; whether and which effects are aliased (confounded) in a particular design; the number of experimental conditions that must be implemented in a particular design and the number of experimental subjects the design requires to maintain the desired level of statistical power; and the costs associated with implementing experimental conditions and obtaining experimental subjects. In this article four design options are compared: complete factorial, individual experiments, single factor, and fractional factorial designs. Complete and fractional factorial designs and single factor designs are generally more economical than conducting individual experiments on each factor. Although relatively unfamiliar to behavioral scientists, fractional factorial designs merit serious consideration because of their economy and versatility. PMID:19719358
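The aliasing that reduced factorial designs introduce can be made concrete: a half fraction of a 2^4 experiment is built by setting the fourth factor equal to the three-way interaction of the first three (generator D = ABC). A minimal sketch:

```python
from itertools import product

# Half-fraction 2^(4-1) design: choose A, B, C freely and set D = A*B*C
# (defining relation I = ABCD). Levels are coded -1/+1.
runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

for run in runs:
    print(run)

# Aliasing (confounding): the D column is identical to the ABC interaction
# column, so the main effect of D cannot be separated from the ABC effect.
assert all(d == a * b * c for a, b, c, d in runs)
print(len(runs), "runs instead of", 2 ** 4)  # 8 runs instead of 16
```

The economy the article emphasizes is visible here: half the experimental conditions, at the cost of deliberately confounding a main effect with a (usually negligible) high-order interaction.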

  9. Optimization and influence of parameter affecting the compressive strength of geopolymer concrete containing recycled concrete aggregate: using full factorial design approach

    NASA Astrophysics Data System (ADS)

    Krishnan, Thulasirajan; Purushothaman, Revathi

    2017-07-01

    There are several parameters that influence the properties of geopolymer concrete, which contains recycled concrete aggregate as the coarse aggregate. In the present study, the vital parameters affecting the compressive strength of geopolymer concrete containing recycled concrete aggregate are analyzed by varying four parameters at two levels each using a full factorial design in the statistical software Minitab® 17. The objective of the present work is to gain an idea of the optimization, the main parameter effects, their interactions, and the predicted response of the model generated using the factorial design. The parameters considered are molarity of sodium hydroxide (8 M and 12 M), curing time (6 h and 24 h), curing temperature (60°C and 90°C), and percentage of recycled concrete aggregate (0% and 100%). The results show that curing time, molarity of sodium hydroxide, and curing temperature were the significant parameters, in that order, and that the percentage of recycled concrete aggregate (RCA) was statistically insignificant in the production of geopolymer concrete. Thus, it is notable that the RCA content had a negligible effect on the compressive strength of geopolymer concrete. The responses predicted by the generated model showed reasonable agreement with the experimental data, with an R² value of 97.70%. Thus, geopolymer concrete comprising recycled concrete aggregate can help address major social and environmental concerns such as the depletion of naturally available aggregate sources and the disposal of construction and demolition waste into landfill.
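The significance ordering reported above comes from contrasting the average response at the high and low levels of each factor. A minimal sketch of such a main-effect calculation, using a made-up linear response rather than the study's data:

```python
from itertools import product

# Toy 2^4 full factorial (coded -1/+1) with a hypothetical response in which
# curing time matters most, then molarity, then temperature, and RCA% is inert.
factors = ["time", "molarity", "temperature", "rca"]
design = list(product((-1, 1), repeat=4))

def response(x):
    time, molarity, temperature, rca = x
    return 40 + 6 * time + 4 * molarity + 2 * temperature + 0 * rca

def main_effect(i):
    """Average response at the high level minus average at the low level."""
    hi = [response(x) for x in design if x[i] == 1]
    lo = [response(x) for x in design if x[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(i) for i, f in enumerate(factors)}
ranked = sorted(effects, key=lambda f: abs(effects[f]), reverse=True)
print(ranked)  # ['time', 'molarity', 'temperature', 'rca']
```

A statistically insignificant factor, like RCA% in the study, is one whose main effect is indistinguishable from noise; in this deterministic toy it is exactly zero.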

  10. Designing optimal cell factories: integer programming couples elementary mode analysis with regulation

    PubMed Central

    2012-01-01

    Background Elementary mode (EM) analysis is ideally suited for metabolic engineering as it allows for an unbiased decomposition of metabolic networks into biologically meaningful pathways. Recently, constrained minimal cut sets (cMCS) have been introduced to derive optimal design strategies for strain improvement by using the full potential of EM analysis. However, this approach does not allow for the inclusion of regulatory information. Results Here we present an alternative, novel, and simple method for the prediction of cMCS that accounts for Boolean transcriptional regulation. We use binary linear programming and show that the design of a regulated, optimal metabolic network of minimal functionality can be formulated as a standard optimization problem, where EMs and regulation show up as constraints. We validated our tool by optimizing ethanol production in E. coli. Our study showed that up to 70% of the predicted cMCS contained non-enzymatic, non-annotated reactions, which are difficult to engineer. These cMCS are automatically excluded by our approach using simple weight functions. Finally, due to efficient preprocessing, the binary program remains computationally feasible. Conclusions We used integer programming to predict efficient deletion strategies to metabolically engineer a production organism. Our formulation utilizes the full potential of cMCS but adds additional flexibility to the design process. In particular, our method allows regulatory information to be integrated into the metabolic design process and explicitly favors experimentally feasible deletions. Our method remains manageable even if millions or potentially billions of EMs enter the analysis. We demonstrated that our approach is able to correctly predict the most efficient designs for ethanol production in E. coli. PMID:22898474
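The idea behind (constrained) minimal cut sets can be illustrated by brute force on a toy network, with each elementary mode represented simply as the set of reactions it uses. This is a didactic sketch with invented reaction names, not the paper's binary-program formulation:

```python
from itertools import combinations

# A cut set is a set of reaction deletions that disables every undesired EM
# (hits at least one reaction in each) while leaving at least one desired EM
# fully intact. Minimal: no proper subset is itself a cut set.
undesired = [{"r1", "r2"}, {"r2", "r3"}, {"r1", "r4"}]   # e.g. byproduct modes
desired = [{"r3", "r5"}, {"r4", "r5"}]                   # e.g. product modes
reactions = sorted(set().union(*undesired, *desired))

def is_cut(cut):
    hits_all_undesired = all(cut & em for em in undesired)
    keeps_a_desired = any(not (cut & em) for em in desired)
    return hits_all_undesired and keeps_a_desired

cuts = []
for k in range(1, len(reactions) + 1):       # ascending size => minimality check
    for cand in combinations(reactions, k):
        cand = set(cand)
        if is_cut(cand) and not any(c < cand for c in cuts):
            cuts.append(cand)

print(cuts)  # the minimal cut sets of this toy network
```

Real networks have far too many EMs for this enumeration, which is why the paper casts the search as a binary linear program with weight functions to penalize hard-to-engineer deletions.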

  11. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  12. A factorial design experiment as a pilot study for noninvasive genetic sampling.

    PubMed

    Renan, Sharon; Speyer, Edith; Shahar, Naama; Gueta, Tomer; Templeton, Alan R; Bar-David, Shirli

    2012-11-01

    Noninvasive genetic sampling has increasingly been used in ecological and conservation studies during the last decade. A major part of the noninvasive genetic literature is dedicated to the search for optimal protocols, by comparing different methods of collection, preservation and extraction of DNA from noninvasive materials. However, the lack of quantitative comparisons among these studies and the possibility that different methods are optimal for different systems make it difficult to decide which protocol to use. Moreover, most studies that have compared different methods focused on a single factor - collection, preservation or extraction - while there could be interactions between these factors. We designed a factorial experiment, as a pilot study, aimed at exploring the effect of several collection, preservation and extraction methods, and the interactions between them, on the quality and amplification success of DNA obtained from Asiatic wild ass (Equus hemionus) faeces in Israel. The amplification success rates of one mitochondrial DNA marker and four microsatellite markers differed substantially as a function of collection, preservation and extraction methods and their interactions. The most efficient combination for our system integrated the use of swabs as a collection method with preservation at -20 °C and with the Qiagen DNA Stool Kit with modifications as the DNA extraction method. The significant interactions found between the collection, preservation, and extraction methods reinforce the importance of conducting a factorial design experiment, rather than examining each factor separately, as a pilot study before initiating a full-scale noninvasive research project. © 2012 Blackwell Publishing Ltd.

  13. MapFactory - Towards a mapping design pattern for big geospatial data

    NASA Astrophysics Data System (ADS)

    Rautenbach, Victoria; Coetzee, Serena

    2018-05-01

    With big geospatial data emerging, cartographers and geographic information scientists have to find new ways of dealing with the volume, variety, velocity, and veracity (4Vs) of the data. This requires the development of tools that allow processing, filtering, analysing, and visualising of big data through multidisciplinary collaboration. In this paper, we present the MapFactory design pattern that will be used for the creation of different maps according to the (input) design specification for big geospatial data. The design specification is based on elements from ISO19115-1:2014 Geographic information - Metadata - Part 1: Fundamentals that would guide the design and development of the map or set of maps to be produced. The results of the exploratory research suggest that the MapFactory design pattern will help with software reuse and communication. The MapFactory design pattern will aid software developers to build the tools that are required to automate map making with big geospatial data. The resulting maps would assist cartographers and others to make sense of big geospatial data.

  14. Factorial design applied to the optimization of lipid composition of topical antiherpetic nanoemulsions containing isoflavone genistein

    PubMed Central

    Argenta, Débora Fretes; de Mattos, Cristiane Bastos; Misturini, Fabíola Dallarosa; Koester, Leticia Scherer; Bassani, Valquiria Linck; Simões, Cláudia Maria Oliveira; Teixeira, Helder Ferreira

    2014-01-01

    The aim of this study was to optimize topical nanoemulsions containing genistein, by means of a 2³ full factorial design based on physicochemical properties and skin retention. The experimental arrangement was constructed using oil type (isopropyl myristate or castor oil), phospholipid type (distearoylphosphatidylcholine [DSPC] or dioleoylphosphatidylcholine [DOPC]), and ionic cosurfactant type (oleic acid or oleylamine) as independent variables. The analysis of variance showed effect of third order for particle size, polydispersity index, and skin retention of genistein. Nanoemulsions composed of isopropyl myristate/DOPC/oleylamine showed the smallest diameter and highest genistein amount in porcine ear skin whereas the formulation composed of isopropyl myristate/DSPC/oleylamine exhibited the lowest polydispersity index. Thus, these two formulations were selected for further studies. The formulations presented positive ζ potential values (>25 mV) and genistein content close to 100% (at 1 mg/mL). The incorporation of genistein in nanoemulsions significantly increased the retention of this isoflavone in epidermis and dermis, especially when the formulation composed by isopropyl myristate/DOPC/oleylamine was used. These results were supported by confocal images. Such formulations exhibited antiherpetic activity in vitro against herpes simplex virus 1 (strain KOS) and herpes simplex virus 2 (strain 333). Taken together, the results show that the genistein-loaded nanoemulsions developed in this study are promising options in herpes treatment. PMID:25336951
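The full factorial arrangement above crosses three two-level categorical factors, giving eight candidate formulations; a minimal enumeration sketch (factor names taken from the abstract):

```python
from itertools import product

# A 2^3 full factorial over categorical factors is just the Cartesian product
# of the level lists: every combination of oil, phospholipid, and cosurfactant.
oils = ["isopropyl myristate", "castor oil"]
phospholipids = ["DSPC", "DOPC"]
cosurfactants = ["oleic acid", "oleylamine"]

arms = list(product(oils, phospholipids, cosurfactants))
for oil, pl, cs in arms:
    print(f"{oil} / {pl} / {cs}")
print(len(arms), "formulations")  # 2 * 2 * 2 = 8
```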

  15. Tamarind seed gum-hydrolyzed polymethacrylamide-g-gellan beads for extended release of diclofenac sodium using 3² full factorial design.

    PubMed

    Nandi, Gouranga; Nandi, Amit Kumar; Khan, Najim Sarif; Pal, Souvik; Dey, Sibasish

    2018-07-15

    Development of tamarind seed gum (TSG)-hydrolyzed polymethacrylamide-g-gellan (h-Pmaa-g-GG) composite beads for extended release of diclofenac sodium using a 3² full factorial design is the main purpose of this study. The ratio of h-Pmaa-g-GG to TSG and the concentration of the cross-linker CaCl2 were taken as independent factors, with three levels of each. Effects of polymer ratio and CaCl2 on drug entrapment efficiency (DEE), drug release, bead size, and swelling were investigated. Responses such as DEE and different drug release parameters were statistically analyzed by the 3² full factorial design using Design-Expert software, and finally the formulation factors were optimized to obtain the USP reference release profile. Drug release rate was found to decrease with a decrease in the ratio of h-Pmaa-g-GG:TSG and an increase in the concentration of Ca2+ ions in the cross-linking medium. The optimized formulation showed a DEE of 93.25% and an extended drug release profile over a period of 10 h with f2 = 80.13. Kinetic modeling revealed a case-I (Fickian) diffusion-based drug release mechanism. Copyright © 2018 Elsevier B.V. All rights reserved.
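The f2 cited above is the standard similarity factor comparing a test dissolution profile against a reference (f2 ≥ 50 is the usual similarity criterion); a minimal sketch of the calculation with hypothetical profiles:

```python
import math

# Similarity factor f2 between a reference (R) and test (T) dissolution
# profile (% drug released at matched time points):
#   f2 = 50 * log10(100 / sqrt(1 + mean squared difference))
# Identical profiles give f2 = 100.
def f2_similarity(reference, test):
    n = len(reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mse))

ref = [18, 35, 52, 68, 81, 90]               # hypothetical % released
same = f2_similarity(ref, ref)               # 100.0
close = f2_similarity(ref, [16, 33, 50, 66, 83, 92])
print(same, close)
```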

  16. Factorial Design: An Eight Factor Experiment Using Paper Helicopters

    NASA Technical Reports Server (NTRS)

    Kozma, Michael

    1996-01-01

    The goal of this paper is to present the analysis of the multi-factor experiment (factorial design) conducted in EG490, Junior Design, at Loyola College in Maryland. The discussion concludes with the experimental analysis and ties the individual class papers together.

  17. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    PubMed

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.

  18. Bioretention Systems: Partial Factorial Designs for Nitrate Removal

    EPA Science Inventory

    Changes in nutrient loadings are monitored by introducing captured stormwater runoff into eight outdoor rain gardens at EPA’s Urban Water Research Facility in Edison, New Jersey, scaled for residential and urban landscapes. The partial factorial design includes non-vegetated meso...

  19. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
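The classic allocation rule mentioned above (sample sizes in proportion to the standard-deviation ratio divided by the square root of the cost ratio) can be checked numerically by minimizing total cost at a fixed variance of the difference in means; a minimal sketch with hypothetical standard deviations and per-subject costs:

```python
import math

# Check the rule n1/n2 = (s1/s2) * sqrt(c2/c1) by brute force: minimize total
# cost c1*n1 + c2*n2 while holding Var(mean1 - mean2) = s1^2/n1 + s2^2/n2 at a
# target value. All inputs are hypothetical.
s1, s2 = 4.0, 2.0        # group standard deviations
c1, c2 = 1.0, 4.0        # per-subject sampling costs
target_var = 0.5         # required variance of the mean difference

best = None
for n1 in range(1, 500):
    rest = target_var - s1 ** 2 / n1
    if rest <= 0:
        continue                      # n1 too small to meet the target at all
    n2 = s2 ** 2 / rest               # smallest (continuous) n2 meeting target
    cost = c1 * n1 + c2 * n2
    if best is None or cost < best[0]:
        best = (cost, n1, n2)

cost, n1, n2 = best
print(round(n1 / n2, 2), round((s1 / s2) * math.sqrt(c2 / c1), 2))  # 4.0 4.0
```

For these inputs the brute-force optimum (n1 = 64, n2 = 16) reproduces the analytic ratio exactly; the article's point is that under the actual Welch-Satterthwaite power constraint this simple rule is generally no longer optimal.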

  20. On the use of metabolic control analysis in the optimization of cyanobacterial biosolar cell factories.

    PubMed

    Angermayr, S Andreas; Hellingwerf, Klaas J

    2013-09-26

    Oxygenic photosynthesis will have a key role in a sustainable future. It is therefore significant that this process can be engineered in organisms such as cyanobacteria to construct cell factories that catalyze the (sun)light-driven conversion of CO2 and water into products like ethanol, butanol, or other biofuels or lactic acid, a bioplastic precursor, and oxygen as a byproduct. It is of key importance to optimize such cell factories to maximal efficiency. This holds for their light-harvesting capabilities under, for example, circadian illumination in large-scale photobioreactors. However, this also holds for the "dark" reactions of photosynthesis, that is, the conversion of CO2, NADPH, and ATP into a product. Here, we present an analysis, based on metabolic control theory, to estimate the optimal capacity for product formation with which such cyanobacterial cell factories have to be equipped. Engineered l-lactic acid-producing Synechocystis sp. PCC6803 strains are used to identify the relation between production rate and enzymatic capacity. The analysis shows that the engineered cell factories for l-lactic acid are fully limited by the metabolic capacity of the product-forming pathway. We attribute this to the fact that currently available promoter systems in cyanobacteria lack the genetic capacity to provide sufficient expression in single-gene doses.

  1. Recirculating linacs for a neutrino factory - Arc optics design and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alex Bogacz; Valeri Lebedev

    2001-10-21

    A conceptual lattice design for a muon accelerator based on recirculating linacs (Nucl. Instr. and Meth. A 472 (2001) 499, these proceedings) is presented here. The challenge of accelerating and transporting a large phase space of short-lived muons is answered here by presenting a proof-of-principle lattice design for a recirculating linac accelerator. It is the centerpiece of a chain of accelerators consisting of a 3 GeV linac and two consecutive recirculating linear accelerators, which facilitates acceleration starting after ionization cooling at 190 MeV/c and proceeding to 50 GeV. Beam transport issues for large-momentum-spread beams are accommodated by appropriate lattice design choices. The resulting arc optics is further optimized with a sextupole correction to suppress chromatic effects contributing to the emittance dilution. The presented proof-of-principle design of the arc optics with horizontal separation of multi-pass beams can be extended to all passes in both recirculating linacs.

  2. Optimization of antibacterial activity by Gold-Thread (Coptidis Rhizoma Franch) against Streptococcus mutans using evolutionary operation-factorial design technique.

    PubMed

    Choi, Ung-Kyu; Kim, Mi-Hyang; Lee, Nan-Hee

    2007-11-01

    This study was conducted to find the optimum extraction conditions of Gold-Thread for antibacterial activity against Streptococcus mutans using the evolutionary operation (EVOP)-factorial design technique. Higher antibacterial activity was achieved at a higher extraction temperature (R2 = -0.79) and with a longer extraction time (R2 = -0.71). Antibacterial activity was not affected by the ethanol concentration of the extraction solvent (R2 = -0.12). The maximum antibacterial activity of Gold-Thread against S. mutans determined by the EVOP-factorial technique was obtained at an extraction temperature of 80 degrees C, an extraction time of 26 h, and an ethanol concentration of 50%. The population of S. mutans decreased from 6.110 log CFU/ml in the initial set to 4.125 log CFU/ml in the third set.

  3. Novel Solid Lipid Nanocarrier Of Glibenclamide: A Factorial Design Approach With Response Surface Methodology.

    PubMed

    Pandey, Sonia; Patel, Payal; Gupta, Arti

    2018-05-21

    In the present investigation, a factorial design approach was applied to develop solid lipid nanoparticles (SLN) of Glibenclamide (GLB), a poorly water-soluble drug (BCS class II) used in the treatment of type 2 diabetes. The prime objectives of this experiment were to optimize the SLN formulation of Glibenclamide and improve the therapeutic effectiveness of the developed formulation. Glibenclamide-loaded SLNs (GLB-SLN) were fabricated by a high-speed homogenization technique. A 3² factorial design approach was employed to assess the influence of two independent variables, namely the amounts of Poloxamer 188 and glyceryl monostearate, on entrapment efficiency (% EE) (Y1), particle size (nm) (Y2), and % drug release at 8 h (Q8, Y3) and 24 h (Q24, Y4) of the prepared SLNs. Differential scanning calorimetry analysis revealed the compatibility of the drug with the lipid matrix and surfactant, while transmission and scanning electron microscopy studies indicated the size and shape of the SLN. The entrapment efficiency, particle size, Q8 and Q24 of the optimized SLNs were 88.93%, 125 nm, 31.12±0.951% and 86.07±1.291%, respectively. The optimized GLB-SLN formula was derived from an overlay plot. Three-dimensional response surface plots and regression equations confirmed the corresponding influence of the selected independent variables on the measured responses. In vivo testing of the GLB-SLN in diabetic albino rats demonstrated a significant antidiabetic effect. The hypoglycemic effect obtained with GLB-SLN remained significantly higher than that given by the drug alone and by a marketed formulation, further confirming the higher therapeutic effectiveness of the GLB-SLN formulation. Our findings suggest the feasibility of the investigated system for oral administration of Glibenclamide. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
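
    A 3² design evaluates every combination of two factors at three coded levels (-1, 0, +1), and the fitted second-order regression equation can then be evaluated over all nine runs to locate the best region. A sketch with illustrative coefficients (not the paper's fitted values):

```python
from itertools import product

# Coded levels of a 3^2 full factorial: every combination of -1, 0, +1
# for the two formulation factors (X1 = surfactant amount, X2 = lipid
# amount). Coefficients below are illustrative, not the study's.
runs = list(product((-1, 0, 1), repeat=2))

def predicted_ee(x1, x2, b=(85.0, 2.5, -3.0, 1.2, -0.8, -1.5)):
    """Full second-order model: Y = b0 + b1*X1 + b2*X2 + b12*X1*X2
    + b11*X1^2 + b22*X2^2."""
    b0, b1, b2, b12, b11, b22 = b
    return b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2

surface = {(x1, x2): predicted_ee(x1, x2) for x1, x2 in runs}
best = max(surface, key=surface.get)   # coded setting with highest predicted EE
```

    The nine predicted values are exactly what a 3D response surface plot visualizes.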

  4. A new efficient mixture screening design for optimization of media.

    PubMed

    Rispoli, Fred; Shah, Vishal

    2009-01-01

    Screening ingredients for the optimization of media is an important first step in reducing the many potential ingredients down to the vital few components. In this study, we propose a new screening method for mixture experiments called the centroid screening design. Comparison of the proposed design with the Plackett-Burman, fractional factorial, simplex lattice, and modified mixture designs shows that the centroid screening design is the most efficient of all, both in terms of the small number of experimental runs needed and in its ability to detect high-order interactions among ingredients. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
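
    The centroid idea can be illustrated by generating, for each non-empty subset of ingredients, the blend with equal proportions over that subset. This is a sketch of the simplex-centroid concept rather than the authors' exact run table, with hypothetical media ingredients:

```python
from itertools import combinations

def centroid_design(ingredients):
    """Candidate blends: the centroid of every non-empty subset S of
    ingredients, i.e. proportion 1/|S| for members of S and 0 elsewhere.
    For k ingredients this yields 2^k - 1 mixture points."""
    pts = []
    for r in range(1, len(ingredients) + 1):
        for subset in combinations(ingredients, r):
            pts.append({i: (1.0 / r if i in subset else 0.0)
                        for i in ingredients})
    return pts

# Hypothetical three-component medium for illustration.
pts = centroid_design(["glucose", "yeast_extract", "KH2PO4"])
```

    Because every run is a valid mixture (proportions sum to one), the design respects the simplex constraint that ordinary factorial screens ignore.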

  5. Design Learning of Teaching Factory in Mechanical Engineering

    NASA Astrophysics Data System (ADS)

    Putra, R. C.; Kusumah, I. H.; Komaro, M.; Rahayu, Y.; Asfiyanur, E. P.

    2018-02-01

    The industrial world that is the target of the process and learning outcomes of vocational high school (SMK) has its own character and nuances. Vocational education institutions should therefore adopt a learning approach appropriate to, and in accordance with, the industrial world. One production-based approach that situates learning in the world of work is industry-based learning, known as Teaching Factory, a model in which students are directly involved in producing goods or services of sufficient quality to be sold and accepted by consumers. The method used is a descriptive approach. The purpose of this research is to derive a design for a teaching factory based on the graduate competency requirements of partner industries, especially in the mechanical engineering department. The results of this study are expected to offer a choice of teaching factory models in the field of mechanical engineering consistent with the products and graduate competencies that industry needs.

  6. Optimizing flurbiprofen-loaded NLC by central composite factorial design for ocular delivery.

    PubMed

    Gonzalez-Mira, E; Egea, M A; Souto, E B; Calpena, A C; García, M L

    2011-01-28

    The purpose of this study was to design and optimize a new topical delivery system for ocular administration of flurbiprofen (FB), based on lipid nanoparticles. These particles, called nanostructured lipid carriers (NLC), were composed of a fatty acid (stearic acid (SA)) as the solid lipid and a mixture of Miglyol(®) 812 and castor oil (CO) as the liquid lipids, prepared by the hot high pressure homogenization method. After selecting the critical variables influencing the physicochemical characteristics of the NLC (the liquid lipid (i.e. oil) concentration with respect to the total lipid (cOil/L (wt%)), the surfactant and the flurbiprofen concentration, on particle size, polydispersity index and encapsulation efficiency), a three-factor five-level central rotatable composite design was employed to plan and perform the experiments. Morphological examination, crystallinity and stability studies were also performed to accomplish the optimization study. The results showed that increasing cOil/L (wt%) was followed by an enhanced tendency to produce smaller particles, but the liquid to solid lipid proportion should not exceed 30 wt% due to destabilization problems. Therefore, a 70:30 ratio of SA to oil (miglyol + CO) was selected to develop an optimal NLC formulation. The smaller particles obtained when increasing surfactant concentration led to the selection of 3.2 wt% of Tween(®) 80 (non-ionic surfactant). The positive effect of the increase in FB concentration on the encapsulation efficiency (EE) and its total solubilization in the lipid matrix led to the selection of 0.25 wt% of FB in the formulation. The optimal NLC showed an appropriate average size for ophthalmic administration (228.3 nm) with a narrow size distribution (0.156), negatively charged surface (-33.3 mV) and high EE (∼90%). The in vitro experiments proved that sustained release of FB was achieved using NLC as drug carriers. The optimal NLC formulation did not show toxicity on ocular tissues.
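
    The three-factor five-level central rotatable composite design mentioned above combines the 2³ factorial corners, six axial (star) points at ±α, and a centre point; choosing α = (2³)^(1/4) ≈ 1.682 makes the design rotatable, and it is the axial distance that gives each factor its five distinct levels. A generic sketch in coded units:

```python
from itertools import product

def central_composite(k, alpha=None):
    """Coded runs of a rotatable central composite design for k factors:
    2^k factorial corners, 2k axial points at +/-alpha, one centre point.
    alpha = (2^k)**0.25 gives rotatability."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = list(product((-1.0, 1.0), repeat=k))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(tuple(pt))
    return corners + axial + [(0.0,) * k]

runs = central_composite(3)                     # 8 + 6 + 1 = 15 distinct runs
levels = sorted({x for pt in runs for x in pt}) # five coded levels per factor
```

    In practice the centre point is replicated several times to estimate pure error; the sketch lists each distinct setting once.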

  7. Optimizing flurbiprofen-loaded NLC by central composite factorial design for ocular delivery

    NASA Astrophysics Data System (ADS)

    Gonzalez-Mira, E.; Egea, M. A.; Souto, E. B.; Calpena, A. C.; García, M. L.

    2011-01-01

    The purpose of this study was to design and optimize a new topical delivery system for ocular administration of flurbiprofen (FB), based on lipid nanoparticles. These particles, called nanostructured lipid carriers (NLC), were composed of a fatty acid (stearic acid (SA)) as the solid lipid and a mixture of Miglyol® 812 and castor oil (CO) as the liquid lipids, prepared by the hot high pressure homogenization method. After selecting the critical variables influencing the physicochemical characteristics of the NLC (the liquid lipid (i.e. oil) concentration with respect to the total lipid (cOil/L (wt%)), the surfactant and the flurbiprofen concentration, on particle size, polydispersity index and encapsulation efficiency), a three-factor five-level central rotatable composite design was employed to plan and perform the experiments. Morphological examination, crystallinity and stability studies were also performed to accomplish the optimization study. The results showed that increasing cOil/L (wt%) was followed by an enhanced tendency to produce smaller particles, but the liquid to solid lipid proportion should not exceed 30 wt% due to destabilization problems. Therefore, a 70:30 ratio of SA to oil (miglyol + CO) was selected to develop an optimal NLC formulation. The smaller particles obtained when increasing surfactant concentration led to the selection of 3.2 wt% of Tween® 80 (non-ionic surfactant). The positive effect of the increase in FB concentration on the encapsulation efficiency (EE) and its total solubilization in the lipid matrix led to the selection of 0.25 wt% of FB in the formulation. The optimal NLC showed an appropriate average size for ophthalmic administration (228.3 nm) with a narrow size distribution (0.156), negatively charged surface (-33.3 mV) and high EE (~90%). The in vitro experiments proved that sustained release of FB was achieved using NLC as drug carriers. The optimal NLC formulation did not show toxicity on ocular tissues.

  8. Recirculating linacs for a neutrino factory - Arc optics design and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valeri Lebedev; S. Bogacz

    2001-10-25

    A conceptual lattice design for a muon accelerator based on recirculating linacs (Nucl. Instr. and Meth. A 472 (2001) 499, these proceedings) is presented here. The challenge of accelerating and transporting a large phase space of short-lived muons is answered by a proof-of-principle lattice design for a recirculating linac accelerator. It is the centerpiece of a chain of accelerators consisting of a 3 GeV linac and two consecutive recirculating linear accelerators, which facilitates acceleration starting after ionization cooling at 190 MeV/c and proceeding to 50 GeV. Beam transport issues for large-momentum-spread beams are accommodated by appropriate lattice design choices. The resulting arc optics is further optimized with a sextupole correction to suppress chromatic effects contributing to the emittance dilution. The presented proof-of-principle design of the arc optics with horizontal separation of multi-pass beams can be extended to all passes in both recirculating linacs.

  9. Nanoemulsions containing a synthetic chalcone as an alternative for treating cutaneous leishmaniasis: optimization using a full factorial design.

    PubMed

    de Mattos, Cristiane Bastos; Argenta, Débora Fretes; Melchiades, Gabriela de Lima; Cordeiro, Marlon Norberto Sechini; Tonini, Maiko Luis; Moraes, Milene Hoehr; Weber, Tanara Beatriz; Roman, Silvane Souza; Nunes, Ricardo José; Teixeira, Helder Ferreira; Steindel, Mário; Koester, Letícia Scherer

    2015-01-01

    Nanoemulsions are drug delivery systems that may increase the penetration of lipophilic compounds through the skin, enhancing their topical effect. Chalcones are compounds of low water solubility that have been described as promising molecules for the treatment of cutaneous leishmaniasis (CL). In this context, the aim of this work was to optimize the development of a nanoemulsion containing a synthetic chalcone for CL treatment using a 2² full factorial design. The formulations were prepared by spontaneous emulsification and the experimental design studied the influence of two independent variables (type of surfactant - soybean lecithin or sorbitan monooleate - and type of co-surfactant - polysorbate 20 or polysorbate 80) on the physicochemical characteristics of the nanoemulsions, as well as on the skin permeation/retention of the synthetic chalcone in porcine skin. In order to evaluate the stability of the systems, the antileishmanial assay was performed against Leishmania amazonensis 24 hours and 60 days after the preparation of the nanoemulsions. The formulation composed of soybean lecithin and polysorbate 20 presented suitable physicochemical characteristics (droplet size 171.9 nm; polydispersity index 0.14; zeta potential -39.43 mV; pH 5.16; and viscosity 2.00 cP), drug content (91.09%) and the highest retention in dermis (3.03 µg·g⁻¹) - the main response of interest - confirmed by confocal microscopy. This formulation also presented better stability of leishmanicidal activity in vitro against L. amazonensis amastigote forms (half maximal inhibitory concentration value 0.32±0.05 µM), which confirmed the potential of the nanoemulsion soybean lecithin and polysorbate 20 for CL treatment.
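
    In a 2² full factorial like this one, each main effect is the difference between the mean response at the high and low coded level of a factor, and the interaction is computed the same way on the product column. A sketch with hypothetical dermis-retention values (not the study's data):

```python
# 2^2 full factorial in coded units: (surfactant type, co-surfactant type).
runs = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
y = [3.03, 1.10, 2.40, 1.55]          # hypothetical responses, one per run

def effect(col):
    """Mean response at +1 minus mean response at -1 for a given column."""
    hi = [yi for r, yi in zip(runs, y) if col(r) == +1]
    lo = [yi for r, yi in zip(runs, y) if col(r) == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

main_a = effect(lambda r: r[0])        # surfactant main effect
main_b = effect(lambda r: r[1])        # co-surfactant main effect
inter  = effect(lambda r: r[0] * r[1]) # A x B interaction
```

    With only four runs the design estimates both main effects and their interaction, which is why 2² screens are so economical for formulation work.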

  10. Development and Validation of HPLC-DAD and UHPLC-DAD Methods for the Simultaneous Determination of Guanylhydrazone Derivatives Employing a Factorial Design.

    PubMed

    Azevedo de Brito, Wanessa; Gomes Dantas, Monique; Andrade Nogueira, Fernando Henrique; Ferreira da Silva-Júnior, Edeildo; Xavier de Araújo-Júnior, João; Aquino, Thiago Mendonça de; Adélia Nogueira Ribeiro, Êurica; da Silva Solon, Lilian Grace; Soares Aragão, Cícero Flávio; Barreto Gomes, Ana Paula

    2017-08-30

    Guanylhydrazones are molecules with great pharmacological potential in various therapeutic areas, including antitumoral activity. Factorial design is an excellent tool for the optimization of a chromatographic method, because it makes it possible to quickly change factors such as temperature, mobile phase composition, mobile phase pH, and column length, among others, to establish the optimal conditions of analysis. The aim of the present work was to develop and validate HPLC and UHPLC methods for the simultaneous determination of guanylhydrazones with anticancer activity, employing experimental design. Precise, accurate, linear and robust HPLC and UHPLC methods were developed and validated for the simultaneous quantification of the guanylhydrazones LQM10, LQM14, and LQM17. The UHPLC method was more economical, with four times lower solvent consumption and a 20 times smaller injection volume, which allowed better column performance. Comparing the empirical approach employed in the HPLC method development to the DoE approach employed in the UHPLC method development, we can conclude that the factorial design made the method development faster, more practical and more rational. This resulted in methods that can be employed in the analysis, evaluation and quality control of these new synthetic guanylhydrazones.

  11. Nanoemulsions containing a synthetic chalcone as an alternative for treating cutaneous leishmaniasis: optimization using a full factorial design

    PubMed Central

    de Mattos, Cristiane Bastos; Argenta, Débora Fretes; Melchiades, Gabriela de Lima; Sechini Cordeiro, Marlon Norberto; Tonini, Maiko Luis; Moraes, Milene Hoehr; Weber, Tanara Beatriz; Roman, Silvane Souza; Nunes, Ricardo José; Teixeira, Helder Ferreira; Steindel, Mário; Koester, Letícia Scherer

    2015-01-01

    Nanoemulsions are drug delivery systems that may increase the penetration of lipophilic compounds through the skin, enhancing their topical effect. Chalcones are compounds of low water solubility that have been described as promising molecules for the treatment of cutaneous leishmaniasis (CL). In this context, the aim of this work was to optimize the development of a nanoemulsion containing a synthetic chalcone for CL treatment using a 2² full factorial design. The formulations were prepared by spontaneous emulsification and the experimental design studied the influence of two independent variables (type of surfactant – soybean lecithin or sorbitan monooleate – and type of co-surfactant – polysorbate 20 or polysorbate 80) on the physicochemical characteristics of the nanoemulsions, as well as on the skin permeation/retention of the synthetic chalcone in porcine skin. In order to evaluate the stability of the systems, the antileishmanial assay was performed against Leishmania amazonensis 24 hours and 60 days after the preparation of the nanoemulsions. The formulation composed of soybean lecithin and polysorbate 20 presented suitable physicochemical characteristics (droplet size 171.9 nm; polydispersity index 0.14; zeta potential −39.43 mV; pH 5.16; and viscosity 2.00 cP), drug content (91.09%) and the highest retention in dermis (3.03 µg·g−1) – the main response of interest – confirmed by confocal microscopy. This formulation also presented better stability of leishmanicidal activity in vitro against L. amazonensis amastigote forms (half maximal inhibitory concentration value 0.32±0.05 µM), which confirmed the potential of the nanoemulsion soybean lecithin and polysorbate 20 for CL treatment. PMID:26366075

  12. Optimization and Evaluation of Gastroretentive Ranitidine HCl Microspheres by Using Factorial Design with Improved Bioavailability and Mucosal Integrity in Ulcer Model.

    PubMed

    Khattab, Abeer; Zaki, Nashwah

    2017-05-01

    The purpose of our investigation was to develop and optimize the drug entrapment efficiency and bioadhesion properties of mucoadhesive chitosan microspheres containing ranitidine HCl prepared by an ionotropic gelation method as a gastroretentive delivery system; thus, we improved their protective and therapeutic gastric effects in an ulcer model. A 3 × 2² full factorial design was adopted to study the effect of three different factors, i.e., the type of polymer at three levels (chitosan, chitosan/hydroxypropylmethylcellulose, and chitosan/methylcellulose), the type of solvent at two levels (acetic acid and lactic acid), and the type of chitosan at two levels (low molecular weight (LMW) and high molecular weight (HMW)). The studied responses were particle size, swelling index, drug entrapment efficiency, bioadhesion (as determined by wash-off and rinsing tests), and T80% of drug release. Studies of the in vivo mucoadhesion and in vivo protective and healing effects of the optimized formula against gastric ulcers were carried out using albino rats (with induced gastric ulceration) and were compared to the effects of free ranitidine powder. A pharmacokinetic study in rabbits showed a significant, 2.1-fold increase in the AUC0-24 of the ranitidine microspheres compared to free ranitidine after oral administration. The optimized formula showed higher drug entrapment efficiency and mucoadhesion properties and had more protective and healing effects on induced gastric ulcers in rats than ranitidine powder. In conclusion, the prolonged gastrointestinal residence time and the stability of the mucoadhesive microspheres of ranitidine as well as the synergistic healing effect of chitosan could contribute to increasing the potential of its anti-gastric ulcer activity.

  13. Gearing up to the factory of the future

    NASA Astrophysics Data System (ADS)

    Godfrey, D. E.

    1985-01-01

    The features of factories and manufacturing techniques and tools of the near future are discussed. The spur to incorporate new technologies on the factory floor will originate in management, who must guide the interfacing of computer-enhanced equipment with traditional manpower, materials and machines. Electronic control with responsiveness and flexibility will be the key concept in an integrated approach to processing materials. Microprocessor-controlled laser and fluid cutters add accuracy to cutting operations. Unattended operation will become feasible when automated inspection is added to a work station through developments in robot vision. Optimum shop management will be achieved through AI programming of parts manufacturing, optimized work flows, and cost accounting. The automation enhancements will allow designers to directly affect parts being produced on the factory floor.

  14. Gaming in the Classroom: An Innovative Way to Teach Factorial Designs

    ERIC Educational Resources Information Center

    Stansbury, Jessica A.; Munro, Geoffrey D.

    2013-01-01

    This study tested the effectiveness of video game use for instruction of factorial designs in a research methods course. Students designed and conducted a mini study, playing "Dance, Dance, Revolution", using video game scores as the dependent variable. A mixed-design analysis of variance revealed a significantly greater increase from pretest to…

  15. Factorial design studies of antiretroviral drug-loaded stealth liposomal injectable: PEGylation, lyophilization and pharmacokinetic studies

    NASA Astrophysics Data System (ADS)

    Sudhakar, Beeravelli; Krishna, Mylangam Chaitanya; Murthy, Kolapalli Venkata Ramana

    2016-01-01

    The aim of the present study was to formulate and evaluate ritonavir-loaded stealth liposomes using a 3² factorial design, intended for parenteral delivery. Liposomes were prepared by the ethanol injection method using a 3² factorial design and characterized for various physicochemical parameters such as drug content, size, zeta potential, entrapment efficiency and in vitro drug release. The optimization process was carried out using desirability and overlay plots. The selected formulation was subjected to PEGylation using 10 % PEG-10000 solution. Stealth liposomes were characterized for the above-mentioned parameters along with surface morphology, Fourier transform infrared spectroscopy, differential scanning calorimetry, stability and in vivo pharmacokinetic studies in rats. Stealth liposomes showed better results compared to conventional liposomes due to the effect of PEG-10000. The in vivo studies revealed that stealth liposomes showed a better residence time compared to conventional liposomes and pure drug solution. The conventional liposomes and pure drug showed dose-dependent pharmacokinetics, whereas stealth liposomes showed a long circulation half-life compared to conventional liposomes and pure ritonavir solution. Statistical analysis showed a significant difference (p < 0.05) by one-way ANOVA. The results of the present study revealed that stealth liposomes are a promising tool in antiretroviral therapy.
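
    Desirability-based optimization, as used here, rescales each response onto [0, 1] and maximizes their geometric mean. A minimal sketch of the Derringer-Suich larger-is-better transform, with hypothetical target ranges for entrapment efficiency and Q24 release:

```python
def desirability_max(y, lo, hi):
    """Derringer-Suich one-sided desirability for a larger-is-better
    response: 0 below `lo`, 1 above `hi`, linear in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def overall(ds):
    """Overall desirability D: geometric mean of the individual d_i,
    so any single unacceptable response (d_i = 0) zeroes out D."""
    p = 1.0
    for d in ds:
        p *= d
    return p ** (1.0 / len(ds))

# Hypothetical acceptance ranges; responses echo the reported optimum.
d_ee  = desirability_max(88.9, lo=60.0, hi=95.0)   # entrapment efficiency (%)
d_q24 = desirability_max(86.1, lo=50.0, hi=90.0)   # drug release at 24 h (%)
D = overall([d_ee, d_q24])
```

    The formulation maximizing D over the design space is the one an overlay/desirability plot would flag as optimal.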

  16. Factorial study of rain garden design for nitrogen removal

    EPA Science Inventory

    Abstract: Nitrate (NO3⁻-N) removal studies in bioretention systems showed great variability in removal rates and in some cases NO3⁻-N was exported. A 3-way factorial design (2 x 2 x 4) was devised for eight outdoor un-vegetated rain gardens to evaluate the effects of ...

  17. Novel non-ionic surfactant proniosomes for transdermal delivery of lacidipine: optimization using 2³ factorial design and in vivo evaluation in rabbits.

    PubMed

    Soliman, Sara M; Abdelmalak, Nevine S; El-Gazayerly, Omaima N; Abdelaziz, Nabaweya

    2016-06-01

    Proniosomes offer a versatile vesicle drug delivery concept with potential for delivery of drugs via the transdermal route. The aim was to develop a proniosomal gel using Cremophor RH 40 as the non-ionic surfactant, containing the antihypertensive drug lacidipine, for transdermal delivery so as to avoid its extensive first-pass metabolism and improve its permeation through the skin. Proniosomes containing 1% lacidipine were prepared by the coacervation phase separation method, characterized, and optimized using a 2³ full factorial design to define the optimum conditions to produce proniosomes with high entrapment efficiency, minimal vesicle size, and high-percentage release efficiency. The amount of cholesterol (X1), the amount of soya lecithin (X2), and the amount of Cremophor RH 40 (X3) were selected as the three independent variables. The system F4 was found to fulfill the maximum requisites of an optimum system because it had minimum vesicle size, maximum EE, maximum release efficiency, and maximum desirability. The optimized system (F4) was then converted to a proniosomal gel using carbopol 940 (1% w/w). An in vitro permeation study through excised rabbit skin revealed a higher flux for lacidipine from the optimized proniosomal gel (6.48 ± 0.45 mg/cm²/h) compared with the corresponding emulgel (3.04 ± 0.13 mg/cm²/h). The optimized formulation was evaluated for its bioavailability compared with a commercial product. Statistical analysis revealed a significant increase in AUC(0-∞) (464.17 ± 113.15 ng h/ml) compared with 209.02 ± 47.35 ng h/ml for the commercial tablet. Skin irritancy and histopathological investigation of rat skin revealed its safety. Cremophor RH 40 proniosomal gel could be considered a very promising nanocarrier for transdermal delivery of lacidipine.

  18. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 2. Optimization of buffer and ionic strength using a full factorial experimental design.

    PubMed

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A full factorial design was used to assess the single and interactive effects of three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various ionic strengths (I) on allergen extractability from and immunoglobulin E (IgE) immunoreactivity of peanut, almond, hazelnut, and pistachio. The results indicated that the type and ionic strength of the buffer had different effects on protein recovery from the nuts under study. Substantial differences in protein profiles, abundance, and IgE-binding intensity with different combinations of pH and ionic strength were found. A significant interaction between pH and ionic strength was observed for pistachio and almond. The optimal buffer system conditions, which maximized the IgE-binding efficiency of allergens and provided satisfactory to superior protein recovery yield and profiles, were carbonate buffer at an ionic strength of I=0.075 for peanut, carbonate buffer at I=0.15 for almond, phosphate buffer at I=0.5 for hazelnut, and borate at I=0.15 for pistachio. The buffer type and its ionic strength could be manipulated to achieve the selective solubility of desired allergens. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
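
    The pH x ionic-strength interaction reported for pistachio and almond is precisely a departure from an additive two-way model: a cell mean that cannot be predicted from the grand mean plus the two main effects. A sketch with hypothetical protein-recovery yields (not the paper's data):

```python
from statistics import mean

# Two-way layout: (buffer, ionic strength) -> hypothetical recovery yield (%).
data = {
    ("phosphate", 0.15): 62.0, ("phosphate", 0.50): 71.0,
    ("carbonate", 0.15): 80.0, ("carbonate", 0.50): 65.0,
}
grand = mean(data.values())
buffers   = {b for b, _ in data}
strengths = {s for _, s in data}
b_eff = {b: mean(v for (bb, _), v in data.items() if bb == b) - grand
         for b in buffers}
s_eff = {s: mean(v for (_, ss), v in data.items() if ss == s) - grand
         for s in strengths}
# Interaction term per cell: observed minus the additive prediction.
interaction = {k: v - (grand + b_eff[k[0]] + s_eff[k[1]])
               for k, v in data.items()}
```

    Non-zero interaction terms are what the ANOVA interaction test detects; here the best ionic strength depends on which buffer is used, just as the abstract describes.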

  19. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
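
    The backbone of such a study is the 3³ run table and the level-wise means that feed the ANOVA. A sketch using the factor levels quoted in the abstract but a hypothetical strength function standing in for the measured compressive strengths:

```python
from itertools import product
from statistics import mean

# Factor levels from the abstract: w/c ratio, cementitious content (kg/m3),
# fine/total aggregate ratio.
wc     = (0.38, 0.43, 0.48)
cement = (350, 375, 400)
fa     = (0.35, 0.40, 0.45)
runs = list(product(wc, cement, fa))      # 3^3 = 27 trial mixtures

def strength(w, c, f):
    """Illustrative linear stand-in for measured compressive strength (MPa)."""
    return 120 - 150 * w + 0.05 * c - 20 * f

# Level means for the w/c factor: the core of a main-effect ANOVA table.
wc_means = {w: mean(strength(*r) for r in runs if r[0] == w) for w in wc}
```

    Comparing such level means (and the within-level replicate scatter) is exactly what the ANOVA formalizes before a polynomial model is fitted.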

  20. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405

  1. Rank-based permutation approaches for non-parametric factorial designs.

    PubMed

    Umlauft, Maria; Konietschke, Frank; Pauly, Markus

    2017-11-01

    Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability. © 2017 The British Psychological Society.
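
    The proposed procedure recomputes a rank statistic over random relabellings of the pooled observations, which preserves finite-sample exactness under exchangeability. A simplified sketch using the classical Kruskal-Wallis H with mid-ranks for ties (the paper's statistics are more general):

```python
import random
from itertools import chain

def midranks(values):
    """1-based mid-ranks: tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        r = (i + j) / 2 + 1               # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def kw_stat(groups):
    """Kruskal-Wallis H = 12/(n(n+1)) * sum(R_j^2/n_j) - 3(n+1)."""
    pooled = list(chain.from_iterable(groups))
    ranks = midranks(pooled)
    n = len(pooled)
    h, start = 0.0, 0
    for g in groups:
        rsum = sum(ranks[start:start + len(g)])
        h += rsum ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

def permutation_pvalue(groups, reps=2000, seed=0):
    """Permutation p-value: shuffle group labels, recompute H each time."""
    rng = random.Random(seed)
    sizes = [len(g) for g in groups]
    pooled = list(chain.from_iterable(groups))
    observed = kw_stat(groups)
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        perm, start = [], 0
        for s in sizes:
            perm.append(pooled[start:start + s])
            start += s
        if kw_stat(perm) >= observed:
            hits += 1
    return (hits + 1) / (reps + 1)        # add-one correction

p = permutation_pvalue([[1, 2, 3, 4], [2, 3, 4, 5], [8, 9, 10, 11]])
```

    Because the reference distribution is generated from the data themselves, no normality assumption is needed, matching the rank-based, distribution-free spirit of the abstract.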

  2. In Silico Constraint-Based Strain Optimization Methods: the Quest for Optimal Cell Factories

    PubMed Central

    Maia, Paulo; Rocha, Miguel

    2015-01-01

    SUMMARY Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed. PMID:26609052

  3. Factorial design optimization of experimental variables in the on-line separation/preconcentration of copper in water samples using solid phase extraction and ICP-OES determination.

    PubMed

    Escudero, Luis A; Cerutti, S; Olsina, R A; Salonia, J A; Gasquez, J A

    2010-11-15

    An on-line preconcentration procedure using solid phase extraction (SPE) for the determination of copper in different water samples by inductively coupled plasma optical emission spectrometry (ICP-OES) is proposed. The copper was retained on a minicolumn filled with ethyl vinyl acetate (EVA) at pH 8.0 without using any complexing reagent. The experimental optimization step was performed using a two-level full factorial design. The results showed that pH, sample loading flow rate, and their interaction (at the tested levels) were statistically significant. In order to determine the best conditions for preconcentration and determination of copper, a final optimization of the significant factors was carried out using a central composite design (CCD). The calibration graph was linear with a regression coefficient of 0.995 at levels near the detection limit up to at least 300 μg L⁻¹. An enrichment factor (EF) of 54 with a preconcentration time of 187.5 s was obtained. The limit of detection (3σ) was 0.26 μg L⁻¹. The sampling frequency for the developed methodology was about 15 samples/h. The relative standard deviation (RSD) for six replicates containing 50 μg L⁻¹ of copper was 3.76%. The methodology was successfully applied to the determination of Cu in tap, mineral, river water samples, and in a certified VKI standard reference material. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. An approach to optimize sample preparation for MALDI imaging MS of FFPE sections using fractional factorial design of experiments.

    PubMed

    Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter

    2016-09-01

    A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied to the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract: MALDI imaging experiments were planned according to a fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as an input for the ANOVA. The parameters with the highest impact on the quality were deduced and SOPs recommended.
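The ANOVA-based ranking step can be sketched as follows, as a minimal version over a single two-level fractional factorial set; the scores, coded design matrix, and parameter names are invented for illustration (the study used a five-way ANOVA over two DoE sets of eight experiments).

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of 1-D arrays."""
    all_y = np.concatenate(groups)
    grand = all_y.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (len(all_y) - len(groups))
    return ms_between / ms_within

# Hypothetical quality scores for 8 fractional-factorial runs and a coded
# two-level design matrix for three parameters (values are invented).
scores = np.array([0.62, 0.71, 0.55, 0.80, 0.58, 0.74, 0.60, 0.83])
design = np.array([[-1, -1, -1], [-1, -1, +1], [-1, +1, -1], [-1, +1, +1],
                   [+1, -1, -1], [+1, -1, +1], [+1, +1, -1], [+1, +1, +1]])

# Rank parameters by the F statistic comparing scores at low vs. high level.
for j, name in enumerate(["matrix solution", "washing", "incubation time"]):
    F = one_way_anova_F([scores[design[:, j] == -1], scores[design[:, j] == +1]])
    print(f"{name}: F = {F:.2f}")
```

A larger F means the parameter explains more of the score variance relative to the residual scatter, which is the basis of the ranking described above.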

  5. Design, Optimization and Application of Small Molecule Biosensor in Metabolic Engineering.

    PubMed

    Liu, Yang; Liu, Ye; Wang, Meng

    2017-01-01

    The development of synthetic biology and metabolic engineering has painted a great future for the bio-based economy, including fuels, chemicals, and drugs produced from renewable feedstocks. With the rapid advance of genome-scale modeling, pathway assembly and genome engineering/editing, our ability to design and generate microbial cell factories with various phenotypes becomes almost limitless. However, our lack of ability to measure and exert precise control over metabolite-concentration-related phenotypes has become a bottleneck in metabolic engineering. Genetically encoded small molecule biosensors, which provide the means to couple metabolite concentration to measurable or actionable outputs, are highly promising solutions to this bottleneck. Here we review recent advances in the design, optimization and application of small molecule biosensors in metabolic engineering, with particular focus on optimization strategies for transcription factor (TF) based biosensors.

  7. Factorial Design Based Multivariate Modeling and Optimization of Tunable Bioresponsive Arginine Grafted Poly(cystaminebis(acrylamide)-diaminohexane) Polymeric Matrix Based Nanocarriers.

    PubMed

    Yang, Rongbing; Nam, Kihoon; Kim, Sung Wan; Turkson, James; Zou, Ye; Zuo, Yi Y; Haware, Rahul V; Chougule, Mahavir B

    2017-01-03

    Desired characteristics of nanocarriers are crucial to exploring their therapeutic potential. This investigation aimed to develop tunable bioresponsive nanocarriers based on a newly synthesized arginine-grafted poly(cystaminebis(acrylamide)-diaminohexane) [ABP] polymeric matrix by using an L9 Taguchi factorial design, a desirability function, and multivariate methods. The selected formulation and process parameters were ABP concentration, acetone concentration, the volume ratio of acetone to ABP solution, and drug concentration. The measured nanocarrier characteristics were particle size, polydispersity index, zeta potential, and percentage drug loading. Experimental validation of nanocarrier characteristics computed from the initially developed predictive model showed nonsignificant differences (p > 0.05). The multivariate-modeling-based optimized cationic nanocarrier formulation of <100 nm loaded with hydrophilic acetaminophen was readapted for hydrophobic etoposide loading without significant changes (p > 0.05) except for an improved loading percentage. This is the first study focusing on ABP polymeric matrix based nanocarrier development. Nanocarrier particle size was stable in PBS (pH 7.4) for 48 h. The increase in zeta potential at pH 6.4, compared to physiological pH, indicated a possible endosomal escape capability. The glutathione-triggered release under physiological conditions indicated the competence of the bioresponsive nanocarriers for cytosolic delivery of the loaded drug. In conclusion, this systematic approach provides rational evaluation and prediction of a tunable bioresponsive ABP-based matrix nanocarrier, built on a limited number of well-chosen experiments.
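The L9 Taguchi array underlying a four-factor, three-level study of this kind is a standard orthogonal array; the level values attached to it below are placeholders, not the authors' settings.

```python
import numpy as np

# Standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels.
L9 = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
])

# Hypothetical level settings for the four parameters named in the abstract
# (values are placeholders, not the authors' settings).
levels = {
    "ABP_mg_mL":   [1.0, 2.0, 4.0],
    "acetone_pct": [25, 50, 75],
    "vol_ratio":   [1, 2, 4],
    "drug_mg_mL":  [0.5, 1.0, 2.0],
}

for run in L9:
    setting = {name: levels[name][lvl - 1] for name, lvl in zip(levels, run)}
    print(setting)
```

Orthogonality means each level of each factor appears equally often, and every pair of columns covers all nine level combinations exactly once, so main effects can be estimated from just nine runs.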

  8. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    PubMed Central

    Wiens, Stefan; Nilsson, Mats E.

    2016-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weighs several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes? PMID:29805179
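The contrast-analysis workflow the tutorial describes can be sketched on simulated 2x2 between-subjects data; the cell means, sample sizes, and interaction contrast weights below are illustrative choices, not data from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated 2x2 between-subjects data, 20 subjects per cell.
cells = {
    ("A1", "B1"): rng.normal(10.0, 2.0, 20),
    ("A1", "B2"): rng.normal(12.0, 2.0, 20),
    ("A2", "B1"): rng.normal(11.0, 2.0, 20),
    ("A2", "B2"): rng.normal(15.0, 2.0, 20),
}
# Interaction contrast: does the B effect differ across levels of A?
weights = {("A1", "B1"): +1, ("A1", "B2"): -1,
           ("A2", "B1"): -1, ("A2", "B2"): +1}

means = {k: v.mean() for k, v in cells.items()}
L = sum(weights[k] * means[k] for k in cells)          # contrast estimate

# Pooled error term (MSE, balanced design) and the contrast's standard error.
mse = np.mean([v.var(ddof=1) for v in cells.values()])
se = np.sqrt(mse * sum(w**2 / len(cells[k]) for k, w in weights.items()))
df = sum(len(v) - 1 for v in cells.values())

t = L / se
crit = stats.t.ppf(0.975, df)
ci = (L - crit * se, L + crit * se)
print(f"contrast = {L:.2f}, t({df}) = {t:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The effect estimate, its t statistic, and the CI all come from the same weights; changing the weights re-targets the question without changing the machinery.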

  9. Custom fractional factorial designs to develop atorvastatin self-nanoemulsifying and nanosuspension delivery systems--enhancement of oral bioavailability.

    PubMed

    Hashem, Fahima M; Al-Sawahli, Majid M; Nasr, Mohamed; Ahmed, Osama A A

    2015-01-01

    Poor water solubility of a drug is a major challenge in drug delivery research and a main cause of limited bioavailability and poor pharmacokinetic parameters. This work aims to utilize custom fractional factorial designs to assess the development of self-nanoemulsifying drug delivery systems (SNEDDS) and solid nanosuspensions (NS) in order to enhance the oral delivery of atorvastatin (ATR). According to the design, 14 experimental runs of ATR SNEDDS were formulated utilizing the highly ATR-solubilizing SNEDDS components: oleic acid, Tween 80, and propylene glycol. In addition, 12 runs of NS were formulated by the antisolvent precipitation-ultrasonication method. Optimized formulations of SNEDDS and solid NS, deduced from the design, were characterized. The optimized SNEDDS formula exhibited a mean globule size of 73.5 nm, a zeta potential magnitude of -24.1 mV, and an electrical conductivity of 13.5 μS/cm. The optimized solid NS formula exhibited a mean particle size of 260.3 nm, a zeta potential of 7.4 mV, and a yield of 93.2%. Transmission electron microscopy showed the SNEDDS formula droplets as discrete spheres. The solid NS morphology showed flaky nanoparticles with irregular shapes using scanning electron microscopy. The release behavior of the optimized SNEDDS formula showed 56.78% cumulative ATR release after 10 minutes. The solid NS formula showed a lower rate of release in the first 30 minutes. Bioavailability estimation in Wistar albino rats revealed an augmentation in ATR bioavailability, relative to ATR suspension and the commercial tablets, from the optimized ATR SNEDDS and NS formulations by 193.81% and 155.31%, respectively. The findings of this work showed that the optimized nanocarriers enhance the oral delivery and pharmacokinetic profile of ATR.

  10. Using Blocked Fractional Factorial Designs to Construct Discrete Choice Experiments for Health Care Studies

    PubMed Central

    Jaynes, Jessica; Wong, Weng Kee; Xu, Hongquan

    2016-01-01

    Discrete choice experiments (DCEs) are increasingly used for studying and quantifying subjects' preferences in a wide variety of health care applications. They provide a rich source of data to assess real-life decision making processes, which involve trade-offs between desirable characteristics pertaining to health and health care, and identification of key attributes affecting health care. The choice of the design for a DCE is critical because it determines which attributes' effects and their interactions are identifiable. We apply blocked fractional factorial designs to construct DCEs and address some identification issues by utilizing the known structure of blocked fractional factorial designs. Our design techniques can be applied to several situations, including DCEs where attributes have different numbers of levels. We demonstrate our design methodology using two health care studies to evaluate (1) asthma patients' preferences for symptom-based outcome measures, and (2) patient preference for breast screening services. PMID:26823156

  11. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for the constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of the normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. The developed optimization scenarios and tools can be used to approach similar problems.
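As a minimal illustration of constrained optimization with an EA, the sketch below minimizes a quadratic objective under a linear constraint using a simple penalty term; the problem, penalty weight, and EA settings are invented for illustration (REPA itself is repair-based, not penalty-based).

```python
import random

random.seed(0)

# Toy constrained problem: minimize f(x, y) = x^2 + y^2 subject to
# x + y >= 1. Constraint violations are charged a linear penalty.
# The true constrained optimum is (0.5, 0.5) with f = 0.5.
def fitness(ind):
    x, y = ind
    penalty = max(0.0, 1.0 - (x + y)) * 100.0
    return x * x + y * y + penalty

def mutate(ind, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in ind]

# Elitist (mu + lambda) loop: keep the best 20 of parents plus offspring.
pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(20)]
for gen in range(200):
    offspring = [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop + offspring, key=fitness)[:20]

best = pop[0]
print(f"best = ({best[0]:.3f}, {best[1]:.3f}), f = {fitness(best):.3f}")
```

Penalty methods are the simplest way to fold a constraint into an EA's fitness; repair methods like REPA instead move infeasible individuals back to the feasible region before evaluation.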

  12. Neutrino factory

    DOE PAGES

    Bogomilov, M.; Matev, R.; Tsenov, R.; ...

    2014-12-08

    The properties of the neutrino provide a unique window on physics beyond that described by the standard model. The study of subleading effects in neutrino oscillations, and the race to discover CP-invariance violation in the lepton sector, has begun with the recent discovery that theta(13) > 0. The measured value of theta(13) is large, emphasizing the need for a facility at which the systematic uncertainties can be reduced to the percent level. The neutrino factory, in which intense neutrino beams are produced from the decay of muons, has been shown to outperform all realistic alternatives and to be capable of making measurements of the requisite precision. Its unique discovery potential arises from the fact that only at the neutrino factory is it practical to produce high-energy electron (anti)neutrino beams of the required intensity. This paper presents the conceptual design of the neutrino factory accelerator facility developed by the European Commission Framework Programme 7 EUROnu Design Study consortium. EUROnu coordinated the European contributions to the International Design Study for the Neutrino Factory (the IDS-NF) collaboration. The EUROnu baseline accelerator facility will provide 10(21) muon decays per year from 12.6 GeV stored muon beams serving a single neutrino detector situated at a source-detector distance of between 1500 km and 2500 km. A suite of near detectors will allow definitive neutrino-scattering experiments to be performed.

  13. The Operator Guide: An Ambient Persuasive Interface in the Factory

    NASA Astrophysics Data System (ADS)

    Meschtscherjakov, Alexander; Reitberger, Wolfgang; Pöhr, Florian; Tscheligi, Manfred

    In this paper we introduce the context of a semiconductor factory as a promising area for the application of innovative interaction approaches. In order to increase efficiency ambient persuasive interfaces, which influence the operators' behaviour to perform in an optimized way, could constitute a potential strategy. We present insights gained from qualitative studies conducted in a specific semiconductor factory and provide a description of typical work processes and already deployed interfaces in this context. These findings informed the design of a prototype of an ambient persuasive interface within this realm - the "Operator Guide". Its overall aim is to improve work efficiency, while still maintaining a minimal error rate. We provide a detailed description of the Operator Guide along with an outlook of the next steps within a user-centered design approach.

  14. The Factorial Survey: Design Selection and its Impact on Reliability and Internal Validity

    ERIC Educational Resources Information Center

    Dülmer, Hermann

    2016-01-01

    The factorial survey is an experimental design consisting of varying situations (vignettes) that have to be judged by respondents. For more complex research questions, it quickly becomes impossible for an individual respondent to judge all vignettes. To overcome this problem, random designs are recommended most of the time, whereas quota designs…

  15. Optimizing the vacuum plasma spray deposition of metal, ceramic, and cermet coatings using designed experiments

    NASA Astrophysics Data System (ADS)

    Kingswell, R.; Scott, K. T.; Wassell, L. L.

    1993-06-01

    The vacuum plasma spray (VPS) deposition of metal, ceramic, and cermet coatings has been investigated using designed statistical experiments. Processing conditions that were considered likely to have a significant influence on the melting characteristics of the precursor powders and hence deposition efficiency were incorporated into full and fractional factorial experimental designs. The processing of an alumina powder was very sensitive to variations in the deposition conditions, particularly the injection velocity of the powder into the plasma flame, the plasma gas composition, and the power supplied to the gun. Using a combination of full and fractional factorial experimental designs, it was possible to rapidly identify the important spraying variables and adjust these to produce a deposition efficiency approaching 80 percent. The deposition of a nickel-base alloy metal powder was less sensitive to processing conditions. Generally, however, a high degree of particle melting was achieved for a wide range of spray conditions. Preliminary experiments performed using a tungsten carbide/cobalt cermet powder indicated that spray efficiency was not sensitive to deposition conditions. However, microstructural analysis revealed considerable variations in the degree of tungsten carbide dissolution. The structure and properties of the optimized coatings produced in the factorial experiments are also discussed.
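Main effects in a two-level factorial such as the ones above are simply differences between the average responses at the high and low factor levels; the deposition-efficiency values below are invented for illustration, not the study's data.

```python
import numpy as np
from itertools import product

# Coded 2^3 full factorial (8 runs) and hypothetical deposition
# efficiencies (%); factor names follow the abstract, values are invented.
design = np.array(list(product([-1, 1], repeat=3)))
response = np.array([41, 55, 44, 60, 48, 68, 52, 74.0])

# Main effect of a factor = mean response at +1 minus mean at -1.
for j, name in enumerate(["velocity", "gas", "power"]):
    effect = (response[design[:, j] == 1].mean()
              - response[design[:, j] == -1].mean())
    print(f"{name}: {effect:+.1f}")
```

With these invented numbers the "power" column dominates, which is the kind of screening conclusion the factorial experiments above were designed to reach quickly.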

  16. Improved optimization of polycyclic aromatic hydrocarbons (PAHs) mixtures resolution in reversed-phase high-performance liquid chromatography by using factorial design and response surface methodology.

    PubMed

    Andrade-Eiroa, Auréa; Diévart, Pascal; Dagaut, Philippe

    2010-04-15

    A new procedure for optimizing PAH separation in very complex mixtures by reversed-phase high-performance liquid chromatography (RPLC) is proposed. It is based on gradually changing the experimental conditions throughout the chromatographic run as a function of the physical properties of the compounds eluted. Temperature and flow-rate gradients allowed obtaining optimum resolution in long chromatographic determinations where PAHs with very different polarizabilities have to be separated. Whereas optimization of RPLC methodologies had previously been accomplished regardless of the physico-chemical properties of the target analytes, we found that resolution is highly dependent on these properties. Based on the resolution criterion, the optimization process for a mixture of the 16 EPA PAHs was performed on three sets of difficult-to-separate PAH pairs: acenaphthene-fluorene (for the optimization of the first part of the chromatogram, where light PAHs elute), and benzo[g,h,i]perylene-dibenzo[a,h]anthracene and benzo[g,h,i]perylene-indeno[1,2,3-cd]pyrene (for the optimization of the second part of the chromatogram, where the heavier PAHs elute). Two-level full factorial designs were applied to detect interactions among the variables to be optimized: flow rate, column oven temperature, and mobile-phase gradient in the two parts of the studied chromatogram. Experimental data were fitted by multivariate nonlinear regression models, and optimum values of flow rate and temperature were obtained through mathematical analysis of the constructed models. An HPLC system equipped with a reversed-phase 5 μm C18, 250 mm × 4.6 mm column (with acetonitrile/water mobile phase), a column oven, a binary pump, a photodiode array detector (PDA), and a fluorimetric detector was used in this work.
Optimum resolution was achieved operating at 1.0 mL/min in the first part of the chromatogram (until 45 min) and 0.5 mL/min in the second one (from 45
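The response-surface step of such an optimization can be sketched by fitting a second-order model and solving for its stationary point; the resolution values below are synthetic, and the flow-rate and temperature ranges are assumptions.

```python
import numpy as np

# Synthetic "resolution" response over a grid of flow rate (x, mL/min)
# and column temperature (y, °C), peaking near (1.0, 30) by construction.
x, y = np.meshgrid(np.linspace(0.5, 1.5, 5), np.linspace(20, 40, 5))
x, y = x.ravel(), y.ravel()
rng = np.random.default_rng(0)
z = 2.0 - 3 * (x - 1.0) ** 2 - 0.01 * (y - 30) ** 2 + rng.normal(0, 0.02, x.size)

# Least-squares fit of z ~ 1 + x + y + xy + x^2 + y^2.
X = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)

# Stationary point of the fitted quadratic: set both partial derivatives to 0.
A = np.array([[2 * beta[4], beta[3]], [beta[3], 2 * beta[5]]])
b = -np.array([beta[1], beta[2]])
opt = np.linalg.solve(A, b)
print(f"optimum near flow = {opt[0]:.2f} mL/min, T = {opt[1]:.1f} °C")
```

This is the essence of response surface methodology: the factorial data identify the active variables and interactions, and the fitted quadratic locates the operating optimum between the tested points.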

  17. Preparation, characterization and optimization of sildenafil citrate loaded PLGA nanoparticles by statistical factorial design

    PubMed Central

    2013-01-01

    Background and the aim of the study The objective of the present study was to formulate and optimize nanoparticles (NPs) of sildenafil-loaded poly(lactic-co-glycolic acid) (PLGA) by the double emulsion solvent evaporation (DESE) method. The relationship between design factors and experimental data was evaluated using response surface methodology. Method A Box-Behnken design was made considering the mass ratio of drug to polymer (D/P), the volumetric proportion of the water to oil phase (W/O) and the concentration of polyvinyl alcohol (PVA) as the independent factors. PLGA-NPs were successfully prepared, and the size (nm), entrapment efficiency (EE), drug loading (DL) and cumulative release of drug from NPs after 1 and 8 hrs were assessed as the responses. Results The NPs were spherical, with sizes ranging from 240 to 316 nm. The polydispersity index of size was lower than 0.5, and the EE (%) and DL (%) varied between 14-62% and 2-6%, respectively. The optimized formulation, with a desirability factor of 0.9, was selected and characterized. This formulation demonstrated a particle size of 270 nm, EE of 55%, DL of 3.9% and cumulative drug release of 79% after 12 hrs. In vitro release studies showed a burst release at the initial stage followed by a sustained release of sildenafil from NPs up to 12 hrs. The release kinetics of the optimized formulation fitted the Higuchi model. Conclusions Sildenafil citrate NPs with small particle size, lipophilic character, high entrapment efficiency and good loading capacity were produced by this method. Characterization of the optimum formulation, provided by an evaluation of experimental data, showed no significant difference between calculated and measured data. PMID:24355133
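A Box-Behnken design of the kind used here places runs at the midpoints of the edges of the factor space, plus replicated center points; below is a generic constructor in coded units (three center points assumed for illustration).

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Box-Behnken design in coded (-1, 0, +1) units for k >= 3 factors."""
    runs = []
    # For each pair of factors, take all (+/-1, +/-1) combinations with
    # every other factor held at its center level.
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * k for _ in range(center_points)]
    return runs

design = box_behnken(3)
for run in design:
    print(run)
print(len(design))  # 12 edge runs + 3 center runs = 15
```

Note that no run sits at a corner of the cube, which is why Box-Behnken designs are popular when extreme factor combinations are physically awkward to run.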

  18. Kinetic models in industrial biotechnology - Improving cell factory performance.

    PubMed

    Almquist, Joachim; Cvijovic, Marija; Hatzimanikatis, Vassily; Nielsen, Jens; Jirstrand, Mats

    2014-07-01

    An increasing number of industrial bioprocesses capitalize on living cells by using them as cell factories that convert sugars into chemicals. These processes range from the production of bulk chemicals in yeasts and bacteria to the synthesis of therapeutic proteins in mammalian cell lines. One of the tools in the continuous search for improved performance of such production systems is the development and application of mathematical models. To be of value for industrial biotechnology, mathematical models should be able to assist in the rational design of cell factory properties or of the production processes in which they are utilized. Kinetic models are particularly suitable towards this end because they are capable of representing the complex biochemistry of cells in a more complete way compared to most other types of models. They can, at least in principle, be used to understand, predict, and evaluate in detail the effects of adding, removing, or modifying molecular components of a cell factory, and to support the design of the bioreactor or fermentation process. However, several challenges still remain before kinetic modeling will reach the degree of maturity required for routine application in industry. Here we review the current status of kinetic cell factory modeling. Emphasis is on modeling methodology concepts, including model network structure, kinetic rate expressions, parameter estimation, optimization methods, identifiability analysis, model reduction, and model validation, but several applications of kinetic models for the improvement of cell factories are also discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
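As a minimal illustration of the kinetic-modeling idea, a single Michaelis-Menten conversion step can be simulated directly; the rate parameters and concentrations below are invented, and a real cell factory model would couple many such rate expressions into a network.

```python
# Toy kinetic model: substrate S converted to product P by one enzyme
# following Michaelis-Menten kinetics, v = vmax * S / (km + S).
# All parameter values are invented for illustration.
vmax, km = 2.0, 0.5      # mmol/(L*h), mmol/L
s, p = 10.0, 0.0         # initial substrate and product, mmol/L
dt, steps = 0.01, 1000   # 10 h of simulated time, explicit Euler

for _ in range(steps):
    rate = vmax * s / (km + s)
    s -= rate * dt
    p += rate * dt

print(f"after {dt * steps:.0f} h: S = {s:.4f}, P = {p:.4f} mmol/L")
```

Even this one-reaction sketch shows the typical behavior: near-constant conversion while S >> km, then an exponential tail once the enzyme is no longer saturated.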

  19. Cameo: A Python Library for Computer Aided Metabolic Engineering and Optimization of Cell Factories.

    PubMed

    Cardoso, João G R; Jensen, Kristian; Lieven, Christian; Lærke Hansen, Anne Sofie; Galkina, Svetlana; Beber, Moritz; Özdemir, Emre; Herrgård, Markus J; Redestig, Henning; Sonnenschein, Nikolaus

    2018-04-20

    Computational systems biology methods enable rational design of cell factories on a genome-scale and thus accelerate the engineering of cells for the production of valuable chemicals and proteins. Unfortunately, the majority of these methods' implementations are either not published, rely on proprietary software, or do not provide documented interfaces, which has precluded their mainstream adoption in the field. In this work we present cameo, a platform-independent software that enables in silico design of cell factories and targets both experienced modelers as well as users new to the field. It is written in Python and implements state-of-the-art methods for enumerating and prioritizing knockout, knock-in, overexpression, and down-regulation strategies and combinations thereof. Cameo is an open source software project and is freely available under the Apache License 2.0. A dedicated Web site including documentation, examples, and installation instructions can be found at http://cameo.bio . Users can also give cameo a try at http://try.cameo.bio .

  20. Polymeric nanoparticles loaded with the 3,5,3'-triiodothyroacetic acid (Triac), a thyroid hormone: factorial design, characterization, and release kinetics.

    PubMed

    Dos Santos, Karen C; da Silva, Maria Fatima Gf; Pereira-Filho, Edenir R; Fernandes, Joao B; Polikarpov, Igor; Forim, Moacir R

    2012-01-01

    This present investigation deals with the development and optimization of polymeric nanoparticle systems loaded with 3,5,3'-triiodothyroacetic acid (Triac). A 2(11-6) fractional factorial design and another 2(2) factorial design were used to study the contrasts on particle size distribution, morphology, surface charge, drug content, entrapment efficiency, and in vitro drug release profiles. The independent variables were the concentration of Triac, type and quantity of both polymer and oil, quantity of Span™ 60 and Tween® 80, volume of solvent and water, and velocity of both magnetic stirring and the transfer of the organic phase into the aqueous solution. The results of optimized formulations showed a narrow size distribution with a polydispersity index lower than 0.200. The particle sizes were on average 159.6 nm and 285.6 nm for nanospheres and nanocapsules, respectively. The zeta potential was higher than 20 mV (in module) and the entrapment efficiency was nearly 100%. A high-performance liquid chromatography method was developed, validated, and efficiently applied to Triac quantification in colloidal suspension. The main independent variables were the type and quantity of the polymer and oil. In vitro drug release profile depicted several features to sustain Triac release. Different formulations showed various release rates indicating an interaction between Triac and other formulation compounds such as polymer and/or oil quantity. Two different models were identified (biexponential and monoexponential) that allowed the control of both the release rate and Triac concentration. Thus, the prepared nanoparticles described here may be of clinical importance in delivering Triac for thyroid treatment.
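The monoexponential release model mentioned above can be fitted by log-linearization when the release plateau is known; the time points and release fractions below are synthetic illustration data, with the plateau assumed to be 100%.

```python
import numpy as np

# Monoexponential release model f(t) = f_inf * (1 - exp(-k * t)),
# linearized as ln(1 - f/f_inf) = -k * t so k comes from least squares.
# Times and released fractions are synthetic; f_inf = 100% is assumed.
t = np.array([1, 2, 4, 8, 12, 24.0])      # hours
f = np.array([26, 45, 70, 91, 97, 99.9])  # % released
f_inf = 100.0

k, *_ = np.linalg.lstsq(-t[:, None], np.log(1 - f / f_inf), rcond=None)
k = float(k[0])
half_life = np.log(2) / k
print(f"k = {k:.3f} 1/h, release half-life = {half_life:.1f} h")
```

A biexponential model (two rate constants) would be fitted the same way in spirit but requires nonlinear least squares, since it cannot be linearized by a single logarithm.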

  1. Hollow fiber-based liquid phase microextraction with factorial design optimization and gas chromatography-tandem mass spectrometry for determination of cannabinoids in human hair.

    PubMed

    Emídio, Elissandro Soares; de Menezes Prata, Vanessa; de Santana, Fernando José Malagueño; Dórea, Haroldo Silveira

    2010-08-15

    A new method, based on hollow fiber liquid-phase microextraction (HF-LPME) and gas chromatography-tandem mass spectrometry (GC-MSMS), was developed for determination of Delta(9)-tetrahydrocannabinol (THC), cannabidiol (CBD) and cannabinol (CBN) in samples of human hair. Since hair is a solid matrix, the samples were subjected to alkaline digestion using NaOH. The aqueous solutions obtained were extracted using a 6cm polypropylene fiber (600microm i.d., 200microm wall thickness, 0.2microm pore size) for each extraction. A 2(5-1) fractional factorial design for screening, and a central composite design for optimization of significant variables, was applied during development of the extraction method. The variables evaluated were the type of extraction solvent, pH, stirring speed, extraction time, and acceptor phase volume. The optimized conditions for the proposed extraction procedure were 10mg of hair sample; 20microL of butyl acetate; aqueous (pH 14) donor phase containing 6.8% NaCl; 600rpm stirring speed; 20min extraction time. A linear response was obtained in the ranges 1-500pgmg(-1) (CBD and CBN) and 20-500pgmg(-1) (THC), with regression coefficients >0.99. Precision, determined as the relative standard deviation, was 3.3-8.9% (intra-day) and 4.4-13.7% (inter-day). Absolute recoveries varied in the ranges 4.4-4.8% (CBD), 7.6-8.9% (THC) and 7.7-8.2% (CBN). Limits of detection (LOD, S/N=3) and quantification (LOQ, S/N=10) were 0.5-15pgmg(-1) and 1-20pgmg(-1), respectively. The method was successfully used to determine CBD, THC and CBN in hair samples from patients in a drug dependency rehabilitation center. Concentrations varied in the ranges 1-18pgmg(-1) (CBD), 20-232pgmg(-1) (THC) and 9-107pgmg(-1) (CBN), confirming the suitability of the method for monitoring studies. Copyright 2010 Elsevier B.V. All rights reserved.

  2. Design, Characterization, and Optimization of Controlled Drug Delivery System Containing Antibiotic Drug/s

    PubMed Central

    Shelate, Pragna; Dave, Divyang

    2016-01-01

    The objective of this work was design, characterization, and optimization of controlled drug delivery system containing antibiotic drug/s. Osmotic drug delivery system was chosen as controlled drug delivery system. The porous osmotic pump tablets were designed using Plackett-Burman and Box-Behnken factorial design to find out the best formulation. For screening of three categories of polymers, six independent variables were chosen for Plackett-Burman design. Osmotic agent sodium chloride and microcrystalline cellulose, pore forming agent sodium lauryl sulphate and sucrose, and coating agent ethyl cellulose and cellulose acetate were chosen as independent variables. Optimization of osmotic tablets was done by Box-Behnken design by selecting three independent variables. Osmotic agent sodium chloride, pore forming agent sodium lauryl sulphate, and coating agent cellulose acetate were chosen as independent variables. The result of Plackett-Burman and Box-Behnken design and ANOVA studies revealed that osmotic agent and pore former had significant effect on the drug release up to 12 hr. The observed independent variables were found to be very close to predicted values of most satisfactory formulation which demonstrates the feasibility of the optimization procedure in successful development of porous osmotic pump tablets containing antibiotic drug/s by using sodium chloride, sodium lauryl sulphate, and cellulose acetate as key excipients. PMID:27610247
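A Plackett-Burman-style two-level screening matrix for up to seven factors in eight runs can be constructed from a Sylvester Hadamard matrix; the factor labels below abbreviate the excipients named in the abstract, but the layout is a generic illustration, not the authors' actual design.

```python
import numpy as np

# 8x8 Sylvester Hadamard matrix via Kronecker products.
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)

# Drop the constant first column: 7 balanced, mutually orthogonal
# two-level columns, i.e. a screening design for up to 7 factors.
design = H8[:, 1:]
factors = ["NaCl", "MCC", "SLS", "sucrose", "EC", "CA", "dummy"]
for row in design:
    print(dict(zip(factors, row)))
```

With six real variables, the seventh ("dummy") column can serve as an error estimate, a common trick in Plackett-Burman screening.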

  3. Pediatric Diabetic Ketoacidosis, Fluid Therapy and Cerebral Injury: The Design of a Factorial Randomized Controlled Trial

    PubMed Central

    Glaser, Nicole S.; Ghetti, Simona; Casper, T. Charles; Dean, J. Michael; Kuppermann, Nathan

    2013-01-01

    Treatment protocols for pediatric diabetic ketoacidosis (DKA) vary considerably among centers in the United States and worldwide. The optimal protocol for intravenous fluid administration is an area of particular controversy, mainly in regard to possible associations between rates of intravenous fluid infusion and the development of cerebral edema, the most common and most feared complication of DKA in children. Theoretical concerns about associations between osmotic fluid shifts and cerebral edema have prompted recommendations for conservative fluid infusion during DKA. However, recent data suggest that cerebral hypoperfusion may play a role in cerebral injury associated with DKA. Currently there are no existing data from prospective clinical trials to determine the optimal fluid treatment protocol for pediatric DKA. The Pediatric Emergency Care Applied Research Network FLUID (Fluid Therapies Under Investigation in DKA) Study is the first prospective randomized trial to evaluate fluid regimens for pediatric DKA. This 13-center nationwide factorial-design study will evaluate the effects of rehydration rate and fluid sodium content on neurological status during DKA treatment, the frequency of clinically-overt CE, and long-term neurocognitive outcomes following DKA. PMID:23490311

  4. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  5. Study of Montmorillonite Clay for the Removal of Copper (II) by Adsorption: Full Factorial Design Approach and Cascade Forward Neural Network

    PubMed Central

    Turan, Nurdan Gamze; Ozgonenel, Okan

    2013-01-01

    An intensive study has been made of the removal efficiency of Cu(II) from industrial leachate by adsorption onto montmorillonite. A 2(4) factorial design and a cascade forward neural network (CFNN) were used to assess the significance of the analyzed factors for the removal efficiency. The model obtained from the 2(4) factorial design was statistically tested using well-known methods. The statistical analysis showed that the main effects of the analyzed parameters were significant in the fitted linear model within a 95% confidence interval. The proposed CFNN model requires less experimental data and minimal calculation. Moreover, it is found to be cost-effective due to the inherent advantages of its network structure. Optimization of the levels of the analyzed factors was achieved by minimizing adsorbent dosage and contact time, which were costly, and maximizing Cu(II) removal efficiency. The suggested optimum conditions are initial pH of 6, adsorbent dosage of 10 mg/L, and contact time of 10 min using raw montmorillonite, with a Cu(II) removal of 80.7%. At the optimum values, removal efficiency increased to 88.91% when the modified montmorillonite was used. PMID:24453833
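
    The backbone of a 2(4) factorial analysis like the one above is the estimation of main effects from the 16 coded runs. The following sketch uses a hypothetical response vector (the true coefficients and noise are invented for illustration, not taken from the study):

    ```python
    from itertools import product
    import numpy as np

    # Coded -1/+1 settings for four two-level factors: 2^4 = 16 runs.
    # (Factor labels such as pH or dosage would map onto these columns.)
    runs = np.array(list(product([-1, 1], repeat=4)))

    # Hypothetical removal-efficiency responses (%); in practice these are
    # the 16 measured values. Factor 1 and factor 2 are made active here.
    rng = np.random.default_rng(0)
    y = 70 + 5 * runs[:, 0] - 3 * runs[:, 1] + rng.normal(0, 0.5, 16)

    # Main effect of factor j = mean(y | factor j at +1) - mean(y | at -1).
    main_effects = np.array([
        y[runs[:, j] == 1].mean() - y[runs[:, j] == -1].mean()
        for j in range(4)
    ])
    ```

    Effects far larger than the noise level (here the first two) are the candidates declared significant by the subsequent ANOVA.
    
    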

  6. Design and Optimization of Floating Drug Delivery System of Acyclovir

    PubMed Central

    Kharia, A. A.; Hiremath, S. N.; Singhai, A. K.; Omray, L. K.; Jain, S. K.

    2010-01-01

    The purpose of the present work was to design and optimize floating drug delivery systems of acyclovir using psyllium husk and hydroxypropylmethylcellulose K4M as the polymers and sodium bicarbonate as a gas generating agent. The tablets were prepared by wet granulation method. A 3(2) full factorial design was used for optimization of drug release profile. The amount of psyllium husk (X1) and hydroxypropylmethylcellulose K4M (X2) were selected as independent variables. The times required for 50% (t50%) and 70% (t70%) drug dissolution were selected as dependent variables. All the designed nine batches of formulations were evaluated for hardness, friability, weight variation, drug content uniformity, swelling index, in vitro buoyancy, and in vitro drug release profile. All formulations had floating lag time below 3 min and constantly floated on dissolution medium for more than 24 h. Validity of the developed polynomial equation was verified by designing two check point formulations (C1 and C2). The closeness of predicted and observed values for t50% and t70% indicates validity of derived equations for the dependent variables. These studies indicated that the proper balance between psyllium husk and hydroxypropylmethylcellulose K4M can produce a drug dissolution profile similar to the predicted dissolution profile. The optimized formulations followed Higuchi's kinetics while the drug release mechanism was found to be anomalous type, controlled by diffusion through the swollen matrix. PMID:21694992
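
    A 3(2) design like the one above yields nine coded runs to which a second-order polynomial is usually fitted for response-surface prediction of the check-point batches. The sketch below uses placeholder t50% values (illustrative numbers, not the paper's data):

    ```python
    import numpy as np
    from itertools import product

    # Coded levels -1/0/+1 for X1 (psyllium husk) and X2 (HPMC K4M): 9 runs.
    X1, X2 = np.array(list(product([-1, 0, 1], repeat=2))).T

    # Hypothetical t50% responses (h) for the nine batches.
    y = np.array([2.1, 2.9, 3.6, 2.8, 3.8, 4.7, 3.4, 4.6, 5.9])

    # Full second-order model: b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1^2 + b22*X2^2
    M = np.column_stack([np.ones(9), X1, X2, X1 * X2, X1**2, X2**2])
    coeffs, *_ = np.linalg.lstsq(M, y, rcond=None)

    def predict(x1, x2):
        """Predicted response at any coded point, e.g. a check-point batch."""
        return coeffs @ np.array([1, x1, x2, x1 * x2, x1**2, x2**2])
    ```

    Check-point formulations (like C1 and C2 above) are then validated by comparing `predict` at their coded levels against the measured responses.
    
    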

  7. Design and optimization of floating drug delivery system of acyclovir.

    PubMed

    Kharia, A A; Hiremath, S N; Singhai, A K; Omray, L K; Jain, S K

    2010-09-01

    The purpose of the present work was to design and optimize floating drug delivery systems of acyclovir using psyllium husk and hydroxypropylmethylcellulose K4M as the polymers and sodium bicarbonate as a gas generating agent. The tablets were prepared by wet granulation method. A 3(2) full factorial design was used for optimization of drug release profile. The amount of psyllium husk (X1) and hydroxypropylmethylcellulose K4M (X2) were selected as independent variables. The times required for 50% (t(50%)) and 70% (t(70%)) drug dissolution were selected as dependent variables. All the designed nine batches of formulations were evaluated for hardness, friability, weight variation, drug content uniformity, swelling index, in vitro buoyancy, and in vitro drug release profile. All formulations had floating lag time below 3 min and constantly floated on dissolution medium for more than 24 h. Validity of the developed polynomial equation was verified by designing two check point formulations (C1 and C2). The closeness of predicted and observed values for t(50%) and t(70%) indicates validity of derived equations for the dependent variables. These studies indicated that the proper balance between psyllium husk and hydroxypropylmethylcellulose K4M can produce a drug dissolution profile similar to the predicted dissolution profile. The optimized formulations followed Higuchi's kinetics while the drug release mechanism was found to be anomalous type, controlled by diffusion through the swollen matrix.

  8. D-optimal experimental designs to test for departure from additivity in a fixed-ratio mixture ray.

    PubMed

    Coffey, Todd; Gennings, Chris; Simmons, Jane Ellen; Herr, David W

    2005-12-01

    Traditional factorial designs for evaluating interactions among chemicals in a mixture may be prohibitive when the number of chemicals is large. Using a mixture of chemicals with a fixed ratio (mixture ray) results in an economical design that allows estimation of additivity or nonadditive interaction for a mixture of interest. This methodology is extended easily to a mixture with a large number of chemicals. Optimal experimental conditions can be chosen that result in increased power to detect departures from additivity. Although these designs are used widely for linear models, optimal designs for nonlinear threshold models are less well known. In the present work, the use of D-optimal designs is demonstrated for nonlinear threshold models applied to a fixed-ratio mixture ray. For a fixed sample size, this design criterion selects the experimental doses and number of subjects per dose level that result in minimum variance of the model parameters and thus increased power to detect departures from additivity. An optimal design is illustrated for a 2:1 ratio (chlorpyrifos:carbaryl) mixture experiment. For this example, and in general, the optimal designs for the nonlinear threshold model depend on prior specification of the slope and dose threshold parameters. Use of a D-optimal criterion produces experimental designs with increased power, whereas standard nonoptimal designs with equally spaced dose groups may result in low power if the active range or threshold is missed.
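
    The core of D-optimality is choosing support points that minimize the generalized variance of the parameter estimates, i.e. maximize det(X'X). The brute-force sketch below uses a simple quadratic dose-response model as a stand-in for the paper's nonlinear threshold model (for which the criterion would also depend on prior slope and threshold values), over an arbitrary illustrative dose grid:

    ```python
    import numpy as np
    from itertools import combinations

    # Candidate total-dose levels along a fixed-ratio mixture ray
    # (illustrative grid, not the chlorpyrifos:carbaryl doses).
    candidates = np.linspace(0.0, 10.0, 11)

    def d_criterion(doses):
        # Quadratic model y = b0 + b1*d + b2*d^2 as a simple stand-in for
        # the nonlinear threshold model; X is the model matrix at `doses`.
        X = np.column_stack([np.ones(len(doses)), doses, np.array(doses) ** 2])
        return np.linalg.det(X.T @ X)

    # Exhaustive search for the best 4-point support among the candidates.
    best = max(combinations(candidates, 4), key=d_criterion)
    ```

    Note how the winning support hugs the ends of the dose range: spreading points outward inflates det(X'X) and thus shrinks the variance of the fitted parameters, which is exactly the power argument made in the abstract.
    
    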

  9. Midlands Teaching Factory, LTD.

    ERIC Educational Resources Information Center

    Midlands Technical Coll., Columbia, SC.

    In 1987, Midlands Technical College (MTC), in Columbia, South Carolina, initiated a Computer Integrated Manufacturing (CIM) project, the Midlands Teaching Factory, LTD, which integrated various college departments with the goal of manufacturing a high quality, saleable product. The faculty developed a teaching factory model which was designed to…

  10. Optimal Site Characterization and Selection Criteria for Oyster Restoration using Multicolinear Factorial Water Quality Approach

    NASA Astrophysics Data System (ADS)

    Yoon, J.

    2015-12-01

    Elevated levels of nutrient loadings have enriched the Chesapeake Bay estuaries and coastal waters via point and nonpoint sources and the atmosphere. Restoring oyster beds is considered a Best Management Practice (BMP) to improve the water quality as well as provide physical aquatic habitat and a healthier estuarine system. Efforts include declaring sanctuaries for brood-stocks, supplementing hard substrate on the bottom and aiding natural populations with the addition of hatchery-reared and disease-resistant stocks. An economic assessment suggests that restoring the ecological functions will improve water quality, stabilize shorelines, and establish a habitat for breeding grounds that outweighs the value of harvestable oyster production. Parametric factorial models were developed to investigate multicollinearities among in situ water quality and oyster restoration activities, to evaluate posterior success rates on multiple substrates, and to systematically identify significant physical, chemical, hydrological and biological site characteristics. Findings were then further utilized to identify the optimal sites for successful oyster restoration, which can be augmented with Total Maximum Daily Loads (TMDLs) and BMPs. The factorial models evaluate the relationship between the dependent variable, oyster biomass, and treatments of temperature, salinity, total suspended solids, E. coli/Enterococci counts, depth, dissolved oxygen, chlorophyll a, nitrogen and phosphorus, with blocks consisting of alternative substrates (oyster shells versus riprap, granite, cement, cinder blocks, limestone marl or combinations). Factorial model results were then compared to identify which combination of variables produces the highest posterior biomass of oysters. The developed factorial model can facilitate maximizing the likelihood of successful oyster reef restoration in an effort to establish a healthier ecosystem and to improve overall estuarine water quality in the Chesapeake Bay estuaries.

  11. New approach to optimize near-infrared spectra with design of experiments and determination of milk compounds as influence factors for changing milk over time.

    PubMed

    De Benedictis, Lorenzo; Huck, Christian

    2016-12-01

    The optimization of near-infrared spectroscopic parameters was realized via design of experiments. With this new approach, objectivity can be integrated into conventional, rather subjective approaches. The investigated factors were layer thickness, number of scans and temperature during measurement. Response variables in the full factorial design consisted of absorption intensity, signal-to-noise ratio and reproducibility of the spectra. The optimized factorial combination was found to be 0.5 mm layer thickness, 64 scans, and 25 °C ambient temperature for liquid milk measurements. Qualitative analysis of milk indicated a strong influence of environmental factors, as well as cattle feeding, on changes in milk composition over time. This was illustrated with the aid of near-infrared spectroscopy and the previously optimized parameters by detection of altered fatty acids in milk, especially the fatty acid content (number of carboxylic functions) and the fatty acid chain length. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Resolution V fractional factorial design for screening of factors affecting weakly basic drugs liposomal systems.

    PubMed

    Nageeb El-Helaly, Sara; Habib, Basant A; Abd El-Rahman, Mohamed K

    2018-07-01

    This study aims to investigate factors affecting liposomal systems of weakly basic drugs. A 2(5-1) resolution V fractional factorial design is used as an example of the screening designs that are best employed as a wise step before proceeding with detailed factor-effect or optimization studies. Five factors likely to affect liposomal systems of weakly basic drugs were investigated using Amisulpride as a model drug. The factors studied were: A, preparation technique; B, phosphatidyl choline (PhC) amount (mg); C, cholesterol:PhC molar ratio; D, hydration volume (ml); and E, sonication type. The levels investigated were ammonium sulphate-pH gradient technique or transmembrane zinc chelation-pH gradient technique, 200 or 400 mg, 0 or 0.5, 10 or 20 ml, and bath or probe sonication for A, B, C, D and E, respectively. The responses measured were particle size (PS) (nm), zeta potential (ZP) (mV) and entrapment efficiency percent (EE%). An ion-selective electrode was used as a novel method for measuring unentrapped drug concentration and calculating entrapment efficiency without the need for liposomal separation. The factors mainly affecting the studied responses were cholesterol:PhC ratio and hydration volume for PS, preparation technique for ZP, and preparation technique and hydration volume for EE%. The applied 2(5-1) design enabled the use of only 16 trial combinations for screening the influence of five factors on liposomal systems of weakly basic drugs. This clarifies the value of screening experiments before extensive investigation of certain factors in detailed optimization studies. Copyright © 2018 Elsevier B.V. All rights reserved.
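
    A 2(5-1) resolution V design like the one above is generated from a full 2(4) factorial by aliasing the fifth factor with the four-factor interaction (E = ABCD), giving the defining relation I = ABCDE. A minimal construction in Python:

    ```python
    from itertools import product
    import numpy as np

    # Base 2^4 full factorial in factors A-D (coded -1/+1): 16 runs.
    base = np.array(list(product([-1, 1], repeat=4)))
    A, B, C, D = base.T

    # Resolution V generator: E = ABCD, so the defining relation is
    # I = ABCDE and no main effect or two-factor interaction is aliased
    # with another main effect or two-factor interaction.
    E = A * B * C * D
    design = np.column_stack([A, B, C, D, E])
    ```

    The 16 rows are exactly the "16 trial combinations" the abstract refers to: half the runs of the full 2(5) design, at the cost of aliasing only high-order interactions.
    
    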

  13. Influence of the Formulation Parameters on the Particle Size and Encapsulation Efficiency of Resveratrol in PLA and PLA-PEG Blend Nanoparticles: A Factorial Design.

    PubMed

    Lindner, Gabriela da Rocha; Dalmolin, Luciana Facco; Khalil, Najeh Maissar; Mainardes, Rubiana Mara

    2015-12-01

    Polymeric nanoparticles are colloidal systems that promote protection and modification of physicochemical characteristics of a drug and that also ensure controlled and extended drug release. This paper reports a 2(3) factorial design study to optimize poly(lactide) (PLA) and poly(lactide)-polyethylene glycol (PLA-PEG) blend nanoparticles containing resveratrol (RVT) for prolonged release. The independent variables analyzed were solvent composition, surfactant concentration and ratio of aqueous to organic phase (two levels each factor). Mean particle size and RVT encapsulation efficiency were set as the dependent variables. The selected optimized parameters were an organic phase comprised of a mixture of dichloromethane and ethyl acetate, 1% of the surfactant polyvinyl alcohol and a 3:1 ratio of aqueous to organic phase, for both PLA and PLA-PEG blend nanoparticles. This formulation yielded nanoparticles with sizes of 228 ± 10 nm and 185 ± 70 nm and RVT encapsulation efficiencies of 82 ± 10% and 76 ± 7% for PLA and PLA-PEG blend nanoparticles, respectively. The in vitro release study showed a biphasic pattern with prolonged RVT release, and PEG did not influence the RVT release. The in vitro release data fitted Higuchi diffusion kinetics for both nanoformulations, and the Korsmeyer-Peppas coefficient indicated that anomalous transport was the main release mechanism of RVT. PLA and PLA-PEG blend nanoparticles produced with single emulsion-solvent evaporation technology were found to be a promising approach for the incorporation of RVT and promoted its controlled release. The factorial design is a tool of great value in choosing formulations with optimized parameters.

  14. Characterization and optimization of cell seeding in scaffolds by factorial design: quality by design approach for skeletal tissue engineering.

    PubMed

    Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan

    2011-12-01

    Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters, critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. Analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in the interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce the variability and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.

  15. More Powerful Tests of Simple Interaction Contrasts in the Two-Way Factorial Design

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; McNeish, Daniel M.

    2017-01-01

    For the two-way factorial design in analysis of variance, the current article explicates and compares three methods for controlling the Type I error rate for all possible simple interaction contrasts following a statistically significant interaction, including a proposed modification to the Bonferroni procedure that increases the power of…

  16. Virtual Factory Framework for Supporting Production Planning and Control.

    PubMed

    Kibira, Deogratias; Shao, Guodong

    2017-01-01

    Developing optimal production plans for smart manufacturing systems is challenging because shop floor events change dynamically. A virtual factory incorporating engineering tools, simulation, and optimization generates and communicates performance data to guide wise decision making for different control levels. This paper describes such a platform specifically for production planning. We also discuss verification and validation of the constituent models. A case study of a machine shop is used to demonstrate data generation for production planning in a virtual factory.

  17. Pediatric diabetic ketoacidosis, fluid therapy, and cerebral injury: the design of a factorial randomized controlled trial.

    PubMed

    Glaser, Nicole S; Ghetti, Simona; Casper, T Charles; Dean, J Michael; Kuppermann, Nathan

    2013-09-01

    Treatment protocols for pediatric diabetic ketoacidosis (DKA) vary considerably among centers in the USA and worldwide. The optimal protocol for intravenous (IV) fluid administration is an area of particular controversy, mainly in regard to possible associations between rates of IV fluid infusion and the development of cerebral edema (CE), the most common and the most feared complication of DKA in children. Theoretical concerns about associations between osmotic fluid shifts and CE have prompted recommendations for conservative fluid infusion during DKA. However, recent data suggest that cerebral hypoperfusion may play a role in cerebral injury associated with DKA. Currently, there are no existing data from prospective clinical trials to determine the optimal fluid treatment protocol for pediatric DKA. The Pediatric Emergency Care Applied Research Network FLUID (FLuid therapies Under Investigation in DKA) study is the first prospective randomized trial to evaluate fluid regimens for pediatric DKA. This 13-center nationwide factorial design study will evaluate the effects of rehydration rate and fluid sodium content on neurological status during DKA treatment, the frequency of clinically overt CE and long-term neurocognitive outcomes following DKA. © 2013 John Wiley & Sons A/S.

  18. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    ERIC Educational Resources Information Center

    Wiens, Stefan; Nilsson, Mats E.

    2017-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…
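
    The move from NHST to confidence intervals that this entry advocates can be illustrated for an interaction contrast in a 2x2 factorial. The cell means, SDs and bounds below are invented for illustration; the mechanics (contrast weights, pooled error, t-based CI) are standard:

    ```python
    import numpy as np
    from scipy import stats

    # Cell means and SDs for a hypothetical balanced 2x2 factorial,
    # laid out as [a1b1, a1b2, a2b1, a2b2], n observations per cell.
    means = np.array([10.0, 12.0, 11.0, 16.0])
    sds = np.array([3.0, 3.2, 2.9, 3.1])
    n = 20

    # Interaction contrast: (a2b2 - a2b1) - (a1b2 - a1b1).
    w = np.array([1.0, -1.0, -1.0, 1.0])
    estimate = w @ means

    # Pooled error term and standard error of the contrast.
    df = 4 * (n - 1)
    ms_error = np.mean(sds ** 2)
    se = np.sqrt(ms_error * np.sum(w ** 2) / n)

    # 95% CI instead of a bare p value.
    t_crit = stats.t.ppf(0.975, df)
    ci = (estimate - t_crit * se, estimate + t_crit * se)
    ```

    Reporting `estimate` with `ci` conveys both the size and the precision of the interaction effect, which a significance decision alone does not.
    
    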

  19. Assessing factorial invariance of two-way rating designs using three-way methods

    PubMed Central

    Kroonenberg, Pieter M.

    2015-01-01

    Assessing the factorial invariance of two-way rating designs such as ratings of concepts on several scales by different groups can be carried out with three-way models such as the Parafac and Tucker models. By their definitions these models are double-metric factorially invariant. The differences between these models lie in their handling of the links between the concept and scale spaces. These links may consist of unrestricted linking (Tucker2 model), invariant component covariances but variable variances per group and per component (Parafac model), zero covariances and variances different per group but not per component (Replicated Tucker3 model) and strict invariance (Component analysis on the average matrix). This hierarchy of invariant models, and the procedures by which to evaluate the models against each other, is illustrated in some detail with an international data set from attachment theory. PMID:25620936

  20. Examining robustness of model selection with half-normal and LASSO plots for unreplicated factorial designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Dae -Heung; Anderson-Cook, Christine Michaela

    When there are constraints on resources, an unreplicated factorial or fractional factorial design can allow efficient exploration of numerous factor and interaction effects. A half-normal plot is a common graphical tool used to compare the relative magnitude of effects and to identify important effects from these experiments when no estimate of error from the experiment is available. An alternative is to use a least absolute shrinkage and selection operator (LASSO) plot to examine the pattern of model selection terms from an experiment. We examine how both the half-normal and LASSO plots are impacted by the absence of individual observations or an outlier, and the robustness of conclusions obtained from these two techniques for identifying important effects from factorial experiments. Finally, the methods are illustrated with two examples from the literature.
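
    The half-normal plot described above works by ordering the absolute effect estimates against half-normal quantiles: inert effects fall on a line through the origin, while active effects stand apart. A minimal sketch with invented effect values (not from either cited example):

    ```python
    import numpy as np
    from scipy import stats

    # Absolute effect estimates from a hypothetical unreplicated 2^4
    # experiment (15 effects; values are illustrative).
    effects = np.array([0.2, 0.4, 0.3, 8.1, 0.5, 0.1, 0.6, 6.4,
                        0.3, 0.2, 0.7, 0.4, 0.5, 0.3, 2.9])
    abs_sorted = np.sort(np.abs(effects))
    m = len(effects)

    # Half-normal plotting positions: quantiles of |Z| at (i - 0.5)/m.
    probs = (np.arange(1, m + 1) - 0.5) / m
    halfnorm_q = stats.norm.ppf(0.5 + probs / 2)

    # Plot abs_sorted vs halfnorm_q; points far above the line through the
    # near-zero bulk flag active effects. A crude numeric screen:
    active = abs_sorted[abs_sorted > 5 * np.median(abs_sorted)]
    ```

    Because the quantiles come only from the ranks, removing one observation changes every effect estimate at once, which is why the robustness question studied in this entry is nontrivial.
    
    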

  1. Examining robustness of model selection with half-normal and LASSO plots for unreplicated factorial designs

    DOE PAGES

    Jang, Dae -Heung; Anderson-Cook, Christine Michaela

    2017-04-12

    When there are constraints on resources, an unreplicated factorial or fractional factorial design can allow efficient exploration of numerous factor and interaction effects. A half-normal plot is a common graphical tool used to compare the relative magnitude of effects and to identify important effects from these experiments when no estimate of error from the experiment is available. An alternative is to use a least absolute shrinkage and selection operator (LASSO) plot to examine the pattern of model selection terms from an experiment. We examine how both the half-normal and LASSO plots are impacted by the absence of individual observations or an outlier, and the robustness of conclusions obtained from these two techniques for identifying important effects from factorial experiments. Finally, the methods are illustrated with two examples from the literature.

  2. Improvement of biomass production and glucoamylase activity by Candida famata using factorial design.

    PubMed

    Mosbah, Habib; Aissa, Imen; Hassad, Nahla; Farh, Dhaker; Bakhrouf, Amina; Achour, Sami

    2016-07-01

    To improve biomass production and glucoamylase activity (GA) by Candida famata, culture conditions were optimized. A 2(3) full factorial design (FFD) with a response surface model was used to evaluate the effects and interactions of pH (X1), time of cultivation (X2), and starch concentration (X3) on biomass production and enzyme activity. A total of 16 experiments were conducted toward the construction of an empiric model and a first-order equation. It was found that all factors (X1, X2, and X3) and their interactions were significant at a certain confidence level (P < 0.05). Using this methodology, the optimum values of the three tested parameters were obtained as follows: pH 6, cultivation time 24 h, and starch concentration 7 g/L. Our results showed that the starch concentration (X3) significantly influenced both dependent variables, biomass production and GA of C. famata. Under this optimized medium, the experimental biomass production and GA obtained were 1.8 ± 0.54 g/L and 0.078 ± 0.012 µmol/L/min, about 1.5- and 1.8-fold higher, respectively, than those in basal medium. The R(2) coefficients obtained were 0.997 and 0.990, indicating an adequate degree of reliability in the model. Approximately 99% validity of the predicted values was achieved. © 2015 International Union of Biochemistry and Molecular Biology, Inc.

  3. Clustering Words to Match Conditions: An Algorithm for Stimuli Selection in Factorial Designs

    ERIC Educational Resources Information Center

    Guasch, Marc; Haro, Juan; Boada, Roger

    2017-01-01

    With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words into dozens of other confounding variables…

  4. Removing lead from metallic mixture of waste printed circuit boards by vacuum distillation: factorial design and removal mechanism.

    PubMed

    Li, Xingang; Gao, Yujie; Ding, Hui

    2013-10-01

    The lead removal from the metallic mixture of waste printed circuit boards by vacuum distillation was optimized using experimental design, and a mathematical model was established to elucidate the removal mechanism. The variables studied in lead evaporation consisted of the chamber pressure, heating temperature, heating time, particle size and initial mass. The low-level chamber pressure was fixed at 0.1 Pa as the operation pressure. The application of two-level factorial design generated a first-order polynomial that agreed well with the data for evaporation efficiency of lead. The heating temperature and heating time exhibited significant effects on the efficiency, which was validated by means of the copper-lead mixture experiments. The optimized operating conditions within the region studied were the chamber pressure of 0.1 Pa, heating temperature of 1023 K and heating time of 120 min. After the conditions were employed to remove lead from the metallic mixture of waste printed circuit boards, the efficiency was 99.97%. The mechanism of the effects was elucidated by mathematical modeling that deals with evaporation, mass transfer and condensation, and can be applied to a wider range of metal removal by vacuum distillation. Copyright © 2013 Elsevier Ltd. All rights reserved.
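
    The "first-order polynomial that agreed well with the data" in a two-level factorial like the one above is fitted by least squares on coded -1/+1 variables, optionally with interaction terms. The sketch below uses two of the study's factors (heating temperature and heating time) with invented replicate responses:

    ```python
    import numpy as np
    from itertools import product

    # Coded -1/+1 settings for heating temperature (x1) and heating time
    # (x2), each run duplicated; responses are hypothetical evaporation
    # efficiencies (%), not the paper's measurements.
    x = np.array(list(product([-1, 1], repeat=2)) * 2)
    y = np.array([62.0, 81.0, 78.0, 99.5, 60.5, 82.0, 77.0, 99.9])

    # First-order polynomial with interaction:
    # y = b0 + b1*x1 + b2*x2 + b12*x1*x2
    X = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Predicted efficiency at the high-temperature, long-time corner (+1, +1).
    y_hat = b @ np.array([1, 1, 1, 1])
    ```

    Large positive b1 and b2 mirror the abstract's finding that heating temperature and time dominate the evaporation efficiency, and the (+1, +1) corner corresponds to the optimized 1023 K / 120 min condition.
    
    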

  5. Use of fractional factorial design for optimization of digestion procedures followed by multi-element determination of essential and non-essential elements in nuts using ICP-OES technique.

    PubMed

    Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A

    2007-01-15

    Two digestion procedures have been tested on nut samples for application in the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES). These included wet digestions with HNO(3)/H(2)SO(4) and HNO(3)/H(2)SO(4)/H(2)O(2). The later one is recommended for better analytes recoveries (relative error<11%). Two calibrations (aqueous standard and standard addition) procedures were studied and proved that standard addition was preferable for all analytes. Experimental designs for seven factors (HNO(3), H(2)SO(4) and H(2)O(2) volumes, digestion time, pre-digestion time, temperature of the hot plate and sample weight) were used for optimization of sample digestion procedures. For this purpose Plackett-Burman fractional factorial design, which involve eight experiments was adopted. The factors HNO(3) and H(2)O(2) volume, and the digestion time were found to be the most important parameters. The instrumental conditions were also optimized (using peanut matrix rather than aqueous standard solutions) considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, such as limits of detection (LOD<0.74mugg(-1)), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%) were assessed statistically to evaluate the developed analytical procedures. The good agreement between measured and certified values for all analytes (relative error <11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.

  6. Factorial experimental design for the culture of human embryonic stem cells as aggregates in stirred suspension bioreactors reveals the potential for interaction effects between bioprocess parameters.

    PubMed

    Hunt, Megan M; Meng, Guoliang; Rancourt, Derrick E; Gates, Ian D; Kallos, Michael S

    2014-01-01

    Traditional optimization of culture parameters for the large-scale culture of human embryonic stem cells (ESCs) as aggregates is carried out in a stepwise manner whereby the effect of varying each culture parameter is investigated individually. However, as evidenced by the wide range of published protocols and culture performance indicators (growth rates, pluripotency marker expression, etc.), there is a lack of systematic investigation into the true effect of varying culture parameters especially with respect to potential interactions between culture variables. Here we describe the design and execution of a two-parameter, three-level (3(2)) factorial experiment resulting in nine conditions that were run in duplicate 125-mL stirred suspension bioreactors. The two parameters investigated here were inoculation density and agitation rate, which are easily controlled, but currently, poorly characterized. Cell readouts analyzed included fold expansion, maximum density, and exponential growth rate. Our results reveal that the choice of best case culture parameters was dependent on which cell property was chosen as the primary output variable. Subsequent statistical analyses via two-way analysis of variance indicated significant interaction effects between inoculation density and agitation rate specifically in the case of exponential growth rates. Results indicate that stepwise optimization has the potential to miss out on the true optimal case. In addition, choosing an optimum condition for a culture output of interest from the factorial design yielded similar results when repeated with the same cell line indicating reproducibility. We finally validated that human ESCs remain pluripotent in suspension culture as aggregates under our optimal conditions and maintain their differentiation capabilities as well as a stable karyotype and strong expression levels of specific human ESC markers over several passages in suspension bioreactors.
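
    The interaction effect highlighted above is detected with a two-way ANOVA on the replicated 3x3 layout. The sums-of-squares decomposition can be done directly in NumPy; the growth-rate values below are illustrative placeholders, not the bioreactor data:

    ```python
    import numpy as np

    # Hypothetical exponential growth rates for a 3x3 factorial of
    # inoculation density (rows) x agitation rate (columns), duplicate
    # reactors per cell: shape (3, 3, 2).
    data = np.array([
        [[0.50, 0.52], [0.61, 0.59], [0.55, 0.57]],
        [[0.58, 0.60], [0.72, 0.70], [0.56, 0.58]],
        [[0.54, 0.52], [0.66, 0.68], [0.48, 0.50]],
    ])
    a, b, n = data.shape
    grand = data.mean()

    row_means = data.mean(axis=(1, 2))
    col_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    # Balanced two-way ANOVA sums of squares.
    ss_a = b * n * np.sum((row_means - grand) ** 2)
    ss_b = a * n * np.sum((col_means - grand) ** 2)
    ss_ab = n * np.sum((cell_means - row_means[:, None]
                        - col_means[None, :] + grand) ** 2)
    ss_err = np.sum((data - cell_means[..., None]) ** 2)

    # Interaction F ratio: large values mean the effect of agitation rate
    # depends on inoculation density, as the abstract reports.
    f_ab = (ss_ab / ((a - 1) * (b - 1))) / (ss_err / (a * b * (n - 1)))
    ```

    A significant `f_ab` is precisely the situation in which stepwise one-factor-at-a-time optimization can miss the true optimum.
    
    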

  7. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE PAGES

    Chipman, Hugh A.; Hamada, Michael S.

    2016-06-02

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.
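
    The heredity principle mentioned above restricts which models are entertained: an interaction is considered only when its parent main effects are active. As a loose sketch of that idea (not the authors' actual Bayesian computation, which uses priors over models rather than the BIC stand-in below), one can enumerate heredity-respecting models and score them on simulated data:

    ```python
    import numpy as np
    from itertools import product, combinations

    rng = np.random.default_rng(1)

    # Simulated 8-run two-level experiment in factors A, B, C; the data are
    # generated with A and the A*B interaction active (hypothetical).
    runs = np.array(list(product([-1, 1], repeat=3)))
    A, B, C = runs.T
    y = 2.0 * A + 1.5 * A * B + rng.normal(0, 0.3, 8)

    terms = {"A": A, "B": B, "C": C, "AB": A * B, "AC": A * C, "BC": B * C}

    def obeys_heredity(model):
        # Strong heredity: a two-factor interaction enters only when both
        # of its parent main effects are in the model.
        for t in model:
            if len(t) == 2 and not (t[0] in model and t[1] in model):
                return False
        return True

    def bic(model):
        # BIC used here as a rough stand-in for a Bayesian model score.
        X = np.column_stack([np.ones(8)] + [terms[t] for t in model])
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        rss = max(resid @ resid, 1e-12)
        return 8 * np.log(rss / 8) + X.shape[1] * np.log(8)

    names = list(terms)
    models = [m for k in range(4) for m in combinations(names, k)
              if obeys_heredity(m)]
    best = min(models, key=bic)
    ```

    In an aliased fractional design the same idea breaks ties between indistinguishable columns: among aliased candidates, heredity-compatible combinations receive the posterior weight.
    
    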

  8. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chipman, Hugh A.; Hamada, Michael S.

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.

  9. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to the conceptual design of advanced fighter aircraft was investigated. Applying functional decomposition techniques to the conceptual design phase appears feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach is proposed, combining functional decomposition techniques for generating linear sensitivity derivatives of aerodynamic and mass properties with existing techniques for sizing, mission performance and optimization.

  10. Optimization of process parameters for a quasi-continuous tablet coating system using design of experiments.

    PubMed

    Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah

    2011-03-01

    The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2(6-1) resolution IV fractional factorial design. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined by using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidences of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating robustness in the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists.
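    A two-level fractional factorial like the screening design mentioned above can be generated directly from a defining relation. The sketch below builds a 2(6-1) resolution IV design in coded -1/+1 units from the generator F = ABC (a generator chosen for illustration; the abstract does not state which one was used).

```python
from itertools import product

# Take the full 2^5 factorial in factors A..E and generate the sixth factor
# as F = A*B*C, i.e. defining relation I = ABCF (word length 4, resolution IV).
runs = []
for a, b, c, d, e in product((-1, 1), repeat=5):
    f = a * b * c
    runs.append((a, b, c, d, e, f))

print(len(runs), "runs")                 # half of the 64-run full factorial

# The aliasing implied by I = ABCF: the F column equals the ABC column, so
# the main effect of F is aliased with the ABC interaction.
assert all(r[5] == r[0] * r[1] * r[2] for r in runs)

# All factor columns remain pairwise orthogonal, so main effects can still
# be estimated independently of one another.
for i in range(6):
    for j in range(i + 1, 6):
        assert sum(r[i] * r[j] for r in runs) == 0
```

    Screening with such a half-fraction costs 32 runs instead of 64, at the price of the aliasing encoded in the defining relation.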

  11. Toward precision smoking cessation treatment I: Moderator results from a factorial experiment.

    PubMed

    Piper, Megan E; Schlam, Tanya R; Cook, Jessica W; Smith, Stevens S; Bolt, Daniel M; Loh, Wei-Yin; Mermelstein, Robin; Collins, Linda M; Fiore, Michael C; Baker, Timothy B

    2017-02-01

    The development of tobacco use treatments that are effective for all smokers is critical to improving clinical and public health. The Multiphase Optimization Strategy (MOST) uses highly efficient factorial experiments to evaluate multiple intervention components for possible inclusion in an optimized tobacco use treatment. Factorial experiments permit analyses of the influence of patient characteristics on main and interaction effects of multiple, relatively discrete, intervention components. This study examined whether person-factor and smoking characteristics moderated the main or interactive effects of intervention components on 26-week self-reported abstinence rates. This fractional factorial experiment evaluated six smoking cessation intervention components among primary care patients (N=637): Prequit Nicotine Patch vs. None, Prequit Nicotine Gum vs. None, Preparation Counseling vs. None, Intensive Cessation In-Person Counseling vs. Minimal, Intensive Cessation Telephone Counseling vs. Minimal, and 16 vs. 8 Weeks of Combination Nicotine Replacement Therapy (NRT; nicotine patch+nicotine gum). Both psychiatric history and smoking heaviness moderated intervention component effects. In comparison with participants with no self-reported history of a psychiatric disorder, those with a positive history showed better response to 16- vs. 8-weeks of combination NRT, but a poorer response to counseling interventions. Also, in contrast to light smokers, heavier smokers showed a poorer response to counseling interventions. Heavy smokers and those with psychiatric histories demonstrated a differential response to intervention components. This research illustrates the use of factorial designs to examine the interactions between person characteristics and relatively discrete intervention components. Future research is needed to replicate these findings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Factorial Experiments: Efficient Tools for Evaluation of Intervention Components

    PubMed Central

    Collins, Linda M.; Dziak, John J.; Kugler, Kari C.; Trail, Jessica B.

    2014-01-01

    Background An understanding of the individual and combined effects of a set of intervention components is important for moving the science of preventive medicine interventions forward. This understanding can often be achieved in an efficient and economical way via a factorial experiment, in which two or more independent variables are manipulated. The factorial experiment is a complement to the randomized controlled trial (RCT); the two designs address different research questions. Purpose This article offers an introduction to factorial experiments aimed at investigators trained primarily in the RCT. Method The factorial experiment is compared and contrasted with other experimental designs used commonly in intervention science to highlight where each is most efficient and appropriate. Results Several points are made: factorial experiments make very efficient use of experimental subjects when the data are properly analyzed; a factorial experiment can have excellent statistical power even if it has relatively few subjects per experimental condition; and when conducting research to select components for inclusion in a multicomponent intervention, interactions should be studied rather than avoided. Conclusions Investigators in preventive medicine and related areas should begin considering factorial experiments alongside other approaches. Experimental designs should be chosen from a resource management perspective, which states that the best experimental design is the one that provides the greatest scientific benefit without exceeding available resources. PMID:25092122
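    The efficiency point made above (every subject contributes to the estimate of every factor's main effect) is easy to see in a toy 2(2) example with invented cell means:

```python
# Toy 2(2) factorial: two intervention components (A, B), each On (+1) or
# Off (-1). Cell mean outcomes are invented for illustration.
cells = {(-1, -1): 10.0, (-1, 1): 14.0, (1, -1): 13.0, (1, 1): 19.0}

# Main effect of A: average outcome with A on minus average with A off.
# Every subject falls into one of these two averages, and the same holds
# for B, so no subject is "spent" on answering only one question.
eff_a = (cells[(1, -1)] + cells[(1, 1)]) / 2 - (cells[(-1, -1)] + cells[(-1, 1)]) / 2
eff_b = (cells[(-1, 1)] + cells[(1, 1)]) / 2 - (cells[(-1, -1)] + cells[(1, -1)]) / 2
# A-by-B interaction: how much the effect of A changes when B is on.
inter = (cells[(1, 1)] - cells[(1, -1)] - cells[(-1, 1)] + cells[(-1, -1)]) / 2

print(eff_a, eff_b, inter)   # 4.0 5.0 1.0
```

    A pair of two-arm RCTs answering the same two questions would need each subject twice over; the factorial reuses all of them for both contrasts and additionally yields the interaction.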

  13. Factorial experiments: efficient tools for evaluation of intervention components.

    PubMed

    Collins, Linda M; Dziak, John J; Kugler, Kari C; Trail, Jessica B

    2014-10-01

    An understanding of the individual and combined effects of a set of intervention components is important for moving the science of preventive medicine interventions forward. This understanding can often be achieved in an efficient and economical way via a factorial experiment, in which two or more independent variables are manipulated. The factorial experiment is a complement to the RCT; the two designs address different research questions. To offer an introduction to factorial experiments aimed at investigators trained primarily in the RCT. The factorial experiment is compared and contrasted with other experimental designs used commonly in intervention science to highlight where each is most efficient and appropriate. Several points are made: factorial experiments make very efficient use of experimental subjects when the data are properly analyzed; a factorial experiment can have excellent statistical power even if it has relatively few subjects per experimental condition; and when conducting research to select components for inclusion in a multicomponent intervention, interactions should be studied rather than avoided. Investigators in preventive medicine and related areas should begin considering factorial experiments alongside other approaches. Experimental designs should be chosen from a resource management perspective, which states that the best experimental design is the one that provides the greatest scientific benefit without exceeding available resources. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  14. Optimization and Technological Development Strategies of an Antimicrobial Extract from Achyrocline alata Assisted by Statistical Design

    PubMed Central

    Demarque, Daniel P.; Fitts, Sonia Maria F.; Boaretto, Amanda G.; da Silva, Júlio César Leite; Vieira, Maria C.; Franco, Vanessa N. P.; Teixeira, Caroline B.; Toffoli-Kadri, Mônica C.; Carollo, Carlos A.

    2015-01-01

    Achyrocline alata, known as Jateí-ka-há, is traditionally used to treat several health problems, including inflammations and infections. This study aimed to optimize an extract active against Streptococcus mutans, the main bacterium that causes caries. The extract was developed using accelerated solvent extraction and chemometric calculations. Factorial design and response surface methodologies were used to determine the most important variables, such as active compound selectivity. The standardized extraction recovered 99% of the four main compounds, gnaphaliin, helipyrone, obtusifolin and lepidissipyrone, which represent 44% of the extract. The optimized extract of A. alata has a MIC of 62.5 μg/mL against S. mutans and could be used in mouth care products. PMID:25710523

  15. Formulation of topical bioadhesive gel of aceclofenac using 3-level factorial design.

    PubMed

    Singh, Sanjay; Parhi, Rabinarayan; Garg, Anuj

    2011-01-01

    The objective of this work was to develop a bioadhesive topical gel of aceclofenac with the help of a response-surface approach. Experiments were performed according to a 3-level factorial design to evaluate the effects of two independent variables [amount of Poloxamer 407 (PL-407 = X1) and hydroxypropylmethyl cellulose K100M (HPMC = X2)] on the bioadhesive character of the gel, its rheological property (consistency index), and in-vitro drug release. The best model was selected to fit the data. A mathematical equation was generated by Design Expert® software for the model, which assists in determining the effect of the independent variables. Response surface plots were also generated by the software for analyzing the effect of the independent variables on each response. A quadratic model was found to be the best for all the responses. Both independent variables (X1 and X2) were found to have a synergistic effect on bioadhesion (Y1), but the effect of HPMC was more pronounced than that of PL-407. The consistency index was enhanced by increasing the level of both independent variables. An antagonistic effect of both independent variables was found on cumulative percentage drug release at 2 h (Y3) and 8 h (Y4). Both independent variables contributed approximately equally to the antagonistic effect on Y3, whereas for Y4 the antagonistic effect of HPMC was more pronounced than that of PL-407. The effect of formulation variables on product characteristics can be easily predicted and precisely interpreted by using a 3-level factorial experimental design and the generated quadratic mathematical equations.
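    The quadratic model referred to above has the form y = b0 + b1·X1 + b2·X2 + b12·X1X2 + b11·X1² + b22·X2². The sketch below fits it by ordinary least squares over the nine points of a 3-level factorial in coded units. The responses are synthetic, generated from known coefficients so the fit can be checked; nothing here reproduces the paper's data.

```python
# Coded levels of the 3-level factorial and the (invented) generating
# coefficients b0, b1, b2, b12, b11, b22.
levels = (-1, 0, 1)
true = [5.0, 1.2, 2.0, -0.5, 0.8, -0.3]

def features(x1, x2):
    return [1.0, x1, x2, x1 * x2, x1 * x1, x2 * x2]

X, y = [], []
for x1 in levels:
    for x2 in levels:
        row = features(x1, x2)
        X.append(row)
        y.append(sum(b * f for b, f in zip(true, row)))  # noiseless response

# Solve the normal equations (X^T X) beta = X^T y by Gaussian elimination
# with partial pivoting.
p = len(true)
A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(p)]
     for i in range(p)]
v = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(p)]
for i in range(p):
    piv = max(range(i, p), key=lambda r: abs(A[r][i]))
    A[i], A[piv] = A[piv], A[i]
    v[i], v[piv] = v[piv], v[i]
    for r in range(i + 1, p):
        m = A[r][i] / A[i][i]
        for c in range(i, p):
            A[r][c] -= m * A[i][c]
        v[r] -= m * v[i]
beta = [0.0] * p
for i in reversed(range(p)):
    beta[i] = (v[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]

print([round(b, 3) for b in beta])   # recovers the generating coefficients
```

    The 3(2) factorial supports all six quadratic terms with full rank, which is why this design class is a common choice for response-surface work with two factors.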

  16. A Randomized Longitudinal Factorial Design to Assess Malaria Vector Control and Disease Management Interventions in Rural Tanzania

    PubMed Central

    Kramer, Randall A.; Mboera, Leonard E. G.; Senkoro, Kesheni; Lesser, Adriane; Shayo, Elizabeth H.; Paul, Christopher J.; Miranda, Marie Lynn

    2014-01-01

    The optimization of malaria control strategies is complicated by constraints posed by local health systems, infrastructure, limited resources, and the complex interactions between infection, disease, and treatment. The purpose of this paper is to describe the protocol of a randomized factorial study designed to address this research gap. This project will evaluate two malaria control interventions in Mvomero District, Tanzania: (1) a disease management strategy involving early detection and treatment by community health workers using rapid diagnostic technology; and (2) vector control through community-supported larviciding. Six study villages were assigned to each of four groups (control, early detection and treatment, larviciding, and early detection and treatment plus larviciding). The primary endpoint of interest was change in malaria infection prevalence across the intervention groups measured during annual longitudinal cross-sectional surveys. Recurring entomological surveying, household surveying, and focus group discussions will provide additional valuable insights. At baseline, 962 households across all 24 villages participated in a household survey; 2,884 members from 720 of these households participated in subsequent malariometric surveying. The study design will allow us to estimate the effect sizes of different intervention mixtures. Careful documentation of our study protocol may also serve other researchers designing field-based intervention trials. PMID:24840349

  17. A randomized longitudinal factorial design to assess malaria vector control and disease management interventions in rural Tanzania.

    PubMed

    Kramer, Randall A; Mboera, Leonard E G; Senkoro, Kesheni; Lesser, Adriane; Shayo, Elizabeth H; Paul, Christopher J; Miranda, Marie Lynn

    2014-05-16

    The optimization of malaria control strategies is complicated by constraints posed by local health systems, infrastructure, limited resources, and the complex interactions between infection, disease, and treatment. The purpose of this paper is to describe the protocol of a randomized factorial study designed to address this research gap. This project will evaluate two malaria control interventions in Mvomero District, Tanzania: (1) a disease management strategy involving early detection and treatment by community health workers using rapid diagnostic technology; and (2) vector control through community-supported larviciding. Six study villages were assigned to each of four groups (control, early detection and treatment, larviciding, and early detection and treatment plus larviciding). The primary endpoint of interest was change in malaria infection prevalence across the intervention groups measured during annual longitudinal cross-sectional surveys. Recurring entomological surveying, household surveying, and focus group discussions will provide additional valuable insights. At baseline, 962 households across all 24 villages participated in a household survey; 2,884 members from 720 of these households participated in subsequent malariometric surveying. The study design will allow us to estimate the effect sizes of different intervention mixtures. Careful documentation of our study protocol may also serve other researchers designing field-based intervention trials.

  18. Optimization of Soluble Expression and Purification of Recombinant Human Rhinovirus Type-14 3C Protease Using Statistically Designed Experiments: Isolation and Characterization of the Enzyme.

    PubMed

    Antoniou, Georgia; Papakyriacou, Irineos; Papaneophytou, Christos

    2017-10-01

    Human rhinovirus (HRV) 3C protease is widely used in recombinant protein production for various applications such as biochemical characterization and structural biology projects to separate recombinant fusion proteins from their affinity tags in order to prevent interference between these tags and the target proteins. Herein, we report the optimization of expression and purification conditions of glutathione S-transferase (GST)-tagged HRV 3C protease by statistically designed experiments. Soluble expression of GST-HRV 3C protease was initially optimized by response surface methodology (RSM), and a 5.5-fold increase in enzyme yield was achieved. Subsequently, we developed a new incomplete factorial (IF) design that examines four variables (bacterial strain, expression temperature, induction time, and inducer concentration) in a single experiment. The new design called Incomplete Factorial-Strain/Temperature/Time/Inducer (IF-STTI) was validated using three GST-tagged proteins. In all cases, IF-STTI resulted in only 10% lower expression yields than those obtained by RSM. Purification of GST-HRV 3C was optimized by an IF design that examines simultaneously the effect of the amount of resin, incubation time of cell lysate with resin, and glycerol and DTT concentration in buffers, and a further 15% increase in protease recovery was achieved. Purified GST-HRV 3C protease was active at both 4 and 25 °C in a variety of buffers.

  19. An algorithm for generating all possible 2(p-q) fractional factorial designs and its use in scientific experimentation

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1973-01-01

    An algorithm and computer program are presented for generating all the distinct 2(p-q) fractional factorial designs. Some applications of this algorithm to the construction of tables of designs and of designs for nonstandard situations and its use in Bayesian design are discussed. An appendix includes a discussion of an actual experiment whose design was facilitated by the algorithm.
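    The enumeration problem can be sketched compactly: identify each 2(p-q) design with the subgroup spanned by its q generator words over the p-q basic factors, and count distinct subgroups. This is a simplified reconstruction of the idea only, not Sidik's program; in particular it ignores equivalence under relabelling of factors, and the word representation as bitmasks is an illustrative choice.

```python
from itertools import combinations

def distinct_designs(p, q):
    """Count distinct 2^(p-q) designs, identifying a design with the
    defining-contrast subgroup spanned by its generator words."""
    k = p - q                               # number of basic factors
    words = list(range(1, 2 ** k))          # nonempty products of basics, as bitmasks
    subgroups = set()
    for gens in combinations(words, q):
        span = {0}
        for g in gens:
            span |= {s ^ g for s in span}   # close under XOR (= word products)
        if len(span) == 2 ** q:             # keep only independent generator sets
            subgroups.add(frozenset(span))
    return len(subgroups)

# 2^(4-1): the added factor D can be aliased with any of the 7 nonempty
# words in A, B, C (including poor choices such as D = A).
print(distinct_designs(4, 1))   # 7
```

    Ranking the resulting designs by resolution or aberration, and filtering out degenerate generator choices, would be the natural next steps that a full cataloguing program performs.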

  20. Application of factorial designs to study factors involved in the determination of aldehydes present in beer by on-fiber derivatization in combination with gas chromatography and mass spectrometry.

    PubMed

    Carrillo, Génesis; Bravo, Adriana; Zufall, Carsten

    2011-05-11

    With the aim of studying the factors involved in on-fiber derivatization of Strecker aldehydes, furfural, and (E)-2-nonenal with O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine in beer, factorial designs were applied. The effect of temperature, time, and NaCl addition on the analytes' derivatization/extraction efficiency was studied through a factorial 2(3) randomized-block design; all of the factors and their interactions were significant at the 95% confidence level for most of the analytes. The effect of temperature and its interactions separated the analytes into two groups. However, a single sampling condition was selected that optimized the response for most aldehydes. The resulting method, combining on-fiber derivatization with gas chromatography-mass spectrometry, was validated. Limits of detection were between 0.015 and 1.60 μg/L, and relative standard deviations were between 1.1 and 12.2%. The efficacy of the internal standardization method was confirmed by recovery percentages (73-117%). The method was applied to the determination of aldehydes in fresh beer and after storage at 28 °C.

  1. Structural Optimization in automotive design

    NASA Technical Reports Server (NTRS)

    Bennett, J. A.; Botkin, M. E.

    1984-01-01

    Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.

  2. A Market-Based Approach to Multi-factory Scheduling

    NASA Astrophysics Data System (ADS)

    Vytelingum, Perukrishnen; Rogers, Alex; MacBeth, Douglas K.; Dutta, Partha; Stranjak, Armin; Jennings, Nicholas R.

    In this paper, we report on the design of a novel market-based approach for decentralised scheduling across multiple factories. Specifically, because of the limitations of scheduling in a centralised manner - which requires a centre to have complete and perfect information for optimality and the truthful revelation of potentially commercially private preferences to that centre - we advocate an informationally decentralised approach that is both agile and dynamic. In particular, this work adopts a market-based approach for decentralised scheduling by considering the different stakeholders representing different factories as self-interested, profit-motivated economic agents that trade resources for the scheduling of jobs. The overall schedule of these jobs is then an emergent behaviour of the strategic interaction of these trading agents bidding for resources in a market based on limited information and their own preferences. Using a simple (zero-intelligence) bidding strategy, we empirically demonstrate that our market-based approach achieves a lower-bound efficiency of 84%. This represents a trade-off between a reasonable level of efficiency (compared to a centralised approach) and the desirable benefits of a decentralised solution.

  3. Robust tests for multivariate factorial designs under heteroscedasticity.

    PubMed

    Vallejo, Guillermo; Ato, Manuel

    2012-06-01

    The question of how to analyze several multivariate normal mean vectors when normality and covariance homogeneity assumptions are violated is considered in this article. For the two-way MANOVA layout, we address this problem adapting results presented by Brunner, Dette, and Munk (BDM; 1997) and Vallejo and Ato (modified Brown-Forsythe [MBF]; 2006) in the context of univariate factorial and split-plot designs and a multivariate version of the linear model (MLM) to accommodate heterogeneous data. Furthermore, we compare these procedures with the Welch-James (WJ) approximate degrees of freedom multivariate statistics based on ordinary least squares via Monte Carlo simulation. Our numerical studies show that of the methods evaluated, only the modified versions of the BDM and MBF procedures were robust to violations of underlying assumptions. The MLM approach was only occasionally liberal, and then by only a small amount, whereas the WJ procedure was often liberal if the interactive effects were involved in the design, particularly when the number of dependent variables increased and total sample size was small. On the other hand, it was also found that the MLM procedure was uniformly more powerful than its most direct competitors. The overall success rate was 22.4% for the BDM, 36.3% for the MBF, and 45.0% for the MLM.

  4. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  5. Development and optimization of enteric coated mucoadhesive microspheres of duloxetine hydrochloride using 3(2) full factorial design

    PubMed Central

    Setia, Anupama; Kansal, Sahil; Goyal, Naveen

    2013-01-01

    Background: Microspheres constitute an important part of oral drug delivery systems by virtue of their small size and efficient carrier capacity. However, the success of these microspheres is limited due to their short residence time at the site of absorption. Objective: The objective of the present study was to formulate and systematically evaluate the in vitro performance of enteric coated mucoadhesive microspheres of duloxetine hydrochloride (DLX), an acid-labile drug. Materials and Methods: DLX microspheres were prepared by a simple emulsification phase separation technique using chitosan as carrier and glutaraldehyde as a cross-linking agent. The microspheres were coated with Eudragit L-100 using an oil-in-oil solvent evaporation method. Eudragit L-100 was used as the enteric coating polymer with the aim of releasing the drug in the small intestine. The microspheres were characterized by particle size, entrapment efficiency, swelling index (SI), mucoadhesion time, in vitro drug release and surface morphology. A 3(2) full factorial design was employed to study the effect of the independent variables polymer-to-drug ratio (X1) and stirring speed (X2) on the dependent variables particle size, entrapment efficiency, SI, in vitro mucoadhesion and drug release up to 24 h (t24). Results: The microspheres formed were discrete, spherical and free flowing. They exhibited good mucoadhesive properties, showed high entrapment efficiency and were able to sustain the drug release up to 24 h. Conclusion: Thus, the prepared enteric coated mucoadhesive microspheres may prove to be a potential controlled release formulation of DLX for oral administration. PMID:24167786

  6. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including graphical user interface (GUI) tools for browsing input and output files in order to locate the text strings that identify specific variables as optimization inputs and responses. This paper provides an overview of RCOTOOLS and its use.

  7. The Skateboard Factory: Curriculum by Design--Oasis Skateboard Factory Q&A with Craig Morrison

    ERIC Educational Resources Information Center

    Pearson, George

    2012-01-01

    Since its opening three years ago, Oasis Skateboard Factory (OSF), founded by teacher Craig Morrison, has attracted considerable media exposure and received a Ken Spencer Award from the CEA for its innovative program. OSF is one of three programs offered by Oasis Alternative Secondary School, one of 22 alternative secondary schools of the Toronto…

  8. Applied optimal shape design

    NASA Astrophysics Data System (ADS)

    Mohammadi, B.; Pironneau, O.

    2002-12-01

    This paper is a short survey of optimal shape design (OSD) for fluids. OSD is an interesting field both mathematically and for industrial applications. Existence, sensitivity, correct discretization are important theoretical issues. Practical implementation issues for airplane designs are critical too. The paper is also a summary of the material covered in our recent book, Applied Optimal Shape Design, Oxford University Press, 2001.

  9. Design and optimize of 3-axis filament winding machine

    NASA Astrophysics Data System (ADS)

    Quanjin, Ma; Rejab, M. R. M.; Idris, M. S.; Bachtiar, B.; Siregar, J. P.; Harith, M. N.

    2017-10-01

    Filament winding has been developed as the primary low-cost process for fabricating composite cylindrical structures. Fibres are wound on a rotating mandrel by a filament winding machine, where resin-impregnated fibres pass through a pay-out eye. This paper aims to develop and optimize a 3-axis, lightweight, practical, efficient, portable filament winding machine that can fabricate pipes and round cylinders with resins, to satisfy customer demand. There are 3 main units on the 3-axis filament winding machine: the rotary unit, the delivery unit and the control system unit. Compared with previous filament winding machines in the factory, it has 3 degrees of freedom and can fabricate more complex specimens, depending on the mandrel shape and the control system. The machine has been designed and fabricated with 3 axes of movement under a control system: the x-axis is the movement of the carriage, the y-axis is the rotation of the mandrel and the z-axis is the movement of the pay-out eye. Cylindrical specimens with different dimensions and winding angles were produced. The 3-axis automated filament winding machine has thus been successfully designed with a simple control system.

  10. Modeling and optimization of ultrasound-assisted extraction of polyphenolic compounds from Aronia melanocarpa by-products from filter-tea factory.

    PubMed

    Ramić, Milica; Vidović, Senka; Zeković, Zoran; Vladić, Jelena; Cvejin, Aleksandra; Pavlić, Branimir

    2015-03-01

    Aronia melanocarpa by-product from filter-tea factory was used for the preparation of extracts with high content of bioactive compounds. Extraction process was accelerated using sonication. Three level, three variable face-centered cubic experimental design (FCD) with response surface methodology (RSM) was used for optimization of extraction in terms of maximized yields for total phenolics (TP), flavonoids (TF), anthocyanins (MA) and proanthocyanidins (TPA) contents. Ultrasonic power (X₁: 72-216 W), temperature (X₂: 30-70 °C) and extraction time (X₃: 30-90 min) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model where multiple regression analysis and analysis of variance were used to determine fitness of the model and optimal conditions for investigated responses. Three-dimensional surface plots were generated from the mathematical models. The optimal conditions for ultrasound-assisted extraction of TP, TF, MA and TPA were: X₁=206.64 W, X₂=70 °C, X₃=80.1 min; X₁=210.24 W, X₂=70 °C, X₃=75 min; X₁=216 W, X₂=70 °C, X₃=45.6 min and X₁=199.44 W, X₂=70 °C, X₃=89.7 min, respectively. Generated model predicted values of the TP, TF, MA and TPA to be 15.41 mg GAE/ml, 9.86 mg CE/ml, 2.26 mg C3G/ml and 20.67 mg CE/ml, respectively. Experimental validation was performed and close agreement between experimental and predicted values was found (within 95% confidence interval). Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Algorithm for designing smart factory Industry 4.0

    NASA Astrophysics Data System (ADS)

    Gurjanov, A. V.; Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.

    2018-03-01

    The task of designing the production division of an Industry 4.0 item-designing company is studied. The authors propose an algorithm based on a modified V. L. Volkovich method. The algorithm generates options for arranging production with robotized technological equipment functioning in automatic mode, and it rests on solving a multi-criteria optimization task with additive criteria.

  12. Generalized Subset Designs in Analytical Chemistry.

    PubMed

    Surowiec, Izabella; Vikström, Ludvig; Hector, Gustaf; Johansson, Erik; Vikström, Conny; Trygg, Johan

    2017-06-20

    Design of experiments (DOE) is an established methodology in research, development, manufacturing, and production for screening, optimization, and robustness testing. Two-level fractional factorial designs remain the preferred approach due to their high information content while keeping the number of experiments low. These types of designs, however, have never been extended to a generalized multilevel reduced design type capable of including both qualitative and quantitative factors. In this Article we describe a novel generalized fractional factorial design. It also provides complementary and balanced subdesigns analogous to a fold-over in two-level reduced factorial designs. We demonstrate how this design type can be applied with good results in three different applications in analytical chemistry, including (a) multivariate calibration using microwave resonance spectroscopy for the determination of water in tablets, (b) a stability study in drug product development, and (c) representative sample selection in clinical studies. This demonstrates the potential of generalized fractional factorial designs to be applied in many other areas of analytical chemistry where representative, balanced, and complementary subsets are required, especially when a combination of quantitative and qualitative factors at multiple levels exists.

  13. Development and optimization of enteric coated mucoadhesive microspheres of duloxetine hydrochloride using 3² full factorial design.

    PubMed

    Setia, Anupama; Kansal, Sahil; Goyal, Naveen

    2013-07-01

    Microspheres constitute an important part of oral drug delivery systems by virtue of their small size and efficient carrier capacity. However, the success of these microspheres is limited by their short residence time at the site of absorption. The objective of the present study was to formulate and systematically evaluate the in vitro performance of enteric coated mucoadhesive microspheres of duloxetine hydrochloride (DLX), an acid-labile drug. DLX microspheres were prepared by a simple emulsification phase-separation technique using chitosan as the carrier and glutaraldehyde as a cross-linking agent. The prepared microspheres were coated with Eudragit L-100 using an oil-in-oil solvent evaporation method. Eudragit L-100 was used as the enteric coating polymer with the aim of releasing the drug in the small intestine. The microspheres were characterized by particle size, entrapment efficiency, swelling index (SI), mucoadhesion time, in vitro drug release and surface morphology. A 3² full factorial design was employed to study the effect of the independent variables polymer-to-drug ratio (X1) and stirring speed (X2) on the dependent variables particle size, entrapment efficiency, SI, in vitro mucoadhesion and drug release up to 24 h (t24). The microspheres formed were discrete, spherical and free flowing. The microspheres exhibited good mucoadhesive properties and also showed high percentage entrapment efficiency. The microspheres were able to sustain the drug release up to 24 h. Thus, the prepared enteric coated mucoadhesive microspheres may prove to be a potential controlled release formulation of DLX for oral administration.
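    A 3² full factorial design of this kind simply crosses every level of each factor with every level of the other, giving nine runs. A minimal enumeration sketch; the specific ratio and speed values below are hypothetical placeholders, since the paper's actual levels are not reproduced here:

    ```python
    from itertools import product

    def full_factorial(*factors):
        """Every combination of the supplied factor levels:
        a general full factorial design (here 3 x 3 = 9 runs)."""
        return list(product(*factors))

    # hypothetical coded levels for the two independent variables
    polymer_drug_ratio = ("1:1", "2:1", "3:1")   # X1, illustrative
    stirring_speed_rpm = (500, 1000, 1500)       # X2, illustrative
    runs = full_factorial(polymer_drug_ratio, stirring_speed_rpm)
    ```

    Each of the nine runs would then be prepared and its responses (particle size, entrapment efficiency, SI, t24) measured and regressed against X1 and X2.
    
    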

  14. Improvement of Storage Medium for Cultured Human Retinal Pigment Epithelial Cells Using Factorial Design.

    PubMed

    Pasovic, L; Utheim, T P; Reppe, S; Khan, A Z; Jackson, C J; Thiede, B; Berg, J P; Messelt, E B; Eidet, J R

    2018-04-09

    Storage of human retinal pigment epithelium (hRPE) can contribute to the advancement of cell-based RPE replacement therapies. The present study aimed to improve the quality of stored hRPE cultures by identifying storage medium additives that, alone or in combination, contribute to enhancing cell viability while preserving morphology and phenotype. hRPE cells were cultured in the presence of the silk protein sericin until pigmentation. Cells were then stored for 10 days in storage medium plus sericin and either one of 46 different additives. Individual effects of each additive on cell viability were assessed using epifluorescence microscopy. Factorial design identified promising additive combinations by extrapolating their individual effects. Supplementing the storage medium with sericin combined with adenosine, L-ascorbic acid and allopurinol resulted in the highest cell viability (98.6 ± 0.5%) after storage for three days, as measured by epifluorescence microscopy. Flow cytometry validated the findings. Proteomics identified 61 upregulated and 65 downregulated proteins in this storage group compared to the unstored control. Transmission electron microscopy demonstrated the presence of melanosomes after storage in the optimized medium. We conclude that the combination of adenosine, L-ascorbic acid, allopurinol and sericin in minimal essential medium preserves RPE pigmentation while maintaining cell viability during storage.

  15. Construction of social value or utility-based health indices: the usefulness of factorial experimental design plans.

    PubMed

    Cadman, D; Goldsmith, C

    1986-01-01

    Global indices, which aggregate multiple health or function attributes into a single summary indicator, are useful measures in health research. Two key issues must be addressed in the initial stages of index construction: from the universe of possible health and function attributes, which ones should be included in a new index? And how simple can the statistical model be that combines attributes into a single numeric index value? Factorial experimental designs were used in the initial stages of developing a function index for evaluating a program for the care of young handicapped children. Beginning with eight attributes judged important to the goals of the program by clinicians, social preference values for different function states were obtained from 32 parents of handicapped children and 32 members of the community. Using category rating methods, each rater scored 16 written multi-attribute case descriptions which contained information about a child's status on all eight attributes. Either a good or poor level of each function attribute, and age 3 or 5 years, were described in each case. Thus, 2⁸ = 256 different cases were rated. Two factorial design plans were selected and used to allocate case descriptions to raters. Analysis of variance determined that seven of the eight clinician-selected attributes were required in a social value based index for handicapped children. Most importantly, the subsequent steps of index construction could be greatly simplified by the finding that a simple additive statistical model without complex attribute interaction terms was adequate for the index. We conclude that factorial experimental designs are an efficient, feasible and powerful tool for the initial stages of constructing a multi-attribute health index.
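    The finding that an additive model suffices is exactly the statement that the interaction contrasts in the factorial analysis are negligible. A minimal sketch of contrast-based effect estimation for a two-level full factorial, run on synthetic, purely additive ratings rather than the study's data:

    ```python
    from itertools import product
    from math import prod

    def factorial_effects(response):
        """Effect estimates for a 2-level full factorial.
        `response` maps each run (a tuple of -1/+1 levels) to its observed
        value. Returns {tuple-of-factor-indices: effect}; singleton keys are
        main effects, larger keys are interactions."""
        runs = list(response)
        k = len(runs[0])
        n = len(runs)
        effects = {}
        for mask in product((0, 1), repeat=k):
            idx = tuple(i for i, m in enumerate(mask) if m)
            if not idx:
                continue  # skip the grand mean
            contrast = sum(response[r] * prod(r[i] for i in idx) for r in runs)
            effects[idx] = 2 * contrast / n
        return effects

    # purely additive synthetic ratings: every interaction effect vanishes
    resp = {r: 10 + 3 * r[0] - 2 * r[1] + 1 * r[2]
            for r in product((-1, 1), repeat=3)}
    eff = factorial_effects(resp)
    ```

    With real ratings, near-zero interaction effects are what licenses the simple additive index the authors adopted.
    
    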

  16. Enhanced diesel fuel fraction from waste high-density polyethylene and heavy gas oil pyrolysis using factorial design methodology.

    PubMed

    Joppert, Ney; da Silva, Alexsandro Araujo; da Costa Marques, Mônica Regina

    2015-02-01

    Factorial Design Methodology (FDM) was applied to enhance the diesel fuel fraction (C9-C23) obtained from waste high-density polyethylene (HDPE) and Heavy Gas Oil (HGO) through co-pyrolysis. FDM was used to optimize the following reaction parameters: temperature, catalyst amount and HDPE amount. The HGO amount was constant (2.00 g) in all experiments. The model optimum conditions were determined to be a temperature of 550 °C, HDPE = 0.20 g and no FCC catalyst. Under such conditions, 94% of pyrolytic oil was recovered, of which the diesel fuel fraction was 93% (87% diesel fuel fraction yield), no residue was produced and 6% of noncondensable gaseous/volatile fraction was obtained. Seeking to reduce the cost associated with high process temperatures, the impact of using a higher catalyst content (25%) with a lower temperature (500 °C) was investigated. Under these conditions, 88% of pyrolytic oil was recovered (the diesel fuel fraction yield was also 87%) as well as 12% of the noncondensable gaseous/volatile fraction. No waste was produced under these conditions, making this an environmentally friendly approach for recycling the waste plastic. This paper demonstrated the usefulness of FDM in predicting and optimizing diesel fuel fraction yield with a great reduction in the number of experiments. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. A Classroom of Polymer Factories.

    ERIC Educational Resources Information Center

    Harris, Mary E.; Van Natta, Sandra

    1998-01-01

    Provides an activity in which students create small classroom factories and investigate several aspects of production including design, engineering, quality control, waste management, packaging, shipment, and communication. (DDR)

  18. Optimization of pectinase immobilization on grafted alginate-agar gel beads by 2⁴ full factorial CCD and thermodynamic profiling for evaluating of operational covalent immobilization.

    PubMed

    Abdel Wahab, Walaa A; Karam, Eman A; Hassan, Mohamed E; Kansoh, Amany L; Esawy, Mona A; Awad, Ghada E A

    2018-07-01

    Pectinase produced by a honey-derived fungus, Aspergillus awamori KX943614, was covalently immobilized onto gel beads made of alginate and agar. Polyethyleneimine, glutaraldehyde, loading time and enzyme units were optimized by a 2⁴ full factorial central composite design (CCD). The immobilization process broadened the optimal working pH of the free pectinase from 5 to a range of pH 4.5-5.5 and raised the optimum operational temperature from 55°C to 60°C, which is favored for reducing microbial contamination of the enzyme. The thermodynamic studies showed enhanced thermal stability at high temperature for the immobilized formula. Moreover, an increase in half-lives and D-values was achieved. The thermodynamic studies proved that immobilization of pectinase produced a remarkable increase in enthalpy and free energy because of enzyme stability enhancement. The reusability test revealed that 60% of pectinase's original activity was retained after 8 successive cycles. This gel formula may be convenient for the immobilization of other industrial enzymes. Copyright © 2018 Elsevier B.V. All rights reserved.
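    The reported gains in half-life and D-value follow directly from first-order deactivation kinetics: both are fixed multiples of 1/k_d, so a smaller deactivation rate constant lengthens both proportionally. A small sketch; the rate constants below are hypothetical illustrations, not values from the study:

    ```python
    from math import log

    def deactivation_metrics(k_d):
        """Half-life and decimal-reduction time (D-value) for first-order
        thermal deactivation with rate constant k_d (per minute):
        t_half = ln(2) / k_d,  D = ln(10) / k_d."""
        return {"half_life_min": log(2) / k_d,
                "D_value_min": log(10) / k_d}

    # hypothetical rate constants at 60 deg C: free vs. immobilized pectinase
    free_enzyme = deactivation_metrics(0.020)
    immobilized = deactivation_metrics(0.008)
    ```

    Whatever the actual constants, the immobilized enzyme's smaller k_d yields proportionally larger half-life and D-value, matching the trend the authors report.
    
    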

  19. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  20. Fractional Factorial Design to Investigate Stromal Cell Regulation of Macrophage Plasticity

    PubMed Central

    Barminko, Jeffrey A.; Nativ, Nir I.; Schloss, Rene; Yarmush, Martin L.

    2018-01-01

    Understanding the regulatory networks which control specific macrophage phenotypes is essential in identifying novel targets to correct macrophage mediated clinical disorders, often accompanied by inflammatory events. Since mesenchymal stromal cells (MSCs) have been shown to play key roles in regulating immune functions predominantly via a large number of secreted products, we used a fractional factorial approach to streamline experimental evaluation of MSC mediated inflammatory macrophage regulation. Our macrophage reprogramming metrics, human bone marrow MSC attenuation of macrophage pro-inflammatory M1 TNFα secretion and simultaneous enhanced expression of the M2 macrophage marker, CD206, were used as analysis endpoints. Objective evaluation of a panel of MSC secreted mediators indicated that PGE2 alone was sufficient in facilitating macrophage reprogramming, while IL4 only provided partial reprogramming. Inhibiting stromal cell PGE2 secretion with Indomethacin, reversed the macrophage reprogramming effect. PGE2 reprogramming was mediated through the EP4 receptor and indirectly through the CREB signaling pathway as GSK3 specific inhibitors induced M1 macrophages to express CD206. This reprogramming pathway functioned independently from the M1 suppression pathway, as neither CREB nor GSK3 inhibition reversed PGE2 TNF-α secretion attenuation. In conclusion, fractional factorial experimental design identified stromal derived PGE2 as the factor most important in facilitating macrophage reprogramming, albeit via two unique pathways. PMID:24891120
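    A fractional factorial of the kind used here trades runs for aliasing: a half-fraction sets one factor equal to an interaction of the others via a defining relation, halving the number of experiments. A minimal sketch under the assumption of a 2⁴⁻¹ screening layout; the study's actual factor assignment of MSC mediators is not reproduced here:

    ```python
    from itertools import product
    from math import prod

    def half_fraction(k, generator):
        """2^(k-1) fractional factorial: run the full 2^(k-1) design in the
        first k-1 factors, then set the last factor equal to the interaction
        of the factors listed in `generator` (the defining relation)."""
        runs = []
        for base in product((-1, 1), repeat=k - 1):
            runs.append(base + (prod(base[i] for i in generator),))
        return runs

    # 2^(4-1) design with D = ABC: 8 runs instead of 16 (resolution IV)
    design = half_fraction(4, generator=(0, 1, 2))
    ```

    The price of the saving is that factor D's main effect is confounded with the ABC interaction, which is acceptable when high-order interactions are assumed negligible, as in a mediator screen.
    
    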

  1. Optimal design of isotope labeling experiments.

    PubMed

    Yang, Hong; Mandy, Dominic E; Libourel, Igor G L

    2014-01-01

    Stable isotope labeling experiments (ILE) constitute a powerful methodology for estimating metabolic fluxes. An optimal label design for such an experiment is necessary to maximize the precision with which fluxes can be determined. But often, precision gained in the determination of one flux comes at the expense of the precision of other fluxes, and an appropriate label design therefore foremost depends on the question the investigator wants to address. One could liken ILE to shadows that metabolism casts on products; optimal label design is then the placement of the lamp, creating clear shadows for some parts of metabolism and obscuring others. An optimal isotope label design is influenced by: (1) the network structure; (2) the true flux values; (3) the available label measurements; and (4) commercially available substrates. The first two aspects are dictated by nature and constrain any optimal design. The second two aspects are suitable design parameters. To create an optimal label design, an explicit optimization criterion needs to be formulated. This usually is a property of the flux covariance matrix, which can be augmented by weighting label substrate cost. An optimal design is found by using such a criterion as an objective function for an optimizer. This chapter uses a simple elementary metabolite units (EMU) representation of the TCA cycle to illustrate the process of experimental design of isotope labeled substrates.
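    The optimization criterion mentioned above is typically a scalar summary of the covariance (equivalently, information) matrix; D-optimality maximizes the information determinant. A toy sketch for a straight-line model, showing why such criteria push design points toward informative extremes; the model and candidate designs are illustrative only, far simpler than a flux network:

    ```python
    def d_criterion(xs):
        """Determinant of the information matrix X'X for the straight-line
        model y = b0 + b1*x. Maximising it (D-optimality) minimises the
        volume of the parameter confidence ellipsoid."""
        n = len(xs)
        sx = sum(xs)
        sxx = sum(x * x for x in xs)
        return n * sxx - sx * sx    # det([[n, sx], [sx, sxx]])

    # with four runs on [0, 1], pushing runs to the extremes wins
    evenly_spread = [0.0, 1 / 3, 2 / 3, 1.0]
    at_extremes = [0.0, 0.0, 1.0, 1.0]
    ```

    In ILE design the same idea applies with the flux covariance matrix in place of this 2x2 information matrix, and substrate cost folded into the objective.
    
    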

  2. Evaluation and simultaneous optimization of bio-hydrogen production using 3² factorial design and the desirability function

    NASA Astrophysics Data System (ADS)

    Cuetos, M. J.; Gómez, X.; Escapa, A.; Morán, A.

    Various mixtures incorporating a simulated organic fraction of municipal solid wastes and blood from a poultry slaughterhouse were used as substrate in a dark fermentation process for the production of hydrogen. The individual and interactive effects of hydraulic retention time (HRT), solid content in the feed (%TS) and proportion of residues (%Blood) on bio-hydrogen production were studied in this work. A central composite design and response surface methodology were employed to determine the optimum conditions for the hydrogen production process. Experimental results were approximated to a second-order model with the principal effects of the three factors considered being statistically significant (P < 0.05). The production of hydrogen obtained from the experimental point at conditions close to best operability was 0.97 L Lr⁻¹ day⁻¹. Moreover, a desirability function was employed in order to optimize the process when a second, methanogenic, phase is coupled with it. In this last case, the optimum conditions lead to a reduction in the production of hydrogen when the optimization process involves the maximization of intermediary products.
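    Simultaneous optimization with a desirability function works by scoring each response on a 0-1 scale and combining the scores with a geometric mean. A minimal Derringer-style sketch; the targets and the 0.97 L Lr⁻¹ day⁻¹ figure are used illustratively, with hypothetical low/high bounds:

    ```python
    def d_larger_is_better(y, low, high, weight=1.0):
        """Derringer-Suich desirability for a response to be maximised:
        0 below `low`, 1 above `high`, a power ramp in between."""
        if y <= low:
            return 0.0
        if y >= high:
            return 1.0
        return ((y - low) / (high - low)) ** weight

    def overall_desirability(ds):
        """Composite desirability: geometric mean of individual scores, so
        any response scoring zero vetoes the whole operating point."""
        p = 1.0
        for d in ds:
            p *= d
        return p ** (1.0 / len(ds))

    # hypothetical bounds for H2 productivity (L per L reactor per day)
    d_h2 = d_larger_is_better(0.97, low=0.2, high=1.2)
    ```

    Coupling a methanogenic phase amounts to adding a second desirability (for intermediary products) to the geometric mean, which is why the joint optimum sacrifices some hydrogen production.
    
    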

  3. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  4. New generation electron-positron factories

    NASA Astrophysics Data System (ADS)

    Zobov, Mikhail

    2011-09-01

    In 2010 we celebrate 50 years since the commissioning of the first particle storage ring, ADA, in Frascati (Italy), which also became the first electron-positron collider in 1964. Since then, particle colliders have increased their intensity, luminosity and energy by several orders of magnitude. Because of their high stored beam currents and high rate of useful physics events (luminosity), the modern electron-positron colliders are called "factories". However, fundamental physics requires luminosities 1-2 orders of magnitude higher than those presently achieved. This task can be accomplished by designing a new generation of factories exploiting the potential of a new collision scheme based on the Crab Waist (CW) collision concept, recently proposed and successfully tested at Frascati. In this paper we discuss the performance and limitations of the present generation of electron-positron factories and give a brief overview of new ideas and collision schemes proposed for further collider luminosity increase. In more detail we describe the CW collision concept and the results of the crab waist collision tests in DAϕNE, the Italian ϕ-factory. Finally, we briefly describe the most advanced projects for next-generation factories based on the CW concept: SuperB in Italy, SuperKEKB in Japan and SuperC-Tau in Russia.

  5. Aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Murman, E. M.; Chapman, G. T.

    1983-01-01

    The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.

  6. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.
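    As a much-reduced illustration of the kind of problem OPTNAST solves (minimum weight subject to stress constraints), consider sizing a single axially loaded steel member by a crude discrete search. The load, allowable stress and candidate sections below are hypothetical; a real NASTRAN-coupled optimization handles many elements, multiple load cases, and displacement constraints simultaneously:

    ```python
    def weight_kg(area_cm2, length_cm=100.0, density_kg_cm3=7.85e-3):
        """Mass of one steel truss member with the given cross-section."""
        return area_cm2 * length_cm * density_kg_cm3

    def stress_ok(area_cm2, load_n=50_000.0, allowable_mpa=250.0):
        """Axial stress constraint sigma = P / A <= allowable.
        (1 cm^2 = 100 mm^2, and 1 N/mm^2 = 1 MPa.)"""
        return load_n / (area_cm2 * 100.0) <= allowable_mpa

    # discrete search over candidate sections for the lightest feasible one
    candidates = [1.0, 1.5, 2.0, 2.5, 3.0]   # cm^2
    best = min((a for a in candidates if stress_ok(a)), key=weight_kg)
    ```

    With these numbers the 1.0 and 1.5 cm² sections violate the stress limit, so the lightest feasible design is the 2.0 cm² member; gradient-based optimizers like those linked by OPTNAST find the analogous continuous optimum far more efficiently than enumeration.
    
    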

  7. The Old Factory

    ERIC Educational Resources Information Center

    Roman, Harry T.

    2007-01-01

    Technology education is not just about things, systems, and processes. It can also be about history, people, technological change, and impacts on society. In this design challenge, one uses technology education principles and ideas to convert an old factory into a museum and learning center. The challenge with this historical resource is to think…

  8. Determination of the clean-up efficiency of the solid-phase extraction of rosemary extracts: Application of full-factorial design in hyphenation with Gaussian peak fit function.

    PubMed

    Meischl, Florian; Kirchler, Christian Günter; Jäger, Michael Andreas; Huck, Christian Wolfgang; Rainer, Matthias

    2018-02-01

    We present a novel method for the quantitative determination of the clean-up efficiency to provide a calculated parameter for peak purity through iterative fitting in conjunction with design of experiments. Rosemary extracts were used and analyzed before and after solid-phase extraction using a self-fabricated mixed-mode sorbent based on poly(N-vinylimidazole/ethylene glycol dimethacrylate). Optimization was performed by variation of washing steps using a full three-level factorial design and response surface methodology. Separation efficiency of rosmarinic acid from interfering compounds was calculated using an iterative fit of Gaussian-like signals and quantifications were performed by the separate integration of the two interfering peak areas. Results and recoveries were analyzed using Design-Expert® software and revealed significant differences between the washing steps. Optimized parameters were considered and used for all further experiments. Furthermore, the solid-phase extraction procedure was tested and compared with commercial available sorbents. In contrast to generic protocols of the manufacturers, the optimized procedure showed excellent recoveries and clean-up rates for the polymer with ion exchange properties. Finally, rosemary extracts from different manufacturing areas and application types were studied to verify the developed method for its applicability. The cleaned-up extracts were analyzed by liquid chromatography with tandem mass spectrometry for detailed compound evaluation to exclude any interference from coeluting molecules. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
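    Once the Gaussian-like signals are fitted, "separate integration of the two interfering peak areas" reduces to the closed-form Gaussian area amp·sigma·sqrt(2·pi). A sketch with hypothetical fitted parameters (not values from the study), showing how a peak-purity or clean-up metric can be derived from the two areas:

    ```python
    from math import sqrt, pi, exp

    def gaussian(x, amp, mu, sigma):
        """One Gaussian-like chromatographic signal."""
        return amp * exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

    def peak_area(amp, sigma):
        """Closed-form area under a Gaussian peak: amp * sigma * sqrt(2*pi)."""
        return amp * sigma * sqrt(2.0 * pi)

    # hypothetical fitted parameters: analyte peak vs. co-eluting interferent
    analyte_amp, analyte_mu, analyte_sigma = 100.0, 5.00, 0.30
    interf_amp, interf_mu, interf_sigma = 20.0, 5.40, 0.25

    a_analyte = peak_area(analyte_amp, analyte_sigma)
    a_interf = peak_area(interf_amp, interf_sigma)
    purity = a_analyte / (a_analyte + a_interf)   # fraction of total area
    ```

    Comparing this purity fraction before and after solid-phase extraction gives a quantitative clean-up efficiency, which is the role the iterative fit plays in the authors' workflow.
    
    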

  9. Design and statistical optimization of glipizide loaded lipospheres using response surface methodology.

    PubMed

    Shivakumar, Hagalavadi Nanjappa; Patel, Pragnesh Bharat; Desai, Bapusaheb Gangadhar; Ashok, Purnima; Arulmozhi, Sinnathambi

    2007-09-01

    A 3² factorial design was employed to produce glipizide lipospheres by the emulsification phase separation technique using paraffin wax and stearic acid as retardants. The effect of critical formulation variables, namely levels of paraffin wax (X1) and proportion of stearic acid in the wax (X2) on geometric mean diameter (dg), percent encapsulation efficiency (% EE), release at the end of 12 h (rel12) and time taken for 50% of drug release (t50), were evaluated using the F-test. Mathematical models containing only the significant terms were generated for each response parameter using the multiple linear regression analysis (MLRA) and analysis of variance (ANOVA). Both formulation variables studied exerted a significant influence (p < 0.05) on the response parameters. Numerical optimization using the desirability approach was employed to develop an optimized formulation by setting constraints on the dependent and independent variables. The experimental values of dg, % EE, rel12 and t50 values for the optimized formulation were found to be 57.54 ± 1.38 μm, 86.28 ± 1.32%, 77.23 ± 2.78% and 5.60 ± 0.32 h, respectively, which were in close agreement with those predicted by the mathematical models. The drug release from lipospheres followed first-order kinetics and was characterized by the Higuchi diffusion model. The optimized liposphere formulation developed was found to produce sustained anti-diabetic activity following oral administration in rats.

  10. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; design problems which include discrete variables therefore cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and material choices must be integer values or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GA) uses a search procedure which is fundamentally different from that of gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than finding the best solution. GA are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GA is attractive since it uses only objective function values in the search process, so gradient calculations are avoided. Hence, GA are able to deal with discrete variables. Studies report success in the use of GA for aircraft design optimization studies, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
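    The mechanics described above (a population of candidate designs, fitness-based selection, crossover and mutation) can be sketched in a few lines. This toy "one-max" objective stands in for a discrete design problem; it is not an aerospace model, and the operator choices (truncation selection, one-point crossover) are one simple variant among many:

    ```python
    import random

    def genetic_search(fitness, n_bits=16, pop_size=20, generations=60, seed=7):
        """Minimal genetic algorithm over bit-strings: truncation selection,
        one-point crossover, occasional bit-flip mutation, with elitism so
        the best design found is never lost."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[: pop_size // 2]          # keep the fitter half
            children = [ranked[0][:]]                  # elitism
            while len(children) < pop_size:
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]              # one-point crossover
                if rng.random() < 0.2:
                    child[rng.randrange(n_bits)] ^= 1  # bit-flip mutation
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    # maximise the number of ones: a discrete objective with no gradient
    best = genetic_search(sum)
    ```

    Note the search uses only objective values, never derivatives, which is exactly why GAs handle integer choices such as engine count or material selection.
    
    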

  11. The Japanese Positron Factory

    NASA Astrophysics Data System (ADS)

    Okada, S.; Sunaga, H.; Kaneko, H.; Takizawa, H.; Kawasuso, A.; Yotsumoto, K.; Tanaka, R.

    1999-06-01

    The Positron Factory has been planned at the Japan Atomic Energy Research Institute (JAERI). The factory is expected to produce linac-based monoenergetic positron beams with the world's highest intensities of more than 10¹⁰ e⁺/sec, which will be applied to R&D in materials science, biotechnology and basic physics & chemistry. In this article, results of the design studies are demonstrated for the following essential components of the facilities: 1) Conceptual design of a high-power electron linac with 100 MeV beam energy and 100 kW averaged beam power, 2) Performance tests of the RF window in the high-power klystron and of the electron beam window, 3) Development of a self-driven rotating electron-to-positron converter and its performance tests, 4) Proposal of a multi-channel beam generation system for monoenergetic positrons, with a series of moderator assemblies based on a newly developed Monte Carlo simulation and the demonstrative experiment, 5) Proposal of highly efficient moderator structures, 6) Conceptual design of a local shield to suppress the surrounding radiation and activation levels.

  12. Flat-plate photovoltaic array design optimization

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1980-01-01

    An analysis is presented which integrates the results of specific studies in the areas of photovoltaic structural design optimization, optimization of array series/parallel circuit design, thermal design optimization, and optimization of environmental protection features. The analysis is based on minimizing the total photovoltaic system life-cycle energy cost including repair and replacement of failed cells and modules. This approach is shown to be a useful technique for array optimization, particularly when time-dependent parameters such as array degradation and maintenance are involved.

  13. Design Optimization and In Vitro-In Vivo Evaluation of Orally Dissolving Strips of Clobazam

    PubMed Central

    Bala, Rajni; Khanna, Sushil; Pawar, Pravin

    2014-01-01

    Clobazam orally dissolving strips were prepared by the solvent casting method. A full 3² factorial design was applied for optimization, using different concentrations of film-forming polymer and disintegrating agent as independent variables and disintegration time, % cumulative drug release, and tensile strength as dependent variables. In addition, the prepared films were evaluated for surface pH, folding endurance, and content uniformity. The optimized film formulation showing the maximum in vitro drug release, satisfactory in vitro disintegration time, and tensile strength was selected for a bioavailability study and compared with a reference marketed product (Frisium 5 tablets) in rabbits. Formulation F6, selected by the Design-Expert software, exhibited a DT of 24 sec, TS of 2.85 N/cm², and in vitro drug release of 96.6%. Statistical evaluation revealed no significant difference between the bioavailability parameters of the test film (F6) and the reference product. The mean ratio values (test/reference) of Cmax (95.87%), tmax (71.42%), AUC0-t (98.125%), and AUC0-∞ (99.213%) indicated that the two formulae exhibited comparable plasma level-time profiles. PMID:25328709

  14. Engineering Robustness of Microbial Cell Factories.

    PubMed

    Gong, Zhiwei; Nielsen, Jens; Zhou, Yongjin J

    2017-10-01

    Metabolic engineering and synthetic biology offer great prospects in developing microbial cell factories capable of converting renewable feedstocks into fuels, chemicals, food ingredients, and pharmaceuticals. However, prohibitively low production rates and mass concentrations remain the major hurdles in industrial processes even when the biosynthetic pathways are comprehensively optimized. These limitations are caused by a variety of factors hostile to host cell survival, such as harsh industrial conditions, fermentation inhibitors from biomass hydrolysates, and toxic compounds including metabolic intermediates and valuable target products. Therefore, engineered microbes with robust phenotypes are essential for achieving higher yield and productivity. In this review, recent advances in engineering the robustness and tolerance of cell factories are described to cope with these issues, and novel strategies with great potential to enhance the robustness of cell factories are briefly introduced, including metabolic pathway balancing, transporter engineering, and adaptive laboratory evolution. This review also highlights the integration of advanced systems and synthetic biology principles toward engineering the harmony of overall cell function, rather than specific pathways or enzymes alone. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Molecular Breeding to Create Optimized Crops: From Genetic Manipulation to Potential Applications in Plant Factories.

    PubMed

    Hiwasa-Tanase, Kyoko; Ezura, Hiroshi

    2016-01-01

    Crop cultivation in controlled environment plant factories offers great potential to stabilize the yield and quality of agricultural products. However, many crops are currently unsuited to these environments, particularly closed cultivation systems, due to space limitations, low light intensity, high implementation costs, and high energy requirements. A major barrier to closed system cultivation is the high running cost, which necessitates the use of high-margin crops for economic viability. High-value crops include those with enhanced nutritional value or containing additional functional components for pharmaceutical production or with the aim of providing health benefits. In addition, it is important to develop cultivars equipped with growth parameters that are suitable for closed cultivation. Small plant size is of particular importance due to the limited cultivation space. Other advantageous traits are short production cycle, the ability to grow under low light, and high nutriculture availability. Cost-effectiveness is improved from the use of cultivars that are specifically optimized for closed system cultivation. This review describes the features of closed cultivation systems and the potential application of molecular breeding to create crops that are optimized for cost-effectiveness and productivity in closed cultivation systems.

  16. Molecular Breeding to Create Optimized Crops: From Genetic Manipulation to Potential Applications in Plant Factories

    PubMed Central

    Hiwasa-Tanase, Kyoko; Ezura, Hiroshi

    2016-01-01

    Crop cultivation in controlled environment plant factories offers great potential to stabilize the yield and quality of agricultural products. However, many crops are currently unsuited to these environments, particularly closed cultivation systems, due to space limitations, low light intensity, high implementation costs, and high energy requirements. A major barrier to closed system cultivation is the high running cost, which necessitates the use of high-margin crops for economic viability. High-value crops include those with enhanced nutritional value or containing additional functional components for pharmaceutical production or with the aim of providing health benefits. In addition, it is important to develop cultivars equipped with growth parameters that are suitable for closed cultivation. Small plant size is of particular importance due to the limited cultivation space. Other advantageous traits are short production cycle, the ability to grow under low light, and high nutriculture availability. Cost-effectiveness is improved from the use of cultivars that are specifically optimized for closed system cultivation. This review describes the features of closed cultivation systems and the potential application of molecular breeding to create crops that are optimized for cost-effectiveness and productivity in closed cultivation systems. PMID:27200016

  17. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation-based design plays an important role in designing almost any kind of automotive, aerospace, and consumer product under these competitive conditions. Single-discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability-based design optimization in a simulation-based design environment. Original contributions of this research are the development of a novel, efficient, and robust unilevel methodology for reliability-based design optimization; the development of an innovative decoupled reliability-based design optimization methodology; the application of homotopy techniques in the unilevel methodology; and the development of a new framework for reliability-based design optimization under epistemic uncertainty. The unilevel methodology is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% compared with the nested approach. The decoupled methodology is an approximate technique to obtain consistent reliable designs at lower computational expense; test problems show that it is computationally efficient compared with the nested approach. A framework for performing reliability-based design optimization under epistemic uncertainty is also developed.

  18. Optimization of 3D Field Design

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal ``arrays'' containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors or on any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.

  19. Engineering microbial cell factories for the production of plant natural products: from design principles to industrial-scale production.

    PubMed

    Liu, Xiaonan; Ding, Wentao; Jiang, Huifeng

    2017-07-19

    Plant natural products (PNPs) are widely used as pharmaceuticals, nutraceuticals, seasonings, pigments, etc., with a huge commercial value on the global market. However, most of these PNPs are still being extracted from plants. A resource-conserving and environment-friendly synthesis route for PNPs that utilizes microbial cell factories has attracted increasing attention since the 1940s. However, at present only a handful of PNPs are being produced by microbial cell factories at an industrial scale, and there are still many challenges in their large-scale application. One of the challenges is that most biosynthetic pathways of PNPs are still unknown, which largely limits the number of candidate PNPs for heterologous microbial production. Another challenge is that the metabolic fluxes toward the target products in microbial hosts are often hindered by poor precursor supply, low catalytic activity of enzymes and obstructed product transport. Consequently, despite intensive studies on the metabolic engineering of microbial hosts, the fermentation costs of most heterologously produced PNPs are still too high for industrial-scale production. In this paper, we review several aspects of PNP production in microbial cell factories, including important design principles and recent progress in pathway mining and metabolic engineering. In addition, implemented cases of industrial-scale production of PNPs in microbial cell factories are also highlighted.

  20. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  1. A new multiresponse optimization approach in combination with a D-Optimal experimental design for the determination of biogenic amines in fish by HPLC-FLD.

    PubMed

    Herrero, A; Sanllorente, S; Reguera, C; Ortiz, M C; Sarabia, L A

    2016-11-16

    A new strategy for multiresponse optimization, in conjunction with a D-optimal design for simultaneously optimizing a large number of experimental factors, is proposed. The procedure is applied to the determination of biogenic amines (histamine, putrescine, cadaverine, tyramine, tryptamine, 2-phenylethylamine, spermine and spermidine) in swordfish by HPLC-FLD after extraction with an acid and subsequent derivatization with dansyl chloride. First, the extraction from the solid matrix and the derivatization of the extract are optimized. Ten experimental factors involved in both stages are studied, seven of them at two levels and the remaining three at three levels; the use of a D-optimal design makes it possible to optimize the ten experimental variables while reducing the experimental effort needed by a factor of 67, yet guaranteeing the quality of the estimates. A model with 19 coefficients, which includes those corresponding to the main effects and two possible interactions, is fitted to the peak area of each amine. The validated models are then used to predict the response (peak area) for the 3456 experiments of the complete factorial design. The variability among peak areas ranges from 13.5 for 2-phenylethylamine to 122.5 for spermine, which shows the large and response-specific effect of the pretreatment. Percentiles are then calculated from the predicted peak areas of each amine. As the experimental conditions are in conflict, the optimal solution for the multiresponse optimization is chosen from among those which have all the responses greater than a certain percentile for all the amines. The developed procedure reaches decision limits down to 2.5 μg L⁻¹ for cadaverine and 497 μg L⁻¹ for histamine in solvent, and 0.07 mg kg⁻¹ and 14.81 mg kg⁻¹ in fish, respectively (probability of false positive equal to 0.05). Copyright © 2016 Elsevier B.V. All rights reserved.
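    The percentile-based selection step described above can be sketched with synthetic data; here a random matrix stands in for the model-predicted peak areas of the 3456 candidate experiments, and the 50th-percentile threshold is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for model predictions: 3456 candidate experiments x 8 amines
predicted = rng.random((3456, 8))

# Per-amine threshold: the 50th percentile of each response column
thresholds = np.percentile(predicted, 50, axis=0)

# Keep only conditions whose predicted responses exceed the threshold
# simultaneously for all eight amines
feasible = np.all(predicted >= thresholds, axis=1)
candidates = np.flatnonzero(feasible)
print(f"{candidates.size} of 3456 conditions pass for every amine")
```

    A condition could then be chosen from this feasible set, e.g. by raising the percentile until only a few conditions remain.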

  2. Formulation of wax oxybenzone microparticles using a factorial approach.

    PubMed

    Gomaa, Y A; Darwish, I A; Boraei, N A; El-Khordagui, L K

    2010-01-01

    Oxybenzone wax microparticles (MPs) were prepared by the hydrophobic congealable disperse phase method. The formulation of oxybenzone-loaded MPs was optimized using a 2⁴ experimental design. Factorial analysis indicated that the main MP characteristics were influenced by initial drug loading, emulsification speed, emulsifier concentration and hydrophilic-lipophilic balance. MPs were spherical, with sizes in the 50.5–88.1 μm range, drug contents of 17.8–38.9 mg/100 mg MPs, and 33.1–87.2% oxybenzone release in 1 h. A wide range of sunscreen delivery systems suitable for different formulation purposes was generated, which may contribute to the advanced formulation of sunscreen products with improved performance.

  3. Optimizing Partner Notification Programs for Men Who Have Sex with Men: Factorial Survey Results from South China

    PubMed Central

    Tucker, Joseph D.; Chakraborty, Hrishikesh; Cohen, Myron S.; Chen, Xiang-Sheng

    2016-01-01

    Background Syphilis is prevalent among men who have sex with men (MSM) in China. Syphilis partner notification (PN) programs targeting MSM have been considered one of the effective strategies for prevention and control of the infection in this population. We examined willingness and preferences for PN among MSM to measure feasibility and optimize uptake. Methods Participation in a syphilis PN program was measured using a factorial survey from the perspectives of both the index patient and the partner. Respondents were recruited from April-July 2011 using convenience sampling at two sites—an MSM sexually transmitted disease (STD) clinic and an MSM community-based organization (CBO). Respondents first evaluated three factorial survey vignettes to measure probability of participation and then completed an anonymous sociodemographic questionnaire. A two-level mixed linear model was fitted for the factorial survey analysis. Results Among 372 respondents with a mean age (± SD) of 28.5 (± 6.0) years, most were single (82.0%) and closeted gay men (66.7%). The Internet was the most frequent place to search for sex. Few (31.2%) had legal names for casual partners, but most had instant messenger (86.5%) and mobile phone numbers (77.7%). The mean probability of participation in a syphilis PN program was 64.5% (± 32.4%) for index patients and 63.7% (± 32.6%) for partners. Referral of the partner to a private clinic or MSM CBO for follow-up decreased participation compared to the local Center for Disease Control and Prevention (CDC) or public STD clinic. Conclusions Enhanced PN services may be feasible among MSM in South China. Internet and mobile phone PN may reach partners untraceable by traditional PN. Referral of partners to the local CDC or public STD clinic may maximize PN participation. PMID:27462724

  4. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.

  5. Design of experiments with four-factors for a PEM fuel cell optimization

    NASA Astrophysics Data System (ADS)

    Olteanu, V.; Pǎtularu, L.; Popescu, C. L.; Popescu, M. O.; Crǎciunescu, A.

    2017-07-01

    Nowadays, many research efforts are devoted to the development of fuel cells, since they constitute a carbon-free electrical energy generator that can be used for stationary, mobile and portable applications. The maximum power delivered by a fuel cell depends on many factors, such as the height of the plates' channels, the stoichiometry level of the air flow, the air pressure at the cathode, and the actual operating electric current density. In this paper, a two-level, full four-factor factorial experiment has been designed in order to obtain the response surface that approximates the dependence of the maximum delivered power on the above-mentioned factors. The optimum set of fuel-cell factors that determines the maximum value of the delivered power was determined, and a comparison between simulated and measured optimal power versus current density characteristics is given.
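    A two-level full factorial in four factors with a first-order response surface fit can be sketched as follows (the factor effects and noise are synthetic, for illustration only):

```python
import numpy as np
from itertools import product

# Full 2^4 factorial in coded units (-1/+1) for four fuel-cell factors:
# channel height, air-flow stoichiometry, cathode air pressure, current density
runs = np.array(list(product([-1, 1], repeat=4)), dtype=float)  # 16 runs x 4

# Synthetic "measured" maximum power for each run (illustrative main effects)
rng = np.random.default_rng(1)
true_effects = np.array([2.0, 3.5, 1.5, -4.0])
power = 50.0 + runs @ true_effects + rng.normal(0.0, 0.1, size=16)

# First-order response surface: power ~ b0 + b1*x1 + b2*x2 + b3*x3 + b4*x4
X = np.column_stack([np.ones(16), runs])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
print("intercept and main effects:", np.round(coef, 2))
```

    Because the design is orthogonal, each main effect is estimated independently; the fitted surface can then be searched for the factor setting that maximizes power.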

  6. Optimization of the Ion Source-Mass Spectrometry Parameters in Non-Steroidal Anti-Inflammatory and Analgesic Pharmaceuticals Analysis by a Design of Experiments Approach.

    PubMed

    Paíga, Paula; Silva, Luís M S; Delerue-Matos, Cristina

    2016-10-01

    The flow rates of the drying and nebulizing gas, the heat block and desolvation line temperatures, and the interface voltage are potential electrospray ionization parameters, as they may enhance the sensitivity of the mass spectrometer. The conditions giving the highest sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen significant factors; it was concluded that interface voltage and nebulizing gas flow were the only factors that influence the signal intensity for all pharmaceuticals. This fractionated factorial design was projected onto a full 2² factorial design with center points. The lack-of-fit test proved to be significant. Then, a central composite face-centered design was conducted. Finally, a stepwise multiple linear regression and subsequently an optimization problem solving were carried out. Two main drug clusters were found concerning the signal intensities of all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both codified factors are set at level +1. It was also found that, for most of the pharmaceuticals, the interface voltage influences the instrument signal more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors influence the instrumental signal equally.

  7. Optimization of the Ion Source-Mass Spectrometry Parameters in Non-Steroidal Anti-Inflammatory and Analgesic Pharmaceuticals Analysis by a Design of Experiments Approach

    NASA Astrophysics Data System (ADS)

    Paíga, Paula; Silva, Luís M. S.; Delerue-Matos, Cristina

    2016-10-01

    The flow rates of the drying and nebulizing gas, the heat block and desolvation line temperatures, and the interface voltage are potential electrospray ionization parameters, as they may enhance the sensitivity of the mass spectrometer. The conditions giving the highest sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen significant factors; it was concluded that interface voltage and nebulizing gas flow were the only factors that influence the signal intensity for all pharmaceuticals. This fractionated factorial design was projected onto a full 2² factorial design with center points. The lack-of-fit test proved to be significant. Then, a central composite face-centered design was conducted. Finally, a stepwise multiple linear regression and subsequently an optimization problem solving were carried out. Two main drug clusters were found concerning the signal intensities of all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both codified factors are set at level +1. It was also found that, for most of the pharmaceuticals, the interface voltage influences the instrument signal more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors influence the instrumental signal equally.

  8. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction: it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that the G-optimal design cannot realistically be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching for it in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon, nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
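    The D-criterion mentioned above is cheap to evaluate for a candidate design. A minimal sketch for a linear trend model on a 2-D region (the candidate sites and design size are arbitrary choices for illustration):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Twelve candidate sites in the unit square; linear trend model f(x) = (1, x1, x2)
sites = rng.uniform(0.0, 1.0, size=(12, 2))

def model_row(x):
    return np.array([1.0, x[0], x[1]])

def d_criterion(idx):
    """Determinant of the information matrix X^T X of the chosen design."""
    X = np.array([model_row(sites[i]) for i in idx])
    return float(np.linalg.det(X.T @ X))

# Exhaustive search over all 5-point designs (792 candidates here; larger
# problems require exchange algorithms or other heuristics)
best = max(combinations(range(12), 5), key=d_criterion)
print("D-optimal 5-point design, site indices:", best)
```

    G-optimality would instead require the (empirical) kriging variance at every prediction site for every candidate design, which is exactly the cost the Pareto-frontier approach above tries to avoid.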

  9. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13], and a popular glucose regulation model [16, 19, 29]. PMID:21857762

  10. Modeling and multi-response optimization of pervaporation of organic aqueous solutions using desirability function approach.

    PubMed

    Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A

    2009-08-15

    The factorial design of experiments and the desirability function approach have been applied for multi-response optimization of a pervaporation separation process. Two organic aqueous solutions were considered as model mixtures: water/acetonitrile and water/ethanol. Two responses were employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of organic compound in the feed solution, and downstream pressure) on the pervaporation responses were investigated. The experiments were performed according to a 2³ full factorial experimental design. The factorial models were obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models were used to develop the overall desirability function, and overlap contour plots were presented to identify the desirability zone and determine the optimum point. The optimal operating conditions were found to be, for the water/acetonitrile mixture, a feed temperature of 55 °C, an initial concentration of 6.58% and a downstream pressure of 13.99 kPa, and for the water/ethanol mixture, a feed temperature of 55 °C, an initial concentration of 4.53% and a downstream pressure of 9.57 kPa. Under these optimum conditions, an improvement of both the total permeate flux and the selectivity was observed experimentally.
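    The desirability function approach used above combines several responses into a single score. A minimal sketch with one-sided, larger-is-better desirabilities (the response values and acceptable ranges are invented for illustration):

```python
import numpy as np

def desirability_larger_is_better(y, y_min, y_max, s=1.0):
    """One-sided desirability: 0 at/below y_min, 1 at/above y_max."""
    d = np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0)
    return d ** s

# Hypothetical responses at one candidate operating point
flux = 2.4          # total permeate flux (arbitrary units)
selectivity = 85.0  # organic selectivity

d_flux = desirability_larger_is_better(flux, 1.0, 3.0)
d_sel = desirability_larger_is_better(selectivity, 60.0, 100.0)

# Overall desirability: geometric mean of the individual desirabilities
D = float((d_flux * d_sel) ** 0.5)
print(round(D, 3))
```

    The optimum operating point is then the factor setting that maximizes D over the experimental region.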

  11. Design of experiments utilization to map the processing capabilities of a micro-spray dryer: particle design and throughput optimization in support of drug discovery.

    PubMed

    Ormes, James D; Zhang, Dan; Chen, Alex M; Hou, Shirley; Krueger, Davida; Nelson, Todd; Templeton, Allen

    2013-02-01

    There has been growing interest in amorphous solid dispersions for bioavailability enhancement in drug discovery. Spray drying, as shown in this study, is well suited to producing prototype amorphous dispersions in the Candidate Selection stage, where drug supply is limited. This investigation mapped the processing window of a micro-spray dryer to achieve desired particle characteristics and optimize throughput and yield. Effects of processing variables on the properties of hypromellose acetate succinate were evaluated by a fractional factorial design of experiments. Parameters studied include solid loading, atomization, nozzle size, and spray rate; response variables include particle size, morphology and yield. Unlike most other commercial small-scale spray dryers, the ProCepT was capable of producing particles with a relatively wide mean particle size, ca. 2-35 µm, allowing material properties to be tailored to various applications. In addition, an optimized throughput of 35 g/hour with a yield of 75-95% was achieved, which is sufficient to support studies from lead identification/lead optimization to early safety studies. A regression model was constructed to quantify the relationship between processing parameters and the response variables. The response surface curves provide a useful tool to design processing conditions, leading to a reduction in development time and drug usage in support of drug discovery.

  12. Engineering tolerance to industrially relevant stress factors in yeast cell factories.

    PubMed

    Deparis, Quinten; Claes, Arne; Foulquié-Moreno, Maria R; Thevelein, Johan M

    2017-06-01

    The main focus in development of yeast cell factories has generally been on establishing optimal activity of heterologous pathways and further metabolic engineering of the host strain to maximize product yield and titer. Adequate stress tolerance of the host strain has turned out to be another major challenge for obtaining economically viable performance in industrial production. Although general robustness is a universal requirement for industrial microorganisms, production of novel compounds using artificial metabolic pathways presents additional challenges. Many of the bio-based compounds desirable for production by cell factories are highly toxic to the host cells in the titers required for economic viability. Artificial metabolic pathways also turn out to be much more sensitive to stress factors than endogenous pathways, likely because regulation of the latter has been optimized in evolution in myriads of environmental conditions. We discuss different environmental and metabolic stress factors with high relevance for industrial utilization of yeast cell factories and the experimental approaches used to engineer higher stress tolerance. Improving stress tolerance in a predictable manner in yeast cell factories should facilitate their widespread utilization in the bio-based economy and extend the range of products successfully produced in large scale in a sustainable and economically profitable way. © FEMS 2017.

  13. Engineering tolerance to industrially relevant stress factors in yeast cell factories

    PubMed Central

    Deparis, Quinten; Claes, Arne; Foulquié-Moreno, Maria R.

    2017-01-01

    Abstract The main focus in development of yeast cell factories has generally been on establishing optimal activity of heterologous pathways and further metabolic engineering of the host strain to maximize product yield and titer. Adequate stress tolerance of the host strain has turned out to be another major challenge for obtaining economically viable performance in industrial production. Although general robustness is a universal requirement for industrial microorganisms, production of novel compounds using artificial metabolic pathways presents additional challenges. Many of the bio-based compounds desirable for production by cell factories are highly toxic to the host cells in the titers required for economic viability. Artificial metabolic pathways also turn out to be much more sensitive to stress factors than endogenous pathways, likely because regulation of the latter has been optimized in evolution in myriads of environmental conditions. We discuss different environmental and metabolic stress factors with high relevance for industrial utilization of yeast cell factories and the experimental approaches used to engineer higher stress tolerance. Improving stress tolerance in a predictable manner in yeast cell factories should facilitate their widespread utilization in the bio-based economy and extend the range of products successfully produced in large scale in a sustainable and economically profitable way. PMID:28586408

  14. Quantum computation with realistic magic-state factories

    NASA Astrophysics Data System (ADS)

    O'Gorman, Joe; Campbell, Earl T.

    2017-03-01

    Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10⁻⁴ error gates and the availability of long-range interactions.

  15. A quality-by-design approach to risk reduction and optimization for human embryonic stem cell cryopreservation processes.

    PubMed

    Mitchell, Peter D; Ratcliffe, Elizabeth; Hourd, Paul; Williams, David J; Thomas, Robert J

    2014-12-01

    It is well documented that cryopreservation and resuscitation of human embryonic stem cells (hESCs) is complex and ill-defined, and often suffers poor cell recovery and increased levels of undesirable cell differentiation. In this study we have applied Quality-by-Design (QbD) concepts to the critical processes of slow-freeze cryopreservation and resuscitation of hESC colony cultures. Optimized subprocesses were linked together to deliver a controlled complete process. We have demonstrated a rapid, high-throughput, and stable system for measurement of cell adherence and viability as robust markers of in-process and postrecovery cell state. We observed that measurement of adherence and viability of adhered cells at 1 h postseeding was predictive of cell proliferative ability up to 96 h in this system. Application of factorial design defined the operating spaces for cryopreservation and resuscitation, critically linking the performance of these two processes. Optimization of both processes resulted in enhanced reattachment and post-thaw viability, resulting in substantially greater recovery of cryopreserved, pluripotent cell colonies. This study demonstrates the importance of QbD concepts and tools for rapid, robust, and low-risk process design that can inform manufacturing controls and logistics.

  16. Synthesis and factorial design applied to a novel chitosan/sodium polyphosphate nanoparticles via ionotropic gelation as an RGD delivery system.

    PubMed

    Kiilll, Charlene Priscila; Barud, Hernane da Silva; Santagneli, Sílvia Helena; Ribeiro, Sidney José Lima; Silva, Amélia M; Tercjak, Agnieszka; Gutierrez, Junkal; Pironi, Andressa Maria; Gremião, Maria Palmira Daflon

    2017-02-10

    Chitosan nanoparticles have been extensively studied for both drug and protein/peptide delivery. The aim of this study was to develop an optimized chitosan nanoparticle, by the ionotropic gelation method, using a 3^2 full factorial design with a novel polyanion, sodium polyphosphate, well known under the trade name Graham salt. The effects of the formulation parameters on particle size, zeta potential, morphology and association efficiency were investigated. The optimized nanoparticles showed an estimated size of 166.20±1.95 nm, a zeta potential of 38.7±1.2 mV and an association efficiency of 97.0±2.4%. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) revealed spherical nanoparticles with uniform size. Molecular interactions among the components of the nanoparticles and the peptide were evaluated by Fourier Transform Infrared (FTIR) spectroscopy and Differential Scanning Calorimetry (DSC). The obtained results indicated that the developed nanoparticles were highly biocompatible, showing no or low toxicity in the human cancer cell line (Caco-2). In conclusion, this work provides parameters that contribute to the production of chitosan/sodium polyphosphate nanoparticles with desirable size and biocompatibility, enabling their successful use for protein/peptide delivery. Copyright © 2016 Elsevier Ltd. All rights reserved.
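
    A 3^2 full factorial design of the kind used here enumerates every combination of two factors at three coded levels (9 runs) and fits a regression model to the responses. A minimal sketch with simulated particle-size data; the factor identities and response values are illustrative, not taken from the study:

```python
import itertools
import numpy as np

# Coded factor levels: low / centre / high. Two factors at three levels
# gives the 3^2 = 9 runs of a full factorial design.
levels = [-1, 0, 1]
runs = np.array(list(itertools.product(levels, repeat=2)))

# Simulated particle-size response (nm) for illustration only.
rng = np.random.default_rng(0)
y = 160 + 8 * runs[:, 0] - 5 * runs[:, 1] + rng.normal(0, 1, len(runs))

# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 by least squares.
X = np.column_stack([np.ones(len(runs)), runs[:, 0], runs[:, 1],
                     runs[:, 0] * runs[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(len(runs), coef.round(2))
```

    The fitted coefficients recover the simulated main effects, which is exactly the information a factorial study uses to rank formulation parameters.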

  17. Heat Sink Design and Optimization

    DTIC Science & Technology

    2015-12-01

    Heat sinks are devices that are used to enhance heat dissipation

  18. Solenoid Fringe Field Effects for the Neutrino Factory Linac - MAD-X Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslaninejad, M.; Bontoiu, C.; Pasternak, J.; Pozimski, J.; Bogacz, Alex

    2010-05-01

    The International Design Study for the Neutrino Factory (IDS-NF) assumes the first stage of muon acceleration (up to 900 MeV) to be implemented with a solenoid-based Linac. The Linac consists of three styles of cryo-modules, containing focusing solenoids and a varying number of SRF cavities for acceleration. Fringe fields of the solenoids and the focusing effects in the SRF cavities have a significant impact on the transverse beam dynamics. Using an analytical formula, the effects of fringe fields are studied in MAD-X. The resulting betatron functions are compared with the results of beam dynamics simulations using the OptiM code.

  19. Improving spacecraft design using a multidisciplinary design optimization methodology

    NASA Astrophysics Data System (ADS)

    Mosher, Todd Jon

    2000-10-01

    Spacecraft design has gone from maximizing performance under technology constraints to minimizing cost under performance constraints. This is characteristic of the "faster, better, cheaper" movement that has emerged within NASA. Currently spacecraft are "optimized" manually through a tool-assisted evaluation of a limited set of design alternatives. With this approach there is no guarantee that a systems-level focus will be taken, and "feasibility" rather than "optimality" is commonly all that is achieved. To improve spacecraft design in the "faster, better, cheaper" era, a new approach using multidisciplinary design optimization (MDO) is proposed. Using MDO methods brings structure to conceptual spacecraft design by casting a spacecraft design problem into an optimization framework. Then, through the construction of a model that captures design and cost, this approach facilitates a quicker and more straightforward option synthesis. The final step is to automatically search the design space. As computer processor speed continues to increase, enumeration of all combinations, while not elegant, is one method that is straightforward to perform. As an alternative to enumeration, genetic algorithms find solutions by evaluating far fewer candidate designs, though with some limitations. Both methods increase the likelihood of finding an optimal design, or at least the most promising area of the design space. This spacecraft design methodology using MDO is demonstrated on three examples. A retrospective test for validation is performed using the Near Earth Asteroid Rendezvous (NEAR) spacecraft design. For the second example, the premise that aerobraking was needed to minimize mission cost and was mission enabling for the Mars Global Surveyor (MGS) mission is challenged. While one might expect no feasible design space for an MGS mission without aerobraking, a counterintuitive result is discovered. Several design options that don't use aerobraking are feasible and cost

  20. Nanostructured lipid carriers as a potential vehicle for Carvedilol delivery: Application of factorial design approach.

    PubMed

    Patil, Ganesh B; Patil, Nandkishor D; Deshmukh, Prashant K; Patil, Pravin O; Bari, Sanjay B

    2016-01-01

    The present work relates to the design of nanostructured lipid carriers (NLC) to augment the oral bioavailability of Carvedilol (CAR). In this attempt, formulations of CAR-NLCs were prepared with glyceryl monostearate (GMS) as a lipid, poloxamer 188 as a surfactant and Tween 80 as a co-surfactant using a high-pressure homogenizer, following a 2^3 factorial design approach. The resulting CAR-NLCs were assessed for various performance parameters. Accelerated stability studies demonstrated negligible change in particle size and entrapment efficiency after storage for up to 3 months. The promising findings in this investigation suggest the practicability of these systems for enhancement of the bioavailability of drugs like CAR.

  1. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
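
    The post-optimality idea described above can be illustrated with a toy equality-constrained problem: at the optimum, the Lagrange multiplier gives the first-order sensitivity of the optimal objective to the constraint level, so the effect of a constraint perturbation can be estimated without re-optimizing. A sketch using hand-solved KKT conditions, not the paper's aircraft problem:

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + x2^2 subject to x1 + x2 = b.
# KKT conditions give x1 = x2 = b/2, multiplier lam = b, and f* = b^2 / 2.
def solve(b):
    x = np.array([b / 2, b / 2])   # closed-form optimum
    lam = b                        # Lagrange multiplier at the optimum
    return x, lam, b**2 / 2

b = 4.0
x_opt, lam, f_opt = solve(b)

# Post-optimality estimate: df*/db = lam, so for a perturbed constraint
# level b + db the new optimum is predicted without re-solving.
db = 0.5
f_predicted = f_opt + lam * db     # first-order estimate
f_true = solve(b + db)[2]          # value after actually re-optimizing
print(f_predicted, f_true)         # 10.0 vs 10.125
```

    The first-order estimate tracks the re-optimized value closely for small perturbations, which is the mechanism the paper exploits to avoid re-optimizing subproblems.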

  2. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  3. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, and most significantly, optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
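
    As a hedged illustration of this kind of three-parameter propulsion sizing study, the sketch below minimizes an invented life-cycle-cost surrogate over battery weight, engine rating and power split, subject to a smooth power-demand constraint. The cost and constraint models are placeholders for demonstration, not those of the paper:

```python
from scipy.optimize import minimize

# Illustrative stand-in for the hybrid-vehicle sizing problem: variables are
# battery weight (kg), heat-engine rating (kW) and power split (0..1).
def life_cycle_cost(x):
    battery, engine, split = x
    return 0.05 * battery + 0.8 * engine + 200 * (split - 0.5) ** 2

def power_demand(x):
    battery, engine, split = x
    # Combined power must cover a 60 kW peak demand (kept smooth, per the
    # paper's conclusion that constraint expressions should be smooth).
    return 0.1 * battery * split + engine * (1 - split) - 60

res = minimize(life_cycle_cost,
               x0=[300.0, 50.0, 0.5],
               bounds=[(50, 600), (10, 120), (0.0, 1.0)],
               constraints=[{"type": "ineq", "fun": power_demand}])
print(res.success, round(res.fun, 1))
```

    The constraint is written as a smooth inequality rather than a hard if/else power check, echoing the paper's fourth conclusion about keeping cost and constraint expressions smooth in the design variables.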

  4. Integrating PCLIPS into ULowell's Lincoln Logs: Factory of the future

    NASA Technical Reports Server (NTRS)

    Mcgee, Brenda J.; Miller, Mark D.; Krolak, Patrick; Barr, Stanley J.

    1990-01-01

    We are attempting to show how independent but cooperating expert systems, executing within a parallel production system (PCLIPS), can operate and control a completely automated, fault tolerant prototype of a factory of the future (The Lincoln Logs Factory of the Future). The factory consists of a CAD system for designing the Lincoln Log Houses, two workcells, and a materials handling system. A workcell consists of two robots, part feeders, and a frame mounted vision system.

  5. Design optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos

    1991-01-01

    The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design domain under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through a homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module and an optimization driver. In the second phase, the focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be specially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.

  6. Using factorial experimental design to evaluate the separation of plastics by froth flotation.

    PubMed

    Salerno, Davide; Jordão, Helga; La Marca, Floriana; Carvalho, M Teresa

    2018-03-01

    This paper proposes the use of factorial experimental design as a standard experimental method in the application of froth flotation to plastic separation instead of the commonly used OVAT method (manipulation of one variable at a time). Furthermore, as is common practice in minerals flotation, the parameters of the kinetic model were used as process responses rather than the recovery of plastics in the separation products. To explain and illustrate the proposed methodology, a set of 32 experimental tests was performed using mixtures of two polymers with approximately the same density, PVC and PS (with mineral charges), with particle size ranging from 2 to 4 mm. The manipulated variables were frother concentration, air flow rate and pH. A three-level full factorial design was conducted. The models establishing the relationships between the manipulated variables and their interactions with the responses (first order kinetic model parameters) were built. The Corrected Akaike Information Criterion was used to select the best fit model and an analysis of variance (ANOVA) was conducted to identify the statistically significant terms of the model. It was shown that froth flotation can be used to efficiently separate PVC from PS with mineral charges by reducing the floatability of PVC, which largely depends on the action of pH. Within the tested interval, this is the factor that most affects the flotation rate constants. The results obtained show that the pure error may be of the same magnitude as the sum of squares of the errors, suggesting that there is significant variability within the same experimental conditions. Thus, special care is needed when evaluating and generalizing the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
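
    The first-order kinetic model whose parameters serve as process responses here is commonly written R(t) = R_inf(1 - exp(-kt)), with R_inf the ultimate recovery and k the flotation rate constant. A minimal fitting sketch with illustrative recovery data, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order flotation kinetics: recovery as a function of flotation time.
def first_order(t, r_inf, k):
    return r_inf * (1 - np.exp(-k * t))

# Illustrative recovery-vs-time data (fractions), invented for demonstration.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # minutes
r = np.array([0.18, 0.33, 0.52, 0.72, 0.85, 0.88])

(r_inf, k), _ = curve_fit(first_order, t, r, p0=[0.9, 0.5])
print(round(r_inf, 3), round(k, 3))
```

    Fitting k and R_inf per experimental condition, then modelling those parameters as functions of frother concentration, air flow rate and pH, is the substitution of kinetic parameters for raw recoveries that the paper advocates.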

  7. B factory plans at Cornell

    NASA Astrophysics Data System (ADS)

    Berkelman, Karl

    1989-12-01

    A recent upgrade project at the Cornell Electron Storage Ring is discussed. Modifications were made to enable a B-meson factory design. The CLEO detector has been rebuilt with a new superconducting magnet and a high resolution electromagnetic calorimeter made of cesium iodide scintillators. (AIP)

  8. Design of experiment (DOE) study of biodegradable magnesium alloy synthesized by mechanical alloying using fractional factorial design

    NASA Astrophysics Data System (ADS)

    Salleh, Emee Marina; Ramakrishnan, Sivakumar; Hussain, Zuhailawati

    2014-06-01

    The biodegradable nature of magnesium (Mg) makes it an attractive candidate for implant materials. However, the rapid corrosion rate of Mg alloys, especially in an electrolytic aqueous environment, limits their performance. In this study, Mg alloy was mechanically milled, incorporating manganese (Mn) as an alloying element. An attempt was made to study the effects of both the mechanical alloying and the subsequent consolidation processes on the bulk properties of Mg-Mn alloys. A 2^(k-2) fractional factorial design was employed to determine the significant factors in producing an Mg alloy with properties close to those of human bone. The design considered six factors (i.e. milling time, milling speed, weight percentage of Mn, compaction pressure, sintering temperature and sintering time). Density and hardness were chosen as the responses for assessing the most significant parameters affecting the bulk properties of Mg-Mn alloys. The experimental variables were evaluated using ANOVA and a regression model. The main parameter identified was compaction pressure.
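
    A 2^(k-2) fractional factorial with k = 6 factors needs only 16 of the 64 full-factorial runs: four base factors form a full factorial and the remaining two columns are generated from products of base columns. A sketch with assumed generators E = ABC and F = BCD; the abstract does not state which generators the study used:

```python
import itertools
import numpy as np

# Full 2^4 factorial in the base factors A, B, C, D (16 runs).
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# Generate the two extra factor columns from the defining relations
# (assumed here for illustration): E = ABC, F = BCD.
E = A * B * C
F = B * C * D
design = np.column_stack([A, B, C, D, E, F])
print(design.shape)  # prints (16, 6)
```

    Each column is balanced (equal numbers of high and low settings), which is what lets main effects for all six factors be estimated from 16 runs, at the cost of aliasing some interactions.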

  9. Design optimization for active twist rotor blades

    NASA Astrophysics Data System (ADS)

    Mok, Ji Won

    This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation of the mesh generator, and the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed a significant impact on vibration reduction, the proposed optimization process showed that the design could be improved significantly.
The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to

  10. Globally optimal trial design for local decision making.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2009-02-01

    Value of information methods allow decision makers to identify efficient trial designs following a principle of maximizing the expected value to decision makers of information from potential trial designs relative to their expected cost. However, in health technology assessment (HTA) the restrictive assumption has been made that, prospectively, there is only expected value of sample information from research commissioned within a jurisdiction. This paper extends the framework for optimal trial design and decision making within a jurisdiction to allow for optimal trial design across jurisdictions. This is illustrated in identifying an optimal trial design for decision making across the US, the UK and Australia for early versus late external cephalic version for pregnant women presenting in the breech position. The expected net gain from locally optimal trial designs of US$0.72M is shown to increase to US$1.14M with a globally optimal trial design. In general, the proposed method of globally optimal trial design improves on optimal trial design within jurisdictions by: (i) reflecting the global value of non-rival information; (ii) allowing optimal allocation of trial sample across jurisdictions; (iii) avoiding market failure associated with free-rider effects, sub-optimal spreading of fixed costs and heterogeneity of trial information with multiple trials. Copyright (c) 2008 John Wiley & Sons, Ltd.

  11. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
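
    The Fisher-information-based criteria compared in this work can be sketched on the Verhulst-Pearl logistic example: a D-type criterion scores a candidate sampling grid by the determinant of the Fisher information matrix. The toy computation below (finite-difference sensitivities, unit noise variance, invented parameter values) shows a grid spanning the whole transient out-scoring one confined to early times:

```python
import numpy as np

# Verhulst-Pearl logistic model x(t; K, r) with fixed initial condition x0.
def logistic(t, theta, x0=1.0):
    K, r = theta
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1))

# D-optimality score: det of the Fisher information matrix J^T J, with the
# sensitivity matrix J built by central finite differences (sigma = 1).
def fisher_det(times, theta, h=1e-6):
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        tp = np.array(theta, dtype=float); tp[j] += h
        tm = np.array(theta, dtype=float); tm[j] -= h
        J[:, j] = (logistic(times, tp) - logistic(times, tm)) / (2 * h)
    return np.linalg.det(J.T @ J)

theta = [10.0, 0.8]  # carrying capacity K, growth rate r (illustrative)
early = np.linspace(0.1, 2.0, 8)    # all samples before the inflection
spread = np.linspace(0.1, 12.0, 8)  # samples spanning the whole transient
print(fisher_det(early, theta) < fisher_det(spread, theta))  # True
```

    Sampling only the early exponential phase leaves K nearly unidentifiable (its sensitivity column is almost proportional to that of r), so the determinant collapses; spreading samples through saturation restores it.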

  12. Rigorous ILT optimization for advanced patterning and design-process co-optimization

    NASA Astrophysics Data System (ADS)

    Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming

    2018-03-01

    Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development for new technology node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications; each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in the demand for predictive technology tools which can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies which have each individually been widely used for process tuning, mask correction and process-design co-optimization. These technologies are rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been extensively used for process development/tuning, lithography tool user setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions/sensitivities, source optimization and basic lithography design rule exploration. ILT has been extensively used in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications, including correction for photoresist top loss and resist scumming hotspots.

  13. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
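
    A toy version of the design-optimization idea for the retention example: given two candidate forgetting curves, the most informative single retention interval is the one where their predictions diverge most. The model forms and parameter values below are illustrative, not those of the paper:

```python
import numpy as np

# Two competing forgetting models with invented parameters:
# a power-law curve and an exponential curve over retention interval t.
def power_model(t, a=1.0, b=0.5):
    return a * (t + 1) ** (-b)

def expo_model(t, a=1.0, b=0.2):
    return a * np.exp(-b * t)

# Crude "design optimization": scan a grid of retention intervals and pick
# the one where the two models disagree the most.
t = np.linspace(0, 20, 201)
best_t = t[np.argmax(np.abs(power_model(t) - expo_model(t)))]
print(round(float(best_t), 1))
```

    Both models agree near t = 0, so a short retention interval would discriminate poorly; the scan picks a long interval where the exponential has decayed far below the power law. Sampling-based methods like those in the paper extend this idea to full multi-point designs under parameter uncertainty.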

  14. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The main goal for research conducted with the support of this grant was to develop design oriented structural optimization methods for the conceptual design of airplanes. Traditionally in conceptual design airframe weight is estimated based on statistical equations developed over years of fitting airplane weight data in data bases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to the technology on the airplanes in those weight data bases. If any new structural technology is to be pursued or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases any structural weight estimation must be based on rigorous "physics based" structural analysis and optimization of the airframes under consideration. Work under this grant progressed to explore airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design oriented finite element technology and methods based on equivalent plate / equivalent shell models of airframes, in which the vehicle is modelled as an assembly of plate and shell components, each simulating a lifting surface or nacelle / fuselage pieces. Since responses to changes in geometry are essential in conceptual design of airplanes, as well as the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period a prototype automated structural analysis code designed to work with the NASA Aircraft Synthesis conceptual design code ACSYNT was delivered to NASA Ames.

  15. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  16. Integrated topology and shape optimization in structural design

    NASA Technical Reports Server (NTRS)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  17. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess in question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
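
    As a minimal illustration of the first two designs in that list, a two-level full factorial enumerates all 2^k runs in coded -1/+1 units, while a half-fraction keeps 2^(k-1) runs by generating the last factor from the product of the others. The sketch below is generic Python, not tied to any particular study in this collection:

```python
from itertools import product

def full_factorial(k):
    """All 2^k runs of a two-level design, coded -1/+1."""
    return [list(run) for run in product([-1, 1], repeat=k)]

def half_fraction(k):
    """A 2^(k-1) half-fraction: generate k-1 factors fully, then set
    the k-th column to the product of the others (defining relation
    I = AB...K, so the last factor is aliased with that interaction)."""
    runs = []
    for base in product([-1, 1], repeat=k - 1):
        last = 1
        for x in base:
            last *= x
        runs.append(list(base) + [last])
    return runs

full = full_factorial(3)   # 8 runs
frac = half_fraction(3)    # 4 runs, generator C = AB
```

    The half-fraction halves the experimental cost at the price of aliasing, which is why such designs are typically used for factor screening rather than final optimization.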

  18. Manufacturing data analytics using a virtual factory representation.

    PubMed

    Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun

    2017-01-01

    Large manufacturers have been using simulation to support decision-making for design and production. However, with the advancement of technologies and the emergence of big data, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise, but also huge data collection and analysis efforts. This paper presents an approach within the frameworks of Design Science Research Methodology and prototyping to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing via reduction of the development effort. Manufacturing simulation models are presented both as data analytics applications in themselves and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. The virtual factory goes beyond traditional simulation models of factories by including multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementation of the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.

  19. Review of design optimization methods for turbomachinery aerodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and the design of experiment methods, (3) gradient-based optimization methods for compressors and turbines and (4) data mining techniques for Pareto fronts. We also present our own insights regarding the current research trends and the future optimization of turbomachinery designs.

  20. Testing Nelder-Mead based repulsion algorithms for multiple roots of nonlinear systems via a two-level factorial design of experiments.

    PubMed

    Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
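
    The erf-based repulsion idea can be sketched abstractly. The merit function below is an illustrative form only, not the paper's exact formulation: the squared residual of the system is inflated near already-found roots, so a Nelder-Mead search restarted on this merit is pushed away from known solutions and toward new ones.

```python
import math

def system(x):
    # example system: f(x) = x^2 - 1, with roots at +1 and -1
    return x * x - 1.0

def merit(x, found_roots, rho=10.0):
    """Penalty-type merit: squared residual plus an erf-based
    repulsion term that is ~1 near a known root and decays to ~0
    far away, so minimizing the merit avoids rediscovered roots."""
    base = system(x) ** 2
    repulsion = 0.0
    for r in found_roots:
        d = abs(x - r)
        repulsion += 1.0 - math.erf(rho * d)
    return base + repulsion
```

    With no roots recorded, merit(1.0, []) is zero; once the root at +1 is stored, merit(1.0, [1.0]) jumps to about 1 while merit(-1.0, [1.0]) stays near zero, which is exactly the behavior a repulsion restart relies on.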

  1. Synergistic Effect of Hydrotrope and Surfactant on Solubility and Dissolution of Atorvastatin Calcium: Screening Factorial Design Followed by Ratio Optimization

    PubMed Central

    Patel, V. F.; Sarai, J.

    2014-01-01

    The present study was aimed at investigating the effect of hydrotrope and surfactant on the poor solubility of atorvastatin calcium. Excipient screening followed by factorial design was performed to study the effect of excipients and manufacturing methods on the solubility of the drug. Three independent factors (carrier, surfactant and manufacturing method) were evaluated at two levels using solubility as a dependent variable. Solid-state characterisation was performed using Fourier transform infrared spectroscopy and differential scanning calorimetry. The optimised complex was incorporated into orally disintegrating micro tablets and an in vitro dissolution test was performed. Nicotinamide, Plasdone and sodium dodecyl sulphate emerged as promising excipients from excipient screening. General regression analysis revealed that only the type of carrier significantly enhanced (P<0.05) the solubility of the drug, while the other factors were found to be nonsignificant. A ratio optimisation trial revealed that the drug to nicotinamide ratio is more critical in enhancing the solubility of the drug (a 40-fold increase in solubility compared to the pure drug) than the drug-surfactant ratio; however, the presence of surfactant was deemed essential. A significantly higher rate and extent of dissolution was observed from the solid dispersion complex and tablets compared to dissolution of the pure drug (P<0.05). The study revealed that hydrotrope and surfactant have a synergistic effect on the solubility and dissolution of atorvastatin calcium, and this can be explored further. PMID:25593381

  2. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  4. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques for fluid machinery design, are summarized. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables and methods for predicting the model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
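
    A polynomial surrogate of the kind used in such global optimization is straightforward to construct. The sketch below fits a one-variable quadratic response surface by solving the normal equations directly; it is a generic illustration using synthetic data, not the authors' code or models:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 by solving the 3x3
    normal equations with Gaussian elimination (no external libraries)."""
    # power sums of x (s[p] = sum of x^p) and weighted sums of y
    s = [sum(x ** p for x in xs) for p in range(5)]
    t = [sum(y * x ** p for x, y in zip(xs, ys)) for p in range(3)]
    # augmented 3x3 system for the coefficients (a, b, c)
    A = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
    return A[0][3], A[1][3], A[2][3]

# recover y = 1 + 2x + 3x^2 from exact samples
coeffs = fit_quadratic([-2.0, -1.0, 0.0, 1.0, 2.0],
                       [9.0, 2.0, 1.0, 6.0, 17.0])
```

    Once fitted, the surrogate is cheap to evaluate everywhere, which is what lets these methods explore the whole design space and filter noisy data rather than relying on local sensitivities.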

  5. Nonmedical influences on medical decision making: an experimental technique using videotapes, factorial design, and survey sampling.

    PubMed Central

    Feldman, H A; McKinlay, J B; Potter, D A; Freund, K M; Burns, R B; Moskowitz, M A; Kasten, L E

    1997-01-01

    OBJECTIVE: To study nonmedical influences on the doctor-patient interaction. A technique using simulated patients and "real" doctors is described. DATA SOURCES: A random sample of physicians, stratified on such characteristics as demographics, specialty, or experience, and selected from commercial and professional listings. STUDY DESIGN: A medical appointment is depicted on videotape by professional actors. The patient's presenting complaint (e.g., chest pain) allows a range of valid interpretation. Several alternative versions are taped, featuring the same script with patient-actors of different age, sex, race, or other characteristics. Fractional factorial design is used to select a balanced subset of patient characteristics, reducing costs without biasing the outcome. DATA COLLECTION: Each physician is shown one version of the videotape appointment and is asked to describe how he or she would diagnose or treat such a patient. PRINCIPAL FINDINGS: Two studies using this technique have been completed to date, one involving chest pain and dyspnea and the other involving breast cancer. The factorial design provided sufficient power, despite limited sample size, to demonstrate with statistical significance various influences of the experimental and stratification variables, including the patient's gender and age and the physician's experience. Persistent recruitment produced a high response rate, minimizing selection bias and enhancing validity. CONCLUSION: These techniques permit us to determine, with a degree of control unattainable in observational studies, whether medical decisions as described by actual physicians and drawn from a demographic or professional group of interest, are influenced by a prescribed set of nonmedical factors. PMID:9240285

  6. Optimal design of geodesically stiffened composite cylindrical shells

    NASA Technical Reports Server (NTRS)

    Gendron, G.; Guerdal, Z.

    1992-01-01

    An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.

  7. Advances and prospects of Bacillus subtilis cellular factories: From rational design to industrial applications.

    PubMed

    Gu, Yang; Xu, Xianhao; Wu, Yaokang; Niu, Tengfei; Liu, Yanfeng; Li, Jianghua; Du, Guocheng; Liu, Long

    2018-05-15

    Bacillus subtilis is the most characterized gram-positive bacterium and has significant attributes, such as growing well on cheap carbon sources, possessing a clear inherited background, having mature genetic manipulation methods, and exhibiting robustness in large-scale fermentations. To date, B. subtilis has been identified as an attractive host for the production of recombinant proteins and chemicals. By applying various systems and synthetic biology tools, the productivity features of B. subtilis can be thoroughly analyzed and further optimized via metabolic engineering. In the present review, we discussed why B. subtilis is a primary organism used for metabolic engineering and industrial applications. Additionally, we summarized the recent advances in systems and synthetic biology, engineering strategies for improving cellular performances, and metabolic engineering applications of B. subtilis. In particular, we proposed emerging opportunities and essential strategies to enable the successful development of B. subtilis as a microbial cell factory. Copyright © 2018. Published by Elsevier Inc.

  8. Design Expert Supported Mathematical Optimization and Predictability Study of Buccoadhesive Pharmaceutical Wafers of Loratadine

    PubMed Central

    Dey, Surajit; Parcha, Versha; Bhattacharya, Shiv Sankar; Ghosh, Amitava

    2013-01-01

    Objective. The objective of this work encompasses the application of the response surface approach in the development of buccoadhesive pharmaceutical wafers of Loratadine (LOR). Methods. Experiments were performed according to a 3² factorial design to evaluate the effects of the buccoadhesive polymer sodium alginate (A) and the hydrophilic matrix former lactose monohydrate (B) on the bioadhesive force, disintegration time, percent (%) swelling index, and time taken for 70% drug release (t70%). The effect of the two independent variables on the response variables was studied by response surface plots and contour plots generated by the Design-Expert software. The desirability function was used to optimize the response variables. Results. The compatibility between LOR and the wafer excipients was confirmed by differential scanning calorimetry, FTIR spectroscopy, and X-ray diffraction (XRD) analysis. Bioadhesion force, measured with a TAXT2i texture analyzer, showed that the wafers had a good bioadhesive property, which could be advantageous for retaining the drug in the buccal cavity. Conclusion. The observed responses were in agreement with the experimental values; Loratadine wafers were produced with fewer experimental trials, and a patient compliant product was achieved with the concept of formulation by design. PMID:23781498
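
    The desirability approach used here can be sketched generically: each response is mapped onto a [0, 1] desirability scale and the overall desirability is the geometric mean of the individual values. The form below is the standard Derringer-Suich "larger-is-better" transform, shown purely as an illustration; the study's actual targets and weights are not reproduced:

```python
def desirability_larger_is_better(y, lo, hi, weight=1.0):
    """Derringer-Suich 'larger-is-better' desirability:
    0 at or below lo, 1 at or above hi, power-scaled in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def overall_desirability(ds):
    """Geometric mean of individual desirabilities: a single zero
    (one unacceptable response) drives the overall score to zero."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

    The geometric mean is what makes the method useful for multi-response optimization: a formulation cannot compensate for one failed response with excellent scores elsewhere.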

  9. Compatible validated spectrofluorimetric and spectrophotometric methods for determination of vildagliptin and saxagliptin by factorial design experiments.

    PubMed

    Abdel-Aziz, Omar; Ayad, Miriam F; Tadros, Mariam M

    2015-04-05

    Simple, selective and reproducible spectrofluorimetric and spectrophotometric methods have been developed for the determination of vildagliptin and saxagliptin in bulk and their pharmaceutical dosage forms. The first proposed spectrofluorimetric method is based on the dansylation reaction of the amino group of vildagliptin with dansyl chloride to form a highly fluorescent product. The formed product was measured spectrofluorimetrically at 455 nm after excitation at 345 nm. Beer's law was obeyed in a concentration range of 100-600 μg ml-1. The second proposed spectrophotometric method is based on the charge transfer complex of saxagliptin with tetrachloro-1,4-benzoquinone (p-chloranil). The formed charge transfer complex was measured spectrophotometrically at 530 nm. Beer's law was obeyed in a concentration range of 100-850 μg ml-1. The third proposed spectrophotometric method is based on the condensation reaction of the primary amino group of saxagliptin with formaldehyde and acetyl acetone to form a yellow colored product known as Hantzsch reaction, measured at 342.5 nm. Beer's law was obeyed in a concentration range of 50-300 μg ml-1. All the variables were studied to optimize the reactions' conditions using factorial design. The developed methods were validated and proved to be specific and accurate for quality control of vildagliptin and saxagliptin in their pharmaceutical dosage forms. Copyright © 2015. Published by Elsevier B.V.

  10. Compatible validated spectrofluorimetric and spectrophotometric methods for determination of vildagliptin and saxagliptin by factorial design experiments

    NASA Astrophysics Data System (ADS)

    Abdel-Aziz, Omar; Ayad, Miriam F.; Tadros, Mariam M.

    2015-04-01

    Simple, selective and reproducible spectrofluorimetric and spectrophotometric methods have been developed for the determination of vildagliptin and saxagliptin in bulk and their pharmaceutical dosage forms. The first proposed spectrofluorimetric method is based on the dansylation reaction of the amino group of vildagliptin with dansyl chloride to form a highly fluorescent product. The formed product was measured spectrofluorimetrically at 455 nm after excitation at 345 nm. Beer's law was obeyed in a concentration range of 100-600 μg ml-1. The second proposed spectrophotometric method is based on the charge transfer complex of saxagliptin with tetrachloro-1,4-benzoquinone (p-chloranil). The formed charge transfer complex was measured spectrophotometrically at 530 nm. Beer's law was obeyed in a concentration range of 100-850 μg ml-1. The third proposed spectrophotometric method is based on the condensation reaction of the primary amino group of saxagliptin with formaldehyde and acetyl acetone to form a yellow colored product known as Hantzsch reaction, measured at 342.5 nm. Beer's law was obeyed in a concentration range of 50-300 μg ml-1. All the variables were studied to optimize the reactions' conditions using factorial design. The developed methods were validated and proved to be specific and accurate for quality control of vildagliptin and saxagliptin in their pharmaceutical dosage forms.

  11. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments. Particularly relevant are the issues of whether the estimation of copula parameters can be enhanced by optimizing experimental conditions and of how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  12. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Multilevel Factorial Experiments for Developing Behavioral Interventions: Power, Sample Size, and Resource Considerations†

    PubMed Central

    Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.

    2012-01-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
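
    The Monte Carlo logic the authors describe can be sketched for a single cluster-randomized two-level factor: simulate clustered outcomes with a given intraclass correlation, analyze cluster means, and count rejections across replications. The code below is a simplified illustration only; it uses a normal approximation for the test statistic and hypothetical default settings, whereas real planning should use the authors' recommended tools:

```python
import random
import statistics

def simulate_power(n_clusters=40, n_per=10, icc=0.05, effect=0.4,
                   n_sims=500, seed=1):
    """Monte Carlo power for one two-level factor randomized at the
    cluster level. Total outcome variance is fixed at 1 and split into
    between-cluster and within-cluster parts via the ICC; the analysis
    compares cluster means with a normal-approximate two-sample test."""
    rng = random.Random(seed)
    sd_b = icc ** 0.5            # between-cluster SD
    sd_w = (1 - icc) ** 0.5      # within-cluster SD
    hits = 0
    for _ in range(n_sims):
        means = {1: [], -1: []}
        for c in range(n_clusters):
            arm = 1 if c % 2 == 0 else -1
            u = rng.gauss(0, sd_b)                 # shared cluster effect
            ys = [(effect if arm == 1 else 0.0) + u + rng.gauss(0, sd_w)
                  for _ in range(n_per)]
            means[arm].append(statistics.mean(ys))
        m1, m0 = means[1], means[-1]
        se = (statistics.variance(m1) / len(m1)
              + statistics.variance(m0) / len(m0)) ** 0.5
        t = (statistics.mean(m1) - statistics.mean(m0)) / se
        hits += abs(t) > 1.96
    return hits / n_sims
```

    Raising the ICC inflates the variance of cluster means relative to adding more members per cluster, which is the core trade-off these power analyses quantify.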

  14. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations.

    PubMed

    Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M

    2012-06-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved

  15. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...

  16. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    To bring down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling or by distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier and almost 30 dB when using two barriers is achieved compared to conventional sound barriers.

  17. Application of two-level factorial design to investigate the effect of process parameters on the sonocrystallization of sulfathiazole

    NASA Astrophysics Data System (ADS)

    Kuo, Peng-Hsuan; Zhang, Bo-Cong; Su, Chie-Shaan; Liu, Jun-Jen; Sheu, Ming-Thau

    2017-08-01

    In this study, cooling sonocrystallization was used to recrystallize an active pharmaceutical ingredient, sulfathiazole, using methanol as the solvent. The effects of three operating parameters (sonication intensity, sonication duration, and solution concentration) on the recrystallization were investigated by using a 2^k factorial design. The solid-state properties of sulfathiazole, including the mean particle size, crystal habit, and polymorphic form, were analyzed. Analysis of variance showed that the effect of the sonication intensity, the cross-interaction effect of sonication intensity/sonication duration, and the cross-interaction effect of sonication intensity/solution concentration on the recrystallization were significant. The results obtained using the 2^k factorial design indicated that a combination of high sonication intensity and long sonication duration is not favorable for sonocrystallization, especially at a high solution concentration. A comparison of the solid-state properties of the original and the recrystallized sulfathiazole revealed that the crystal habit of the recrystallized sulfathiazole was more regular and that its mean particle size could be reduced to approximately 10 μm. Furthermore, the analytical results obtained using PXRD, DSC, and FTIR spectroscopy indicated that the polymorphic purity of sulfathiazole improved from the original Form III/IV mixture to Form III after sonocrystallization.
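
    Effect estimation in a two-level factorial like this follows a simple contrast rule: each main or interaction effect is twice the average of the response multiplied by the corresponding ±1 column. A generic sketch with synthetic data (not the sulfathiazole results):

```python
from itertools import product

def effects_2k(runs, y):
    """Main and two-way interaction effects from a two-level design
    coded -1/+1: effect = 2 * sum(contrast * response) / n, which
    equals mean(y at +1) minus mean(y at -1) for a balanced design."""
    k = len(runs[0])
    cols = {chr(65 + i): [r[i] for r in runs] for i in range(k)}
    # two-way interaction contrasts, e.g. 'AB' = A*B elementwise
    for i in range(k):
        for j in range(i + 1, k):
            cols[chr(65 + i) + chr(65 + j)] = [r[i] * r[j] for r in runs]
    n = len(y)
    return {name: sum(c * v for c, v in zip(col, y)) * 2.0 / n
            for name, col in cols.items()}

runs = [list(r) for r in product([-1, 1], repeat=2)]
y = [10.0, 14.0, 12.0, 20.0]   # synthetic responses for the 4 runs
est = effects_2k(runs, y)
```

    For this synthetic data the estimates are A = 4, B = 6 and AB = 2; an ANOVA such as the one in the abstract then judges which of these contrasts are larger than noise.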

  18. Program Aids Analysis And Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1994-01-01

    NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.

  19. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory to construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and that they perform just as well as or better than optimal designs based on any other asymptotically linear unbiased estimator, such as the least squares estimator (LSE). In addition, we show that current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and are quite robust to misspecification in the probability distribution of the responses. PMID:28163359

  20. Multi-response optimization of Artemia hatching process using split-split-plot design based response surface methodology

    PubMed Central

    Arun, V. V.; Saharan, Neelam; Ramasubramanian, V.; Babitha Rani, A. M.; Salin, K. R.; Sontakke, Ravindra; Haridas, Harsha; Pazhayamadom, Deepak George

    2017-01-01

    A novel method, BBD-SSPD, is proposed by combining the Box-Behnken Design (BBD) and the Split-Split Plot Design (SSPD), which ensures a minimum number of experimental runs, leading to economical use of resources in multifactorial experiments. The brine shrimp Artemia was tested to study the combined effects of photoperiod, temperature and salinity, each with three levels, on the hatching percentage and hatching time of their cysts. The BBD was employed to select 13 treatment combinations out of the 27 possible combinations, which were grouped in an SSPD arrangement. Multiple responses were optimized simultaneously using Derringer's desirability function. Photoperiod and temperature, as well as the temperature-salinity interaction, were found to significantly affect the hatching percentage of Artemia, while the hatching time was significantly influenced by photoperiod and temperature, and their interaction. The optimum conditions were a 23 h photoperiod, 29 °C temperature and 28 ppt salinity, resulting in 96.8% hatching in 18.94 h. In order to verify the results obtained from the BBD-SSPD experiment, the experiment was repeated preserving the same setup. Results of the verification experiment were similar to those of the original experiment. It is expected that this method would be suitable for optimizing the hatching process of animal eggs. PMID:28091611
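
    Derringer's desirability approach, used above to optimize the two responses simultaneously, maps each response onto a 0-1 desirability and combines them by a geometric mean. A minimal sketch; the acceptable ranges are illustrative assumptions, not the study's actual scaling:

```python
# Sketch of Derringer's desirability combination; the lo/hi acceptability
# ranges below are assumed for illustration, not taken from the study.
def d_larger_is_better(y, lo, hi):
    """Desirability for a response to maximize (e.g. hatching %)."""
    if y <= lo: return 0.0
    if y >= hi: return 1.0
    return (y - lo) / (hi - lo)

def d_smaller_is_better(y, lo, hi):
    """Desirability for a response to minimize (e.g. hatching time)."""
    if y <= lo: return 1.0
    if y >= hi: return 0.0
    return (hi - y) / (hi - lo)

def overall_desirability(ds):
    """Geometric mean combines the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# The reported optimum (96.8% hatching in 18.94 h), scaled against the
# assumed acceptable ranges:
d1 = d_larger_is_better(96.8, lo=50.0, hi=100.0)
d2 = d_smaller_is_better(18.94, lo=12.0, hi=36.0)
print(round(overall_desirability([d1, d2]), 3))
```

    An optimizer would then search the factor space for the settings that maximize this overall desirability.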

  1. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    PubMed Central

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO's performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs; the Cocktail algorithm; and the recent algorithm proposed by [1]. PMID:26091237
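
    For readers unfamiliar with the underlying heuristic, a generic PSO loop might look like the sketch below. This is not the paper's ProjPSO (which adds a projection step to keep particles on the mixture simplex); it is the plain PSO it builds on, minimizing a simple test function:

```python
# Generic particle swarm optimizer sketch (NOT the paper's ProjPSO).
import random

def pso(f, dim, n_particles=30, iters=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)
```

    For optimal-design problems the objective would instead be a design criterion (e.g. the log-determinant of the information matrix) evaluated over candidate design points and weights.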

  2. Optimization of digital designs

    NASA Technical Reports Server (NTRS)

    Miles, Lowell H. (Inventor); Whitaker, Sterling R. (Inventor)

    2009-01-01

    An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.

  3. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  4. Loteprednol Etabonate Nanoparticles: Optimization via Box-Behnken Design Response Surface Methodology and Physicochemical Characterization.

    PubMed

    Sah, Abhishek K; Suresh, Preeti K

    2017-01-01

    The objective of the present work was to prepare and optimize a loteprednol etabonate (LE) loaded poly(D,L-lactide-co-glycolide) (PLGA) polymer based nanoparticle carrier. The review of recent patents (US9006241, US20130224302A1, US2012/0028947A1) assisted in the selection of the drug and polymer for designing nanoparticles for ocular delivery applications. The nanoparticles were prepared by solvent evaporation followed by high speed homogenization. The biodegradable polymer PLGA (50:50 grade) was utilized to develop various formulations with different drug:polymer ratios. A three-factor, three-level Box-Behnken design was selected for the present study, and 17 runs were carried out in total. The influence of various process variables (viz., polymer concentration, homogenization speed and sonication time) on the characteristics of the nanoparticles, including the in vitro drug release profile, was studied. The nanoparticulate formulations were evaluated for mean spherical diameter, polydispersity index (PDI), zeta potential, surface morphology, drug entrapment and in vitro drug release profile. The entrapment efficiency, drug loading and mean particle size were found to be 96.31±1.68%, 35.46±0.35% and 167.6±2.1 nm, respectively. The investigated process and formulation variables were found to have a significant effect on the particle size, drug loading (DL), entrapment efficiency (EE), and in vitro drug release profile. A biphasic in vitro drug release profile was apparent from the optimized nanoparticles (NPs) for 24 hours. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  5. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    Multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.

  6. Telemanipulator design and optimization software

    NASA Astrophysics Data System (ADS)

    Cote, Jean; Pelletier, Michel

    1995-12-01

    For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only have to be found once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the initial position of the teleoperator is usually chosen empirically, which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted moving the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better designed robot configuration could minimize these movements and save time. This paper will present two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.

  7. Marine Steam Condenser Design Optimization.

    DTIC Science & Technology

    1983-12-01

    to make design decisions to obtain a feasible design. CONMIN, as do most optimizers, requires complete control in determining all iterative design...neutralize all the places where such design decisions are made. By removing the ability for CONDIP to make any design decisions it became totally passive...dependent on CONMIN for design decisions, does not have that capability. Remembering that CONMIN requires a complete once-through analysis in order to

  8. Optimization strategies for radiation induced grafting of 4-vinylpyridine onto poly(ethylene-co-tetrafluoroethene) film using Box-Behnken design

    NASA Astrophysics Data System (ADS)

    Mahmoud Nasef, Mohamed; Shamsaei, Ezzatollah; Ghassemi, Payman; Ahmed Aly, Amgad; Hamid Yahaya, Abdul

    2012-04-01

    The radiation induced grafting of 4-vinylpyridine (4-VP) onto poly(ethylene-co-tetrafluoroethene) (ETFE) was optimized using the Box-Behnken factorial design available in the response surface method (RSM). The grafting parameters to be optimized (absorbed dose, monomer concentration, grafting time and reaction temperature) were varied at four levels to quantify their effect on the grafting yield (GY). The validity of the statistical model was supported by the small deviation between the predicted (GY=61%) and experimental (GY=57%) values. The optimum conditions for enhancing GY were determined to be the following: monomer concentration of 48 vol%, absorbed dose of 64 kGy, reaction time of 4 h and temperature of 68 °C. A comparison was made between the optimization model developed for the present grafting system and that for the grafting of 1-vinylimidazole (1-VIm) onto ETFE to confirm the validity and reliability of the Box-Behnken design for the optimization of various radiation induced grafting reactions. Fourier transform infrared (FTIR) spectroscopy, thermogravimetric analysis (TGA) and X-ray diffraction (XRD) were used to investigate the properties of the obtained films and provide evidence of grafting.
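
    The Box-Behnken designs used here and in several of the records above place their runs at the midpoints of the cube edges plus center points. A small sketch generating the coded design points for the common three-factor case (the four-factor design used in this study follows the same pairwise pattern):

```python
# Sketch of the three-factor Box-Behnken design: edge midpoints of the
# coded cube plus a center point.
import itertools

def box_behnken_3():
    points = []
    # For each pair of factors, take all +/-1 combinations with the
    # remaining factor held at its center level 0.
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product([-1, 1], repeat=2):
            p = [0, 0, 0]
            p[i], p[j] = a, b
            points.append(tuple(p))
    points.append((0, 0, 0))      # center point (often replicated in practice)
    return points

design = box_behnken_3()
print(len(design))                # 13 distinct design points
```

    The 12 edge points plus center give 13 distinct runs, enough to fit a full quadratic response-surface model in three factors.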

  9. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.
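
    A minimal sketch of the classic DE/rand/1/bin strategy the abstract describes; the expensive Navier-Stokes objective is replaced here by a cheap analytic test function, and all parameter settings are illustrative:

```python
# Minimal differential evolution (DE/rand/1/bin) sketch; objective and
# settings are illustrative stand-ins for the paper's CFD-based setup.
import random

def differential_evolution(f, bounds, pop_size=20, gens=150, F=0.8, CR=0.9, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    vals = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target vector.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = pop[i][:]
            jrand = rng.randrange(dim)        # ensure at least one mutated gene
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    trial[j] = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial[j] = min(max(trial[j], lo), hi)
            tv = f(trial)
            if tv <= vals[i]:                 # greedy one-to-one selection
                pop[i], vals[i] = trial, tv
    best = min(range(pop_size), key=lambda i: vals[i])
    return pop[best], vals[best]

# Rosenbrock function: minimum 0 at (1, 1).
rosen = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
x, fx = differential_evolution(rosen, [(-2, 2), (-2, 2)])
print(fx)
```

    In the aerodynamic setting, each objective evaluation would instead launch a flow solution for the candidate airfoil shape, which is why surrogate models such as neural networks are attractive for cutting compute time.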

  10. Design Optimization of Composite Structures under Uncertainty

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2003-01-01

    Design optimization under uncertainty is computationally expensive and is also challenging in terms of alternative formulations. The work under the grant focused on developing methods for design against uncertainty that are applicable to composite structural design, with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and simultaneous design of structure and inspection periods for fail-safe structures.

  11. Iontophoretic delivery of lisinopril: Optimization of process variables by Box-Behnken statistical design.

    PubMed

    Gannu, Ramesh; Yamsani, Vamshi Vishnu; Palem, Chinna Reddy; Yamsani, Shravan Kumar; Yamsani, Madhusudan Rao

    2010-01-01

    The objective of the investigation was to optimize the iontophoresis process parameters of lisinopril (LSP) using a three-factor, three-level Box-Behnken statistical design. LSP is an ideal candidate for iontophoretic delivery to avoid the incomplete absorption problem associated with its oral administration. The independent variables selected were current (X(1)), salt (sodium chloride) concentration (X(2)) and medium/pH (X(3)). The dependent variables studied were the amount of LSP permeated in 4 h (Y(1): Q(4)) and 24 h (Y(2): Q(24)), and the lag time (Y(3)). Mathematical equations and response surface plots were used to relate the dependent and independent variables. The regression equations generated for the iontophoretic permeation were Y(1) = 1.98 + 1.23X(1) - 0.49X(2) + 0.025X(3) - 0.49X(1)X(2) + 0.040X(1)X(3) - 0.010X(2)X(3) + 0.58X(1)(2) - 0.17X(2)(2) - 0.18X(3)(2); Y(2) = 7.28 + 3.32X(1) - 1.52X(2) + 0.22X(3) - 1.30X(1)X(2) + 0.49X(1)X(3) - 0.090X(2)X(3) + 0.79X(1)(2) - 0.62X(2)(2) - 0.33X(3)(2) and Y(3) = 0.60 + 0.0038X(1) + 0.12X(2) - 0.011X(3) + 0.005X(1)X(2) - 0.018X(1)X(3) - 0.015X(2)X(3) - 0.00075X(1)(2) + 0.017X(2)(2) - 0.11X(3)(2). The statistical validity of the polynomials was established, and optimized process parameters were selected by feasibility and grid search. Validation of the optimization study with 8 confirmatory runs indicated a high degree of prognostic ability of the response surface methodology. The use of the Box-Behnken design approach helped in identifying the critical process parameters in the iontophoretic delivery of lisinopril.
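
    The quoted quadratic model for Y(1) can be evaluated directly at coded factor levels (each X in [-1, 1]); a short sketch using the coefficients from the abstract, where the interpretation of the chosen example point is illustrative:

```python
# Evaluating the quadratic response-surface model for Y1 (Q4) quoted above,
# with coefficients taken verbatim from the abstract; inputs are coded levels.
def y1_q4(x1, x2, x3):
    """Predicted amount of LSP permeated in 4 h at coded factor levels."""
    return (1.98 + 1.23 * x1 - 0.49 * x2 + 0.025 * x3
            - 0.49 * x1 * x2 + 0.040 * x1 * x3 - 0.010 * x2 * x3
            + 0.58 * x1 ** 2 - 0.17 * x2 ** 2 - 0.18 * x3 ** 2)

# Center point: all interaction and squared terms vanish.
print(y1_q4(0, 0, 0))          # intercept, 1.98
# High current with low salt concentration raises the prediction markedly.
print(y1_q4(1, -1, 0))
```

    Surface plots and grid searches over such polynomials are how the optimized process parameters were located.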

  12. Simulation-Driven Design Approach for Design and Optimization of Blankholder

    NASA Astrophysics Data System (ADS)

    Sravan, Tatipala; Suddapalli, Nikshep R.; Johan, Pilthammar; Mats, Sigvant; Christian, Johansson

    2017-09-01

    Reliable design of stamping dies is desired for efficient and safe production. The design of stamping dies is today mostly based on casting feasibility, although it can also be based on criteria for fatigue, stiffness, safety and economy. The current work presents an approach built on Simulation Driven Design, enabling Design Optimization to address this issue. A structural finite element model of a stamping die, used to produce doors for the Volvo V70/S80 car models, is studied. This die had developed cracks during its usage. To understand the behaviour of the stress distribution in the stamping die, structural analysis of the die is conducted and critical regions with high stresses are identified. The results from the structural FE-models are compared with analytical calculations pertaining to the fatigue properties of the material. To arrive at an optimum design with increased stiffness and lifetime, topology and free-shape optimization are performed. In the optimization routine, the identified critical regions of the die are set as design variables. Other optimization variables are set to maintain the manufacturability of the resultant stamping die. Thereafter a CAD model is built based on the geometrical results from the topology and free-shape optimizations. The CAD model is then subjected to structural analysis to visualize the new stress distribution. This process is iterated until a satisfactory result is obtained. The final results show a reduction in stress levels by 70% with a more homogeneous distribution. Even though the mass of the die increased by 17%, overall a stiffer die with a better lifetime is obtained. Finally, by reflecting on the entire process, a coordinated approach to handle such situations efficiently is presented.

  13. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and it has the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  14. Automatic Optimization of Wayfinding Design.

    PubMed

    Huang, Haikun; Lin, Ni-Ching; Barrett, Lorenzo; Springer, Darian; Wang, Hsueh-Cheng; Pomplun, Marc; Yu, Lap-Fai

    2017-10-10

    Wayfinding signs play an important role in guiding users to navigate in a virtual environment and in helping pedestrians find their way in a real-world architectural site. Conventionally, the wayfinding design of a virtual environment is created manually, as is the wayfinding design of a real-world architectural site. The many possible navigation scenarios, and the interplay between signs and human navigation, can make the manual design process overwhelming and non-trivial. As a result, creating a wayfinding design for a typical layout can take months to several years. In this paper, we introduce the Way to Go! approach for automatically generating a wayfinding design for a given layout. The designer simply has to specify some navigation scenarios; our approach will automatically generate an optimized wayfinding design with signs properly placed, considering human agents' visibility and possibility of making navigation mistakes. We demonstrate the effectiveness of our approach in generating wayfinding designs for different layouts. We evaluate our results by comparing different wayfinding designs and show that our optimized designs can guide pedestrians to their destinations effectively. Our approach can also help the designer visualize the accessibility of a destination from different locations, and correct any "blind zone" with additional signs.

  15. Factorial design application in photocatalytic wastewater degradation from TNT industry-red water.

    PubMed

    Guz, Ricardo; de Moura, Cristiane; da Cunha, Mário Antônio Alves; Rodrigues, Marcio Barreto

    2017-03-01

    In the industrial purification of trinitrotoluene (TNT), two washes are carried out at the end of the process. The first is performed with vaporized water, producing the first effluent, called yellow water. A second wash is then performed using sodium sulfite, generating the red water effluent. The objective of this work was to determine the best conditions for photocatalytic degradation of the second effluent, red water, in order to reduce its toxicity and meet the legal discharge parameters set by regulatory agencies. A statistical evaluation of the interaction of the factors (pH and concentration) that affect heterogeneous photocatalysis with titanium dioxide (TiO2) was used. The treatment applied in the factorial experimental design consisted of batch-treating a 500 mL volume of the effluent at 0.1%, varying the TiO2 concentration and pH according to the design, with a 20 min evaluation time and the reduction in UV-Vis absorption as the response. From the design responses, optimum values for the evaluated parameters were obtained: pH = 6.5 and a TiO2 concentration of 100 mg/L proved efficient when applied to the red water effluent, achieving approximately 91% discoloration.

  16. A design optimization process for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Fox, George; Duquette, William H.

    1990-01-01

    The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.

  17. Computer experimental analysis of the CHP performance of a 100 kWe SOFC Field Unit by a factorial design

    NASA Astrophysics Data System (ADS)

    Calì, M.; Santarelli, M. G. L.; Leone, P.

    Gas Turbine Technologies (GTT) and Politecnico di Torino, both located in Torino (Italy), have been involved in the design and installation of a SOFC laboratory in order to analyse the operation, in cogenerative configuration, of the CHP 100 kWe SOFC Field Unit, built by Siemens-Westinghouse Power Corporation (SWPC), which is at present (May 2005) starting its operation and which will supply electric and thermal power to the GTT factory. In order to take best advantage of the analysis of the on-site operation, and especially to correctly design the scheduled experimental tests on the system, we developed a mathematical model and ran a simulated experimental campaign, applying a rigorous statistical approach to the analysis of the results. The aim of this work is the computer experimental analysis, through a statistical methodology (2^k factorial experiments), of the CHP 100 performance. First, the mathematical model was calibrated with the results acquired during the first CHP100 demonstration at EDB/ELSAM in Westerwoort. Then, the simulated tests were performed in the form of a computer experimental session, and the measurement uncertainties were simulated by perturbations imposed on the model's independent variables. The statistical methodology used for the computer experimental analysis is the factorial design (Yates' technique): using the ANOVA technique, the effects of the main independent variables (air utilization factor Uox, fuel utilization factor UF, internal fuel and air preheating, and anodic recycling flow rate) were investigated in a rigorous manner. The analysis accounts for the effects of these parameters on stack electric power, thermal recovered power, single cell voltage, cell operating temperature, consumed fuel flow and steam-to-carbon ratio. Each main effect and interaction effect of the parameters is shown, with particular attention to the generated electric power and stack heat recovered.

  18. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.

  19. Screening Experiments and the Use of Fractional Factorial Designs in Behavioral Intervention Research

    PubMed Central

    Nair, Vijay; Strecher, Victor; Fagerlin, Angela; Ubel, Peter; Resnicow, Kenneth; Murphy, Susan; Little, Roderick; Chakraborty, Bibhas; Zhang, Aijun

    2008-01-01

    Health behavior intervention studies have focused primarily on comparing new programs and existing programs via randomized controlled trials. However, numbers of possible components (factors) are increasing dramatically as a result of developments in science and technology (e.g., Web-based surveys). These changes dictate the need for alternative methods that can screen and quickly identify a large set of potentially important treatment components. We have developed and implemented a multiphase experimentation strategy for accomplishing this goal. We describe the screening phase of this strategy and the use of fractional factorial designs (FFDs) in studying several components economically. We then use 2 ongoing behavioral intervention projects to illustrate the usefulness of FFDs. FFDs should be supplemented with follow-up experiments in the refining phase so any critical assumptions about interactions can be verified. PMID:18556602
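
    A fractional factorial design of the kind discussed above is built from a full factorial in fewer factors plus one or more generators. A sketch of a 2^(4-1) half fraction using the defining relation I = ABCD (resolution IV); the factor roles are illustrative, not tied to any particular intervention study:

```python
# Constructing a 2^(4-1) fractional factorial design with generator D = ABC
# (defining relation I = ABCD, resolution IV); factor labels are illustrative.
import itertools

def fractional_factorial_2_4_1():
    """Half fraction of a 2^4 design: run factor D at the ABC product level."""
    design = []
    for a, b, c in itertools.product([-1, 1], repeat=3):
        d = a * b * c                     # generator: D aliased with ABC
        design.append((a, b, c, d))
    return design

design = fractional_factorial_2_4_1()
for run in design:
    print(run)
```

    Eight runs instead of sixteen screen four components; the cost is aliasing (here, two-factor interactions are aliased in pairs), which is why the text recommends follow-up experiments in the refining phase to verify interaction assumptions.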

  20. Microwave-assisted dispersive liquid-liquid microextraction and spectrophotometric determination of uranium after optimization based on Box-Behnken design and chemometrics methods.

    PubMed

    Niazi, Ali; Khorshidi, Neda; Ghaemmaghami, Pegah

    2015-01-25

    In this study, an analytical procedure based on microwave-assisted dispersive liquid-liquid microextraction (MA-DLLME) and spectrophotometry coupled with chemometric methods is proposed for determining uranium. In the proposed method, 4-(2-pyridylazo) resorcinol (PAR) is used as a chelating agent, and chloroform and ethanol are selected as the extraction and dispersive solvents, respectively. The optimization strategy was carried out using a two-level full factorial design. Results of the two-level full factorial design (2(4)), based on an analysis of variance, demonstrated that the pH, concentration of PAR, and amounts of dispersive and extraction solvents are statistically significant. Optimal conditions for these variables were then obtained using a Box-Behnken design. Under the optimum conditions, the calibration graphs are linear in the range of 20.0-350.0 ng mL(-1) with a detection limit of 6.7 ng mL(-1) (3δB/slope), and the enrichment factor of this method for uranium reached 135. The relative standard deviation (R.S.D.) is 1.64% (n=7, c=50 ng mL(-1)). Partial least squares (PLS) modeling was used for multivariate calibration of the spectrophotometric data. Orthogonal signal correction (OSC) was used for preprocessing of the data matrices, and the prediction results of the model with and without OSC were statistically compared. The MA-DLLME-OSC-PLS method is presented for the first time in this study. The root mean square errors of prediction (RMSEP) for uranium determination using the PLS and OSC-PLS models were 4.63 and 0.98, respectively. This procedure allows the determination of uranium in synthetic and real samples such as waste water with good reliability. Copyright © 2014. Published by Elsevier B.V.

  1. Spin bearing retainer design optimization

    NASA Technical Reports Server (NTRS)

    Boesiger, Edward A.; Warner, Mark H.

    1991-01-01

    The dynamic behavior of spin bearings for momentum wheels (control-moment gyroscopes, reaction wheel assemblies) is critical to satellite stability and life. Repeated bearing retainer instabilities hasten lubricant deterioration and can lead to premature bearing failure and/or unacceptable vibration. These instabilities are typically distinguished by increases in torque, temperature, audible noise, and vibration induced into the bearing cartridge. Ball retainer design can be optimized to minimize these occurrences. A retainer was designed using a previously successful smaller retainer as an example. Analytical methods were then employed to predict its behavior and optimize its configuration.

  2. Data Processing Factory for the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.

  3. Pion Production for Neutrino Factory-challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breton, Florian; Le Couedic, Clement; Soler, F. J. P.

    2011-10-06

    One of the key issues in the design of a Neutrino Factory target station is the determination of the optimum kinetic energy of the proton beam, due to the large uncertainties in simulations of protons impinging on nuclear targets. In this paper we have developed a procedure to correct GEANT4 simulations using the HARP data, and we have determined the yield of muons expected at the front-end of a Neutrino Factory as a function of target material (Be, C, Al, Ta and Pb) and energy (3-12 GeV). The maximum muon yield is found between 5 and 8 GeV for high Z targets and 3 GeV for low Z targets.

  4. Development of a factory/refinery method to measure total, soluble, and insoluble starch in sugar products

    USDA-ARS?s Scientific Manuscript database

    An easy, rapid, and inexpensive method was developed to measure total, soluble, and insoluble starch in products at the factory and refinery, using microwave-assisted neutralization chemistry. The method was optimized using the previously developed USDA Starch Research method as a reference. Optimal...

  5. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  6. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on a design variable only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  7. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    PubMed

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.

  8. Sample size requirements for separating out the effects of combination treatments: randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis.

    PubMed

    Wolbers, Marcel; Heemskerk, Dorothee; Chau, Tran Thi Hong; Yen, Nguyen Thi Bich; Caws, Maxine; Farrar, Jeremy; Day, Jeremy

    2011-02-02

    In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 x 2 factorial design. We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30%, and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance of being underpowered, of showing significance of only one drug even if both are equally effective, and of missing important interactions. Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 x 2 factorial design to detect effects of individual drugs would require at least 8-fold the sample size of the
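    The eight-fold penalty quoted above can be reproduced with the standard Schoenfeld approximation for time-to-event trials. The sketch below is a simplified illustration (1:1 allocation, two-sided alpha = 0.05, 80% power, no interaction), not the trial's actual sample-size calculation:

```python
from math import log
from statistics import NormalDist

def events_needed(hazard_ratio, power=0.8, alpha=0.05):
    """Schoenfeld approximation: deaths required to detect a hazard
    ratio with the given power in a 1:1 randomized comparison."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return 4 * (z_a + z_b) ** 2 / log(hazard_ratio) ** 2

# Combination vs standard: a 30% hazard reduction (HR = 0.70).
d_combo = events_needed(0.70)

# In a factorial design each drug's own effect is smaller; if the less
# potent drug contributes 35% of the combined log-hazard effect, its
# individual HR is exp(0.35 * ln 0.70), i.e. about 0.88.
d_single = events_needed(0.70 ** 0.35)

print(f"events for combined effect:  {d_combo:.0f}")
print(f"events for weaker component: {d_single:.0f}")
print(f"ratio: {d_single / d_combo:.1f}x")
```

    The events ratio scales as 1/0.35² ≈ 8.2, which is consistent with the roughly eight-fold sample size increase reported in the abstract.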

  9. Front End for a neutrino factory or muon collider

    NASA Astrophysics Data System (ADS)

    Neuffer, D.; Snopok, P.; Alexahin, Y.

    2017-11-01

    A neutrino factory or muon collider requires the capture and cooling of a large number of muons. Scenarios for capture, bunching, phase-energy rotation and initial cooling of μ's produced from a proton source target have been developed, initially for neutrino factory scenarios. They require a drift section from the target, a bunching section and a φ-δE rotation section leading into the cooling channel. Important concerns are rf limitations within the focusing magnetic fields and large losses in the transport. The currently preferred cooling channel design is an "HFOFO Snake" configuration that cools both μ+ and μ- transversely and longitudinally. The status of the design is presented and variations are discussed.

  10. Design and Construction of a High-speed Network Connecting All the Protein Crystallography Beamlines at the Photon Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsugaki, Naohiro; Yamada, Yusuke; Igarashi, Noriyuki

    2007-01-19

    A private network, physically separated from the facility network, was designed and constructed covering all four protein crystallography beamlines at the Photon Factory (PF) and the Structural Biology Research Center (SBRC). Connecting all the beamlines in the same network allows simple authentication and a common working environment for a user who uses multiple beamlines. Gigabit Ethernet wire speed was achieved for communication among the beamlines and SBRC buildings.

  11. Optimization of headspace solid-phase microextraction by means of an experimental design for the determination of methyl tert.-butyl ether in water by gas chromatography-flame ionization detection.

    PubMed

    Dron, Julien; Garcia, Rosa; Millán, Esmeralda

    2002-07-19

    A procedure for the determination of methyl tert.-butyl ether (MTBE) in water by headspace solid-phase microextraction (HS-SPME) has been developed. The analysis was carried out by gas chromatography with flame ionization detection. The extraction procedure, using a 65-microm poly(dimethylsiloxane)-divinylbenzene SPME fiber, was optimized by experimental design. A fractional factorial design for screening and a central composite design for optimizing the significant variables were applied. Extraction temperature and sodium chloride concentration were significant variables, and 20 degrees C and 300 g/l, respectively, were chosen for the best extraction response. Under these conditions, an extraction time of 5 min was sufficient to extract MTBE. The calibration linear range for MTBE was 5-500 microg/l and the detection limit 0.45 microg/l. The relative standard deviation, for seven replicates of 250 microg/l MTBE in water, was 6.3%.
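    A central composite design of the kind used here for the significant variables is easy to write out in coded units. The sketch below builds the standard two-factor CCD (factorial corners, axial points, center replicates); the decoding to physical units is an assumed illustration, not the study's actual level mapping:

```python
from itertools import product
from math import sqrt

def central_composite(k=2, alpha=None, center_runs=1):
    """Coded-unit CCD: 2^k factorial corners, 2k axial (star) points
    at +/-alpha, plus replicated center points."""
    if alpha is None:
        alpha = sqrt(k)            # rotatable choice for k = 2
    corners = list(product([-1.0, 1.0], repeat=k))
    stars = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            stars.append(tuple(pt))
    centers = [(0.0,) * k] * center_runs
    return corners + stars + centers

design = central_composite(k=2, center_runs=3)
print(len(design), "runs")   # 4 corners + 4 stars + 3 centers

# Assumed decoding for illustration only: x1 = extraction temperature
# (degrees C), x2 = NaCl concentration (g/l), centered near the
# optimum region reported in the abstract.
def decode(x1, x2, t0=20.0, dt=5.0, c0=300.0, dc=50.0):
    return t0 + dt * x1, c0 + dc * x2
```

    The replicated center points supply a pure-error estimate, and the axial points let a quadratic response surface be fitted so a curvature-aware optimum can be located.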

  12. Multidisciplinary design optimization - An emerging new engineering discipline

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1993-01-01

    A definition of the multidisciplinary design optimization (MDO) is introduced, and the functionality and relationship of the MDO conceptual components are examined. The latter include design-oriented analysis, approximation concepts, mathematical system modeling, design space search, an optimization procedure, and a human interface.

  13. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  14. The role of the optimization process in illumination design

    NASA Astrophysics Data System (ADS)

    Gauvin, Michael A.; Jacobsen, David; Byrne, David J.

    2015-07-01

    This paper examines the role of the optimization process in illumination design. We will discuss why the starting point of the optimization process is crucial to a better design and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method will be used to demonstrate optimization methods with focus on using interactive design tools to create better starting points to streamline the optimization process.
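    Both approaches mentioned above can be sketched in a few lines: a coarse brute-force scan supplies a good starting point, which a basic Downhill Simplex (Nelder-Mead) routine then refines. The merit function here is a simple stand-in, not an illumination model:

```python
# Stand-in merit function over two design variables (lower is better).
def merit(p):
    x, y = p
    return (x - 1.3) ** 2 + (y + 0.7) ** 2 + 0.1 * (x * y) ** 2

def brute_force(f, lo=-3.0, hi=3.0, steps=13):
    """Coarse grid scan: cheap global survey to pick a starting point."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(((x, y) for x in grid for y in grid), key=f)

def nelder_mead(f, start, step=0.5, iters=200):
    """Basic 2-D Downhill Simplex: reflect/expand/contract/shrink."""
    pts = [list(start),
           [start[0] + step, start[1]],
           [start[0], start[1] + step]]
    for _ in range(iters):
        pts.sort(key=f)
        best, mid, worst = pts
        cx = [(best[0] + mid[0]) / 2, (best[1] + mid[1]) / 2]  # centroid
        refl = [2 * cx[0] - worst[0], 2 * cx[1] - worst[1]]
        if f(refl) < f(best):
            exp = [3 * cx[0] - 2 * worst[0], 3 * cx[1] - 2 * worst[1]]
            pts[2] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(mid):
            pts[2] = refl
        else:
            contr = [(cx[0] + worst[0]) / 2, (cx[1] + worst[1]) / 2]
            if f(contr) < f(worst):
                pts[2] = contr
            else:  # shrink all non-best vertices toward the best one
                pts[1] = [(best[0] + mid[0]) / 2, (best[1] + mid[1]) / 2]
                pts[2] = [(best[0] + worst[0]) / 2, (best[1] + worst[1]) / 2]
    return min(pts, key=f)

start = brute_force(merit)           # cheap global scan -> starting point
optimum = nelder_mead(merit, start)  # local simplex refinement
print(optimum, merit(optimum))
```

    The split mirrors the paper's point: the interactive/brute-force stage buys a good starting point cheaply, so the local simplex search converges quickly instead of wandering from a poor initial design.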

  15. A Library of Optimization Algorithms for Organizational Design

    DTIC Science & Technology

    2005-01-01

    Levchuk, Georgiy M.; Levchuk, Yuri N.; Luo, Jie

    This paper presents a library of algorithms to solve a broad range of optimization problems arising in the normative design of organizations to execute a specific mission. The use of specific optimization algorithms for different phases of the design process...

  16. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct, dealing with three design variables including one qualitative variable and two quantitative variables. The method performs well in improving the duct aerodynamic performance and can also be applied to wider fields of mechanical design, serving as a useful tool for engineering designers by reducing design time and computational cost.
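    The loop described above — sample, fit a response surface, search the cheap surrogate — can be sketched in one dimension. The "expensive" function below stands in for a CFD evaluation, and for brevity the final search is a dense scan of the surrogate rather than a genetic algorithm:

```python
def expensive_eval(x):
    """Stand-in for a costly CFD evaluation of one design variable."""
    return (x - 0.37) ** 2 + 0.05 * x ** 3

# 1) Uniform design: evenly spaced sample points in [0, 1].
xs = [i / 5 for i in range(6)]
ys = [expensive_eval(x) for x in xs]

# 2) Response surface: least-squares quadratic y ~ c0 + c1*x + c2*x^2
#    via the 3x3 normal equations, solved by Gaussian elimination.
def quad_fit(xs, ys):
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                     # forward elimination
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            A[row] = [a - f * p for a, p in zip(A[row], A[col])]
            b[row] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

c0, c1, c2 = quad_fit(xs, ys)

# 3) Optimize the cheap surrogate (dense scan; the paper uses a GA).
grid = [i / 1000 for i in range(1001)]
x_best = min(grid, key=lambda x: c0 + c1 * x + c2 * x ** 2)
print(f"surrogate optimum near x = {x_best:.3f}")
```

    The point of the surrogate is that step 3 costs essentially nothing, so a global search method can afford thousands of evaluations that would be prohibitive against the real solver.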

  17. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  18. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, constraints are imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.

  19. A case study on topology optimized design for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Gebisa, A. W.; Lemu, H. G.

    2017-12-01

    Topology optimization is an optimization method that employs mathematical tools to optimize the material distribution in a part to be designed. Earlier developments of topology optimization assumed conventional manufacturing techniques, which have limitations in producing complex geometries; this has prevented topology optimization from being fully realized. With the emergence of additive manufacturing (AM) technologies, which build a part layer upon layer directly from three-dimensional (3D) model data of the part, producing complex geometry is no longer an issue. Realization of topology optimization through AM provides full design freedom for design engineers. The article focuses on a topologically optimized design approach for additive manufacturing, with a case study on the lightweight design of a jet engine bracket. The study results show that topology optimization is a powerful design technique to reduce the weight of a product while maintaining the design requirements, provided additive manufacturing is considered.

  20. The Idea Factory: An Interactive Intergroup Exercise

    ERIC Educational Resources Information Center

    Rosh, Lisa; Leach, Evan

    2011-01-01

    This article outlines the Idea Factory exercise, an interactive exercise designed to help participants examine group, individual, and organizational factors that affect intergroup conflict. Specific emphasis is placed on exploring the relationship between intra- and intergroup dynamics and identifying managerial practices that foster effective…

  1. Fluorescent Reporter Libraries as Useful Tools for Optimizing Microbial Cell Factories: A Review of the Current Methods and Applications

    PubMed Central

    Delvigne, Frank; Pêcheux, Hélène; Tarayre, Cédric

    2015-01-01

    The use of genetically encoded fluorescent reporters allows speeding up the initial optimization steps of microbial bioprocesses. These reporters can be used to determine not only the expression level of a particular promoter and the synthesis of a specific protein but also the content of intracellular metabolites. The level of protein/metabolite is thus proportional to a fluorescence signal. In this way, mean expression profiles of proteins/metabolites can be determined non-invasively at a high-throughput rate, allowing the rapid identification of the best producers. Different kinds of reporter systems are currently available, as well as specific cultivation devices allowing the on-line recording of the fluorescent signal. Cell-to-cell variability is another important phenomenon that can be integrated into screening procedures for the selection of more efficient microbial cell factories. PMID:26442261

  2. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
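    Point (2) above can be illustrated with a minimal genetic algorithm on a discontinuous objective, where calculus-based methods stall at the jump. Binary encoding, tournament selection, one-point crossover and bit-flip mutation are used; all parameters are illustrative choices, not values from the study:

```python
import random

BITS = 16

def decode(bits):
    """16-bit chromosome -> design variable x in [0, 4]."""
    return int("".join(map(str, bits)), 2) / (2 ** BITS - 1) * 4.0

def fitness(bits):
    """Discontinuous objective: quadratic peak at x = 2.6 plus a step
    penalty for x < 1.5 (gradients are useless across the jump)."""
    x = decode(bits)
    return -((x - 2.6) ** 2) - (1.0 if x < 1.5 else 0.0)

def evolve(pop_size=60, generations=80, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                    # tournament of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, BITS)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(f"best x = {decode(best):.3f}")   # should approach 2.6
```

    The encoding and the mutation rate are exactly the kind of choices the paper flags: too coarse an encoding or too little mutation and the population converges prematurely on a suboptimal plateau.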

  3. Robust Airfoil Optimization in High Resolution Design Space

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon L.

    2003-01-01

    Robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet the resulting airfoil shape is fairly smooth, and (3) it allows the user to make a trade-off between the level of optimization and the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that our strategy produces reasonable airfoil shapes that are similar to the original airfoils, but these new shapes provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that our strategy produces airfoils better than or equal to any designs produced by traditional design methods.

  4. Assay optimization: a statistical design of experiments approach.

    PubMed

    Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B

    2007-03-01

    With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.

  5. Design and Optimization of Composite Gyroscope Momentum Wheel Rings

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.
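    The materials trade described above can be illustrated with the classic thin-ring limit: hoop stress in a spinning ring is sigma = rho * v^2, so the burst-limited rim speed is v = sqrt(sigma/rho), and the angular momentum per unit mass at radius r scales as H/m = r * v. The material numbers below are generic handbook-style values assumed purely for illustration, not data from the paper:

```python
from math import sqrt

# Assumed representative properties: (allowable hoop stress [Pa],
# density [kg/m^3]).  Illustrative values only.
materials = {
    "titanium":       (900e6, 4430.0),
    "SiC/Ti":         (1200e6, 3900.0),
    "graphite/epoxy": (1500e6, 1600.0),
}

r = 0.15   # assumed ring radius [m]

for name, (sigma, rho) in materials.items():
    v_max = sqrt(sigma / rho)          # burst-limited rim speed [m/s]
    print(f"{name:15s} v_max = {v_max:6.0f} m/s   H/m = {r * v_max:6.1f} m^2/s")
```

    Because H/m at a fixed radius grows with sqrt(sigma/rho) (specific strength), the low-density composite stores far more momentum per kilogram, which is consistent with the paper's finding that the graphite/epoxy design wins unless volume efficiency is weighted heavily over mass efficiency.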

  6. Strategies for global optimization in photonics design.

    PubMed

    Vukovic, Ana; Sewell, Phillip; Benson, Trevor M

    2010-10-01

    This paper reports on two important issues that arise in the context of the global optimization of photonic components where large problem spaces must be investigated. The first is the implementation of a fast simulation method and associated matrix solver for assessing particular designs and the second, the strategies that a designer can adopt to control the size of the problem design space to reduce runtimes without compromising the convergence of the global optimization tool. For this study an analytical simulation method based on Mie scattering and a fast matrix solver exploiting the fast multipole method are combined with genetic algorithms (GAs). The impact of the approximations of the simulation method on the accuracy and runtime of individual design assessments and the consequent effects on the GA are also examined. An investigation of optimization strategies for controlling the design space size is conducted on two illustrative examples, namely, 60° and 90° waveguide bends based on photonic microstructures, and their effectiveness is analyzed in terms of a GA's ability to converge to the best solution within an acceptable timeframe. Finally, the paper describes some particular optimized solutions found in the course of this work.

  7. Autonomous optimal trajectory design employing convex optimization for powered descent on an asteroid

    NASA Astrophysics Data System (ADS)

    Pinson, Robin Marie

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable enough to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, a change of variables, and incorporation of the successive solution method.
Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed

  8. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...

  9. Design and Optimization Method of a Two-Disk Rotor System

    NASA Astrophysics Data System (ADS)

    Huang, Jingjing; Zheng, Longxi; Mei, Qing

    2016-04-01

    An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS is proposed in this paper. First, a two-disk rotor system was established, and its modes, harmonic response and transient response under acceleration conditions were analyzed with ANSYS, yielding the dynamic characteristics of the two-disk rotor system. On this basis, the two-disk rotor model was integrated into the multidisciplinary design optimization software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, optimization objectives and constraints were determined. After that, multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: Evolutionary Optimization Algorithm, Multi-Island Genetic Algorithm and Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints. Meanwhile, the accuracy and computational cost of the different optimization algorithms were compared. The optimization results indicated that rotor vibration reached its minimum value while meeting the design requirements, and that design efficiency and quality were improved by multidisciplinary design optimization, providing a reference for improving the design efficiency and reliability of aero-engine rotors.

  10. Fractional Factorial Design Study on the Performance of GAC-Enhanced Electrocoagulation Process Involved in Color Removal from Dye Solutions.

    PubMed

    Secula, Marius Sebastian; Cretescu, Igor; Cagnon, Benoit; Manea, Liliana Rozemarie; Stan, Corneliu Sergiu; Breaban, Iuliana Gabriela

    2013-07-10

    The aim of this study was to determine the effects of the main factors and their interactions on the color removal performance from dye solutions using the electrocoagulation process enhanced by adsorption on Granular Activated Carbon (GAC). A mathematical approach was conducted using a two-level fractional factorial design (FFD) for a given dye solution. Three textile dyes were used: Acid Blue 74, Basic Red 1, and Reactive Black 5. The experimental factors and their respective levels were: current density (2.73 or 27.32 A/m²), initial pH of the aqueous dye solution (3 or 9), electrocoagulation time (20 or 180 min), GAC dose (0.1 or 0.5 g/L), support electrolyte (2 or 50 mM), initial dye concentration (0.05 or 0.25 g/L) and current type (Direct Current, DC, or Alternative Pulsed Current, APC). GAC-enhanced electrocoagulation performance was analyzed statistically in terms of removal efficiency, electrical energy consumption and electrode material consumption, using polynomial regression models. The statistical significance of the GAC dose level on the performance of GAC-enhanced electrocoagulation, and the experimental conditions that favor operating the electrocoagulation process in the APC regime, were determined. Local optimal experimental conditions were established using a multi-objective desirability function method.
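    As a concrete illustration of the kind of design used here, a two-level fractional factorial for seven factors can be built from four base factors plus defining generators. This is a generic sketch: the generators E=ABC, F=BCD, G=ACD are an illustrative choice, not necessarily the ones used in the study.

```python
from itertools import product

def fractional_factorial_2_7_3():
    """Build a 16-run two-level fractional factorial (2^(7-3)) design.

    Base factors A-D form a full 2^4 factorial; the remaining three
    factors are aliased via the generators E=ABC, F=BCD, G=ACD
    (illustrative choice). Levels are coded -1 (low) and +1 (high).
    """
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e, f, g = a * b * c, b * c * d, a * c * d
        runs.append((a, b, c, d, e, f, g))
    return runs

design = fractional_factorial_2_7_3()
# 16 runs instead of the 2^7 = 128 runs of a full factorial
```

    Every column is balanced (equal counts of -1 and +1), which is what makes the main-effect estimates of such a design uncorrelated.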

  11. Fractional Factorial Design Study on the Performance of GAC-Enhanced Electrocoagulation Process Involved in Color Removal from Dye Solutions

    PubMed Central

    Secula, Marius Sebastian; Cretescu, Igor; Cagnon, Benoit; Manea, Liliana Rozemarie; Stan, Corneliu Sergiu; Breaban, Iuliana Gabriela

    2013-01-01

    The aim of this study was to determine the effects of main factors and interactions on the color removal performance from dye solutions using the electrocoagulation process enhanced by adsorption on Granular Activated Carbon (GAC). In this study, a mathematical approach was conducted using a two-level fractional factorial design (FFD) for a given dye solution. Three textile dyes: Acid Blue 74, Basic Red 1, and Reactive Black 5 were used. Experimental factors used and their respective levels were: current density (2.73 or 27.32 A/m2), initial pH of aqueous dye solution (3 or 9), electrocoagulation time (20 or 180 min), GAC dose (0.1 or 0.5 g/L), support electrolyte (2 or 50 mM), initial dye concentration (0.05 or 0.25 g/L) and current type (Direct Current—DC or Alternative Pulsed Current—APC). GAC-enhanced electrocoagulation performance was analyzed statistically in terms of removal efficiency, electrical energy, and electrode material consumptions, using modeling polynomial equations. The statistical significance of GAC dose level on the performance of GAC enhanced electrocoagulation and the experimental conditions that favor the process operation of electrocoagulation in APC regime were determined. The local optimal experimental conditions were established using a multi-objective desirability function method. PMID:28811405

  12. A factory concept for processing and manufacturing with lunar material

    NASA Technical Reports Server (NTRS)

    Driggers, G. W.

    1977-01-01

    A conceptual design for an orbital factory sized to process 1.5 million metric tons per year of raw lunar fines into 0.3 million metric tons of manufacturing materials is presented. A conservative approach involving application of present earth-based technology leads to a design devoid of new inventions. Earth based counterparts to the factory machinery were used to generate subsystem masses and lumped parameters for volume and mass estimates. The results are considered to be conservative since technologies more advanced than those assumed are presently available in many areas. Some attributes of potential space processing technologies applied to material refinement and component manufacture are discussed.

  13. Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adopted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.

  14. Multivariate study of parameters in the determination of pesticide residues in apple by headspace solid phase microextraction coupled to gas chromatography-mass spectrometry using experimental factorial design.

    PubMed

    Abdulra'uf, Lukman Bola; Tan, Guan Huat

    2013-12-15

    Solid-phase microextraction (SPME) is a solvent-less sample preparation method which combines sample preparation, isolation, concentration and enrichment into one step. In this study, a multivariate strategy was used to determine the significance of the factors affecting the solid-phase microextraction of pesticide residues (fenobucarb, diazinon, chlorothalonil and chlorpyrifos) using a randomised factorial design. The interactions and effects of temperature, time and salt addition on the extraction efficiency of the pesticide residues were evaluated using a 2³ factorial design. The analytes were extracted with 100 μm PDMS fibres according to the factorial design matrix and desorbed into a gas chromatography-mass spectrometry detector. The developed method was applied to the analysis of apple samples, and the limits of detection were between 0.01 and 0.2 μg kg⁻¹, lower than the MRLs for apples. The relative standard deviations (RSD) were between 0.1% and 13.37%, with average recoveries of 80-105%. The linearity ranged from 0.5 to 50 μg kg⁻¹ with correlation coefficients greater than 0.99. Copyright © 2013 Elsevier Ltd. All rights reserved.
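    The 2³ analysis described above estimates each main effect as the difference between the mean response at the high level and at the low level of a factor. A minimal sketch with hypothetical recovery data (the numbers are invented for illustration, not taken from the paper):

```python
from itertools import product

# Coded 2^3 full factorial for the three SPME variables
# (temperature, time, salt addition).
factors = ["temperature", "time", "salt"]
design = list(product((-1, 1), repeat=3))
response = [62, 70, 65, 74, 68, 77, 72, 83]  # hypothetical recoveries (%)

def main_effect(col):
    """Average response at +1 minus average response at -1 for one factor."""
    hi = [y for run, y in zip(design, response) if run[col] == 1]
    lo = [y for run, y in zip(design, response) if run[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate(factors)}
```

    With these invented numbers, salt addition would show the largest main effect; on real data the same calculation ranks the factors for screening.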

  15. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Man Mohan, Rai

    2006-01-01

    Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may differ from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design are included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial, so efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design are included here.
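    The differential evolution mentioned for the companion lecture can be sketched in a few lines. This is a minimal single-objective DE/rand/1/bin implementation (an illustration, not the NASA code), shown minimizing a sphere function:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of
    two random members, crossover per dimension, keep the trial if it
    improves on the parent."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < CR:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))  # clip to bounds
            fc = f(trial)
            if fc < cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               [(-5.0, 5.0)] * 3)
```

    A robust-design variant would replace the deterministic objective with a statistic (e.g. mean plus spread) of performance over the uncertain variables.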

  16. Treatment of dyeing wastewater by TiO2/H2O2/UV process: experimental design approach for evaluating total organic carbon (TOC) removal efficiency.

    PubMed

    Lee, Seung-Mok; Kim, Young-Gyu; Cho, Il-Hyoung

    2005-01-01

    Optimal operating conditions for treating dyeing wastewater were investigated by using factorial design and response surface methodology (RSM). The experiment was statistically designed and carried out according to a 2² full factorial design with four factorial points, three center points, and four axial points. Linear and nonlinear regressions were then applied to the data using the SAS software package. The independent variables were TiO2 dosage and H2O2 concentration, and the total organic carbon (TOC) removal efficiency of the dyeing wastewater was the dependent variable. From the factorial design and response surface methodology (RSM), the maximum removal efficiency (85%) was obtained at a TiO2 dosage of 1.82 g L⁻¹ and an H2O2 concentration of 980 mg L⁻¹ with an oxidation time of 20 min.
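    The layout described above (factorial corners, centre replicates, axial points) is a central composite design. The coded point set can be generated as follows (a generic sketch; the rotatable axial distance α = √2 is an assumption):

```python
import math

def central_composite_2(n_center=3, alpha=math.sqrt(2)):
    """Coded design points for a two-factor central composite design:
    4 factorial corners, `n_center` centre replicates, and 4 axial
    points at distance alpha (alpha = sqrt(2) makes it rotatable)."""
    corners = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    centers = [(0.0, 0.0)] * n_center
    axial = [(alpha, 0.0), (-alpha, 0.0), (0.0, alpha), (0.0, -alpha)]
    return corners + centers + axial

points = central_composite_2()
```

    The centre replicates estimate pure error, and the axial points let a full quadratic response surface be fitted.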

  17. Front End for a neutrino factory or muon collider

    DOE PAGES

    Neuffer, David; Snopok, Pavel; Alexahin, Yuri

    2017-11-30

    A neutrino factory or muon collider requires the capture and cooling of a large number of muons. Scenarios for capture, bunching, phase-energy rotation and initial cooling of μ's produced from a proton source target have been developed, initially for neutrino factory scenarios. They require a drift section from the target, a bunching section and a Φ-δE rotation section leading into the cooling channel. Important concerns are rf limitations within the focusing magnetic fields and large losses in the transport. The currently preferred cooling channel design is an “HFOFO Snake” configuration that cools both μ+ and μ- transversely and longitudinally. Finally, the status of the design is presented and variations are discussed.

  18. Design and optimization of a chromatographic purification process for Streptococcus pneumoniae serotype 23F capsular polysaccharide by a Design of Experiments approach.

    PubMed

    Ji, Yu; Tian, Yang; Ahnfelt, Mattias; Sui, Lili

    2014-06-27

    Multivalent pneumococcal vaccines are used worldwide to protect human beings from pneumococcal diseases. In order to eliminate the toxic organic solvents used in the traditional vaccine purification process, an alternative chromatographic process for Streptococcus pneumoniae serotype 23F capsular polysaccharide (CPS) was proposed in this study. The strategy of Design of Experiments (DoE) was introduced into the process development to manage the complicated design procedure. An initial process analysis was performed to review the whole flowchart, identify the critical factors of the chromatography through FMEA and choose the flow-through mode based on the properties of the feed. A resin screening study followed to select candidate resins. DoE was utilized to generate a resolution IV fractional factorial design to further compare candidates and narrow down the design space. After Capto Adhere was selected, a Box-Behnken DoE was executed to model the process and characterize all effects of the factors on the responses. Finally, Monte Carlo simulation was used to optimize the process, test the chosen optimal conditions and define the control limits. The results of three scale-up runs at the set points verified the DoE and simulation predictions. The final results were well in accordance with the EU pharmacopeia requirements: Protein/CPS (w/w) 1.08%; DNA/CPS (w/w) 0.61%; phosphorus content 3.1%; nitrogen content 0.315%; and methyl-pentose percentage 47.9%. Other tests of the final pure CPS also met the pharmacopeia specifications. This alternative chromatographic purification process for pneumococcal vaccine without toxic organic solvents was successfully developed by the DoE approach and demonstrated scalability, robustness and suitability for large-scale manufacturing. Copyright © 2014 Elsevier B.V. All rights reserved.
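    The Box-Behnken design used in the modeling stage places each pair of factors at the ±1 corners of a square while holding the remaining factor at its centre; for three factors this gives 12 edge points plus centre replicates. A sketch in coded units (the three-centre-point count is an assumption, not stated in the abstract):

```python
from itertools import combinations, product

def box_behnken_3(n_center=3):
    """Coded Box-Behnken design for three factors: for each pair of
    factors, take the four (+/-1, +/-1) combinations with the third
    factor at 0, then append centre replicates (12 + n_center runs)."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a, b in product((-1, 1), repeat=2):
            point = [0, 0, 0]
            point[i], point[j] = a, b
            runs.append(tuple(point))
    runs += [(0, 0, 0)] * n_center
    return runs

bb = box_behnken_3()
```

    Unlike a central composite design, no run sits at an extreme corner of the cube, which is convenient when corner combinations are infeasible for the process.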

  19. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
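    Where analytic continuum sensitivities are unavailable, design sensitivity coefficients of the kind DSO computes can be approximated by central finite differences. A generic sketch (the beam-mass example and its numbers are hypothetical, not from the DSO system):

```python
def design_sensitivities(response, x, h=1e-6):
    """Central finite-difference sensitivities d(response)/d(x_i) of a
    scalar response with respect to each design parameter."""
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((response(xp) - response(xm)) / (2 * h))
    return grads

# Example: sensitivity of a steel beam's mass = rho * L * w * t
# with respect to width w and thickness t (hypothetical sizing variables).
mass = lambda v: 7.85e3 * 1.0 * v[0] * v[1]
g = design_sensitivities(mass, [0.05, 0.01])
```

    For a sizing application these coefficients feed directly into the optimizer's search direction; the continuum approach in the paper computes them more accurately without perturbing the model.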

  20. Investigation of the impact of trace elements on anaerobic volatile fatty acid degradation using a fractional factorial experimental design.

    PubMed

    Jiang, Ying; Zhang, Yue; Banks, Charles; Heaven, Sonia; Longhurst, Philip

    2017-11-15

    The requirement for trace elements (TE) in the anaerobic digestion process is widely documented. However, little is understood regarding the specific requirements for individual elements and their critical concentrations under different operating conditions such as substrate characteristics and temperature. In this study, a flask batch trial using a fractional factorial design was conducted to investigate the anaerobic degradation rate of volatile fatty acids (VFA) under the individual and combined effects of six TEs (Co, Ni, Mo, Se, Fe and W). The flasks were inoculated with food waste digestate and spiked with sodium acetate and sodium propionate, both to 10 g/l. This was followed by the addition of a selection of the six elements in accordance with a 2⁶⁻² fractional factorial principle. The experiment was conducted in duplicate and the degradation of VFA was monitored regularly. Factorial effect analysis of the experimental results reveals that, within these experimental conditions, Se has a key role in promoting the degradation rates of both acetic and propionic acids; Mo and Co are found to have a modest effect on increasing the propionic acid degradation rate. It is also revealed that Ni shows some inhibitory effects on VFA degradation, possibly due to its toxicity. Additionally, regression coefficients for the main and second-order effects are calculated to establish regression models for VFA degradation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Optimal design of reverse osmosis module networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maskan, F.; Wiley, D.E.; Johnston, L.P.M.

    2000-05-01

    The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost for the process units, and the operating costs associated with energy consumption and maintenance. Optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that the optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
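    The constrained multivariable formulation described above can be handled, for example, with an exterior penalty function. The sketch below uses a made-up one-variable stand-in for the network model (a hypothetical profit curve and pressure bound, not the paper's equations):

```python
def exterior_penalty(obj, g_list, x, mu):
    """Exterior penalty value for min obj(x) s.t. g(x) <= 0:
    obj(x) + mu * sum(max(0, g(x))^2)."""
    return obj(x) + mu * sum(max(0.0, gi(x)) ** 2 for gi in g_list)

# Toy stand-in: maximize permeate revenue minus pumping cost,
# with the operating pressure limited to <= 60 bar (all invented).
f = lambda x: -(8.0 * x[0] - 0.05 * x[0] ** 2)   # negative annual profit
g = [lambda x: x[0] - 60.0]                      # pressure bound

def solve(mu=1e3, lo=0.0, hi=120.0, steps=12001):
    """Crude grid search on the penalized objective (illustration only)."""
    best_x, best_v = None, float("inf")
    for k in range(steps):
        x = [lo + (hi - lo) * k / (steps - 1)]
        v = exterior_penalty(f, g, x, mu)
        if v < best_v:
            best_x, best_v = x, v
    return best_x[0]

p_opt = solve()
```

    The unconstrained optimum of the toy profit curve lies at 80 bar, so the penalty term pulls the solution back to the 60 bar bound, mirroring how an active constraint shapes the network design.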

  2. The preparation and evaluation of sustained release suppositories containing ketoprofen and Eudragit RL 100 by using factorial design.

    PubMed

    Ozgüney, I; Ozcan, I; Ertan, G; Güneri, T

    2007-01-01

    The preparation of ketoprofen (KP) sustained release (SR) suppositories was designed according to a 3² × 2¹ factorial design with three different KP:Eudragit RL 100 ratios (1:0.5, 1:1, 1:2), three particle sizes of the prepared granules (250-500, 500-710, and 710-1000 μm) and two different PEG 400:PEG 6000 ratios (40:60, 50:50). Conventional KP suppositories were also prepared using Witepsol H 15, Massa Estarinum B, Cremao and the mixture of PEG 400:PEG 6000. Dissolution studies of the prepared suppositories were carried out according to the USP XXIII basket method in phosphate buffer (pH = 7.2) at 50 rpm, and it was shown that dissolution was sustained for up to 8 hours. According to the results of the factorial design, the most important independent variable for t50 and t80 was the drug:polymer ratio. The log of the partition coefficient of KP was determined to be 1.46, showing high affinity for the oily phase. The n exponent and kinetic studies were used to explain the diffusion mechanism; as the KP:Eudragit RL 100 ratio in the particles increases, Fickian diffusion dominates and the best-fitting kinetic model shifts from Hixson-Crowell to Higuchi. Neither a crystalline form of KP nor degradation products were detected in the suppositories by differential scanning calorimetry (DSC). In addition, the anti-inflammatory activity of the SR suppositories was determined and found to be significantly prolonged compared with the conventional suppositories.
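    The Higuchi model referred to above, Q = k·√t, can be fitted by least squares through the origin. A sketch with hypothetical cumulative-release data (not the study's measurements):

```python
import math

def fit_higuchi(t, q):
    """Least-squares slope k for the Higuchi model Q = k * sqrt(t),
    fitted through the origin."""
    s = [math.sqrt(x) for x in t]
    return sum(si * qi for si, qi in zip(s, q)) / sum(si * si for si in s)

# Hypothetical cumulative release (%) over time (h), invented data.
t = [1, 2, 4, 6, 8]
q = [20.1, 28.0, 40.2, 49.0, 56.9]
k = fit_higuchi(t, q)
```

    Comparing the residuals of this fit against those of the Hixson-Crowell cube-root model is the kind of test that decides which kinetics best describes a formulation.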

  3. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    In this work I present a new type of controller, an adaptive tracking controller that employs dynamic optimization of the current controller action for temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, comprising the mass and heat balance equations, and then add cooling system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for the cooling action is specified as a steady state and a design specification. Later we make a correction in the dynamics, in which the material balance is manipulated to use feed concentration as a system parameter, as an adaptive control measure to avoid actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation, which forms the adaptive control measure.

  4. Optimizing the taste-masked formulation of acetaminophen using sodium caseinate and lecithin by experimental design.

    PubMed

    Hoang Thi, Thanh Huong; Lemdani, Mohamed; Flament, Marie-Pierre

    2013-09-10

    In a previous study of ours, the association of sodium caseinate and lecithin was demonstrated to be promising for masking the bitterness of acetaminophen via drug encapsulation. The encapsulating mechanism was suggested to be based on the segregation of multicomponent droplets occurring during spray-drying. The spray-dried particles delayed the drug release within the mouth during the early period after administration and hence masked the bitterness. Indeed, taste-masking is achieved if, within a time frame of 1-2 min, the drug substance is either not released or the released amount is below the human threshold for identifying its bad taste. The aim of this work was (i) to evaluate the effect of various processing and formulation parameters on the taste-masking efficiency and (ii) to determine the formulation giving the best taste-masking effect. The four investigated input variables were inlet temperature (X1), spray flow (X2), sodium caseinate amount (X3) and lecithin amount (X4). The percentage of drug released during the first 2 min was considered as the response variable (Y). A 2⁴ full factorial design was applied and allowed screening for the most influential variables, i.e. the sodium caseinate and lecithin amounts. These two variables were therefore optimized by a simplex approach. The SEM and DSC results for spray-dried powder prepared under optimal conditions showed that the drug was well encapsulated. The drug release during the first 2 min significantly decreased, 7-fold less than for the unmasked drug particles. Therefore, the optimal formulation giving the best taste-masking effect was successfully achieved. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Optimal shielding design for minimum materials cost or mass

    DOE PAGES

    Woolley, Robert D.

    2015-12-02

    The mathematical underpinnings of cost-optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.
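    For reference, the variational conditions invoked above take the standard optimal-control form (a generic statement of the machinery, not the paper's specific shielding equations): with state $x$, shielding-material control $u$, running cost $L$, dynamics $\dot x = f(x,u)$ and costate $\lambda$,

```latex
H(x, u, \lambda) = L(x, u) + \lambda^{\mathsf{T}} f(x, u), \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\min_{u \in U} H\bigl(x^{*}(t), u, \lambda(t)\bigr),
```

    where the last condition is Pontryagin's minimum principle; for an interior minimizer it reduces to the stationarity condition $\partial H / \partial u = 0$ that accompanies the Euler-Lagrange equations.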

  6. Design, Development and Optimization of S (-) Atenolol Floating Sustained Release Matrix Tablets Using Surface Response Methodology

    PubMed Central

    Gunjal, P. T.; Shinde, M. B.; Gharge, V. S.; Pimple, S. V.; Gurjar, M. K.; Shah, M. N.

    2015-01-01

    The objective of the present investigation was to develop and formulate floating sustained release matrix tablets of s (-) atenolol using different polymer combinations and filler, to optimize them by using surface response methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Floating sustained release matrix tablets of various combinations were prepared with cellulose-based polymers: hydroxypropyl methylcellulose, sodium bicarbonate as a gas generating agent, polyvinyl pyrrolidone as a binder and lactose monohydrate as filler. The 3² full factorial design was employed to investigate the effect of formulation variables on different properties of the tablets: floating lag time, buoyancy time, % drug release in 1 and 6 h (D1h, D6h) and time required for 90% drug release (t90%). Significance of the results was analyzed using analysis of variance (ANOVA), and P < 0.05 was considered statistically significant. S (-) atenolol floating sustained release matrix tablets followed Higuchi drug release kinetics, indicating that the release of drug follows an anomalous (non-Fickian) diffusion mechanism. The developed floating sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet. PMID:26798171

  7. Design, Development and Optimization of S (-) Atenolol Floating Sustained Release Matrix Tablets Using Surface Response Methodology.

    PubMed

    Gunjal, P T; Shinde, M B; Gharge, V S; Pimple, S V; Gurjar, M K; Shah, M N

    2015-01-01

    The objective of the present investigation was to develop and formulate floating sustained release matrix tablets of s (-) atenolol using different polymer combinations and filler, to optimize them by using surface response methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Floating sustained release matrix tablets of various combinations were prepared with cellulose-based polymers: hydroxypropyl methylcellulose, sodium bicarbonate as a gas generating agent, polyvinyl pyrrolidone as a binder and lactose monohydrate as filler. The 3² full factorial design was employed to investigate the effect of formulation variables on different properties of the tablets: floating lag time, buoyancy time, % drug release in 1 and 6 h (D1h, D6h) and time required for 90% drug release (t90%). Significance of the results was analyzed using analysis of variance (ANOVA), and P < 0.05 was considered statistically significant. S (-) atenolol floating sustained release matrix tablets followed Higuchi drug release kinetics, indicating that the release of drug follows an anomalous (non-Fickian) diffusion mechanism. The developed floating sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet.
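    The 3² layout used above enumerates two factors at three coded levels each, giving nine runs. A sketch that also maps coded levels back to physical units (the factor names, centre values and step sizes are hypothetical, not the study's formulation amounts):

```python
from itertools import product

# Coded 3^2 full factorial: two formulation factors at levels -1, 0, +1.
coded = list(product((-1, 0, 1), repeat=2))

def decode(run, centers=(100.0, 50.0), deltas=(25.0, 10.0)):
    """Map coded levels to hypothetical real units (e.g. mg per tablet
    of polymer and of gas-generating agent -- invented numbers)."""
    return tuple(c + lvl * d for lvl, c, d in zip(run, centers, deltas))

design_mg = [decode(r) for r in coded]
```

    Three levels per factor are the minimum needed to fit the quadratic terms that response surface methodology uses to locate an optimum inside the design region.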

  8. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
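    A common way to express the robustness idea above is to optimize a mean-plus-variability measure of performance under input scatter. A toy sketch (the surrogate performance function and scatter level are invented, not the paper's TSTO model):

```python
import random
import statistics

def robustness_objective(design_x, n_samples=2000, weight=1.0, seed=0):
    """Robustness-style objective: mean + weight * stdev of a surrogate
    performance function evaluated under random input scatter."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        xi = design_x + rng.gauss(0.0, 0.1)   # input variability
        samples.append((xi - 2.0) ** 2)       # invented performance metric
    return statistics.mean(samples) + weight * statistics.stdev(samples)
```

    Minimizing this combined measure, rather than the nominal performance alone, is what makes the resulting design insensitive to the input variations.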

  9. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis such as first order reliability analysis, second order reliability analysis followed by simulation technique that
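    The Monte Carlo idea above can be sketched generically: sample the random material property from its fitted distribution and count the samples that fail a stress criterion (the distribution parameters below are hypothetical, not measured Kevlar® 49 data):

```python
import random

def mc_failure_probability(mean_strength=3600.0, cv=0.05,
                           applied_stress=3300.0, n=100_000, seed=42):
    """Monte Carlo failure-probability sketch: sample material strength
    from a normal distribution (invented mean and coefficient of
    variation) and count samples weaker than the applied stress."""
    rng = random.Random(seed)
    sd = cv * mean_strength
    failures = sum(rng.gauss(mean_strength, sd) < applied_stress
                   for _ in range(n))
    return failures / n

p_fail = mc_failure_probability()
```

    In an RBDO loop, an estimate like this (or a FORM/SORM approximation of it) becomes a reliability constraint alongside the usual performance constraints.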

  10. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
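    D-optimality as used above maximizes the determinant of the accumulated Fisher information for an item's parameters. A sketch for a 2PL item, showing that a spread of examinee abilities is more D-informative than abilities clustered near one point (the item parameters are illustrative):

```python
import math

def fisher_info_2pl(theta, a, b):
    """Per-response Fisher information matrix for the discrimination (a)
    and difficulty (b) parameters of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    w = p * (1.0 - p)
    d = theta - b
    return [[w * d * d, -w * a * d],
            [-w * a * d, w * a * a]]

def log_det_design(thetas, a=1.2, b=0.0):
    """log-determinant of the information accumulated over a set of
    examinee abilities: the D-optimality criterion for this item."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for t in thetas:
        fi = fisher_info_2pl(t, a, b)
        for i in range(2):
            for j in range(2):
                m[i][j] += fi[i][j]
    return math.log(m[0][0] * m[1][1] - m[0][1] * m[1][0])

spread = log_det_design([-1.5, -0.5, 0.5, 1.5])
clustered = log_det_design([-0.1, 0.1, -0.1, 0.1])
```

    Assigning the field-test item to examinees whose posterior abilities maximize this criterion is the essence of the adaptive calibration design.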

  11. Formulation and optimization of coated PLGA – Zidovudine nanoparticles using factorial design and in vitro in vivo evaluations to determine brain targeting efficiency

    PubMed Central

    Peter Christoper, G.V.; Vijaya Raghavan, C.; Siddharth, K.; Siva Selva Kumar, M.; Hari Prasad, R.

    2013-01-01

    In the current study, zidovudine loaded PLGA nanoparticles were prepared, coated and further investigated for their effectiveness in brain targeting. IR and DSC studies were performed to determine the interaction between the excipients used and to find out the nature of the drug in the formulation. Formulations were prepared by adopting a 2³ factorial design to evaluate the effects of process and formulation variables. The prepared formulations were subjected to in vitro and in vivo evaluations. In vitro evaluations showed particle sizes below 100 nm, entrapment efficiencies in the range of 28–57%, process yields of 60–76% and drug release in the range of 50–85%. Drug release from the formulations was found to follow the Higuchi pattern; the n-value obtained from the Korsmeyer plot was in the range of 0.56–0.78. In vivo evaluations were performed in mice after intraperitoneal administration of zidovudine drug solution, uncoated and coated formulations. The formulation coated with Tween 80 achieved a higher concentration in the brain than the drug in solution and the uncoated formulation. Stability studies indicated that there was no degradation of the drug in the formulation after 90 days when stored under refrigerated conditions. PMID:24648825
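    The n-value reported above comes from the Korsmeyer-Peppas relation Mt/M∞ = k·tⁿ, estimated as the slope of a log-log regression. A sketch with synthetic data generated at n = 0.65 (not the study's measurements):

```python
import math

def korsmeyer_n(t, frac_released):
    """Estimate the Korsmeyer-Peppas exponent n from
    log(Mt/Minf) = log k + n log t by ordinary least squares
    (conventionally applied to the portion below ~60% release)."""
    xs = [math.log(x) for x in t]
    ys = [math.log(f) for f in frac_released]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic release fractions following n = 0.65 exactly (invented data).
t = [0.5, 1, 2, 4, 8]
f = [0.064 * x ** 0.65 for x in t]
n_exp = korsmeyer_n(t, f)
```

    Values of n between 0.45 and 0.89 for spheres, like the 0.56–0.78 range above, are read as anomalous (non-Fickian) transport.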

  12. Regression analysis as a design optimization tool

    NASA Technical Reports Server (NTRS)

    Perley, R.

    1984-01-01

    The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.

  13. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  14. Sequence Factorial and Its Applications

    ERIC Educational Resources Information Center

    Asiru, Muniru A.

    2012-01-01

    In this note, we introduce sequence factorial and use this to study generalized M-bonomial coefficients. For the sequence of natural numbers, the twin concepts of sequence factorial and generalized M-bonomial coefficients, respectively, extend the corresponding concepts of factorial of an integer and binomial coefficients. Some latent properties…
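    Under one natural reading of the note (the definitions below are an assumption, not taken from the article), the sequence factorial of order n is the product of the first n terms of a sequence, and the generalized coefficients are ratios of such products, recovering ordinary factorials and binomial coefficients for the natural numbers:

```python
from math import comb

def seq_factorial(seq, n):
    """Product of the first n terms of a sequence: a_1 * a_2 * ... * a_n."""
    out = 1
    for k in range(1, n + 1):
        out *= seq(k)
    return out

def m_bonomial(seq, n, k):
    """Generalized binomial coefficient built from sequence factorials."""
    return seq_factorial(seq, n) // (seq_factorial(seq, k) * seq_factorial(seq, n - k))

natural = lambda k: k            # natural numbers recover n! and C(n, k)
assert seq_factorial(natural, 5) == 120
assert m_bonomial(natural, 5, 2) == comb(5, 2)

def fib(k):                      # Fibonacci sequence yields the "fibonomial" coefficients
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a
```

    For example, the fibonomial coefficient m_bonomial(fib, 5, 2) evaluates to 15, matching the classical fibonomial triangle.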

  15. Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.

    PubMed

    Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal

    2013-11-01

    In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, with the aim of meeting the respective ideal frequency response characteristics. CSO was devised by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in each iteration. Every cat has its own position composed of M dimensions, a velocity for each dimension, a fitness value representing how well the cat fits the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position reached by any of the cats, and CSO keeps the best solution found until the end of the iterations. The results of the proposed CSO-based approach have been compared with those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The performances of the CSO-designed FIR filters proved superior to those obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, both in convergence speed and in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
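    A heavily simplified sketch of the seeking/tracing scheme described above, applied to a toy minimization problem rather than FIR coefficient fitting; the parameter names (mixture ratio, seeking pool size, seeking range) and operators are assumptions, not the paper's exact formulation:

```python
import random

random.seed(0)

def cso_minimize(f, dim, n_cats=10, iters=200, mixture_ratio=0.2,
                 smp=5, srd=0.2, c=2.0):
    """Very simplified Cat Swarm Optimization sketch. A random fraction of
    cats trace toward the best-known position; the rest seek locally by
    evaluating a small pool of mutated copies and keeping the best."""
    cats = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_cats)]
    vels = [[0.0] * dim for _ in range(n_cats)]
    best = min(cats, key=f)[:]
    for _ in range(iters):
        for i, cat in enumerate(cats):
            if random.random() < mixture_ratio:      # tracing mode
                for d in range(dim):
                    vels[i][d] += random.random() * c * (best[d] - cat[d])
                    cat[d] += vels[i][d]
            else:                                    # seeking mode
                copies = [[x + random.gauss(0, srd) for x in cat]
                          for _ in range(smp)]
                cats[i] = min(copies + [cat], key=f)
        cand = min(cats, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)   # toy objective, minimum at the origin
sol = cso_minimize(sphere, dim=3)
```

    A real FIR design run would replace the sphere objective with an error measure between the candidate filter's frequency response and the ideal one.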

  16. Conceptuation, formulation and evaluation of sustained release floating tablets of captopril compression coated with gastric dispersible hydrochlorothiazide using 2(3) factorial design

    PubMed Central

    Sirisha, Pathuri Lakshmi; Babu, Govada Kishore; Babu, Puttagunta Srinivasa

    2014-01-01

    Ambulatory blood pressure monitoring is regarded as the gold standard for hypertensive therapy in non-dipping hypertension patients. A novel compression coated formulation of captopril and hydrochlorothiazide (HCTZ) was developed in order to improve the efficacy of antihypertensive therapy considering the half-life of both drugs. The synergistic action of combination therapy can be effectively achieved with sustained release captopril (t1/2 = 2.5 h) and fast releasing HCTZ (average t1/2 = 9.5 h). The sustained release floating tablets of captopril were prepared using a 2(3) factorial design employing three polymers, i.e., ethyl cellulose (EC), carbopol and xanthan gum, at two levels. The formulations (CF1-CF8) were optimized using analysis of variance for two response variables, buoyancy and T50%. Among the three polymers employed, the coefficients and P values for the response variables buoyancy and T50% using EC were found to be 3.824, 0.028 and 0.0196, 0.046, respectively. From the coefficients and P values for the two response variables, formulation CF2, which contains EC polymer alone at a high level, was selected as optimal. The CF2 formulation was further compression coated with the optimized gastric dispersible HCTZ layer (HF9). The compression coated tablet was then evaluated using drug release kinetics. The Q value of the HCTZ layer was achieved within 20 min following first order release, whereas the Q value of captopril was obtained at 6.5 h following the Higuchi model, demonstrating that rapid release of HCTZ and slow release of captopril were achieved. The mechanism of drug release was analyzed using the Peppas equation, which showed n > 0.90, confirming a case II transport mechanism for drug release. PMID:25006552
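    The eight runs of such a 2(3) design (each of the three polymers at two coded levels) can be enumerated directly; the coded factor order below is an assumption for illustration:

```python
from itertools import product

# Coded design matrix for a 2(3) full factorial: three factors, two levels each.
# Factor names follow the abstract; their order here is assumed.
factors = ["EC", "carbopol", "xanthan"]
runs = [dict(zip(factors, levels)) for levels in product((-1, +1), repeat=3)]

# 2**3 = 8 runs, corresponding to formulations CF1-CF8 in the paper's notation
assert len(runs) == 8
```

    Each run dictionary maps a polymer to its coded level; responses such as buoyancy and T50% would then be measured for each run and fed into the ANOVA.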

  17. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    PubMed Central

    2011-01-01

    Background In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can be tested either with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of two drugs added to standard treatment is assumed to reduce the hazard of death by 30%, and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to achieve 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance of being underpowered, of showing significance of only one drug even if both are equally effective, and of missing important interactions. Conclusions Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of individual drugs would require
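    The event count behind such a hazard-reduction comparison can be sketched with Schoenfeld's approximation; this is a generic illustration and does not reproduce the paper's 750-patient figure, which also depends on the event rate and follow-up time:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80):
    """Deaths required to detect hazard ratio `hr` in a 1:1 two-arm
    survival trial (Schoenfeld's approximation, two-sided alpha)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(4 * (za + zb) ** 2 / log(hr) ** 2)

# A 30% hazard reduction corresponds to HR = 0.7
events_single = schoenfeld_events(0.7)
```

    The factorial analogue must power each main-effect comparison separately (each drug's contribution to the combined HR is smaller), which is why the factorial sample size inflates several-fold as the abstract describes.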

  18. Blanket design and optimization demonstrations of the first wall/blanket/shield design and optimization system (BSDOS).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Nuclear Engineering Division

    2005-05-01

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, the dimensions of the different blanket zones, and the different requirements of the selected materials for satisfactory performance are the main parameters defining the blanket performance. These parameters translate into a large number of variables and design constraints, which need to be considered simultaneously in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide a state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates their use. In addition, use of the optimization capability of BSDOS can result in significant blanket performance enhancement and cost savings for the reactor design under consideration. Examples are presented that utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design to demonstrate some of the BSDOS blanket design capabilities.

  19. Blanket Design and Optimization Demonstrations of the First Wall/Blanket/Shield Design and Optimization System (BSDOS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Yousry

    2005-05-15

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, the dimensions of the different blanket zones, and the different requirements of the selected materials for satisfactory performance are the main parameters defining the blanket performance. These parameters translate into a large number of variables and design constraints, which need to be considered simultaneously in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide a state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates their use. In addition, use of the optimization capability of BSDOS can result in significant blanket performance enhancement and cost savings for the reactor design under consideration. Examples are presented that utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design to demonstrate some of the BSDOS blanket design capabilities.

  20. Design, engineering, and construction of photosynthetic microbial cell factories for renewable solar fuel production.

    PubMed

    Lindblad, Peter; Lindberg, Pia; Oliveira, Paulo; Stensjö, Karin; Heidorn, Thorsten

    2012-01-01

    There is an urgent need to develop sustainable solutions for converting solar energy into the energy carriers used in society. In addition to solar cells generating electricity, there are several options for generating solar fuels. This paper outlines and discusses the design and engineering of photosynthetic microbial systems for the generation of renewable solar fuels, with a focus on cyanobacteria. Cyanobacteria are prokaryotic microorganisms with the same type of photosynthesis as higher plants. Native and engineered cyanobacteria have been used by us and others as model systems to examine, demonstrate, and develop photobiological H(2) production. More recently, the production of carbon-containing solar fuels like ethanol, butanol, and isoprene has been demonstrated. We are using a synthetic biology approach to develop efficient photosynthetic microbial cell factories for direct generation of biofuels from solar energy. Present progress and advances in the design, engineering, and construction of such cyanobacterial cells for the generation of a portfolio of solar fuels, e.g., hydrogen, alcohols, and isoprene, are presented and discussed. Possibilities and challenges when introducing and using synthetic biology are highlighted.

  1. Plant Factory

    NASA Astrophysics Data System (ADS)

    Ikeda, Hideo

    Recently, much attention has been paid to the plant factory, since it enables plants to be grown stably under extraordinary climate conditions such as high or low air temperatures and low rainfall. Many questions, such as reducing investment costs, realizing stable plant production, and developing new growing techniques, must be solved to make this growing system popular. However, I think that highly developed Japanese industrial know-how can be introduced into plant factory systems, creating a business opportunity in the world market.

  2. Topology Optimization - Engineering Contribution to Architectural Design

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2017-10-01

    The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective's point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain defined by the shape of a structure is divided into sub-regions, to which different materials are assigned. During the design process material is relocated, but only within its selected region. The proposed idea has been inspired by architectural designs such as multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are discrete mathematical idealizations of physical systems. An engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction takes place only between neighbouring cells and is governed by simple, local update rules based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated on selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrate a significant influence of the material sub-region locations on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of the numerical calculations show that this approach produces different results compared with the classical one.
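    A toy illustration of a Cellular Automata local update rule of the kind described above; this is not the authors' rule, and the neighbourhood choice, threshold, and step size are all assumptions:

```python
def ca_update(density, sens, step=0.1):
    """One Cellular Automata sweep: each cell adjusts its material density
    using only its von Neumann neighbourhood (illustrative local rule)."""
    rows, cols = len(density), len(density[0])
    new = [row[:] for row in density]
    for i in range(rows):
        for j in range(cols):
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            vals = [sens[a][b] for a, b in nbrs if 0 <= a < rows and 0 <= b < cols]
            avg = (sens[i][j] + sum(vals)) / (1 + len(vals))
            # add material where the smoothed sensitivity is high, remove where low
            new[i][j] = min(1.0, max(0.0, density[i][j] + (step if avg > 0.5 else -step)))
    return new

# Toy 3x3 domain: high sensitivity on the left column only
dens = [[0.5] * 3 for _ in range(3)]
sens = [[1.0, 0.2, 0.0]] * 3
dens = ca_update(dens, sens)
```

    In an actual run the sensitivity field would come from a finite element analysis of the current material layout, and the sweep would be repeated until the solid/void pattern converges.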

  3. Process design for microbial plastic factories: metabolic engineering of polyhydroxyalkanoates.

    PubMed

    Aldor, Ilana S; Keasling, Jay D

    2003-10-01

    By implementing several metabolic engineering strategies, either individually or in combination, it is possible to construct microbial plastic factories that produce a variety of polyhydroxyalkanoate (PHA) biopolymers with desirable structures and material properties. Approaches include external substrate manipulation, inhibitor addition, recombinant gene expression, host cell genome manipulation and, most recently, protein engineering of PHA biosynthetic enzymes. In addition, mathematical models and molecular methods can be used to elucidate metabolically engineered systems and to identify targets for performance improvement.

  4. Advanced optimal design concepts for composite material aircraft repair

    NASA Astrophysics Data System (ADS)

    Renaud, Guillaume

    The application of an automated optimization approach for bonded composite patch design is investigated. To do so, a finite element computer analysis tool to evaluate patch design quality was developed. This tool examines both the mechanical and the thermal issues of the problem. The optimized shape is obtained with a bi-quadratic B-spline surface that represents the top surface of the patch. Additional design variables corresponding to the ply angles are also used. Furthermore, a multi-objective optimization approach was developed to treat multiple and uncertain loads. This formulation aims at designing according to the most unfavorable mechanical and thermal loads. The problem of finding the optimal patch shape for several situations is addressed. The objective is to minimize a stress component at a specific point in the host structure (plate) while ensuring acceptable stress levels in the adhesive. A parametric study is performed in order to identify the effects of various shape parameters on the quality of the repair and its optimal configuration. The effects of mechanical loads and service temperature are also investigated. Two bonding methods are considered, as they imply different thermal histories. It is shown that the proposed techniques are effective and inexpensive for analyzing and optimizing composite patch repairs. It is also shown that thermal effects should not only be present in the analysis, but that they play a paramount role on the resulting quality of the optimized design. In all cases, the optimized configuration results in a significant reduction of the desired stress level by deflecting the loads away from rather than over the damage zone, as is the case with standard designs. Furthermore, the automated optimization ensures the safety of the patch design for all considered operating conditions.

  5. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  6. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including aerodynamic tools for supersonic aircraft configurations, a systematic way to manage model uncertainty, and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework and include four analysis routines to estimate the lift and drag of a supersonic airfoil, together with a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  7. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…
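    The two-level factorial analysis taught in such a session reduces to contrasting mean responses at the high and low level of each factor; a minimal sketch with made-up emission data (not the article's results):

```python
from itertools import product

def main_effects(levels, response):
    """Main effect of each factor in a two-level full factorial:
    mean response at +1 minus mean response at -1."""
    n_factors = len(levels[0])
    effects = []
    for f in range(n_factors):
        hi = [y for run, y in zip(levels, response) if run[f] == +1]
        lo = [y for run, y in zip(levels, response) if run[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical 2-factor luminol example with additive responses
runs = list(product((-1, +1), repeat=2))
ys = [10 + 2 * a + 1 * b for a, b in runs]   # true main effects: A = 4, B = 2
effA, effB = main_effects(runs, ys)
```

    The larger a factor's effect relative to experimental noise, the more it matters for a long-lasting bright emission; that comparison is the core lesson of the factorial approach.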

  8. Optimal design of vertebrate and insect sarcomeres.

    PubMed

    Otten, E

    1987-01-01

    This paper offers a model for the normalized length-tension relation of a muscle fiber based upon sarcomere design. Comparison with measurements published by Gordon et al. ('66) shows an accurate fit as long as the inhomogeneity of sarcomere length in a single muscle fiber is taken into account. Sequential change of filament length and the length of the cross-bridge-free zone leads the model to suggest that most vertebrate sarcomeres tested match the condition of optimal construction for the output of mechanical energy over a full sarcomere contraction movement. Joint optimization of all three morphometric parameters suggests that a slightly better (0.3%) design is theoretically possible. However, this theoretical sarcomere, optimally designed for the conversion of energy, has a low normalized contraction velocity; it provides a poorer match to the combined functional demands of high energy output and high contraction velocity than the real sarcomeres of vertebrates. The sarcomeres in fish myotomes appear to be built suboptimally for isometric contraction, but built optimally for that shortening velocity generating maximum power. During swimming, these muscles do indeed contract concentrically only. The sarcomeres of insect asynchronous flight muscles contract only slightly. They are not built optimally for maximum output of energy across the full range of contraction encountered in vertebrate sarcomeres, but are built almost optimally for the contraction range that they do in fact employ.

  9. Design of optimized piezoelectric HDD-sliders

    NASA Astrophysics Data System (ADS)

    Nakasone, Paulo H.; Yoo, Jeonghoon; Silva, Emilio C. N.

    2010-04-01

    As storage data density in hard-disk drives (HDDs) increases at constant or shrinking sizes, precision positioning of HDD heads becomes an increasingly relevant issue for ensuring that enormous amounts of data are properly written and read. Since the traditional single-stage voice coil motor (VCM) cannot satisfy the positioning requirements of high-density tracks-per-inch (TPI) HDDs, dual-stage servo systems have been proposed to overcome this problem, using VCMs to coarsely move the HDD head while piezoelectric actuators provide fine and fast positioning. The aim of this work is thus to apply the topology optimization method (TOM) to design novel piezoelectric HDD heads, by finding the optimal placement of base-plate and piezoelectric material for high-precision positioning. The topology optimization method is a structural optimization technique that combines the finite element method (FEM) with optimization algorithms. The laminated finite element employs the MITC (mixed interpolation of tensorial components) formulation to provide accurate and reliable results. The topology optimization uses a rational approximation of material properties to vary the material properties between 'void' and 'filled' portions. The design problem consists in generating optimal structures that provide maximal displacements, appropriate structural stiffness and avoidance of resonance phenomena. These requirements are achieved by applying formulations that maximize displacements, minimize structural compliance and maximize resonance frequencies. This paper presents the implementation of the algorithms and shows results confirming the feasibility of this approach.

  10. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

    The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.

  11. A distribution-free multi-factorial profiler for harvesting information from high-density screenings.

    PubMed

    Besseris, George J

    2013-01-01

    Data screening is an indispensable phase in initiating the scientific discovery process. Fractional factorial designs offer quick and economical options for engineering highly-dense structured datasets. Maximum information content is harvested when a selected fractional factorial scheme is driven to saturation while data gathering is suppressed to no replication. A novel multi-factorial profiler is presented that allows screening of saturated-unreplicated designs by decomposing the examined response to its constituent contributions. Partial effects are sliced off systematically from the investigated response to form individual contrasts using simple robust measures. By isolating each time the disturbance attributed solely to a single controlling factor, the Wilcoxon-Mann-Whitney rank stochastics are employed to assign significance. We demonstrate that the proposed profiler possesses its own self-checking mechanism for detecting a potential influence due to fluctuations attributed to the remaining unexplainable error. Main benefits of the method are: 1) easy to grasp, 2) well-explained test-power properties, 3) distribution-free, 4) sparsity-free, 5) calibration-free, 6) simulation-free, 7) easy to implement, and 8) expanded usability to any type and size of multi-factorial screening designs. The method is elucidated with a benchmarked profiling effort for a water filtration process.

  12. A Distribution-Free Multi-Factorial Profiler for Harvesting Information from High-Density Screenings

    PubMed Central

    Besseris, George J.

    2013-01-01

    Data screening is an indispensable phase in initiating the scientific discovery process. Fractional factorial designs offer quick and economical options for engineering highly-dense structured datasets. Maximum information content is harvested when a selected fractional factorial scheme is driven to saturation while data gathering is suppressed to no replication. A novel multi-factorial profiler is presented that allows screening of saturated-unreplicated designs by decomposing the examined response to its constituent contributions. Partial effects are sliced off systematically from the investigated response to form individual contrasts using simple robust measures. By isolating each time the disturbance attributed solely to a single controlling factor, the Wilcoxon-Mann-Whitney rank stochastics are employed to assign significance. We demonstrate that the proposed profiler possesses its own self-checking mechanism for detecting a potential influence due to fluctuations attributed to the remaining unexplainable error. Main benefits of the method are: 1) easy to grasp, 2) well-explained test-power properties, 3) distribution-free, 4) sparsity-free, 5) calibration-free, 6) simulation-free, 7) easy to implement, and 8) expanded usability to any type and size of multi-factorial screening designs. The method is elucidated with a benchmarked profiling effort for a water filtration process. PMID:24009744
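    The Wilcoxon-Mann-Whitney building block of the profiler can be sketched directly; the responses below are hypothetical, and the method's full effect-slicing and self-checking machinery is omitted:

```python
def mann_whitney_u(xs, ys):
    """U statistic for two small samples, computed by direct pairwise
    comparison; ties contribute 1/2."""
    u = 0.0
    for x in xs:
        for y in ys:
            u += 1.0 if x > y else (0.5 if x == y else 0.0)
    return u

# Hypothetical saturated two-level screening: responses split by one
# factor's setting (four runs at +1, four runs at -1)
hi = [8.1, 7.9, 8.4, 8.0]
lo = [5.2, 5.6, 5.1, 5.9]
u = mann_whitney_u(hi, lo)   # maximum is len(hi) * len(lo) = 16
```

    Complete separation of the two groups (U at its maximum or minimum) flags the factor as a strong candidate effect; in the profiler this test is applied to each contrast after the other factors' contributions have been sliced off.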

  13. Multiobjective hyper heuristic scheme for system design and optimization

    NASA Astrophysics Data System (ADS)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective of the intended approach is to improve the worth of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the ability to study and analyze a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function that manages the low-level meta-heuristics to increase the likelihood of reaching a globally optimal solution. A Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and adds population diversity, helping to accomplish the pre-defined goals of the proposed scheme.

  14. Advanced Envelope Research for Factory Built Housing, Phase 3. Design Development and Prototyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, E.; Kessler, B.; Mullens, M.

    2014-01-01

    The Advanced Envelope Research effort will provide factory homebuilders with high performance, cost-effective alternative envelope designs. In the near term, these technologies will play a central role in meeting stringent energy code requirements. For manufactured homes, the thermal requirements, last updated by statute in 1994, will move up to the more rigorous IECC 2012 levels in 2013, the requirements of which are consistent with site built and modular housing. This places added urgency on identifying envelope technologies that the industry can implement in the short timeframe. The primary goal of this research is to develop wall designs that meet the thermal requirements based on 2012 IECC standards. Given the affordable nature of manufactured homes, impact on first cost is a major consideration in developing the new envelope technologies. This work is part of a four-phase, multi-year effort. Phase 1 identified seven envelope technologies and provided a preliminary assessment of three selected methods for building high performance wall systems. Phase 2 focused on the development of viable product designs, manufacturing strategies, addressing code and structural issues, and cost analysis of the three selected options. An industry advisory committee helped critique and select the most viable solution to move further in the research -- stud walls with continuous exterior insulation. Phase 3, the subject of the current report, focused on the design development of the selected wall concept and explored variations on the use of exterior foam insulation. The scope also included material selection, manufacturing and cost analysis, and prototyping and testing.

  16. Optimal Design of Gradient Materials and Bi-Level Optimization of Topology Using Targets (BOTT)

    NASA Astrophysics Data System (ADS)

    Garland, Anthony

    The objective of this research is to understand the fundamental relationships necessary to develop a method that optimizes both the topology and the internal gradient material distribution of a single object while meeting constraints and conflicting objectives. Functionally graded material (FGM) objects possess continuously varying material properties throughout, allowing an engineer to tailor individual regions of an object to specific mechanical properties by locally modifying the internal material composition. A variety of techniques exists for topology optimization, and several methods exist for FGM optimization, but combining the two is difficult. Understanding the relationship between topology and material gradient optimization enables the selection of an appropriate model and the development of algorithms that allow engineers to design high-performance parts which better meet design objectives than optimized homogeneous-material objects. For this research effort, topology optimization means finding the optimal connected structure with an optimal shape, while FGM optimization means finding the optimal macroscopic material properties within an object; tailoring the material constitutive matrix as a function of position results in gradient properties. Once the target macroscopic properties are known, a mesostructure or a particular material nanostructure can be found which yields the target material properties at each macroscopic point. This research demonstrates that topology and gradient materials can both be optimized together for a single part. The algorithms use a discretized model of the domain and gradient-based optimization algorithms. In addition, when considering two conflicting objectives, the algorithms in this research generate clear 'features' within a single part. This tailoring of material properties within different areas of a single part (automated design of 'features') using computational design tools is a novel benefit.

  17. Formulation and optimization by experimental design of eco-friendly emulsions based on d-limonene.

    PubMed

    Pérez-Mosqueda, Luis M; Trujillo-Cayado, Luis A; Carrillo, Francisco; Ramírez, Pablo; Muñoz, José

    2015-04-01

    d-Limonene is a naturally occurring solvent that can replace more polluting chemicals in agrochemical formulations. In the present work, a comprehensive study of the influence of the dispersed phase mass fraction, ϕ, and of the surfactant/oil ratio, R, on the emulsion stability and droplet size distribution of d-limonene-in-water emulsions stabilized by a non-ionic triblock copolymer surfactant has been carried out. A full factorial 3(2) experimental design was conducted in order to optimize the emulsion formulation. The independent variables, ϕ and R, were studied in the ranges 10-50 wt% and 0.02-0.1, respectively. The emulsions studied were mainly destabilized by both creaming and Ostwald ripening. Therefore, initial droplet size and an overall destabilization parameter, the so-called Turbiscan stability index, were used as dependent variables. The optimal formulation, combining minimum droplet size and maximum stability, was achieved at ϕ=50 wt% and R=0.062. Furthermore, response surface methodology allowed us to obtain a formulation yielding sub-micron emulsions using a single-step rotor/stator homogenization process instead of the more commonly used two-step emulsification methods. In addition, the optimal formulation was further improved against Ostwald ripening by adding silicone oil to the dispersed phase. The combination of these experimental findings allowed us to gain a deeper insight into the stability of these emulsions, which can be applied to the rational development of new formulations with potential application in agrochemicals. Copyright © 2015 Elsevier B.V. All rights reserved.
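
    The 3(2) full factorial described here enumerates all nine combinations of three coded levels of the two factors, and a response surface is then fitted by least squares. A hedged sketch with fabricated responses, not the study's data:

```python
# Illustration of a 3^2 full factorial in coded units and a quadratic
# response-surface fit by least squares. Factor levels and responses are
# fabricated for the sketch; they are not the paper's data.
import itertools
import numpy as np

levels = [-1, 0, 1]                              # coded levels for phi and R
runs = list(itertools.product(levels, levels))   # the 9 runs of a 3^2 design

# Fabricated response, e.g. a destabilization index to be minimized.
y = np.array([4.1, 3.0, 3.6, 2.9, 1.8, 2.2, 3.2, 2.1, 2.8])

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.array([[1, x1, x2, x1 * x2, x1**2, x2**2] for x1, x2 in runs])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))
```

    Because the coded columns of a full factorial are mutually orthogonal, each linear coefficient reduces to a simple contrast of the responses.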

  18. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for the design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
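
    The loop DAKOTA automates, invoking a computational model at points chosen by an iterative method and collecting the responses, can be mimicked in a few lines. A toy stand-in only: scipy's optimizer takes the place of DAKOTA's methods, and the quadratic "simulation" is fabricated, not a real code:

```python
# DAKOTA couples an iterative method to a black-box simulation. This is a
# minimal stand-in for that loop in Python (scipy's Nelder-Mead in place of
# DAKOTA's methods; the "simulation" is a toy function, not a real code).
import numpy as np
from scipy.optimize import minimize

def simulation(x):
    """Toy stand-in for an expensive computational model."""
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

# The driver invokes the model at points chosen by the iterative method,
# collects the responses, and converges on a design.
result = minimize(simulation, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(np.round(result.x, 3))  # close to [1, -2]
```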

  19. Identification of vehicle suspension parameters by design optimization

    NASA Astrophysics Data System (ADS)

    Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.

    2014-05-01

    The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results in the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analyses, such as ride and handling performance, can be implemented for further optimization.
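
    The reformulation step, collapsing many objectives into a single pseudo-objective by summing rank values, can be sketched as follows. This is a simplified, static version of the paper's dynamic rank summation, with fabricated objective values:

```python
# Simplified sketch of collapsing many objectives into one pseudo-objective
# by summing each candidate's rank on every objective (a static version of
# dynamic rank summation; all objectives are minimized, values fabricated).
import numpy as np

# Rows: candidate parameter sets; columns: objective values.
objectives = np.array([
    [0.9, 12.0, 3.1],
    [0.4, 15.0, 2.7],
    [0.6, 13.0, 2.9],
])

# Rank candidates per objective (0 = best), then sum ranks across objectives.
ranks = np.argsort(np.argsort(objectives, axis=0), axis=0)
rank_sum = ranks.sum(axis=1)
best = int(np.argmin(rank_sum))
print(rank_sum, best)
```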

  20. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.

  1. Development, Optimization, and Validation of a Microplate Bioassay for Relative Potency Determination of Linezolid Using a Design Space Concept, and its Measurement Uncertainty.

    PubMed

    Saviano, Alessandro Morais; Francisco, Fabiane Lacerda; Ostronoff, Celina Silva; Lourenço, Felipe Rebello

    2015-01-01

    The aim of this study was to develop, optimize, and validate a microplate bioassay for relative potency determination of linezolid in pharmaceutical samples using quality-by-design and design space approaches. In addition, a procedure is described for estimating relative potency uncertainty based on microbiological response variability. The influence of culture media composition was studied using a factorial design and a central composite design was adopted to study the influence of inoculum proportion and triphenyltetrazolium chloride in microbial growth. The microplate bioassay was optimized regarding the responses of low, medium, and high doses of linezolid, negative and positive controls, and the slope, intercept, and correlation coefficient of dose-response curves. According to optimization results, design space ranges were established using: (a) low (1.0 μg/mL), medium (2.0 μg/mL), and high (4.0 μg/mL) doses of pharmaceutical samples and linezolid chemical reference substance; (b) Staphylococcus aureus ATCC 653 in an inoculum proportion of 10%; (c) antibiotic No. 3 culture medium pH 7.0±0.1; (d) 6 h incubation at 37.0±0.1°C; and (e) addition of 50 μL of 0.5% (w/v) triphenyltetrazolium chloride solution. The microplate bioassay was linear (r2=0.992), specific, precise (repeatability RSD=2.3% and intermediate precision RSD=4.3%), accurate (mean recovery=101.4%), and robust. The overall measurement uncertainty was reasonable considering the increased variability inherent in microbiological response. Final uncertainty was comparable with those obtained with other microbiological assays, as well as chemical methods.

  2. Design of experiments to optimize an in vitro cast to predict human nasal drug deposition.

    PubMed

    Shah, Samir A; Dickens, Colin J; Ward, David J; Banaszek, Anna A; George, Chris; Horodnik, Walter

    2014-02-01

    Previous studies have shown that in vitro nasal spray tests cannot predict in vivo deposition, pharmacokinetics, or pharmacodynamics. This challenge makes it difficult to assess the deposition achieved with new technologies delivering to the therapeutically beneficial posterior nasal cavity. In this study, we determined the best parameters for using a regionally divided nasal cast to predict deposition. Our study used a model suspension and a design of experiments to produce repeatable deposition results that mimic nasal deposition patterns of nasal suspensions from the literature. The seven-section (nozzle locator, nasal vestibule, front turbinate, rear turbinate, olfactory region, nasopharynx, and throat filter) nylon nasal cast was based on computed tomography images of healthy humans. It was coated with a glycerol/Brij-35 solution to mimic mucus. After the cast was assembled and oriented, airflow was applied and a nasal spray containing a model suspension was actuated. After disassembling the cast, the drug deposited in each section was assayed by HPLC. The success criteria for optimal settings were based on nine in vivo studies in the literature. The design of experiments included exploratory and half-factorial screening experiments to identify variables affecting deposition (angles, airflow, and airflow time), optimization experiments, and then repeatability and reproducibility experiments. We found that tilt angle and airflow time after actuation affected deposition the most. The optimized settings were a flow rate of 16 L/min, a postactuation flow time of 12 sec, a tilt angle of 23°, nozzle angles of 0°, and an actuation speed of 5 cm/sec. Neither cast nor operator caused significant variation of results. We determined cast parameters that produce results resembling suspension nasal sprays in the literature. The results were repeatable and unaffected by operator or cast. These nasal spray parameters could be used to assess deposition from new devices or formulations. For human deposition

  3. Study of the effect of temperature, irradiance and salinity on growth and yessotoxin production by the dinoflagellate Protoceratium reticulatum in culture by using a kinetic and factorial approach.

    PubMed

    Paz, Beatriz; Vázquez, José A; Riobó, Pilar; Franco, José M

    2006-10-01

    A complete first-order orthogonal plan was used to optimize the growth and yessotoxin (YTX) production of the dinoflagellate Protoceratium reticulatum in culture by controlling salinity, temperature and irradiance. Initially, the kinetic data on cellular density and YTX production were analysed for each of the experimental design conditions. P. reticulatum growth and YTX production were fitted to logistic equations and to a first-order kinetic model, respectively. The parameters obtained from this fitting were used as dependent variables in formulating the empirical equations of the factorial design tested. The results showed that, in practically all cases for both P. reticulatum growth and YTX production, irradiance is the primary independent variable and has a positive effect in the range 50-90 micromol photons m(-2) s(-1). Additionally, in certain specific cases, temperature reveals significant positive effects when maintained between 15 and 23 degrees C, and salinity in the range of 20-34 displays negative effects. Despite the narrow ranges used in the work, the results showed the suitability of factorial analysis for evaluating the optimal conditions for growth and yessotoxin production by the dinoflagellate P. reticulatum.
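
    Fitting growth data to a logistic equation, as done here for cellular density, can be sketched with a standard nonlinear least-squares call. The data below are synthetic, not the study's measurements:

```python
# Sketch of fitting a logistic growth curve to cell-density data, as the
# abstract describes for P. reticulatum (synthetic data, not the study's).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: carrying capacity, r: growth rate, t0: time of inflection."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(0, 30, 16)                      # days in culture
true = logistic(t, K=8000.0, r=0.4, t0=12.0)    # cells/mL
rng = np.random.default_rng(0)
density = true + rng.normal(0, 50, t.size)      # add measurement noise

params, _ = curve_fit(logistic, t, density, p0=[6000.0, 0.2, 10.0])
K_fit, r_fit, t0_fit = params
print(round(K_fit), round(r_fit, 2), round(t0_fit, 1))
```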

  4. Engineering Translation in Mammalian Cell Factories to Increase Protein Yield: The Unexpected Use of Long Non-Coding SINEUP RNAs.

    PubMed

    Zucchelli, Silvia; Patrucco, Laura; Persichetti, Francesca; Gustincich, Stefano; Cotella, Diego

    2016-01-01

    Mammalian cells are an indispensable tool for the production of recombinant proteins in contexts where function depends on post-translational modifications. Among them, Chinese Hamster Ovary (CHO) cells are the primary factories for the production of therapeutic proteins, including monoclonal antibodies (MAbs). To improve expression and stability, several methodologies have been adopted, including methods based on media formulation, selective pressure and cell- or vector engineering. This review presents current approaches aimed at improving mammalian cell factories that are based on the enhancement of translation. Among well-established techniques (codon optimization and improvement of mRNA secondary structure), we describe SINEUPs, a family of antisense long non-coding RNAs that are able to increase translation of partially overlapping protein-coding mRNAs. By exploiting their modular structure, SINEUP molecules can be designed to target virtually any mRNA of interest, and thus to increase the production of secreted proteins. Thus, synthetic SINEUPs represent a new versatile tool to improve the production of secreted proteins in biomanufacturing processes.

  5. Optimal cure cycle design of a resin-fiber composite laminate

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeenson

    1987-01-01

    A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of the composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized, and various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.

  6. Towards robust optimal design of storm water systems

    NASA Astrophysics Data System (ADS)

    Marquez Calvo, Oscar; Solomatine, Dimitri

    2015-04-01

    In this study the focus is on the design of a storm water or combined sewer system. Such a system should be capable of handling most storms properly, to minimize the damage caused by flooding when the system lacks the capacity to cope with rain water at peak times. This is a multi-objective optimization problem: we have to take into account the minimization of construction costs, the minimization of damage costs due to flooding, and possibly other criteria. One of the most important factors influencing the design of storm water systems is the expected amount of water to deal with. It is common for this infrastructure to be developed with the capacity to cope with events that occur once in, say, 10 or 20 years - so-called design rainfall events. However, rainfall is a random variable, and such uncertainty typically is not taken explicitly into account in optimization. Design rainfall data are based on historical rainfall records, but often these records rest on unreliable measurements or are too short; moreover, rainfall patterns are changing regardless of the historical record. There are also other sources of uncertainty influencing design, for example, leakages in the pipes and accumulation of sediments in pipes. In the context of storm water or combined sewer system design or rehabilitation, a robust optimization technique should be able to find the best design (or rehabilitation plan) within the available budget while taking into account uncertainty in the variables that were used to design the system. In this work we consider various approaches to robust optimization proposed by various authors (Gabrel, Murat, Thiele 2013; Beyer, Sendhoff 2007) and test a novel method ROPAR (Solomatine 2012) to analyze robustness. References Beyer, H.G., & Sendhoff, B. (2007). Robust optimization - A comprehensive survey. Comput. Methods Appl. Mech. Engrg., 3190-3218. Gabrel, V.; Murat, C., Thiele, A. (2014

  7. Optimization of the intravenous glucose tolerance test in T2DM patients using optimal experimental design.

    PubMed

    Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O

    2009-06-01

    Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities of a more informative and/or less laborious study design of the insulin-modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin-modified IVGTT were evaluated: (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia, and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement of an optimal design compared to the basic design. The results showed that the design of the insulin-modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement, followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples per subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvements can be made to the design of the insulin-modified IVGTT and that the most important design factor was the placement of sample times, followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
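
    The core of optimal design methodology is choosing, for example, the sample times that maximize an information criterion. A toy D-optimality sketch for a one-compartment model follows; it is hand-rolled for illustration, not PopED, and the nominal parameter values are assumed:

```python
# Toy illustration of optimal experimental design: pick the two sample
# times that maximize the determinant of the Fisher information matrix
# (D-optimality) for a one-compartment model y = A*exp(-k*t).
import itertools
import numpy as np

A, k = 100.0, 0.3                       # assumed nominal parameter values

def fim(times):
    """Fisher information (up to a constant) from the model Jacobian."""
    t = np.asarray(times, dtype=float)
    J = np.column_stack([
        np.exp(-k * t),                 # dy/dA
        -A * t * np.exp(-k * t),        # dy/dk
    ])
    return J.T @ J

candidates = np.arange(0.5, 12.5, 0.5)  # candidate sampling times
best = max(itertools.combinations(candidates, 2),
           key=lambda ts: np.linalg.det(fim(ts)))
print(best)  # an early and a later time are jointly most informative
```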

  8. Geometry Modeling and Grid Generation for Design and Optimization

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1998-01-01

    Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.

  9. Novel lectin-modified poly(ethylene-co-vinyl acetate) mucoadhesive nanoparticles of carvedilol: preparation and in vitro optimization using a two-level factorial design.

    PubMed

    Varshosaz, Jaleh; Moazen, Ellaheh

    2014-08-01

    Carvedilol, used in cardiovascular diseases, has a systemic bioavailability of 25-35%. The objective of this study was the production of lectin-modified poly(ethylene-co-vinyl acetate) (PEVA) mucoadhesive nanoparticles to enhance the low oral bioavailability of carvedilol. Nanoparticles were prepared by the emulsification-solvent evaporation method using a two-level factorial design. The studied variables included the vinyl acetate content of the polymer, and the drug and polymer contents. Surface modification of PEVA nanoparticles with lectin was carried out by the adsorption method, and coupling efficiency was determined using the Bradford assay. Mucoadhesion of the nanoparticles was studied on mucin. The particle size, polydispersity index, zeta potential, drug loading and drug release from the nanoparticles were studied. The morphology of the nanoparticles and the crystalline state of the entrapped drug were studied by SEM, and by DSC and XRD, respectively. Results showed that the most effective factor on particle size and zeta potential was the interaction of polymer and drug content, while drug loading efficiency and mucoadhesion were more affected by the interaction of polymer type and drug content. Drug concentration was the most effective variable on the drug release rate. The drug was in an amorphous state in the nanoparticles. The optimum nanoparticles were obtained with 45 mg of copolymer containing 12% vinyl acetate/4.3 ml of organic phase and a drug concentration of 37.5 wt% of polymer.
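
    Effects in a two-level factorial like the one used here are estimated as the difference between mean responses at the high and low settings of each coded factor, or of a product of factors for interactions. A sketch with fabricated responses:

```python
# Sketch of estimating main and interaction effects from a two-level
# factorial (factors coded -1/+1; the responses are fabricated, not the
# study's particle-size data).
import itertools
import numpy as np

# 2^3 full factorial, e.g. vinyl acetate content, drug content, polymer content.
runs = np.array(list(itertools.product([-1, 1], repeat=3)))
size = np.array([210.0, 195.0, 250.0, 330.0, 205.0, 190.0, 245.0, 340.0])

def effect(contrast):
    """Effect = mean response at +1 minus mean response at -1."""
    return size[contrast == 1].mean() - size[contrast == -1].mean()

main = {j: effect(runs[:, j]) for j in range(3)}           # main effects
interaction_12 = effect(runs[:, 1] * runs[:, 2])           # two-factor interaction
print(main, round(interaction_12, 2))
```

    With these fabricated numbers the second factor's main effect and the 2x3 interaction dominate, mirroring the abstract's finding that interactions, not single factors, drove several responses.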

  10. Optimization process in helicopter design

    NASA Technical Reports Server (NTRS)

    Logan, A. H.; Banerjee, D.

    1984-01-01

    In optimizing a helicopter configuration, Hughes Helicopters uses a program called Computer Aided Sizing of Helicopters (CASH), written and updated over the past ten years, and used as an important part of the preliminary design process of the AH-64. First, measures of effectiveness must be supplied to define the mission characteristics of the helicopter to be designed. Then CASH allows the designer to rapidly and automatically develop the basic size of the helicopter (or other rotorcraft) for the given mission. This enables the designer and management to assess the various tradeoffs and to quickly determine the optimum configuration.

  11. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
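
    The sensitivity-equation idea, differentiate the residual equation that defines the state and then solve a linear equation for the state sensitivity instead of re-running the solver, can be shown on a scalar toy problem. This is an illustration under simplified assumptions, not the paper's CFD setting:

```python
# Toy illustration of the sensitivity equation method: the state u(a)
# solves a residual equation R(u, a) = 0 (a scalar stand-in for the flow
# equations). Differentiating R gives a *linear* equation for du/da, so no
# extra nonlinear solves (and, in the PDE setting, no mesh sensitivities)
# are needed.
from scipy.optimize import brentq

def solve_state(a):
    """'Flow solver': positive root of R(u, a) = u^2 + a*u - 1 = 0."""
    return brentq(lambda u: u**2 + a * u - 1.0, 0.0, 2.0)

def sensitivity(u, a):
    """From dR/du * du/da + dR/da = 0, i.e. (2u + a) du/da + u = 0."""
    return -u / (2.0 * u + a)

a = 0.5
u = solve_state(a)
fd = (solve_state(a + 1e-6) - solve_state(a - 1e-6)) / 2e-6  # finite-difference check
print(round(sensitivity(u, a), 6), round(fd, 6))
```

    The two printed values agree, showing the sensitivity obtained from one linear solve matches the finite-difference estimate that needed two extra nonlinear solves.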

  12. Metabolic modelling in the development of cell factories by synthetic biology

    PubMed Central

    Jouhten, Paula

    2012-01-01

    Cell factories are commonly microbial organisms utilized for bioconversion of renewable resources to bulk or high-value chemicals. Introduction of novel production pathways in chassis strains is the core of the development of cell factories by synthetic biology. Synthetic biology aims to create novel biological functions and systems not found in nature by combining biology with engineering. The workflow of the development of novel cell factories with synthetic biology is ideally linear, which will be attainable with a quantitative engineering approach, high-quality predictive models, and libraries of well-characterized parts. Different types of metabolic models, mathematical representations of metabolism and its components, enzymes and metabolites, are useful in particular phases of the synthetic biology workflow. In this minireview, the role of metabolic modelling in synthetic biology will be discussed with a review of the current status of compatible methods and models for the in silico design and quantitative evaluation of a cell factory. PMID:24688669

  13. Designing optimal universal pulses using second-order, large-scale, non-linear optimization

    NASA Astrophysics Data System (ADS)

    Anand, Christopher Kumar; Bain, Alex D.; Curtis, Andrew Thomas; Nie, Zhenghua

    2012-06-01

    Recently, RF pulse design using first-order and quasi-second-order methods has been actively investigated. We present a full second-order design method capable of incorporating relaxation and inhomogeneity in both B0 and B1. Our model is formulated as a generic optimization problem, making it easy to incorporate diverse pulse sequence features. To tame the computational cost, we present a method of calculating second derivatives in at most a constant multiple of the first-derivative calculation time; this is further accelerated by using symbolic solutions of the Bloch equations. We illustrate the relative merits and performance of quasi-Newton and full second-order optimization with a series of examples, showing that even a pulse already optimized using other methods can be visibly improved. To be useful in CPMG experiments, a universal refocusing pulse should be independent of the delay time and insensitive to the relaxation time and RF inhomogeneity. We design such a pulse and show that, using it, we can obtain reliable R2 measurements for offsets within ±γB1. Finally, we compare our optimal refocusing pulse with other published refocusing pulses in CPMG experiments.
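    A minimal forward model underlying such pulse designs can be sketched in a few lines. The example below is an illustrative sketch under one common sign convention, not the paper's optimizer (no relaxation or inhomogeneity): it propagates the magnetization through an on-resonance hard pulse as a sequence of small rotations about x, the quantity any derivative-based pulse optimization repeatedly differentiates.

```python
import math

# On-resonance Bloch rotation without relaxation (illustrative sketch, one
# common sign convention): an RF field along x rotates (My, Mz) by
# theta = gamma * B1 * dt per step.

def hard_pulse_step(m, theta):
    mx, my, mz = m
    return (mx,
            my * math.cos(theta) + mz * math.sin(theta),
            -my * math.sin(theta) + mz * math.cos(theta))

m = (0.0, 0.0, 1.0)                    # equilibrium magnetization
for _ in range(90):                    # ninety 1-degree sub-rotations
    m = hard_pulse_step(m, math.radians(1.0))
print(m)          # a 90-degree pulse tips Mz into the transverse plane
```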

  14. Particle identification at an asymmetric B Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coyle, P.; Eigen, G.; Hitlin, D.

    1991-09-01

    Particle identification systems are an important component of any detector at a high-luminosity, asymmetric B Factory. In particular, excellent hadron identification is required to probe CP violation in B{sup 0} decays to CP eigenstates. The particle identification systems discussed below also provide help in separating leptons from hadrons at low momenta. We begin this chapter with a discussion of the physics motivation for providing particle identification, the inherent limitations due to interactions and decays in flight, and the requirements for hermiticity and angular coverage. A special feature of an asymmetric B Factory is the resulting asymmetry in the momentum distribution as a function of polar angle; this will also be quantified and discussed. In the next section the three primary candidates, time-of-flight (TOF), energy loss (dE/dx), and Cerenkov counters, both ring-imaging and threshold, will be briefly described and evaluated. Following this, one of the candidates, a long-drift Cerenkov ring-imaging device, is described in detail to provide a reference design. Design considerations for a fast RICH are then described. A detailed discussion of aerogel threshold counter designs and associated R&D concludes the chapter. 56 refs., 64 figs., 13 tabs.

  15. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a relatively small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
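    The two tools named here fit in a short sketch. The example below uses hypothetical response data and the larger-is-better S/N ratio (one of several Taguchi forms): three two-level factors are assigned to an L4 orthogonal array, a signal-to-noise ratio is computed per run, and each factor's main effect on the S/N is estimated.

```python
import math

# Orthogonal-array study with a signal-to-noise ratio (illustrative sketch,
# hypothetical data). L4(2^3): three two-level factors in only four runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def sn_larger_is_better(ys):
    # Taguchi larger-is-better S/N, in dB
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

# two replicate responses per run (hypothetical measurements)
responses = [[21.0, 20.5], [24.2, 23.8], [19.1, 19.5], [26.0, 25.4]]
sn = [sn_larger_is_better(ys) for ys in responses]

effects = []
for f in range(3):                      # main effect of each factor on S/N
    hi = [sn[r] for r, lv in enumerate(L4) if lv[f] == 1]
    lo = [sn[r] for r, lv in enumerate(L4) if lv[f] == 0]
    effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
print(effects)   # choose, per factor, the level with the higher mean S/N
```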

  16. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  17. Replicating systems concepts: Self-replicating lunar factory and demonstration

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Automation of lunar mining and manufacturing facility maintenance and repair is addressed. Designing the factory as an automated, multiproduct, remotely controlled, reprogrammable Lunar Manufacturing Facility capable of constructing duplicates of itself which would themselves be capable of further replication is proposed.

  18. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  19. A general-purpose optimization program for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
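    The Augmented Lagrange Multiplier strategy named in the example combination can be sketched compactly. Below is a minimal, illustrative version, not ADS code: plain gradient descent stands in for the BFGS-plus-polynomial-search inner levels, and the problem and numbers are hypothetical.

```python
# Augmented Lagrange Multiplier sketch (illustrative; gradient descent stands
# in for ADS's optimizer and one-dimensional-search levels):
# minimize f(x) = x0^2 + x1^2  subject to  h(x) = x0 + x1 - 1 = 0.

def solve(rho=10.0, outer=20, inner=200, step=0.01):
    x = [0.0, 0.0]
    lam = 0.0                                # Lagrange multiplier estimate
    for _ in range(outer):
        for _ in range(inner):               # inner unconstrained minimization
            h = x[0] + x[1] - 1.0
            g = [2 * x[0] + lam + rho * h,   # grad of f + lam*h + (rho/2)*h^2
                 2 * x[1] + lam + rho * h]
            x = [x[0] - step * g[0], x[1] - step * g[1]]
        lam += rho * (x[0] + x[1] - 1.0)     # multiplier update
    return x, lam

x, lam = solve()
print(x, lam)   # approaches x = [0.5, 0.5], lam = -1
```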

  20. Optimization of acidic extraction of astaxanthin from Phaffia rhodozyma *

    PubMed Central

    Ni, Hui; Chen, Qi-he; He, Guo-qing; Wu, Guang-bin; Yang, Yuan-fan

    2008-01-01

    Optimization of a process for extracting astaxanthin from Phaffia rhodozyma by an acidic method was investigated with respect to several extraction factors, such as acids, organic solvents, temperature and time. Fractional factorial design, central composite design and response surface methodology were used to derive a statistically optimal model, which corresponded to the following optimal conditions: concentration of lactic acid at 5.55 mol/L, ratio of ethanol to yeast dry weight at 20.25 ml/g, temperature for cell disruption at 30 °C, and extraction time of 3 min. Under these conditions, astaxanthin and the total carotenoids could be extracted in amounts of 1294.7 μg/g and 1516.0 μg/g, respectively. This acidic method has advantages such as high extraction efficiency, low chemical toxicity and no special instrumentation requirements. Therefore, it might be a more feasible and practical method for industrial practice. PMID:18196613
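    The response-surface step at the heart of such optimizations reduces, in one factor, to fitting a quadratic and taking its stationary point. The sketch below uses hypothetical yields at three disruption temperatures, not the paper's data.

```python
# One-factor response-surface sketch (hypothetical data): fit the exact
# quadratic y = a*x^2 + b*x + c through three design points and return its
# stationary point -b/(2a), a maximum when a < 0.

def quadratic_vertex(x, y):
    (x0, x1, x2), (y0, y1, y2) = x, y
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
    return -b / (2 * a)          # optimum of the fitted response

# hypothetical astaxanthin yields (ug/g) at three disruption temperatures (C)
print(quadratic_vertex((20.0, 30.0, 40.0), (1100.0, 1290.0, 1180.0)))
```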

  1. In-Factory Learning - Qualification For The Factory Of The Future

    NASA Astrophysics Data System (ADS)

    Quint, Fabian; Mura, Katharina; Gorecky, Dominic

    2015-07-01

    The Industry 4.0 vision anticipates that internet technologies will find their way into future factories replacing traditional components by dynamic and intelligent cyber-physical systems (CPS) that combine the physical objects with their digital representation. Reducing the gap between the real and digital world makes the factory environment more flexible, more adaptive, but also more complex for the human workers. Future workers require interdisciplinary competencies from engineering, information technology, and computer science in order to understand and manage the diverse interrelations between physical objects and their digital counterpart. This paper proposes a mixed-reality based learning environment, which combines physical objects and visualisation of digital content via Augmented Reality. It uses reality-based interaction in order to make the dynamic interrelations between real and digital factory visible and tangible. We argue that our learning system does not work as a stand-alone solution, but should fit into existing academic and advanced training curricula.

  2. Ammonia-nitrogen and Phosphate Reduction by Bio-Filter using Factorial Design

    NASA Astrophysics Data System (ADS)

    Kasmuri, Norhafezah; Ashikin Mat Damin, Nur; Omar, Megawati

    2018-02-01

    Untreated landfill leachate is known to endanger the environment, so new treatments must be sought to ensure cost-effective and sustainable management. This paper reports the effectiveness of a bio-filter in removing pollutants. In this research, the reduction of nutrient concentrations was evaluated under two conditions: with a bio-filter and without a bio-filter. Synthetic wastewater was used in batch culture, conducted over 21 days with an initial ammonia-nitrogen concentration of 100 mg/L. The nitrification medium consisted of 100 mg/L of ammonia-nitrogen while the nitrite assay had none. A petri dish experiment was also conducted to check for the existence of any colonies. The results showed 22% ammonia-nitrogen reduction and 33% phosphate reduction in the nitrification medium with the bio-filter. The outcome showed that the bio-filter was capable of reducing the concentration of pollutants by retaining the slow-growing bacteria (AOB and NOB) on the plastic carrier surface. A factorial design was applied to study the effect of the initial ammonia-nitrogen concentration and duration on nitrite-nitrogen removal. Finally, a regression equation was produced to predict the rate of nitrite-nitrogen removal without conducting extended experiments and to reduce the number of trial experiments.
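    The kind of regression equation such a study ends with can be illustrated on a saturated two-level factorial (hypothetical removal-rate data, not the paper's): in coded units x1, x2 in {-1, +1}, the coefficients are simple averages of signed responses.

```python
# Regression equation from a 2^2 factorial in coded units (illustrative,
# hypothetical removal-rate data): y = b0 + b1*x1 + b2*x2 + b12*x1*x2.

runs = [(-1, -1, 12.0), (+1, -1, 20.0), (-1, +1, 15.0), (+1, +1, 27.0)]

b0 = sum(y for _, _, y in runs) / 4
b1 = sum(x1 * y for x1, _, y in runs) / 4
b2 = sum(x2 * y for _, x2, y in runs) / 4
b12 = sum(x1 * x2 * y for x1, x2, y in runs) / 4

def predict(x1, x2):
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

print(b0, b1, b2, b12)
```

    With four runs and four coefficients the fit is saturated, so the equation reproduces each run exactly; replicates or extra runs are what allow lack-of-fit checks.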

  3. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  4. Optimization, an Important Stage of Engineering Design

    ERIC Educational Resources Information Center

    Kelley, Todd R.

    2010-01-01

    A number of leaders in technology education have indicated that a major difference between the technological design process and the engineering design process is analysis and optimization. The analysis stage of the engineering design process is when mathematical models and scientific principles are employed to help the designer predict design…

  5. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    PubMed

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

    Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
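    The two figures of merit compared across the designs, bias and standard error of prediction, are computed as below (hypothetical reference/predicted pairs, not the study's data):

```python
import math

# Bias and bias-corrected standard error of prediction (SEP), the figures of
# merit used to compare calibration models (hypothetical values shown).

def bias_and_sep(reference, predicted):
    e = [p - r for r, p in zip(reference, predicted)]
    bias = sum(e) / len(e)
    sep = math.sqrt(sum((ei - bias) ** 2 for ei in e) / (len(e) - 1))
    return bias, sep

ref = [10.0, 12.0, 14.0, 16.0, 18.0]    # reference contents (hypothetical)
pred = [10.3, 11.8, 14.4, 15.9, 18.5]   # model predictions (hypothetical)
bias, sep = bias_and_sep(ref, pred)
print(bias, sep)
```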

  6. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design

  7. Integrated structure/control law design by multilevel optimization

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.; Schmidt, David K.

    1989-01-01

    A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.

  8. Translational Research in South Africa: Evaluating Implementation Quality Using a Factorial Design

    PubMed Central

    Smith, Edward A.; Collins, Linda M.; Graham, John W.; Lai, Mary; Wegner, Lisa; Vergnani, Tania; Matthews, Catherine; Jacobs, Joachim

    2012-01-01

    Background HealthWise South Africa: Life Skills for Adolescents (HW) is an evidence-based substance use and sexual risk prevention program that emphasizes the positive use of leisure time. Since 2000, this program has evolved from pilot testing through an efficacy trial involving over 7,000 youth in the Cape Town area. Beginning in 2011, through 2015, we are undertaking a new study that expands HW to all schools in the Metro South Education District. Objective This paper describes a research study designed in partnership with our South African collaborators that examines three factors hypothesized to affect the quality and fidelity of HW implementation: enhanced teacher training; teacher support, structure and supervision; and enhanced school environment. Methods Teachers and students from 56 schools in the Cape Town area will participate in this study. Teacher observations are the primary means of collecting data on factors affecting implementation quality. These factors address the practical concerns of teachers and schools related to likelihood of use and cost-effectiveness, and are hypothesized to be “active ingredients” related to high-quality program implementation in real-world settings. An innovative factorial experimental design was chosen to enable estimation of the individual effect of each of the three factors. Results Because this paper describes the conceptualization of our study, results are not yet available. Conclusions The results of this study may have both substantive and methodological implications for advancing Type 2 translational research. PMID:22707870

  9. High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models

    NASA Technical Reports Server (NTRS)

    Manning, Valerie Michelle

    1999-01-01

    The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.

  10. Brain-targeted intranasal zaleplon solid dispersion in hydrophilic carrier system; 2³ full-factorial design and in vivo determination of GABA neurotransmitter.

    PubMed

    Abd-Elrasheed, Eman; Nageeb El-Helaly, Sara; El-Ashmoony, Manal M; Salah, Salwa

    2018-05-01

    Intranasal zaleplon solid dispersion was formulated to enhance solubility and bioavailability and to deliver an effective therapy. Zaleplon belongs to Class II drugs and undergoes extensive first-pass metabolism after oral absorption, exhibiting 30% bioavailability. A 2³ full-factorial design was chosen for the investigation of solid dispersion formulations. The effects of the independent variables, drug-to-carrier ratio (1:1 and 1:2), carrier type (polyethylene glycol 4000 and poloxamer 407), and preparation method (solvent evaporation and freeze drying), on different dissolution parameters were studied. The dependent variables determined from the in vitro characterization and their constraints were set as follows: minimum mean dissolution time, maximum dissolution efficiency and maximum percentage release. Numerical optimization was performed according to the constraints set, based on the utilization of desirability functions. Differential scanning calorimetry, infrared spectroscopy, X-ray diffraction and scanning electron microscopy were performed. Ex vivo estimation of nasal cytotoxicity and assessment of the γ-aminobutyric acid (GABA) level in plasma and brain 1 h after nasal solid dispersion administration in rabbits, compared to the oral market product, were conducted. The selected zaleplon solid dispersion, with a desirability of 0.9, composed of poloxamer 407 at a drug-to-carrier ratio of 1:2, successfully enhanced the bioavailability, showing a 44% increase in GABA concentration over the marketed tablets.
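    Numerical optimization via desirability functions, as used here, can be sketched as follows (Derringer-Suich-style linear desirabilities; the responses and limits are hypothetical, not the paper's):

```python
import math

# Desirability-function optimization step (illustrative sketch, hypothetical
# responses and limits): map each response onto [0, 1], then combine with a
# geometric mean, as in Derringer-Suich numerical optimization.

def d_maximize(y, lo, hi):
    # 0 below lo, 1 above hi, linear in between
    return min(1.0, max(0.0, (y - lo) / (hi - lo)))

def d_minimize(y, lo, hi):
    return min(1.0, max(0.0, (hi - y) / (hi - lo)))

def overall(ds):
    return math.prod(ds) ** (1.0 / len(ds))   # geometric mean

d = [d_maximize(92.0, 60.0, 100.0),   # e.g. percentage release: maximize
     d_minimize(18.0, 10.0, 40.0),    # e.g. mean dissolution time: minimize
     d_maximize(85.0, 50.0, 90.0)]    # e.g. dissolution efficiency: maximize
print(overall(d))   # the formulation maximizing this value is selected
```

    The geometric mean makes the overall desirability zero whenever any single response is unacceptable, which is the property that drives this style of multi-response selection.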

  11. Design Optimization of Hybrid FRP/RC Bridge

    NASA Astrophysics Data System (ADS)

    Papapetrou, Vasileios S.; Tamijani, Ali Y.; Brown, Jeff; Kim, Daewon

    2018-04-01

    The hybrid bridge consists of a Reinforced Concrete (RC) slab supported by U-shaped Fiber Reinforced Polymer (FRP) girders. Previous studies on similar hybrid bridges constructed in the United States and Europe seem to substantiate these hybrid designs for lightweight, high-strength, and durable highway bridge construction. In the current study, computational and optimization analyses were carried out to investigate six composite material systems consisting of E-glass and carbon fibers. Optimization constraints are determined by stress, deflection and manufacturing requirements. Finite Element Analysis (FEA) and optimization software were utilized, and a framework was developed to run the complete analyses in an automated fashion. Prior to that, FEA validation of previous studies on similar U-shaped FRP girders that were constructed in Poland and Texas is presented. A finer optimization analysis is performed for the case of the Texas hybrid bridge. The optimization outcome of the hybrid FRP/RC bridge shows the appropriate composite material selection and cross-section geometry that satisfies all the applicable Limit States (LS) and, at the same time, results in the lightest design. Critical limit states show that shear stress criteria determine the optimum design for bridge spans less than 15.24 m, while deflection criteria control for longer spans. Increased side wall thickness can reduce maximum observed shear stresses, but leads to a high weight penalty. A taller cross-section and a thicker girder base can efficiently lower the observed deflections and normal stresses. Finally, substantial weight savings can be achieved by the optimization framework if base and side-wall thickness are treated as independent variables.

  12. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
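    For a single linear constraint, the hyper-sphere bound described above has a closed form: the largest sphere centered at the nominal parameter point that avoids the failure domain has radius equal to the distance to the constraint boundary. A minimal sketch with hypothetical constraint data follows (real problems use the paper's optimization-based bounding for nonlinear constraints):

```python
import math

# Largest safe hyper-sphere for one linear constraint g(p) = a.p - b <= 0
# (illustrative closed-form special case; hypothetical numbers).

def safe_radius(a, b, p0):
    margin = b - sum(ai * pi for ai, pi in zip(a, p0))   # -g(p0)
    return margin / math.sqrt(sum(ai * ai for ai in a))  # distance to g(p) = 0

print(safe_radius((3.0, 4.0), 25.0, (1.0, 2.0)))
```

    Any uncertainty model contained in this sphere keeps the constraint feasible; a negative result means the nominal design itself violates the constraint.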

  13. Muon Accelerator Program (MAP) | Neutrino Factory | Research Goals

    Science.gov Websites


  14. Purification optimization for a recombinant single-chain variable fragment against type 1 insulin-like growth factor receptor (IGF-1R) by using design of experiment (DoE).

    PubMed

    Song, Yong-Hong; Sun, Xue-Wen; Jiang, Bo; Liu, Ji-En; Su, Xian-Hui

    2015-12-01

    Design of experiment (DoE) is a statistics-based technique for experimental design that can overcome the shortcomings of the traditional one-factor-at-a-time (OFAT) approach to protein purification optimization. In this study, a DoE approach was applied to optimizing the purification of a recombinant single-chain variable fragment (scFv) against type 1 insulin-like growth factor receptor (IGF-1R) expressed in Escherichia coli. In the first capture step using Capto L, a two-level fractional factorial analysis followed by a central composite circumscribed (CCC) design was used to identify the optimal elution conditions. Two main effects, pH and trehalose, were identified, and high recovery (above 95%) and a low aggregate ratio (below 10%) were achieved at a pH range from 2.9 to 3.0 with 32-35% (w/v) trehalose added. In the second step, using cation exchange chromatography, an initial screening of media and elution pH and a following CCC design were performed, whereby the optimal selectivity of the scFv was obtained on Capto S at pH near 6.0, and the optimal conditions for achieving high dynamic binding capacity (DBC) and purity were identified as a pH range of 5.9-6.1 and a loading conductivity range of 5-12.5 mS/cm. After a further gel filtration step, the final purified scFv with a purity of 98% was obtained. Finally, the optimized conditions were verified by a 20-fold scale-up experiment. The purities and yields of intermediate and final products all fell within the regions predicted by the DoE approach, suggesting the robustness of the optimized conditions. We propose that the DoE approach described here is also applicable to the production of other recombinant antibody constructs. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
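    The life-cycle criterion described above can be sketched as a levelized energy cost: discount all costs and all energy over the plant life and take the ratio. The numbers below are hypothetical throughout, not JPL's cost model:

```python
# Levelized life-cycle energy cost (illustrative sketch, hypothetical numbers):
# discounted total cost divided by discounted total energy over the plant life.

def levelized_cost(capex, annual_om, annual_kwh, years, rate=0.08):
    pv_cost = capex + sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
    pv_kwh = sum(annual_kwh / (1 + rate) ** t for t in range(1, years + 1))
    return pv_cost / pv_kwh          # $/kWh

lec = levelized_cost(capex=1000.0, annual_om=20.0, annual_kwh=1500.0, years=20)
print(lec)
```

    Comparing this levelized cost with each year's marginal cost of energy then gives the logical plant lifetime the abstract mentions: operation stops paying off once the marginal cost exceeds the levelized value.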

  16. Integrated design optimization research and development in an industrial environment

    NASA Astrophysics Data System (ADS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-04-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues as well as on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on the integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages, together with their applications to some 2- and 3-dimensional design problems, are described.

  17. Integrated design optimization research and development in an industrial environment

    NASA Technical Reports Server (NTRS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-01-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues as well as on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on the integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages, together with their applications to some 2- and 3-dimensional design problems, are described.

  18. Design optimization of GaAs betavoltaic batteries

    NASA Astrophysics Data System (ADS)

    Chen, Haiyanag; Jiang, Lan; Chen, Xuyuan

    2011-06-01

    GaAs junctions are designed and fabricated for betavoltaic batteries. The design is optimized according to the characteristics of GaAs interface states and the carrier diffusion length in the GaAs depletion region. Under an illumination of 10 mCi cm-2 63Ni, the open circuit voltage of the optimized batteries is about 0.3 V. It is found that the GaAs interface states induce depletion layers on P-type GaAs surfaces. The depletion layer along the P+PN+ junction edge isolates the perimeter surface from the bulk junction, which tends to significantly reduce the battery dark current and leads to a high open circuit voltage. The short circuit current density of the optimized junction is about 28 nA cm-2, which indicates a carrier diffusion length of less than 1 µm. The overall results show that multi-layer P+PN+ junctions are the preferred structures for GaAs betavoltaic battery design.

  19. Virtual patients design and its effect on clinical reasoning and student experience: a protocol for a randomised factorial multi-centre study.

    PubMed

    Bateman, James; Allen, Maggie E; Kidd, Jane; Parsons, Nick; Davies, David

    2012-08-01

    Virtual Patients (VPs) are web-based representations of realistic clinical cases. They are proposed as an optimal method for teaching clinical reasoning skills. International standards exist which define precisely what constitutes a VP. There are multiple design possibilities for VPs; however, there is little formal evidence to support individual design features. The purpose of this trial is to explore the effect of two different, potentially important design features on clinical reasoning skills and the student experience: branching case pathways (present or absent) and structured clinical reasoning feedback (present or absent). This is a multi-centre randomised 2 x 2 factorial design study evaluating two independent variables of VP design, branching (present or absent) and structured clinical reasoning feedback (present or absent). The study will be carried out with medical student volunteers from one year group at three university medical schools in the United Kingdom: Warwick, Keele and Birmingham. There are four core musculoskeletal topics. Each case can be designed in four different ways, equating to 16 VPs required for the research. Students will be randomised to four groups, completing the four VP topics in the same order, but with each group exposed to a different VP design sequentially. All students will be exposed to the four designs. The primary outcomes are performance for each case design in a standardised fifteen-item clinical reasoning assessment, integrated into each VP, which is identical for each topic. Additionally, a 15-item self-reported evaluation is completed for each VP, based on a widely used EViP tool. Student patterns of use of the VPs will be recorded. In one centre, formative clinical and examination performance will be recorded, along with a self-reported pre- and post-intervention reasoning score, the DTI. Our power calculations indicate a sample size of 112 is required for both primary outcomes. This trial will provide

  20. DESIGN AND OPTIMIZATION OF A REFRIGERATION SYSTEM

    EPA Science Inventory

    The paper discusses the design and optimization of a refrigeration system, using a mathematical model of a refrigeration system modified to allow its use with the optimization program. The model was developed using only algebraic equations so that it could be used with the optimiz...

  1. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) compared to traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
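
    The main-effects logic behind a Taguchi analysis can be sketched on a toy problem. The example below is entirely hypothetical: it uses just two 2-level factors with a larger-the-better signal-to-noise ratio, whereas the study itself uses an L25 orthogonal array with five-level factors and full ANOVA, which this sketch does not reproduce.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better signal-to-noise ratio in dB."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# hypothetical design: (factor A level, factor B level, replicated power readings)
runs = [
    (1, 1, [4.1, 4.3]),
    (1, 2, [5.0, 5.2]),
    (2, 1, [3.2, 3.0]),
    (2, 2, [4.0, 4.1]),
]
sn = [sn_larger_is_better(ys) for _, _, ys in runs]

def mean_sn(factor_index, level):
    """Average S/N over the runs where the factor sits at the given level."""
    vals = [sn[i] for i, run in enumerate(runs) if run[factor_index] == level]
    return sum(vals) / len(vals)

# pick, per factor, the level with the best average S/N (the Taguchi main effect)
best_A = max((1, 2), key=lambda lvl: mean_sn(0, lvl))
best_B = max((1, 2), key=lambda lvl: mean_sn(1, lvl))
```

    The per-level S/N averages are exactly what a Taguchi response table shows; combining the best level of each factor yields the predicted near-optimum configuration without visiting every combination.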

  2. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    for public release. magnitude reduction in estimator error required to make solving the exact optimal design problem tractable. Instead of using a naive...for designing a sequence of experiments uses suboptimal approaches: batch design that has no feedback, or greedy (myopic) design that optimally...approved for public release. Equation 1 is difficult to solve directly, but can be expressed in an equivalent form using the principle of dynamic programming

  3. Optimal lay-up design of variable stiffness laminated composite plates by a layer-wise optimization technique

    NASA Astrophysics Data System (ADS)

    Houmat, A.

    2018-02-01

    The optimal lay-up design for the maximum fundamental frequency of variable stiffness laminated composite plates is investigated using a layer-wise optimization technique. The design variables are two fibre orientation angles per ply. Thin plate theory is used in conjunction with a p-element to calculate the fundamental frequencies of symmetrically and antisymmetrically laminated composite plates. Comparisons with existing optimal solutions for constant stiffness symmetrically laminated composite plates show excellent agreement. It is observed that the maximum fundamental frequency can be increased considerably using variable stiffness design as compared to constant stiffness design. In addition, optimal lay-ups for the maximum fundamental frequency of variable stiffness symmetrically and antisymmetrically laminated composite plates with different aspect ratios and various combinations of free, simply supported and clamped edge conditions are presented. These should prove a useful benchmark for optimal lay-ups of variable stiffness laminated composite plates.

  4. System design optimization for a Mars-roving vehicle and perturbed-optimal solutions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Pavarini, C.

    1974-01-01

    Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.

  5. Imparting Desired Attributes by Optimization in Structural Design

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2003-01-01

    Commonly available optimization methods typically produce a single optimal design as a constrained minimum of a particular objective function. However, in engineering design practice it is quite often important to explore as much of the design space as possible with respect to many attributes, to find out what behaviors are possible and not possible within the initially adopted design concept. The paper shows that the very simple method of the sum of objectives is useful for such exploration. By a geometrical argument it is demonstrated that if every weighting coefficient is allowed to change its magnitude and its sign, then the method returns a set of designs that are all feasible, diverse in their attributes, and include the Pareto and non-Pareto solutions, at least for convex cases. Numerical examples in the paper include a case of an aircraft wing structural box with thousands of degrees of freedom and constraints, and over 100 design variables, whose attributes are structural mass, volume, displacement, and frequency. The method is inherently suitable for parallel, coarse-grained implementation that enables exploration of the design space in the elapsed time of a single structural optimization.

  6. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  7. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
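
    The trade-off the paper addresses can be made concrete with a small computation. The sketch below is hypothetical and uses Scheffé's linear model for three components: the vertex-only minimal design maximizes the per-run information determinant, while adding an interior centroid point (which is what buys interior prediction and lack-of-fit degrees of freedom) lowers it.

```python
import numpy as np

def info_det(points):
    """Determinant of the normalized information matrix M = X'X / n for
    Scheffe's linear mixture model y = b1*x1 + b2*x2 + b3*x3, where each
    design point is a mixture (x1, x2, x3) summing to 1."""
    X = np.array(points, dtype=float)
    return float(np.linalg.det(X.T @ X / len(points)))

vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]            # minimal D-optimal support
centroid = [(1 / 3, 1 / 3, 1 / 3)]                      # one added interior point

d_minimal = info_det(vertices)              # = det(I/3) = 1/27
d_augmented = info_det(vertices + centroid)  # smaller per-run determinant
```

    The augmented design is less D-efficient per run, but unlike the minimal design it supports prediction away from the simplex boundary, which is precisely the motivation for the extensions proposed in the paper.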

  8. Design Optimization Programmable Calculators versus Campus Computers.

    ERIC Educational Resources Information Center

    Savage, Michael

    1982-01-01

    A hypothetical design optimization problem and technical information on the three design parameters are presented. Although this nested iteration problem can be solved on a computer (flow diagram provided), this article suggests that several hand held calculators can be used to perform the same design iteration. (SK)

  9. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
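
    The design-then-fit workflow described above can be sketched as follows. The generator mirrors a four-factor, three-level face-centered composite layout (a single center point is assumed for brevity), and the "mixing efficiency" response with its coefficients is a synthetic stand-in for the CFD solutions, not data from the study.

```python
import itertools
import numpy as np

def face_centered_ccd(k):
    """Face-centered central composite design (axial distance 1), so every
    factor takes exactly three coded levels: -1, 0, +1."""
    corners = list(itertools.product([-1, 1], repeat=k))
    axial = []
    for i in range(k):
        for sign in (-1, 1):
            point = [0] * k
            point[i] = sign
            axial.append(tuple(point))
    return corners + axial + [tuple([0] * k)]

def quadratic_model_matrix(design):
    """Columns: intercept, x_i, x_i^2, x_i*x_j (full second-order surface)."""
    rows = []
    for x in design:
        row = [1.0] + list(x) + [xi * xi for xi in x]
        row += [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
        rows.append(row)
    return np.array(rows)

design = face_centered_ccd(4)      # 16 corners + 8 axial + 1 center = 25 runs
X = quadratic_model_matrix(design)  # 25 runs x 15 model terms

# synthetic "mixing efficiency" response, invented purely to show the fit
rng = np.random.default_rng(0)
y = 0.9 - 0.02 * X[:, 1] + 0.01 * X[:, 2] - 0.015 * X[:, 5] \
    + 0.002 * rng.standard_normal(len(X))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # response surface coefficients
```

    The fitted coefficient vector beta plays the role of the response surface: once it is in hand, the optimizer searches the coded cube (or outside it, as the abstract concludes is necessary here) instead of rerunning the expensive simulations.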

  10. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally more demanding than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279

  11. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally more demanding than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.

  12. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as the searched variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
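
    The core idea, searching over the amplitude coefficients of a truncated orthogonal series rather than over nodal geometry, can be sketched with a toy stand-in for the modal analysis. Everything below (the profile, the "frequency" functional, the target) is invented for illustration; a real implementation would compute eigenfrequencies from a finite-element model of the bar.

```python
import math
import random

def profile(coeffs, x):
    """Bar profile as a truncated cosine series: the searched variables are
    the few amplitude coefficients, not hundreds of nodal thicknesses."""
    return 1.0 + sum(a * math.cos((k + 1) * math.pi * x) for k, a in enumerate(coeffs))

def toy_frequencies(coeffs):
    """Stand-in for the modal analysis: two weighted averages of the profile.
    (A real code would solve an eigenproblem for the actual mode shapes.)"""
    xs = [i / 50.0 for i in range(51)]
    t = [profile(coeffs, x) for x in xs]
    f1 = sum(ti * (1.0 + xi) for ti, xi in zip(t, xs)) / len(xs)
    f2 = sum(ti * xi * xi for ti, xi in zip(t, xs)) / len(xs)
    return f1, f2

target = toy_frequencies([-0.3, 0.5, 0.0])  # frequencies of a known, reachable shape

def error(coeffs):
    return sum((f - g) ** 2 for f, g in zip(toy_frequencies(coeffs), target))

# plain simulated annealing over the three series coefficients
random.seed(1)
cur, cur_err = [0.0, 0.0, 0.0], error([0.0, 0.0, 0.0])
best, best_err, temp = list(cur), cur_err, 1.0
for _ in range(2000):
    cand = [a + random.gauss(0.0, 0.05) for a in cur]
    cand_err = error(cand)
    # Metropolis acceptance lets the search escape suboptimal local minima
    if cand_err < cur_err or random.random() < math.exp((cur_err - cand_err) / temp):
        cur, cur_err = cand, cand_err
        if cand_err < best_err:
            best, best_err = list(cand), cand_err
    temp *= 0.998
```

    The annealer here searches only three coefficients; a nodal parameterization of the same profile at 51 points would need 51 searched variables, which is the saving the paper is after.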

  13. Rotor design optimization using a free wake analysis

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Boschitsch, Alexander H.; Wachspress, Daniel A.; Chua, Kiat

    1993-01-01

    The aim of this effort was to develop a comprehensive performance optimization capability for tiltrotor and helicopter blades. The analysis incorporates the validated EHPIC (Evaluation of Hover Performance using Influence Coefficients) model of helicopter rotor aerodynamics within a general linear/quadratic programming algorithm that allows optimization using a variety of objective functions involving the performance. The resulting computer code, EHPIC/HERO (HElicopter Rotor Optimization), improves upon several features of the previous EHPIC performance model and allows optimization utilizing a wide spectrum of design variables, including twist, chord, anhedral, and sweep. The new analysis supports optimization of a variety of objective functions, including weighted measures of rotor thrust, power, and propulsive efficiency. The fundamental strength of the approach is that an efficient search for improved versions of the baseline design can be carried out while retaining the demonstrated accuracy inherent in the EHPIC free wake/vortex lattice performance analysis. Sample problems are described that demonstrate the success of this approach for several representative rotor configurations in hover and axial flight. Features that were introduced to convert earlier demonstration versions of this analysis into a generally applicable tool for researchers and designers are also discussed.

  14. Optimization of parameters affecting signal intensity in an LTQ-orbitrap in negative ion mode: A design of experiments approach.

    PubMed

    Lemonakis, Nikolaos; Skaltsounis, Alexios-Leandros; Tsarbopoulos, Anthony; Gikas, Evagelos

    2016-01-15

    A multistage optimization of all the parameters affecting detection/response in an LTQ-orbitrap analyzer was performed, using a design of experiments methodology. The signal intensity, a critical issue for mass analysis, was investigated, and the optimization process was completed in three successive steps, taking into account the three main regions of an orbitrap: the ion generation, ion transmission and ion detection regions. Oleuropein and hydroxytyrosol were selected as the model compounds. Overall, applying this methodology, the sensitivity was increased by more than 24% and the resolution by more than 6.5%, whereas the elapsed scan time was reduced to nearly half. A high-resolution LTQ Orbitrap Discovery mass spectrometer was used for the determination of the analytes of interest. Thus, oleuropein and hydroxytyrosol were infused via the instrument's syringe pump and analyzed employing electrospray ionization (ESI) in the negative high-resolution full-scan ion mode. The parameters of the three main regions of the LTQ-orbitrap were independently optimized in terms of maximum sensitivity. In this context, factorial design, response surface model and Plackett-Burman experiments were performed, and analysis of variance was carried out to evaluate the validity of the statistical model and to determine the most significant parameters for signal intensity. The optimum MS conditions for each analyte were summarized, and the overall optimum condition was achieved by maximizing the desirability function. Our observations showed good agreement between the predicted optimum response and the responses collected at the predicted optimum conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
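
    The final step, combining per-response optima through a desirability function, can be sketched as follows. The response values, ranges, and setting labels are hypothetical; only the mechanics (individual desirabilities mapped onto [0, 1], then combined by a geometric mean) follow the desirability-function approach named in the abstract.

```python
def d_larger_is_better(y, lo, hi, weight=1.0):
    """Derringer-Suich desirability for a response to be maximized:
    0 below lo, 1 above hi, with a power-law ramp in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def overall_desirability(ds):
    """Geometric mean: any single unacceptable response vetoes the setting."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# hypothetical responses at two candidate instrument settings:
# (signal intensity in counts, resolving power)
settings = {
    "A": (8.0e5, 31000.0),
    "B": (6.5e5, 34000.0),
}
score = {
    name: overall_desirability([
        d_larger_is_better(sens, 5.0e5, 9.0e5),   # assumed acceptable range
        d_larger_is_better(res, 30000.0, 35000.0),  # assumed acceptable range
    ])
    for name, (sens, res) in settings.items()
}
best = max(score, key=score.get)
```

    Because the geometric mean punishes any near-zero individual desirability, the chosen setting is a balanced compromise rather than the winner on a single response, which is why the approach suits multi-response optimizations like this one.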

  15. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between the stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  16. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  17. Optimal design of composite hip implants using NASA technology

    NASA Technical Reports Server (NTRS)

    Blake, T. A.; Saravanos, D. A.; Davy, D. T.; Waters, S. A.; Hopkins, D. A.

    1993-01-01

    Using an adaptation of NASA software, we have investigated the use of numerical optimization techniques for the shape and material optimization of fiber composite hip implants. The NASA in-house codes were originally developed for the optimization of aerospace structures. The adapted code, called OPORIM, couples numerical optimization algorithms with finite element analysis and composite laminate theory to perform design optimization using both shape and material design variables. The external and internal geometry of the implant and the surrounding bone is described with quintic spline curves. This geometric representation is then used to create an equivalent 2-D finite element model of the structure. Using laminate theory and the 3-D geometric information, equivalent stiffnesses are generated for each element of the 2-D finite element model, so that the 3-D stiffness of the structure can be approximated. The geometric information to construct the model of the femur was obtained from a CT scan. A variety of test cases were examined, incorporating several implant constructions and design variable sets. Typically the code was able to produce optimized shape and/or material parameters which substantially reduced stress concentrations in the bone adjacent to the implant. The results indicate that this technology can provide meaningful insight into the design of fiber composite hip implants.

  18. Production of highly efficient activated carbons from industrial wastes for the removal of pharmaceuticals from water-A full factorial design.

    PubMed

    Jaria, Guilaine; Silva, Carla Patrícia; Oliveira, João A B P; Santos, Sérgio M; Gil, María Victoria; Otero, Marta; Calisto, Vânia; Esteves, Valdemar I

    2018-02-26

    The wide occurrence of pharmaceuticals in aquatic environments urges the development of cost-effective solutions for their removal from water. In a circular economy context, primary paper mill sludge (PS) was used to produce activated carbon (AC) aiming at the adsorptive removal of these contaminants. The use of low-cost precursors for the preparation of ACs capable of competing with commercial ACs continues to be a challenge. A full factorial design of four factors (pyrolysis temperature, residence time, precursor/activating agent ratio, and type of activating agent) at two levels was applied to the production of AC using PS as precursor. The responses analysed were the production yield, percentage of adsorption for three pharmaceuticals (sulfamethoxazole, carbamazepine, and paroxetine), specific surface area (SBET), and total organic carbon (TOC). Statistical analysis was performed to evaluate the influencing factors in the responses and to determine the most favourable production conditions. Four ACs presented very good responses, namely for the adsorption of the pharmaceuticals under study (average adsorption percentage around 78%, which is above that of commercial AC), and SBET between 1389 and 1627 m² g⁻¹. A desirability analysis pointed to 800 °C for 60 min and a precursor/KOH ratio of 1:1 (w/w) as the optimal production conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
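
    The two-level, four-factor full factorial screening used here has a compact structure that can be sketched in a few lines. The factor names mirror the abstract, but the response values and effect sizes below are invented purely to show how main effects are read off such a design.

```python
import itertools

factors = ["temperature", "residence_time", "precursor_ratio", "activating_agent"]
# two coded levels per factor: 2**4 = 16 runs cover every combination
design = list(itertools.product([-1, 1], repeat=len(factors)))

def main_effect(design, response, factor_index):
    """Difference between the mean response at the high and low levels."""
    hi = [r for run, r in zip(design, response) if run[factor_index] == 1]
    lo = [r for run, r in zip(design, response) if run[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# toy adsorption responses in which temperature (index 0) dominates
response = [50 + 15 * run[0] + 2 * run[1] for run in design]
effects = {f: main_effect(design, response, i) for i, f in enumerate(factors)}
```

    Ranking the absolute main effects (here temperature first, residence time second, the other two negligible) is the screening step that points the subsequent desirability analysis at the factors worth tuning.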

  19. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of the design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  20. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  1. Analysis of Optimal Transport Route Determination of Oil Palm Fresh Fruit Bunches from Plantation to Processing Factory

    NASA Astrophysics Data System (ADS)

    Tarigan, U.; Sidabutar, R. F.; Tarigan, U. P. P.; Chen, A.

    2018-04-01

    Manufacturers producing CPO and kernels from oil palm fresh fruit bunches (FFB) harvested on their own plantations commonly face a transportation problem: the distance traveled by the FFB trucks from plantation to factory often varies because transport instructions are not specific. This research was conducted to determine the optimal transportation route in terms of distance, time, and number of routes. The routes were determined using the Nearest Neighbour and Clarke & Wright Savings methods. Based on the calculations performed, in harvest area I the Nearest Neighbour method gave a distance of 200.78 km, while Clarke & Wright Savings gave 214.09 km. For harvest area II, the Nearest Neighbour method gave 264.37 km and the Clarke & Wright Savings method a total distance of 264.33 km. Comparing the time required for all FFB transport activities against the drivers' working time allowed the number of trucks to be reduced from 8 units to 5 units, along with a fuel-efficiency improvement of 0.8%.
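    The Clarke & Wright Savings method named above follows a standard construction: start with one route per customer and repeatedly merge the pair of routes offering the largest distance saving, subject to capacity. A minimal sketch with illustrative data (not the study's plantation distances):

```python
# Clarke & Wright savings heuristic for route construction; a minimal
# sketch with illustrative distances, not the paper's plantation network.
def clarke_wright(depot_dist, pair_dist, capacity, demand):
    """depot_dist[i]: depot->customer i; pair_dist[(i, j)], i < j: customer i->j."""
    n = len(depot_dist)
    # Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), processed in descending order.
    savings = sorted(
        ((depot_dist[i] + depot_dist[j] - pair_dist[(i, j)], i, j)
         for i in range(n) for j in range(i + 1, n)),
        reverse=True)
    routes = {i: [i] for i in range(n)}   # start: one route per customer
    owner = {i: i for i in range(n)}      # route id containing customer i
    for s, i, j in savings:
        ri, rj = owner[i], owner[j]
        if ri == rj:
            continue                      # already on the same route
        a, b = routes[ri], routes[rj]
        # Merge only if i and j are route endpoints and capacity allows.
        if (i in (a[0], a[-1]) and j in (b[0], b[-1])
                and sum(demand[k] for k in a + b) <= capacity):
            if a[-1] != i:
                a.reverse()
            if b[0] != j:
                b.reverse()
            merged = a + b
            del routes[rj]
            routes[ri] = merged
            for k in merged:
                owner[k] = ri
    return list(routes.values())
```

With three customers whose pairwise distances strongly favor chaining them, the heuristic merges everything into a single route.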

  2. Optimal Control Design Advantages Utilizing Two-Degree-of-Freedom Controllers

    DTIC Science & Technology

    1993-12-01

    AFIT/GAE/ENY/93D-27 (AD-A273 839). OPTIMAL CONTROL DESIGN ADVANTAGES UTILIZING TWO-DEGREE-OF-FREEDOM CONTLLERS. Thesis by Michael J. Stephens, presented to the Faculty of the Graduate School; the excerpted text compares measurement-noise performance against the 1-DOF model.

  3. Optimal design of dampers within seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Qian, Hui; Song, Wali; Wang, Liqiang

    2009-07-01

    An improved multi-objective genetic algorithm for optimizing structural passive control systems is proposed. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. For constrained problems, a dominance-based penalty function method is developed, incorporating information on an individual's status (feasible or infeasible), its position in the search space, and its distance from the Pareto optimal set. The proposed approach is applied to the optimal design of a six-storey building with shape memory alloy dampers subjected to earthquake loading. The number and positions of the dampers are chosen as the design variables, while the number of dampers and the peak inter-storey drift are the objective functions. Numerical results generate a set of non-dominated solutions.
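    The dominance relation driving such a selection operator can be stated compactly. A minimal sketch for the two minimization objectives named above (number of dampers, peak inter-storey drift), with made-up objective values:

```python
# Pareto dominance filtering for two minimization objectives; the points
# below are illustrative (dampers, drift) pairs, not the paper's results.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Applied to a small set of candidate designs, the filter returns the trade-off front between damper count and drift.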

  4. Application of optimal design methodologies in clinical pharmacology experiments.

    PubMed

    Ogungbenro, Kayode; Dokoumetzidis, Aristides; Aarons, Leon

    2009-01-01

    Pharmacokinetic and pharmacodynamic data are often analysed by mixed-effects modelling techniques (also known as population analysis), which have become a standard tool in the pharmaceutical industry for drug development. The last 10 years have witnessed considerable interest in the application of experimental design theory to population pharmacokinetic and pharmacodynamic experiments. Designing such experiments involves selecting and carefully balancing a number of design factors. Optimal design theory uses prior information about the model and parameter estimates to optimize a function of the Fisher information matrix and so obtain the best combination of the design factors. This paper reviews the different approaches that have been described in the literature for optimal design of population pharmacokinetic and pharmacodynamic experiments. It describes the options that are available, highlights issues that could be of concern in practical application, and discusses areas of application of optimal design theory in clinical pharmacology experiments. It is expected that as awareness of the benefits of this approach increases, more people will embrace it, ultimately leading to more efficient population pharmacokinetic and pharmacodynamic experiments and helping to reduce both the cost and the time of drug development. Copyright (c) 2008 John Wiley & Sons, Ltd.
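    The idea of optimizing a function of the Fisher information matrix can be illustrated on a toy model. Assuming a simple linear regression y = b0 + b1*t with unit error variance (a stand-in, not a population PK model), the information matrix is X'X and the D-criterion det(X'X) selects the most informative sampling times:

```python
# D-optimality on a toy linear model: choose the pair of sampling times
# maximizing det(X'X). Candidate times are illustrative.
import numpy as np
from itertools import combinations

def d_criterion(times):
    # Design matrix for y = b0 + b1*t: columns [1, t].
    X = np.column_stack([np.ones(len(times)), np.array(times, dtype=float)])
    return np.linalg.det(X.T @ X)

candidates = [0, 1, 2, 4, 8]                      # candidate sampling times
best = max(combinations(candidates, 2), key=d_criterion)
```

For two parameters and two points, det(X'X) reduces to (t2 - t1)^2, so the most widely spaced times win; richer PK models replace X with model sensitivities but keep the same criterion.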

  5. Sequence Factorial of "g"-Gonal Numbers

    ERIC Educational Resources Information Center

    Asiru, Muniru A.

    2013-01-01

    The gamma function, which has the property of interpolating the factorial whenever the argument is an integer, is a special case (the case "g" = 2) of the general term of the sequence factorial of "g"-gonal numbers. In relation to this special case, a formula for calculating the general term of the sequence factorial of any…
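    The special case mentioned above can be checked numerically. Assuming the standard polygonal-number formula P(g, k) = ((g-2)k^2 - (g-4)k)/2 and defining the sequence factorial as the product of the first n g-gonal numbers, the case g = 2 gives P(2, k) = k, so the sequence factorial reduces to the ordinary factorial (which the gamma function interpolates):

```python
# Sequence factorial of g-gonal numbers; for g = 2 it equals n! = Gamma(n+1).
import math
from functools import reduce

def gonal(g, k):
    """k-th g-gonal number via the standard polygonal-number formula."""
    return ((g - 2) * k * k - (g - 4) * k) // 2

def seq_factorial(g, n):
    """Product of the first n g-gonal numbers."""
    return reduce(lambda acc, k: acc * gonal(g, k), range(1, n + 1), 1)
```

For g = 3 (triangular numbers 1, 3, 6, ...) the sequence factorial of the first three terms is 1 * 3 * 6 = 18.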

  6. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intensive representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  7. Optimization and characterization of liposome formulation by mixture design.

    PubMed

    Maherani, Behnoush; Arab-tehrany, Elmira; Kheirolomoom, Azadeh; Reshetov, Vadzim; Stebe, Marie José; Linder, Michel

    2012-02-07

    This study presents the application of the mixture design technique to develop an optimal liposome formulation by using the different lipids in type and percentage (DOPC, POPC and DPPC) in liposome composition. Ten lipid mixtures were generated by the simplex-centroid design technique and liposomes were prepared by the extrusion method. Liposomes were characterized with respect to size, phase transition temperature, ζ-potential, lamellarity, fluidity and efficiency in loading calcein. The results were then applied to estimate the coefficients of the mixture design model and to find the optimal lipid composition with improved entrapment efficiency, size, transition temperature, fluidity and ζ-potential of liposomes. The response optimization of experiments was the liposome formulation with DOPC: 46%, POPC: 12% and DPPC: 42%. The optimal liposome formulation had an average diameter of 127.5 nm, a phase-transition temperature of 11.43 °C, a ζ-potential of -7.24 mV, a fluidity value (1/P, measured with TMA-DPH) of 2.87 and an encapsulation efficiency of 20.24%. The experimental results of characterization of the optimal liposome formulation were in good agreement with those predicted by the mixture design technique.
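    A simplex-centroid design of the kind used here is straightforward to generate: one run per non-empty subset of the q mixture components, with equal proportions within the subset. A minimal sketch (the paper's ten runs augment this seven-point base for q = 3 with interior points):

```python
# Simplex-centroid design points for a q-component mixture: for every
# non-empty subset of components, one blend with equal proportions.
from itertools import combinations

def simplex_centroid(q):
    points = []
    for r in range(1, q + 1):
        for subset in combinations(range(q), r):
            p = [0.0] * q
            for i in subset:
                p[i] = 1.0 / r          # equal share within the subset
            points.append(tuple(p))
    return points
```

For q = 3 this yields the three vertices, three binary midpoints, and the overall centroid, all summing to 1 as mixture proportions must.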

  8. General purpose optimization software for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1990-01-01

    The author has developed several general purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.

  9. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...

  10. The Role of Synthetic Biology in the Design of Microbial Cell Factories for Biofuel Production

    PubMed Central

    Colin, Verónica Leticia; Rodríguez, Analía; Cristóbal, Héctor Antonio

    2011-01-01

    Insecurity in the supply of fossil fuels, volatile fuel prices, and major concerns regarding climate change have sparked renewed interest in the production of fuels from renewable resources. Because of this, the use of biodiesel has grown dramatically during the last few years and is expected to increase even further in the future. Biodiesel production through the use of microbial systems has marked a turning point in the field of biofuels since it is emerging as an attractive alternative to conventional technology. Recent progress in synthetic biology has accelerated the ability to analyze, construct, and/or redesign microbial metabolic pathways with unprecedented precision, in order to permit biofuel production that is amenable to industrial applications. The review presented here focuses specifically on the role of synthetic biology in the design of microbial cell factories for efficient production of biodiesel. PMID:22028591

  11. A factorial design to identify process parameters affecting whole mechanically disrupted rat pancreata in a perfusion bioreactor.

    PubMed

    Sharp, Jamie; Spitters, Tim Wgm; Vermette, Patrick

    2018-03-01

    Few studies report whole pancreatic tissue culture, as it is a difficult task using traditional culture methods. Here, a factorial design was used to investigate the singular and combined effects of flow, dissolved oxygen concentration (D.O.) and pulsation on whole mechanically disrupted rat pancreata in a perfusion bioreactor. Whole rat pancreata were cultured for 72 h under defined bioreactor process conditions. Secreted insulin was measured, and histological (haematoxylin and eosin (H&E)) as well as immunofluorescent insulin staining were performed and quantified. The combination of flow and D.O. had the most significant effect on secreted insulin at 5 h and 24 h. The D.O. had the biggest effect on tissue histological quality, and pulsation had the biggest effect on the number of insulin-positive structures. Based on the factorial design analysis, bioreactor conditions using high flow, low D.O., and pulsation were selected to further study glucose-stimulated insulin secretion. Here, mechanically disrupted rat pancreata were cultured for 24 h under these bioreactor conditions and were then challenged with high glucose concentration for 6 h and high glucose + IBMX (an insulin secretagogue) for a further 6 h. These cultures secreted insulin in response to high glucose concentration in the first 6 h; however, stimulated insulin secretion in response to high glucose + IBMX was markedly weaker thereafter. After this bioreactor culture period, higher tissue metabolic activity was found compared to that of non-bioreacted static controls. More insulin- and glucagon-positive structures, and extensive intact endothelial structures were observed compared to non-bioreacted static cultures. H&E staining revealed more intact tissue compared to static cultures. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 34:432-444, 2018. © 2017 American Institute of Chemical Engineers.
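    The effect estimates behind such a factorial analysis rest on a simple contrast: each factor's main effect is the mean response at its high level minus the mean at its low level. A minimal sketch for a two-level design with +/-1 coded factors and illustrative (not experimental) responses:

```python
# Main-effect estimation for a two-level factorial design with +/-1 coding;
# the response values in the test are illustrative, not the bioreactor data.
def main_effects(responses):
    """responses: dict mapping a tuple of +/-1 factor levels -> measured response."""
    k = len(next(iter(responses)))
    out = {}
    for j in range(k):
        hi = [y for x, y in responses.items() if x[j] == +1]
        lo = [y for x, y in responses.items() if x[j] == -1]
        out[j] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return out
```

Interaction effects follow the same pattern with the product of the coded levels as the contrast column.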

  12. In-situ implant containing PCL-curcumin nanoparticles developed using design of experiments.

    PubMed

    Kasinathan, Narayanan; Amirthalingam, Muthukumar; Reddy, Neetinkumar D; Jagani, Hitesh V; Volety, Subrahmanyam M; Rao, Josyula Venkata

    2016-01-01

    Polymeric delivery systems are useful in overcoming the pharmacokinetic limitations, viz., poor absorption and rapid elimination, associated with the clinical use of curcumin. Design of experiments is a precise and cost-effective tool for analyzing the effect of independent variables and their interactions on product attributes. The objective was to evaluate the effect of the process variables involved in the preparation of curcumin-loaded polycaprolactone (PCL) nanoparticles (CPN). In the present experiment, CPNs were prepared by the emulsification solvent evaporation technique. The effect of the independent variables on the dependent variable was analyzed using design of experiments. Anticancer activity of CPN was studied using the Ehrlich ascites carcinoma (EAC) model. An in-situ implant was developed using PLGA as the polymer. The effect of the independent variables was studied in two stages. First, the effect of drug-polymer ratio, homogenization speed and surfactant concentration on size was studied using a factorial design. The interaction of homogenization speed with homogenization time on the mean particle size of CPN was then evaluated using a central composite design. In the second stage, the effect of these variables (under the conditions optimized for producing particles <500 nm) on percentage drug encapsulation was evaluated using a factorial design. CPN prepared under optimized conditions were able to control the development of EAC in Swiss albino mice and enhanced their survival time. The PLGA based in-situ implant containing CPN prepared under optimized conditions showed sustained drug release. This implant could be further evaluated for pharmacological activities.

  13. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
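    A generic PSO loop of the kind coupled to such an analytical model might look as follows. This is a textbook sketch: a toy quadratic stands in for the MEC objective, and all bounds and parameter values are illustrative, not the machine's:

```python
# Minimal particle swarm optimization (minimization) with a toy objective
# standing in for the MEC machine model; parameters are illustrative.
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                # Clamp to the search box.
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            y = f(X[i])
            if y < pbest[i]:
                P[i], pbest[i] = X[i][:], y
                if y < gbest:
                    G, gbest = X[i][:], y
    return G, gbest

best_x, best_f = pso(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                     [(-5, 5), (-5, 5)])
```

In the paper's setting, f would evaluate the MEC model (torque density under the geometric constraints) instead of the quadratic.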

  14. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization

  15. Optimization-based controller design for rotorcraft

    NASA Technical Reports Server (NTRS)

    Tsing, N.-K.; Fan, M. K. H.; Barlow, J.; Tits, A. L.; Tischler, M. B.

    1993-01-01

    An optimization-based methodology for linear control system design is outlined by considering the design of a controller for a UH-60 rotorcraft in hover. A wide range of design specifications is taken into account: internal stability, decoupling between longitudinal and lateral motions, handling qualities, and rejection of wind gusts. These specifications are investigated while taking into account physical limitations in the swashplate displacements and rates of displacement. The methodology crucially relies on user-machine interaction for tradeoff exploration.

  16. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution of the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traces out an inverted-S-shaped graph. The center of the inverted-S graph corresponds to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  17. Branch target buffer design and optimization

    NASA Technical Reports Server (NTRS)

    Perleberg, Chris H.; Smith, Alan J.

    1993-01-01

    Consideration is given to two major issues in the design of branch target buffers (BTBs), with the goal of achieving maximum performance for a given number of bits allocated to the BTB design. The first issue is BTB management; the second is what information to keep in the BTB. A number of solutions to these problems are reviewed, and various optimizations in the design of BTBs are discussed. Design target miss ratios for BTBs are developed, making it possible to estimate the performance of BTBs for real workloads.

  18. OPTIMAL NETWORK TOPOLOGY DESIGN

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
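    The program's enumeration idea can be sketched in a brute-force form: generate link subsets in increasing order of total cost and return the first one that connects all stations. This toy version checks only simple connectivity, whereas the original supports further acceptability tests; the station and link data are illustrative:

```python
# Brute-force version of the cost-ordered topology search: the first subset
# of links (in increasing total cost) that connects all stations is optimal.
from itertools import combinations

def cheapest_connected(stations, links):
    """links: list of (cost, station_a, station_b) tuples."""
    subsets = []
    for r in range(1, len(links) + 1):
        subsets.extend(combinations(links, r))
    for subset in sorted(subsets, key=lambda s: sum(c for c, _, _ in s)):
        # Naive connectivity check by repeatedly merging endpoint groups.
        groups = {s: {s} for s in stations}
        for _, a, b in subset:
            merged = groups[a] | groups[b]
            for s in merged:
                groups[s] = merged
        if len(groups[stations[0]]) == len(stations):
            return subset           # first acceptable subset = cost-optimal
    return None
```

The exponential enumeration is only practical for small component sets, which matches the program's stated niche of heavily constrained networks with few components.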

  19. Multi-disciplinary optimization of railway wheels

    NASA Astrophysics Data System (ADS)

    Nielsen, J. C. O.; Fredö, C. R.

    2006-06-01

    A numerical procedure for multi-disciplinary optimization of railway wheels, based on Design of Experiments (DOE) methodology and automated design, is presented. The target is a wheel design that meets the requirements for fatigue strength, while minimizing the unsprung mass and rolling noise. A 3-level full factorial (3LFF) DOE is used to collect data points required to set up Response Surface Models (RSM) relating design and response variables in the design space. Computationally efficient simulations are thereafter performed using the RSM to identify the solution that best fits the design target. A demonstration example, including four geometric design variables in a parametric finite element (FE) model, is presented. The design variables are wheel radius, web thickness, lateral offset between rim and hub, and radii at the transitions rim/web and hub/web, but more variables (including material properties) can be added if needed. To improve further the performance of the wheel design, a constrained layer damping (CLD) treatment is applied on the web. For a given load case, compared to a reference wheel design without CLD, a combination of wheel shape and damping optimization leads to the conclusion that a reduction in the wheel component of A-weighted rolling noise of 11 dB can be achieved if a simultaneous increase in wheel mass of 14 kg is accepted.
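    The DOE/RSM workflow described above can be sketched end to end: build a 3-level full factorial in coded variables, evaluate the response, and fit a quadratic response surface by least squares. Here a known quadratic stands in for the wheel simulations so the fit can be checked; all coefficients are illustrative:

```python
# 3-level full factorial design plus quadratic response-surface fit; a toy
# response replaces the finite element wheel simulations.
import numpy as np
from itertools import product

levels = [-1.0, 0.0, 1.0]
X = np.array(list(product(levels, repeat=2)))      # 3^2 = 9 runs, 2 coded factors
# Toy "simulation": a known response so the recovered surface is verifiable.
y = 4.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1]

# Quadratic RSM basis: 1, x1, x2, x1*x2, x1^2, x2^2.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Once fitted, the surface is cheap to evaluate, which is what allows the wheel study to search the design space without rerunning the FE model at every candidate.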

  20. Optimized survey design for electrical resistivity tomography: combined optimization of measurement configuration and electrode placement

    NASA Astrophysics Data System (ADS)

    Uhlemann, Sebastian; Wilkinson, Paul B.; Maurer, Hansruedi; Wagner, Florian M.; Johnson, Timothy C.; Chambers, Jonathan E.

    2018-07-01

    Within geoelectrical imaging, the choice of measurement configurations and electrode locations is known to control the image resolution. Previous work has shown that optimized survey designs can provide a model resolution that is superior to standard survey designs. This paper demonstrates a methodology to optimize resolution within a target area, while limiting the number of required electrodes, thereby selecting optimal electrode locations. This is achieved by extending previous work on the `Compare-R' algorithm, which by calculating updates to the resolution matrix optimizes the model resolution in a target area. Here, an additional weighting factor is introduced that allows measurement configurations acquirable on a given set of electrodes to be added preferentially. The performance of the optimization is tested on two synthetic examples and verified with a laboratory study. The effect of the weighting factor is investigated using an acquisition layout comprising a single line of electrodes. The results show that an increasing weight decreases the area of improved resolution, but leads to a smaller number of electrode positions. Imaging results superior to a standard survey design were achieved using 56 per cent fewer electrodes. The performance was also tested on a 3-D acquisition grid, where superior resolution within a target at the base of an embankment was achieved using 22 per cent fewer electrodes than a comparable standard survey. The effect of the underlying resistivity distribution on the performance of the optimization was investigated and it was shown that even strong resistivity contrasts only have minor impact. The synthetic results were verified in a laboratory tank experiment, where notable image improvements were achieved. This work shows that optimized surveys can be designed that have a resolution superior to standard survey designs, while requiring significantly fewer electrodes. This methodology thereby provides a means for

  2. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly of the same order of magnitude as a standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives

  3. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters; thus, they lack the degrees of freedom needed to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs lie on the boundary: the vertices, edges, or faces of the design simplex. A new strategy for adding multiple interior points for symmetric mixture models is therefore proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
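    For the three-component Scheffé linear model the minimally supported D-optimal design can be found by exhaustive search over a small candidate set; the well-known answer, the three simplex vertices, maximizes det(X'X). A minimal sketch (the candidate set here is illustrative):

```python
# Exhaustive search for a minimally supported D-optimal design under the
# Scheffé linear mixture model in three components (model terms: x1, x2, x3).
import numpy as np
from itertools import combinations

candidates = [(1, 0, 0), (0, 1, 0), (0, 0, 1),          # vertices
              (.5, .5, 0), (.5, 0, .5), (0, .5, .5),    # edge midpoints
              (1/3, 1/3, 1/3)]                          # overall centroid

def d_value(points):
    X = np.array(points, dtype=float)   # design matrix = component proportions
    return abs(np.linalg.det(X.T @ X))

# Three parameters, so a minimal design has exactly three support points.
best = max(combinations(candidates, 3), key=d_value)
```

This also illustrates the abstract's point: the D-optimal support sits entirely on the simplex boundary, leaving no interior points and no Lack of Fit degrees of freedom.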

  4. Designing an Optimized Novel Femoral Stem

    PubMed Central

    Babaniamansour, Parto; Ebrahimian-Hosseinabadi, Mehdi; Zargar-Kharazi, Anousheh

    2017-01-01

    Background: After total hip arthroplasty, patients may experience several problems. Implant loosening is one of the most significant, resulting in thigh pain and even revision surgery. The difference between the Young's moduli of bone and metal causes stress shielding, atrophy, and subsequent implant loosening. Materials and Methods: In this paper, femoral stem stiffness is reduced by a novel biomechanical and biomaterial design that includes selecting proper design parameters, coating the stem with a porous surface, and modeling the sketch in software. Parametric design of the femoral stem is based on clinical reports. Results: An optimized model for the femoral stem is proposed: a curved, tapered stem with a trapezoidal cross-section and a particular neck and offset, with a fully porous surface. Analysis of the designed stem showed that a Ti6Al4V stem covered with a 1.5 mm thick layer of 50% porosity has a stiffness of 77 GPa, 30% less than that of a stem without any porosity. The porous surface of the designed stem allows biological fixation; thus, the probability of prosthesis loosening decreases. Conclusion: By optimizing the femoral stem geometry (size and shape) and adding a porous surface with a stiffness intermediate between bone and implant, a more efficient hip joint prosthesis with more durable fixation was achieved, owing to better stress transmission from the implant to the bone. PMID:28840118

  5. Optimized bio-inspired stiffening design for an engine nacelle.

    PubMed

    Lazo, Neil; Vodenitcharova, Tania; Hoffman, Mark

    2015-11-04

    Structural efficiency is a common engineering goal: an ideal solution provides a structure with optimized performance at minimized weight, with consideration of material mechanical properties, structural geometry, and manufacturability. This study addresses this goal by developing high-performance, lightweight, stiff mechanical components from a biologically inspired template. The approach is applied to the optimization of rib stiffeners along an aircraft engine nacelle. The helical and angled arrangements of cellulose fibres in plants were chosen as the bio-inspired template. Optimization of total displacement and weight was carried out using a genetic algorithm (GA) coupled with finite element analysis. Iterations showed a gradual convergence in normalized fitness. Displacement was given higher emphasis in the optimization, so the GA tended toward individual designs with weights near the mass constraint. Dominant features of the resulting designs were helical ribs with rectangular cross-sections of large height-to-width ratio. Displacement was reduced by 73% compared with an unreinforced nacelle, attributable to the geometric features and layout of the stiffeners, while mass was kept within the constraint.

  6. Neutrino Factory Targets and the MICE Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walaron, Kenneth Andrew

    2007-01-01

    The future of particle physics in the next 30 years must include detailed study of neutrinos. The first evidence of physics beyond the Standard Model comes from recent neutrino experiments, whose results imply that neutrinos have mass and flavour mixing. The Neutrino Factory is the leading contender to measure the neutrino mixing parameters precisely and thereby probe physics beyond the Standard Model. Significantly, one must look to measure the mixing angle θ13 and investigate the possibility of leptonic CP violation. If found, this may provide a key insight into the origin of the matter/antimatter asymmetry seen in the universe through the mechanism of leptogenesis. The Neutrino Factory will be a large international multi-billion dollar experiment combining novel accelerator and long-baseline detector technology. Arguably the most important and costly features of this facility are the proton driver and the cooling channel. This thesis presents simulation work focused on determining the optimal proton driver energy to maximise pion production, together with simulations of the transport of the pion flux through candidate transport lattices. Benchmarking of pion cross-sections calculated by the MARS and GEANT4 codes against measured data from the HARP experiment is also presented. The cooling channel aims to reduce the phase-space volume of the decayed muon beam to a level that can be efficiently injected into the accelerator system. The Muon Ionisation Cooling Experiment (MICE), hosted by the Rutherford Appleton Laboratory, UK, is a proof-of-principle experiment aimed at measuring ionisation cooling. The experiment will run parasitically on the ISIS accelerator and will produce muons from pion decay. The MICE beamline provides muon beams of variable emittance and momentum to the MICE experiment to enable measurement of cooling over a wide range of beam conditions. Simulation work in the design of this beamline is presented in this thesis as are

  7. Multidisciplinary design optimization of aircraft wing structures with aeroelastic and aeroservoelastic constraints

    NASA Astrophysics Data System (ADS)

    Jung, Sang-Young

    Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.

  8. Expression in Escherichia coli, refolding and crystallization of Aspergillus niger feruloyl esterase A using a serial factorial approach.

    PubMed

    Benoit, Isabelle; Coutard, Bruno; Oubelaid, Rachid; Asther, Marcel; Bignon, Christophe

    2007-09-01

    Hydrolysis of plant biomass is achieved by the combined action of enzymes secreted by microorganisms and directed against the backbone and the side chains of plant cell wall polysaccharides. Among side-chain-degrading enzymes, feruloyl esterase A (FAEA) specifically removes feruloyl residues; FAEA thus has potential applications in a wide range of industrial processes such as paper bleaching and bio-ethanol production. To gain insight into FAEA hydrolysis activity, we solved its crystal structure. In this paper, we report how the use of four consecutive factorial approaches (two incomplete factorials, one sparse matrix, and one full factorial) allowed us to express Aspergillus niger FAEA in Escherichia coli, refold it, and crystallize it within 6 weeks. Culture conditions providing the highest expression level were determined using an incomplete factorial approach made of 12 combinations of four E. coli strains, three culture media, and three temperatures (full factorial: 36 combinations). A. niger FAEA was expressed in the form of inclusion bodies. These were dissolved using a chaotropic agent, and the protein was purified by affinity chromatography on a Ni column under denaturing conditions. A suitable buffer for refolding the protein eluted from the Ni column was found using a second incomplete factorial approach made of 96 buffers (full factorial: 3840 combinations). After refolding, the enzyme was further purified by gel filtration and then crystallized following a standard protocol: initial crystallization conditions were found using commercial crystallization screens based on a sparse matrix, and crystals were then optimized using a full factorial screen.
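
    The 12-run screening layout described (4 strains x 3 media x 3 temperatures, versus 36 full-factorial runs) can be sketched as a balanced subset. The strain and media names below are hypothetical stand-ins, and the cyclic temperature assignment is one plausible construction, not the authors' actual matrix:

```python
from itertools import product

strains = ["BL21(DE3)", "Rosetta", "C41", "Origami"]  # hypothetical strain panel
media = ["LB", "TB", "M9"]                            # hypothetical media
temps = [17, 25, 37]                                  # incubation temperatures, deg C

full = list(product(strains, media, temps))  # full factorial: 36 combinations

# 12-run incomplete factorial: every strain x medium pair occurs once, and
# temperatures are cycled so each temperature appears 4 times and each strain
# is tested at all three temperatures.
runs = [(s, m, temps[(i + j) % 3])
        for i, s in enumerate(strains)
        for j, m in enumerate(media)]
```

    The cyclic assignment keeps the subset balanced enough to separate the main effects of strain, medium, and temperature, which is the point of screening with a third of the full factorial.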

  9. General approach and scope. [rotor blade design optimization

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.

  10. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants: ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine the optimal sample size, optimal sample times, and number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine the optimal sample size, optimal sample times, and number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse versus rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
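
    A minimal illustration of the optimal-sample-time idea: for an assumed mono-exponential concentration model with additive unit-variance error (illustrative parameter values, not taken from the review), a grid search picks the pair of sample times that maximizes the determinant of the Fisher information matrix (D-optimality):

```python
import numpy as np
from itertools import combinations

# Mono-exponential PK model C(t) = C0 * exp(-k*t); illustrative parameters
C0, k = 10.0, 0.3

def sensitivity(t):
    # Partial derivatives of C(t) with respect to (C0, k)
    return np.array([np.exp(-k * t), -C0 * t * np.exp(-k * t)])

def d_criterion(times):
    # Fisher information under additive Gaussian error with unit variance:
    # sum of outer products of the sensitivity vectors
    F = sum(np.outer(sensitivity(t), sensitivity(t)) for t in times)
    return np.linalg.det(F)

# Search all pairs of candidate sample times on a 15-minute grid (hours)
grid = np.arange(0.25, 12.25, 0.25)
best = max(combinations(grid, 2), key=d_criterion)
```

    For this model the search favors one draw as early as allowed and a second near t = 1/k, which is the kind of sparse, information-efficient schedule optimal design theory produces for pediatric studies.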

  11. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate-model-based optimization, and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification, a least squares extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. This dissertation focuses on the finite volume (FV) formulation. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE, are investigated. A CFD-based design optimization of a single-element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite-rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas, and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature).
A preliminary multi-objective optimization

  12. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for several management tasks. The planning of pressure observations, in terms of their spatial distribution and number, is named sampling design and has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed through optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters that create the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  13. Sample Size Requirements and Study Duration for Testing Main Effects and Interactions in Completely Randomized Factorial Designs When Time to Event is the Outcome

    PubMed Central

    Moser, Barry Kurt; Halabi, Susan

    2013-01-01

    In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
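
    The exponential-outcome sample-size relationship the authors describe can be illustrated with a standard Schoenfeld-type approximation for one balanced two-level comparison, a simplified stand-in for the paper's matrix formulation; the function name and default values are illustrative:

```python
from math import log
from statistics import NormalDist

def events_required(hazard_ratio, alpha=0.05, power=0.8):
    # Schoenfeld-style approximation for a balanced two-level comparison
    # (e.g., one main effect in a 2x2 factorial with exponential outcomes):
    # events = 4 * (z_{1-alpha/2} + z_{power})^2 / (log hazard ratio)^2
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)
    z_b = z(power)
    return 4 * (z_a + z_b) ** 2 / log(hazard_ratio) ** 2

d = events_required(0.75)  # required number of events to detect HR = 0.75
```

    The required count depends on events rather than enrolled patients, which is why the paper separately relates the expected number of events to sample size and study duration under the exponential assumption.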

  14. Optimization of coronagraph design for segmented aperture telescopes

    NASA Astrophysics Data System (ADS)

    Jewell, Jeffrey; Ruane, Garreth; Shaklan, Stuart; Mawet, Dimitri; Redding, Dave

    2017-09-01

    The goal of directly imaging Earth-like planets in the habitable zone of other stars has motivated the design of coronagraphs for use with large segmented aperture space telescopes. In order to achieve an optimal trade-off between planet light throughput and diffracted starlight suppression, we consider coronagraphs comprising a stage of phase control implemented with deformable mirrors (or other optical elements), pupil-plane apodization masks (gray-scale or complex-valued), and focal-plane masks (either amplitude-only or complex-valued, including phase-only masks such as the vector vortex coronagraph). The optimization of these optical elements, with the goal of achieving 10 or more orders of magnitude in the suppression of on-axis (starlight) diffracted light, represents a challenging non-convex optimization problem with a nonlinear dependence on control degrees of freedom. We develop a new algorithmic approach to the design optimization problem, which we call the "Auxiliary Field Optimization" (AFO) algorithm. The central idea of the algorithm is to embed the original optimization problem, for either phase or amplitude (apodization) in various planes of the coronagraph, into a problem containing additional degrees of freedom, specifically fictitious "auxiliary" electric fields which serve as targets to inform the variation of our phase or amplitude parameters, leading to good feasible designs. We present the algorithm, discuss details of its numerical implementation, and prove convergence to local minima of the objective function (here taken to be the intensity of the on-axis source in a "dark hole" region in the science focal plane). Finally, we present results showing application of the algorithm to both unobscured off-axis and obscured on-axis segmented telescope aperture designs. The application of the AFO algorithm to the coronagraph design problem has produced solutions which are capable of directly imaging planets in the habitable zone, provided end

  15. Nuclear Electric Vehicle Optimization Toolset (NEVOT): Integrated System Design Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg

    2003-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.

  16. Optimization and surgical design for applications in pediatric cardiology

    NASA Astrophysics Data System (ADS)

    Marsden, Alison; Bernstein, Adam; Taylor, Charles; Feinstein, Jeffrey

    2007-11-01

    The coupling of shape optimization to cardiovascular blood flow simulations has potential to improve the design of current surgeries and to eventually allow for optimization of surgical designs for individual patients. This is particularly true in pediatric cardiology, where geometries vary dramatically between patients, and unusual geometries can lead to unfavorable hemodynamic conditions. Interfacing shape optimization to three-dimensional, time-dependent fluid mechanics problems is particularly challenging because of the large computational cost and the difficulty in computing objective function gradients. In this work a derivative-free optimization algorithm is coupled to a three-dimensional Navier-Stokes solver that has been tailored for cardiovascular applications. The optimization code employs mesh adaptive direct search in conjunction with a Kriging surrogate. This framework is successfully demonstrated on several geometries representative of cardiovascular surgical applications. We will discuss issues of cost function choice for surgical applications, including energy loss and wall shear stress distribution. In particular, we will discuss the creation of new designs for the Fontan procedure, a surgery done in pediatric cardiology to treat single ventricle heart defects.

  17. SWITCH: a dynamic CRISPR tool for genome engineering and metabolic pathway control for cell factory construction in Saccharomyces cerevisiae.

    PubMed

    Vanegas, Katherina García; Lehka, Beata Joanna; Mortensen, Uffe Hasbro

    2017-02-08

    The yeast Saccharomyces cerevisiae is increasingly used as a cell factory. However, cell factory construction time is a major obstacle to using yeast for bio-production; hence, tools to speed up cell factory construction are desirable. In this study, we developed a new Cas9/dCas9-based system, SWITCH, which allows Saccharomyces cerevisiae strains to iteratively alternate between a genetic engineering state and a pathway control state. Since Cas9-induced recombination events are crucial for SWITCH efficiency, we first developed a technique, TAPE, which we have successfully used to address protospacer efficiency. As proof of concept of the use of SWITCH in cell factory construction, we exploited the genetic engineering state of a SWITCH strain to insert the five genes necessary for naringenin production. Next, the naringenin cell factory was switched to the pathway control state, in which production was optimized by downregulating an essential gene, TSC13, thereby reducing formation of a byproduct. We have thus successfully integrated two CRISPR tools, one for genetic engineering and one for pathway control, into one system and used it for cell factory construction.

  18. Dimensions of design space: a decision-theoretic approach to optimal research design.

    PubMed

    Conti, Stefano; Claxton, Karl

    2009-01-01

    Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, samples sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.

  19. Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.

    PubMed

    Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K

    2016-08-01

    The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
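
    A minimal particle swarm optimizer of the kind referenced above can be sketched as follows. The sphere objective is a toy stand-in for the cable-tension cost, and all coefficients are conventional defaults rather than the paper's settings:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the cable-routing objective: sphere function in 3 variables
best, value = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5, 5))
```

    Because PSO needs only objective evaluations, it suits problems like cable-routing design, where workspace feasibility and tension costs are computed by simulation rather than expressed in closed form.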

  20. Design Optimization of Gas Generator Hybrid Propulsion Boosters

    NASA Technical Reports Server (NTRS)

    Weldon, Vincent; Phillips, Dwight; Fink, Larry

    1990-01-01

    A methodology used in support of a study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design that meets various specific optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated, along with preliminary cost and reliability estimates.

  1. Optimal Design and Operation of Permanent Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Oron, Gideon; Walker, Wynn R.

    1981-01-01

    Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum-cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are balanced at the minimum-cost point. The optimization procedure involves a nonlinear, mixed-integer approach capable of achieving a variety of optimal solutions, leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, a smaller flow rate in the pipe system, and outlet (sprinkler or emitter) discharge.

  2. Enabling Parametric Optimal Ascent Trajectory Modeling During Early Phases of Design

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Dees, Patrick D.; Diaz, Manuel J.

    2015-01-01

    During the early phases of engineering design, the costs committed are high, the costs incurred are low, and the design freedom is high. It is well documented that decisions made in these early design phases drive the entire design's life cycle. In the traditional paradigm, key design decisions are made when little is known about the design; as the design matures, design changes become more difficult, in both cost and schedule, to enact. Indeed, the capability-based paradigm that has emerged from the constrained economic environment calls for the infusion of knowledge acquired during later design phases into earlier ones, i.e., bringing knowledge acquired during preliminary and detailed design into pre-conceptual and conceptual design. An area of critical importance to launch vehicle design is the optimization of the ascent trajectory, as the optimal trajectory takes full advantage of the launch vehicle's capability to deliver the maximum payload into orbit. Hence, the optimal ascent trajectory plays an important role in the vehicle's affordability posture, as more economically viable access-to-space solutions are needed in today's constrained economic environment. The problem of ascent trajectory optimization is not a new one. Several programs widely used in industry allow trajectory analysts to determine the optimal ascent trajectory based on detailed vehicle and insertion orbit parameters. Yet little is known about the launch vehicle early in the design phase, information that is required by many different disciplines in order to successfully optimize the ascent trajectory. Thus, the current paradigm of optimizing ascent trajectories involves generating point solutions for every change in a vehicle's design parameters. This is often a tedious, manual, and time-consuming task for the analysts. Moreover, the trajectory design space is highly non-linear and multi

  3. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  4. Overview: Applications of numerical optimization methods to helicopter design problems

    NASA Technical Reports Server (NTRS)

    Miura, H.

    1984-01-01

    There are a number of helicopter design problems that are well suited to applications of numerical design optimization techniques, and adequate implementation of this technology will provide high payoffs. A number of numerical optimization programs are available, and many excellent response/performance analysis programs have been developed or are being developed; however, integrating these programs in a form that is usable in the design phase should be recognized as important. It is also necessary to attract the attention of engineers engaged in the development of analysis capabilities and to make them aware that analysis capabilities are much more powerful when integrated into design-oriented codes. Frequently, the shortcomings of analysis capabilities are revealed by coupling them with an optimization code. Most of the published work has addressed problems in preliminary system design, rotor system/blade design, or airframe design; very few published results were found in acoustics, aerodynamics, and control system design. Currently, major efforts are focused on vibration reduction, and aerodynamics/acoustics applications appear to be growing fast. The development of a computer program system that integrates the multiple disciplines required in helicopter design with numerical optimization techniques is needed. Activities in Britain, Germany, and Poland are identified, but no published results from France, Italy, the USSR, or Japan were found.

  5. Slot Optimization Design of Induction Motor for Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Shen, Yiming; Zhu, Changqing; Wang, Xiuhe

    2018-01-01

    The slot design of an induction motor has a great influence on its performance. The RMxprt module, based on the magnetic circuit method, can be used to analyze the influence of rotor slot type on motor characteristics and to optimize slot parameters. In this paper, the authors take an electric vehicle induction motor as a typical example. The first step of the design is to optimize the rotor slot with RMxprt; the main performance of the motor before and after the optimization is then compared through Ansoft Maxwell 2D. After that, the combination of the optimum slot type and the optimum parameters is obtained. The results show that the power factor and the starting torque of the optimized motor are improved significantly. Furthermore, the electric vehicle works at a better running status after the optimization.

  6. Aircraft family design using enhanced collaborative optimization

    NASA Astrophysics Data System (ADS)

    Roth, Brian Douglas

    Significant progress has been made toward the development of multidisciplinary design optimization (MDO) methods that are well-suited to practical large-scale design problems. However, opportunities exist for further progress. This thesis describes the development of enhanced collaborative optimization (ECO), a new decomposition-based MDO method. To support the development effort, the thesis offers a detailed comparison of two existing MDO methods: collaborative optimization (CO) and analytical target cascading (ATC). This aids in clarifying their function and capabilities, and it provides inspiration for the development of ECO. The ECO method offers several significant contributions. First, it enhances communication between disciplinary design teams while retaining the low-order coupling between them. Second, it provides disciplinary design teams with more authority over the design process. Third, it resolves several troubling computational inefficiencies that are associated with CO. As a result, ECO provides significant computational savings (relative to CO) for the test cases and practical design problems described in this thesis. New aircraft development projects seldom focus on a single set of mission requirements. Rather, a family of aircraft is designed, with each family member tailored to a different set of requirements. This thesis illustrates the application of decomposition-based MDO methods to aircraft family design. This represents a new application area, since MDO methods have traditionally been applied to multidisciplinary problems. ECO offers aircraft family design the same benefits that it affords to multidisciplinary design problems. Namely, it simplifies analysis integration, it provides a means to manage problem complexity, and it enables concurrent design of all family members. In support of aircraft family design, this thesis introduces a new wing structural model with sufficient fidelity to capture the tradeoffs associated with component

  7. Multiobjective Particle Swarm Optimization for the optimal design of photovoltaic grid-connected systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kornelakis, Aris

    2010-12-15

    Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology suggests the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the system's lifetime total net profit, which is calculated according to the Net Present Value (NPV) method. The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the optimal number of PV modules, the PV modules' optimal tilt angle, the optimal placement of the PV modules within the available installation area and the optimal distribution of the PV modules among the DC/AC converters.
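    The economic objective above is the lifetime net profit computed by the Net Present Value method. A minimal sketch of that calculation (the system cost and yearly revenue below are hypothetical figures for illustration, not values from the study):

```python
def npv(cash_flows, discount_rate):
    """Net Present Value: cash flows discounted back to year zero.
    cash_flows[0] is the initial investment (typically negative)."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Hypothetical PVGCS: 10,000 upfront cost, 1,500/year net revenue for 10 years
flows = [-10000.0] + [1500.0] * 10
profit = npv(flows, 0.05)  # a positive NPV means the design is profitable
```

    In a multiobjective PSO, an NPV-style function like this would serve as one fitness component, evaluated for each candidate particle alongside the avoided-emissions objective.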

  8. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of design technique is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
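    The multi-objective search described above retains survey designs that are not dominated in any objective. A generic sketch of that Pareto-dominance test (not the authors' CSEM code; objective vectors are assumed to be maximized, and the candidate scores below are made up):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the designs not dominated by any other candidate."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Hypothetical (target resolution, -cost) scores for four candidate surveys
candidates = [(0.9, -5.0), (0.8, -2.0), (0.7, -4.0), (0.9, -2.5)]
front = pareto_front(candidates)
```

    A multi-objective GA maintains such a non-dominated set across generations instead of collapsing the objectives into one weighted score.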

  9. Design of large Francis turbine using optimal methods

    NASA Astrophysics Data System (ADS)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 m to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 m to 113.6 m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to meet an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation methods and the latest technologies in model testing, as well as the maximum feedback from operation of jumbo plants already in service. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage allows very high levels of performance to be reached, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  10. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable with the method. The results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.

  11. MDO can help resolve the designer's dilemma. [multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.

    1991-01-01

    Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, data base technology, and in computer software and hardware. Expected benefits from this methodology are a rational, mathematically consistent approach to hypersonic aircraft designs, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.

  12. Design optimization of aircraft landing gear assembly under dynamic loading

    NASA Astrophysics Data System (ADS)

    Wong, Jonathan Y. B.

    As development cycles and prototyping iterations begin to decrease in the aerospace industry, it is important to develop and improve practical methodologies to meet all design metrics. This research presents an efficient methodology that applies high-fidelity multi-disciplinary design optimization techniques to commercial landing gear assemblies, for weight reduction, cost savings, and structural performance under dynamic loading. Specifically, a slave link subassembly was selected as the candidate to explore the feasibility of this methodology. The design optimization process utilized in this research was sectioned into three main stages: setup, optimization, and redesign. The first stage involved the creation and characterization of the models used throughout this research. The slave link assembly was modelled with a simplified landing gear test, replicating the behavior of the physical system. Through extensive review of the literature and collaboration with Safran Landing Systems, the dynamic and structural behavior of the system was characterized and defined mathematically. Once defined, the characterized behaviors for the slave link assembly were used to conduct a Multi-Body Dynamic (MBD) analysis to determine the dynamic and structural response of the system. These responses were then utilized in a topology optimization through the Equivalent Static Load Method (ESLM). The results of the optimization were interpreted and later used to generate improved designs in terms of weight, cost, and structural performance under dynamic loading in stage three. The optimized designs were then validated using the model created for the MBD analysis of the baseline design. The design generation process employed two different approaches for post-processing the topology results produced.
The first approach implemented a close replication of the topology results, resulting in a design with an overall peak stress increase of 74%, weight savings of 67%, and no apparent

  13. Information security of Smart Factories

    NASA Astrophysics Data System (ADS)

    Iureva, R. A.; Andreev, Y. S.; Iuvshin, A. M.; Timko, A. S.

    2018-05-01

    Within several years, technologies and systems based on the Internet of Things (IoT) will be widely used in all smart factories. When processing a huge array of unstructured data, its filtration and adequate interpretation are a priority for enterprises. In this context, the correct representation of information in a user-friendly form acquires special importance, for which the market today offers advanced analytical platforms designed to collect, store and analyze data on technological processes and events in real time. The main idea of the paper is the statement of the information security problem in the IoT and the integrity of processed information.

  14. Optimal design of upstream processes in biotransformation technologies.

    PubMed

    Dheskali, Endrit; Michailidi, Katerina; de Castro, Aline Machado; Koutinas, Apostolis A; Kookos, Ioannis K

    2017-01-01

    In this work a mathematical programming model for the optimal design of the bioreaction section of biotechnological processes is presented. Equations for the estimation of equipment cost, derived from a recent publication by the US National Renewable Energy Laboratory (NREL), are also summarized. The cost-optimal design of process units and the optimal scheduling of their operation can be obtained using the proposed formulation, which has been implemented in software available from the journal web page or the corresponding author. The proposed optimization model can be used to quantify the effects of decisions taken at lab scale on the industrial-scale process economics. It is of paramount importance to note that this can be achieved at an early stage in the development of a biotechnological project. Two case studies are presented that demonstrate the usefulness and potential of the proposed methodology. Copyright © 2016. Published by Elsevier Ltd.
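    Equipment-cost correlations of the kind summarized from the NREL report are commonly expressed as power-law scaling from a reference size. A generic sketch (the exponent 0.6 is the classic "six-tenths rule" used here for illustration; it is not a value taken from the paper, and the base cost and sizes are hypothetical):

```python
def scaled_equipment_cost(base_cost, base_size, new_size, exponent=0.6):
    """Power-law cost scaling: cost = base_cost * (new_size / base_size)**n.
    Exponents below 1 capture economies of scale in equipment purchase."""
    return base_cost * (new_size / base_size) ** exponent

# Doubling a hypothetical 10 m^3 bioreactor that costs 100,000 at base size
cost = scaled_equipment_cost(100_000.0, 10.0, 20.0)
```

    In a design optimization model, correlations like this make equipment cost a smooth function of the sizing decision variables, so the cost-optimal unit sizes and schedule can be searched jointly.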

  15. Optimization of Designs for Nanotube-based Scanning Probes

    NASA Technical Reports Server (NTRS)

    Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.

  16. A method of network topology optimization design considering application process characteristic

    NASA Astrophysics Data System (ADS)

    Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo

    2018-03-01

    Communication networks are designed to meet the usage requirements of users for various network applications. Previous studies of network topology optimization design have mainly considered network traffic, which is the result of network application operation, not a design element of communication networks. A network application is a procedure of the usage of services by users with some demanded performance requirements, and it has an obvious process characteristic. In this paper, we propose a method to optimize the design of communication network topology considering this application process characteristic. Taking the minimum network delay as the objective, and the cost of network design and network connectivity reliability as constraints, an optimization model of network topology design is formulated, and the optimal network topology design is searched for by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under the background of multiple process-oriented applications, which can guide the generation of the initial population and thus improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of the proposed method. Network topology optimization design considering applications can improve the reliability of applications and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practice.
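    The delay-minimizing GA search over candidate topologies can be sketched in bare-bones form: a bit-string encodes which links are enabled, and a constraint is handled by a penalty term. The objective below is a toy stand-in, not the paper's delay model; all numbers are hypothetical.

```python
import random

def genetic_minimize(obj, n_bits, pop_size=30, gens=60, p_mut=0.02, seed=1):
    """Minimal generational GA over bit-strings: tournament selection,
    one-point crossover, bit-flip mutation, with the best-ever kept."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = min(pop, key=obj)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            parent_a = min(rng.sample(pop, 3), key=obj)  # tournament of 3
            parent_b = min(rng.sample(pop, 3), key=obj)
            cut = rng.randrange(1, n_bits)               # one-point crossover
            child = parent_a[:cut] + parent_b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=obj)
    return best

def toy_delay(bits):
    """Toy objective: delay falls as links are added, with a cost
    penalty once more than 4 links are enabled (hypothetical numbers)."""
    links = sum(bits)
    return 10.0 / max(links, 1) + (0.0 if links <= 4 else 5.0 * (links - 4))

best = genetic_minimize(toy_delay, n_bits=8)
```

    The paper's idea of using topology-parameter insight to seed the initial population would replace the random initialization above with biased sampling toward promising link counts.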

  17. The Use of Factorial Forecasting to Predict Public Response

    ERIC Educational Resources Information Center

    Weiss, David J.

    2012-01-01

    Policies that call for members of the public to change their behavior fail if people don't change; predictions of whether the requisite changes will take place are needed prior to implementation. I propose to solve the prediction problem with Factorial Forecasting, a version of functional measurement methodology that employs group designs. Aspects…

  18. Promoting contraceptive use among unmarried female migrants in one factory in Shanghai: a pilot workplace intervention.

    PubMed

    Qian, Xu; Smith, Helen; Huang, Wenyuan; Zhang, Jie; Huang, Ying; Garner, Paul

    2007-05-31

    In urban China, more single women are becoming pregnant and resorting to induced abortion, despite the wide availability of temporary methods of contraception. We developed and piloted a workplace-based intervention to promote contraceptive use in unmarried female migrants working in privately owned factories. Quasi-experimental design. In consultation with clients, we developed a workplace based intervention to promote contraception use in unmarried female migrants in a privately owned factory. We then implemented this in one factory, using a controlled before-and-after design. The intervention included lectures, bespoke information leaflets, and support to the factory doctors in providing a contraceptive service. 598 women participated: most were under 25, migrants to the city, with high school education. Twenty percent were lost when staff were made redundant, and implementation was logistically complicated. All women attended the initial lecture, and just over half the second lecture. Most reported reading the educational material provided (73%), but very few women reported using the free family planning services offered at the factory clinic (5%) or the Family Planning Institute (3%). At baseline, 90% (N = 539) stated that contraceptives were required if having sex before marriage; of those reporting sex in the last three months, the majority reporting using contraceptives (78%, 62/79) but condom use was low (44%, 35/79). Qualitative data showed that the reading material seemed to be popular and young women expressed a need for more specific reproductive health information, particularly on HIV/AIDS. Women wanted services with some privacy and anonymity, and views on the factory service were mixed. Implementing a complex intervention with a hard to reach population through a factory in China, using a quasi-experimental design, is not easy. Further research should focus on the specific needs and service preferences of this population and these should be

  19. Cr(VI) transport via a supported ionic liquid membrane containing CYPHOS IL101 as carrier: system analysis and optimization through experimental design strategies.

    PubMed

    Rodríguez de San Miguel, Eduardo; Vital, Xóchitl; de Gyves, Josefina

    2014-05-30

    Chromium(VI) transport through a supported liquid membrane (SLM) system containing the commercial ionic liquid CYPHOS IL101 as carrier was studied. A reducing stripping phase was used as a means of increasing recovery and of simultaneously transforming Cr(VI) into a less toxic residue for disposal or reuse. General functions which describe the time-dependent evolution of the metal fractions in the cell compartments were defined and used in data evaluation. An experimental design strategy, using factorial and central-composite design matrices, was applied to assess the influence of the extractant, NaOH and citrate concentrations in the different phases, while a desirability function scheme allowed the synchronized optimization of depletion and recovery of the analyte. The mechanism of chromium permeation was analyzed and discussed to contribute to the understanding of the transfer process. The influence of metal concentration was evaluated as well. The presence of different interfering ions (Ca²⁺, Al³⁺, NO₃⁻, SO₄²⁻, and Cl⁻) at several Cr(VI):interfering ion ratios was studied through the use of a Plackett-Burman experimental design matrix. Under optimized conditions, 90% recovery was obtained from a feed solution containing 7 mg L⁻¹ of Cr(VI) in 0.01 mol dm⁻³ HCl medium after 5 h of pertraction. Copyright © 2014 Elsevier B.V. All rights reserved.
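    The synchronized optimization of depletion and recovery uses a desirability scheme of the Derringer type: each response is mapped onto [0, 1] and the individual desirabilities are combined by a geometric mean. A generic sketch (the target ranges and response values below are invented for illustration, not the paper's data):

```python
import math

def d_maximize(y, low, high, weight=1.0):
    """One-sided desirability for a response to maximize: 0 below `low`,
    1 above `high`, and a power-law ramp in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall_desirability(ds):
    """Combine individual desirabilities by their geometric mean, so a
    single unacceptable response (d = 0) zeroes the overall score."""
    return math.prod(ds) ** (1.0 / len(ds))

# Hypothetical responses: 85% depletion (target 70-100), 90% recovery (target 60-100)
D = overall_desirability([d_maximize(85.0, 70.0, 100.0),
                          d_maximize(90.0, 60.0, 100.0)])
```

    The optimizer then searches the factor space (here, extractant, NaOH and citrate concentrations) for the settings that maximize the overall desirability D.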

  20. The optimal design of UAV wing structure

    NASA Astrophysics Data System (ADS)

    Długosz, Adam; Klimek, Wiktor

    2018-01-01

    The paper presents an optimal design of UAV wing, made of composite materials. The aim of the optimization is to improve strength and stiffness together with reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass are defined. The paper presents an application of the in-house implementation of the evolutionary multi-objective algorithm in optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. Numerical FEM model, consisting of different composite materials is created. Adequacy of the numerical model is verified by results obtained from the experiment, performed on a tensile testing machine. Examples of multi-objective optimization by means of Pareto-optimal set of solutions are presented.

  1. A Survey of the Medical Needs of a Group of Small Factories*

    PubMed Central

    Lee, W. R.

    1962-01-01

    The present interest in medical services for small factories is matched by the limited objective information which is available on the demand for and needs of such services. As a teaching project, a survey was made of factories with between 30 and 200 employees on an estate in the North West where there was no organized medical service. Unfortunately, time allowed only 22 factories to be visited. The findings, therefore, are regarded as indicative rather than conclusive, but this does not detract from their interest. Factories were visited by two or three postgraduate students who completed a questionnaire designed to standardize their findings. The questionnaire is included as an appendix to this paper. Regarding the demand for medical services, four of the 22 factories were subsidiaries of larger organizations and had part-time medical advice, 14 expressed no interest even if this would have involved no financial commitment, and the remaining four were interested for differing reasons. The needs of the factories in this context were found to be, first, advice and perhaps better supervision of non-mechanical hazards and, secondly, supervision of the first aid arrangements. From the ambulance journey records of the local authority there appeared to be no great demand for local casualty facilities. To meet these needs it is suggested that the functions of the appointed factory doctor might be modified to include wider supervision of non-mechanical hazards and supervision of first aid arrangements. It is also suggested that the National Health Service should form the basis for dealing with those cases requiring more than first aid. PMID:14463582

  2. Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.

    PubMed

    Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk

    2015-08-01

    The purpose of this study was to evaluate grouping schemes for exposure to total dust among cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken from nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize the variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of the different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and the highest elasticity among the grouping schemes. Our findings suggest that a grouping method based on ranking of job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
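    The cross-validated comparison of grouping schemes can be sketched generically: predict each held-out measurement by its group's mean in the training folds and score the RMSE. The data below are synthetic (two jobs with invented exposure levels), not the cement-dust measurements, and the function is a simplified stand-in for the paper's mixed-effects approach.

```python
import math
import random

def kfold_rmse(pairs, k=10, seed=0):
    """Cross-validated RMSE when each held-out (group, value) measurement
    is predicted by its group's mean over the training folds."""
    rng = random.Random(seed)
    data = list(pairs)
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    sq_err, n = 0.0, 0
    for i in range(k):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        by_group = {}
        for group, y in train:
            by_group.setdefault(group, []).append(y)
        means = {g: sum(ys) / len(ys) for g, ys in by_group.items()}
        grand_mean = sum(y for _, y in train) / len(train)
        for group, y in folds[i]:
            pred = means.get(group, grand_mean)  # fall back for unseen groups
            sq_err += (pred - y) ** 2
            n += 1
    return math.sqrt(sq_err / n)

# Synthetic example: two jobs with clearly different mean exposures
by_job = [("mill", 1.0)] * 20 + [("packing", 5.0)] * 20
pooled = [("all", y) for _, y in by_job]
err_grouped, err_pooled = kfold_rmse(by_job), kfold_rmse(pooled)
```

    A grouping scheme that separates genuinely different exposure groups yields a lower cross-validated RMSE than pooling everyone together, which is the comparison logic behind ranking the candidate schemes.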

  3. Designing and optimizing a healthcare kiosk for the community.

    PubMed

    Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji

    2015-03-01

    Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but solutions need to meet acceptable standards, e.g. provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk for collecting vital signs measurements. The design problem was decomposed, formalized, focused and used to generate multiple solutions. Systematic implementation and evaluation allowed the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods and the quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provided a general methodology for similar design problems. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Optimal two-stage enrichment design correcting for biomarker misclassification.

    PubMed

    Zang, Yong; Guo, Beibei

    2018-01-01

    The enrichment design is an important clinical trial design to detect the treatment effect of the molecularly targeted agent (MTA) in personalized medicine. Under this design, patients are stratified into marker-positive and marker-negative subgroups based on their biomarker statuses and only the marker-positive patients are enrolled into the trial and randomized to receive either the MTA or a standard treatment. As the biomarker plays a key role in determining the enrollment of the trial, a misclassification of the biomarker can induce substantial bias, undermine the integrity of the trial, and seriously affect the treatment evaluation. In this paper, we propose a two-stage optimal enrichment design that utilizes the surrogate marker to correct for the biomarker misclassification. The proposed design is optimal in the sense that it maximizes the probability of correctly classifying each patient's biomarker status based on the surrogate marker information. In addition, after analytically deriving the bias caused by the biomarker misclassification, we develop a likelihood ratio test based on the EM algorithm to correct for such bias. We conduct comprehensive simulation studies to investigate the operating characteristics of the optimal design and the results confirm the desirable performance of the proposed design.
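    The design goal of maximizing the probability of correctly classifying each patient's biomarker status from the surrogate is, at its core, a Bayes' rule computation. An illustrative sketch (the prevalence, sensitivity, and specificity values are hypothetical, not from the paper):

```python
def prob_true_positive(prevalence, sensitivity, specificity):
    """P(truly marker-positive | surrogate test positive), by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# A surrogate with 90% sensitivity and specificity at 30% marker prevalence
ppv = prob_true_positive(0.30, 0.90, 0.90)
```

    At low prevalence, even an accurate surrogate enrolls many truly marker-negative patients, which is exactly the misclassification bias the proposed likelihood ratio test corrects for.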

  5. Optimized emission in nanorod arrays through quasi-aperiodic inverse design.

    PubMed

    Anderson, P Duke; Povinelli, Michelle L

    2015-06-01

    We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.

  6. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
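    The iterative reduction of a discretely sampled design can be sketched with a generic greedy backward elimination: repeatedly drop the sample whose removal least degrades an information criterion. Here a D-criterion for a toy two-parameter linear model stands in for the QMTI objective; this is an illustration of the design-reduction idea, not the authors' algorithm.

```python
import math

def d_criterion(xs):
    """log det(X^T X) for the two-parameter linear model y = a + b*x,
    a standard D-optimality measure for this toy model."""
    n, sx, sxx = len(xs), sum(xs), sum(x * x for x in xs)
    det = n * sxx - sx * sx
    return math.log(det) if det > 0 else float("-inf")

def reduce_design(xs, target):
    """Greedily remove the sample point whose removal hurts the criterion least."""
    pts = list(xs)
    while len(pts) > target:
        i = max(range(len(pts)),
                key=lambda j: d_criterion(pts[:j] + pts[j + 1:]))
        pts.pop(i)
    return pts

# From six equally spaced sample points, keep the three most informative
kept = reduce_design([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], 3)
```

    As the paper notes for the Z-spectrum case, iterating on an existing discrete sampling (rather than free-form placement) makes it easy to keep pragmatic constraints and avoid clustered, repeated measurement points.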

  7. High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.

    2017-01-01

    To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high aspect ratio flexible wings. We developed a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations, including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high aspect ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high aspect ratio wings. This research has resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.

  8. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.

  9. Design optimization of gas generator hybrid propulsion boosters

    NASA Technical Reports Server (NTRS)

    Weldon, Vincent; Phillips, Dwight U.; Fink, Lawrence E.

    1990-01-01

    A methodology used in support of a contract study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design that meets various specified optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated along with preliminary cost and reliability estimates.

  10. A hybrid nonlinear programming method for design optimization

    NASA Technical Reports Server (NTRS)

    Rajan, S. D.

    1986-01-01

    Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.

  11. Nonlinear Shaping Architecture Designed with Using Evolutionary Structural Optimization Tools

    NASA Astrophysics Data System (ADS)

    Januszkiewicz, Krystyna; Banachowicz, Marta

    2017-10-01

    The paper explores the possibilities of using Evolutionary Structural Optimization (ESO) digital tools in an integrated structural and architectural design in response to the current needs geared towards sustainability, combining ecological and economic efficiency. The first part of the paper defines the Evolutionary Structural Optimization tools, which were developed specifically for engineering purposes using finite element analysis as a framework. The development of ESO has led to several incarnations, which are all briefly discussed (Additive ESO, Bi-directional ESO, Extended ESO). The second part presents results of using these tools in structural and architectural design. Actual building projects which involve optimization as a part of the original design process are presented (the Crematorium in Kakamigahara, Gifu, Japan, 2006, and SANAA's Learning Centre, EPFL, Lausanne, Switzerland, 2008, among others). The conclusion emphasizes that structural engineering and architectural design mean directing attention to the solutions used by Nature, designing works that are optimally shaped and form their own environments. Architectural forms never constitute the optimum shape derived through a form-finding process driven only by structural optimization, but rather embody and integrate a multitude of parameters. It might be assumed that there is a similarity between these processes in nature and the presented design methods. Contemporary digital methods make the simulation of such processes possible, and thus enable us to refer back to the empirical methods of previous generations.

  12. PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies.

    PubMed

    Aoki, Yasunori; Sundqvist, Monika; Hooker, Andrew C; Gennemark, Peter

    2016-04-01

    Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of drug discovery in vivo studies. Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goal of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher Information Matrix. The software was implemented in C++ using the Qt framework to assure responsive user-software interaction through a rich graphical user interface while achieving the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. The software design goals were simplicity, speed and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was, on average over 14 test problems, 30 times faster in PopED lite than in an already existing optimal design software tool. PopED lite is now used in real drug discovery projects and a few of these case studies are presented in this paper. PopED lite is designed to be simple, fast and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss proposed design, test another design, etc.). Intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
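The Fisher-Information-Matrix evaluation at the heart of such tools can be illustrated on a toy problem. The one-compartment model, parameter values, noise level, and candidate sampling schedules below are hypothetical, not taken from PopED lite:

```python
import numpy as np

def conc(times, theta):
    # hypothetical one-compartment IV bolus model: C(t) = (dose/V) * exp(-k*t)
    dose = 100.0
    V, k = theta
    return (dose / V) * np.exp(-k * np.asarray(times, float))

def fim(times, theta, sigma=0.1, h=1e-6):
    # Fisher information under additive Gaussian noise: (1/sigma^2) * J^T J,
    # with J[i, j] = d conc(t_i)/d theta_j from central finite differences
    theta = np.asarray(theta, float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        J[:, j] = (conc(times, tp) - conc(times, tm)) / (2.0 * h)
    return J.T @ J / sigma**2

def d_criterion(times, theta):
    # D-optimality score: log det(FIM), larger is better
    return np.linalg.slogdet(fim(times, theta))[1]

theta = (10.0, 0.3)          # V, k: made-up parameter values
spread = [0.5, 2.0, 8.0]     # well-separated sampling times
clustered = [0.5, 0.6, 0.7]  # nearly redundant sampling times
```

Comparing `d_criterion(spread, theta)` against `d_criterion(clustered, theta)` shows why an optimizer prefers spread-out sampling: clustered times yield nearly collinear sensitivities and a much smaller information determinant.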

  13. A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.

  14. An Expert System-Driven Method for Parametric Trajectory Optimization During Conceptual Design

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Mathew R.; Steffens, Michael; Edwards, Stephen; Diaz, Manuel J.; Holt, James B.

    2015-01-01

    During the early phases of engineering design, the costs committed are high, costs incurred are low, and the design freedom is high. It is well documented that decisions made in these early design phases drive the entire design's life cycle cost. In a traditional paradigm, key design decisions are made when little is known about the design. As the design matures, design changes become more difficult in both cost and schedule to enact. The current capability-based paradigm, which has emerged because of the constrained economic environment, calls for the infusion of knowledge usually acquired during later design phases into earlier design phases, i.e. bringing knowledge acquired during preliminary and detailed design into pre-conceptual and conceptual design. An area of critical importance to launch vehicle design is the optimization of its ascent trajectory, as the optimal trajectory will be able to take full advantage of the launch vehicle's capability to deliver a maximum amount of payload into orbit. Hence, the optimal ascent trajectory plays an important role in the vehicle's affordability posture yet little of the information required to successfully optimize a trajectory is known early in the design phase. Thus, the current paradigm of optimizing ascent trajectories involves generating point solutions for every change in a vehicle's design parameters. This is often a very tedious, manual, and time-consuming task for the analysts. Moreover, the trajectory design space is highly non-linear and multi-modal due to the interaction of various constraints. When these obstacles are coupled with the Program to Optimize Simulated Trajectories (POST), an industry standard program to optimize ascent trajectories that is difficult to use, expert trajectory analysts are required to effectively optimize a vehicle's ascent trajectory. 
Over the course of this paper, the authors discuss a methodology developed at NASA Marshall's Advanced Concepts Office to address these issues.

  15. Baby factories taint surrogacy in Nigeria.

    PubMed

    Makinde, Olusesan Ayodeji; Makinde, Olufunmbi Olukemi; Olaleye, Olalekan; Brown, Brandon; Odimegwu, Clifford O

    2016-01-01

    The practice of reproductive medicine in Nigeria is facing new challenges with the proliferation of 'baby factories'. Baby factories are buildings, hospitals or orphanages that have been converted into places for young girls and women to give birth to children for sale on the black market, often to infertile couples, or into trafficking rings. This practice illegally provides outcomes (children) similar to surrogacy. While surrogacy has not been well accepted in this environment, the proliferation of baby factories further threatens its acceptance. The involvement of medical and allied health workers in the operation of baby factories raises ethical concerns. The lack of a properly defined legal framework and code of practice for surrogacy makes it difficult to prosecute baby factory owners, especially when they are health workers claiming to be providing services to clients. In this environment, surrogacy and other assisted reproductive techniques urgently require regulation in order to define when ethico-legal lines have been crossed in providing surrogacy or surrogacy-like services. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  16. Design and optimization of a self-deploying PV tent array

    NASA Astrophysics Data System (ADS)

    Colozza, Anthony J.

    A study was performed to design a self-deploying, tent-shaped PV (photovoltaic) array and optimize the design for maximum specific power. Each structural component of the design was analyzed to determine the size necessary to withstand the various forces it would be subjected to. Through this analysis the component weights were determined. An optimization was performed to determine the array dimensions and blanket geometry which produce the maximum specific power for a given PV blanket. This optimization was performed for both Lunar and Martian environmental conditions. The performance specifications for the array at both locations and with various PV blankets were determined.

  17. Flutter optimization in fighter aircraft design

    NASA Technical Reports Server (NTRS)

    Triplett, W. E.

    1984-01-01

    The efficient design of aircraft structure involves a series of compromises among various engineering disciplines. These compromises are necessary to ensure the best overall design. To effectively reconcile the various technical constraints requires a number of design iterations, with the accompanying long elapsed time. Automated procedures can reduce the elapsed time, improve productivity and hold the promise of optimum designs which may be missed by batch processing. Several examples are given of optimization applications including aeroelastic constraints. Particular attention is given to the success or failure of each example and the lessons learned. The specific applications are shown. The final two applications were made recently.

  18. Application of Optimization Techniques to Spectrally Modulated, Spectrally Encoded Waveform Design

    DTIC Science & Technology

    2008-09-01

    means y1 and y2 do indeed differ [22]. To compare means for more than two populations, the Least Significant Difference (LSD) test may be used...case, the LSD test for a full-factorial design is given by LSD = t_{α/2, Ne−a} · sqrt(2 eᵀe / (n(Ne − a))), (2.41) where α is the significance level and ν = Ne − a...rejected if the means differ by more than the LSD [22]. III. Methodology. In many respects, the goal of this dissertation is to develop and
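Reading the snippet's formula as LSD = t_{α/2, Ne−a} · sqrt(2 eᵀe / (n(Ne − a))), a direct computation looks like the following sketch; the critical value is passed in explicitly to keep the example standard-library-only, and the numbers are made up:

```python
import math

def lsd(t_crit, sse, n, n_obs, a):
    # LSD = t_{alpha/2, Ne-a} * sqrt(2 * e'e / (n * (Ne - a)))
    # sse  : residual sum of squares e'e
    # n    : replicates per treatment mean
    # n_obs: total number of observations Ne
    # a    : number of treatment levels
    df = n_obs - a
    return t_crit * math.sqrt(2.0 * sse / (n * df))

# made-up numbers: t_{0.025, 10} is roughly 2.228
value = lsd(t_crit=2.228, sse=5.0, n=3, n_obs=12, a=2)
```

Two treatment means would then be declared different whenever their absolute difference exceeds `value`.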

  19. The Walking Interventions Through Texting (WalkIT) Trial: Rationale, Design, and Protocol for a Factorial Randomized Controlled Trial of Adaptive Interventions for Overweight and Obese, Inactive Adults.

    PubMed

    Hurley, Jane C; Hollingshead, Kevin E; Todd, Michael; Jarrett, Catherine L; Tucker, Wesley J; Angadi, Siddhartha S; Adams, Marc A

    2015-09-11

    Walking is a widely accepted and frequently targeted health promotion approach to increase physical activity (PA). Interventions to increase PA have produced only small improvements. Stronger and more potent behavioral intervention components are needed to increase time spent in PA, improve cardiometabolic risk markers, and optimize health. Our aim is to present the rationale and methods from the WalkIT Trial, a 4-month factorial randomized controlled trial (RCT) in inactive, overweight/obese adults. The main purpose of the study was to evaluate whether intensive adaptive components result in greater improvements to adults' PA compared to the static intervention components. Participants enrolled in a 2x2 factorial RCT and were assigned to one of four semi-automated, text message-based walking interventions. Experimental components included adaptive versus static steps/day goals, and immediate versus delayed reinforcement. Principles of percentile shaping and behavioral economics were used to operationalize experimental components. A Fitbit Zip measured the main outcome: participants' daily physical activity (steps and cadence) over the 4-month duration of the study. Secondary outcomes included self-reported PA, psychosocial outcomes, aerobic fitness, and cardiorespiratory risk factors assessed pre/post in a laboratory setting. Participants were recruited through email listservs and websites affiliated with the university campus, community businesses and local government, social groups, and social media advertising. This study has completed data collection as of December 2014, but data cleaning and preliminary analyses are still in progress. We expect to complete analysis of the main outcomes in late 2015 to early 2016. The Walking Interventions through Texting (WalkIT) Trial will further the understanding of theory-based intervention components to increase the PA of men and women who are healthy, insufficiently active and are overweight or obese. 
WalkIT is one of
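The percentile-shaping principle used to operationalize the adaptive goals can be sketched as below; the 60th-percentile rule, seven-day window, and goal floor are illustrative assumptions, not the trial's published parameters:

```python
def percentile(data, p):
    # linear-interpolation percentile, 0 <= p <= 100
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def adaptive_step_goal(recent_daily_steps, p=60, floor=2000):
    # shaping rule: tomorrow's goal is the p-th percentile of the recent
    # window of daily step counts, never below a minimum floor (all values
    # here are illustrative, not the trial's parameters)
    return max(floor, percentile(recent_daily_steps, p))

goal = adaptive_step_goal([3200, 4100, 2800, 5000, 3600, 4400, 3900])
```

Because the goal tracks a percentile of the participant's own recent behavior, it rises as activity improves, which is the adaptive contrast to a static steps/day goal.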

  20. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but not a promising one from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures of a network's ability to withstand failure scenarios. To improve the resilience of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing the resilience measure of the configuration as the second. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates a procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resilience indices.
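The normalization of objectives with distinct metrics can be sketched in general form; the bounds, weight, and sample values below are placeholders rather than the paper's actual cost and resilience figures:

```python
def combined_objective(cost, resilience, cost_bounds, res_bounds, w=0.5):
    # scale both metrics to [0, 1] so a cost to be minimized and a
    # resilience index to be maximized can be merged into one
    # minimization objective (weighted-sum sketch)
    c_lo, c_hi = cost_bounds
    r_lo, r_hi = res_bounds
    c_norm = (cost - c_lo) / (c_hi - c_lo)        # 0 = cheapest network
    r_norm = (resilience - r_lo) / (r_hi - r_lo)  # 1 = most resilient
    return w * c_norm + (1.0 - w) * (1.0 - r_norm)

# placeholder figures: a cheap, resilient design scores near 0,
# an expensive, fragile one near 1
best = combined_objective(1.0e6, 0.9, (1.0e6, 5.0e6), (0.1, 1.0))
worst = combined_objective(5.0e6, 0.1, (1.0e6, 5.0e6), (0.1, 1.0))
```

With both terms on a common [0, 1] scale, a single-objective optimizer such as differential evolution can search the design space without one metric's units dominating the other.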

  1. Solar Collector Design Optimization: A Hands-on Project Case Study

    ERIC Educational Resources Information Center

    Birnie, Dunbar P., III; Kaz, David M.; Berman, Elena A.

    2012-01-01

    A solar power collector optimization design project has been developed for use in undergraduate classrooms and/or laboratories. The design optimization depends on understanding the current-voltage characteristics of the starting photovoltaic cells as well as how the cell's electrical response changes with increased light illumination. Students…

  2. A General Multidisciplinary Turbomachinery Design Optimization system Applied to a Transonic Fan

    NASA Astrophysics Data System (ADS)

    Nemnem, Ahmed Mohamed Farid

    The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators or aeroengines. A new airfoil section design capability has been added to an open source parametric 3D blade design tool. Curvature of the meanline is controlled using B-splines to create the airfoils. The curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles. A smooth thickness distribution is then added to the airfoil to guarantee a smooth shape while maintaining a prescribed thickness distribution. A leading edge B-spline definition has also been implemented to achieve customized airfoil leading edges, which guarantees smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as a test and reference case. This design is more general than a conventional design, giving access to alternative design methodologies. The whole mechanical and aerodynamic design loops are automated for the optimization process. The flow path and the geometrical properties of the rotor are initially created using the axi-symmetric design and analysis code (T-AXI). The main and splitter blades are parametrically designed with the created geometry builder (3DBGB) using the newly added features (curvature technique). The solid model of the rotor sector with periodic boundaries, combining the main blade and splitter, is created using MATLAB code directly connected to SolidWorks, including the hub, fillets and tip clearance. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping maximum stress as a constraint with a safety factor. A genetic algorithm followed by a numerical gradient optimization strategy is used in the mechanical optimization. The mass of the splittered transonic fan blades is reduced by 2.6% while constraining the maximum
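The curvature-to-meanline construction described above can be sketched numerically; the midpoint quadrature stands in for the tool's analytic integration, and the constant-curvature test shape is purely illustrative:

```python
import math

def meanline_from_curvature(kappa, length, n=1000):
    # meanline by double integration: theta(s) = integral of kappa ds,
    # then (x, y) = integral of (cos theta, sin theta) ds, here with a
    # midpoint rule instead of the tool's analytic integration
    ds = length / n
    theta, x, y = 0.0, 0.0, 0.0
    pts = [(0.0, 0.0)]
    for i in range(n):
        k = kappa((i + 0.5) * ds)
        theta_mid = theta + 0.5 * k * ds
        x += math.cos(theta_mid) * ds
        y += math.sin(theta_mid) * ds
        theta += k * ds
        pts.append((x, y))
    return pts

# constant curvature 1 over arc length pi traces a half circle of radius 1,
# ending near (0, 2)
end_x, end_y = meanline_from_curvature(lambda s: 1.0, math.pi)[-1]
```

Controlling curvature directly (here via the `kappa` function, in the tool via B-splines) guarantees a smooth blade angle distribution, since the angle is its integral.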

  3. Design Optimization of a Centrifugal Fan with Splitter Blades

    NASA Astrophysics Data System (ADS)

    Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong

    2015-05-01

    Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise using three-dimensional Reynolds-averaged Navier-Stokes equations and hybrid multi-objective evolutionary algorithm. Two design variables defining the location of splitter, and the height ratio between inlet and outlet of impeller were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
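The surrogate-modelling step (Latin hypercube sampling of the design space, then a response-surface fit) can be sketched generically; the quadratic test function below is made up, whereas the paper fits its surrogates to RANS results:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    # one stratified sample per interval in each dimension, with the
    # strata randomly paired across dimensions
    rng = np.random.default_rng(seed)
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

def fit_quadratic_rsa(X, y):
    # full quadratic response surface in two variables:
    # y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

X = latin_hypercube(30, 2, seed=1)
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] ** 2   # made-up test response
coef = fit_quadratic_rsa(X, y)
```

Once fitted, the cheap polynomial surrogate replaces the expensive flow solver inside the evolutionary search, which is what makes the multi-objective optimization tractable.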

  4. CFD-Based Design Optimization Tool Developed for Subsonic Inlet

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The traditional approach to the design of engine inlets for commercial transport aircraft is a tedious process that ends with a less-than-optimum design. With the advent of high-speed computers and the availability of more accurate and reliable computational fluid dynamics (CFD) solvers, numerical optimization processes can effectively be used to design an aerodynamic inlet lip that enhances engine performance. The designers' experience at Boeing Corporation showed that for a peak Mach number on the inlet surface beyond some upper limit, the performance of the engine degrades excessively. Thus, our objective was to optimize efficiency (minimize the peak Mach number) at maximum cruise without compromising performance at other operating conditions. Using the CFD code NPARC and the numerical optimization code ADS, the NASA Lewis Research Center, in collaboration with Boeing, developed an integrated procedure to find the optimum shape of a subsonic inlet lip. A GRAPE-based three-dimensional grid generator was used to help automate the optimization procedure. The inlet lip shape at the crown and the keel was described as a superellipse, and the superellipse exponents and radii ratios were considered as design variables. Three operating conditions (cruise, takeoff, and rolling takeoff) were considered in this study. Three-dimensional Euler computations were carried out to obtain the flow field. At the initial design, the peak Mach numbers for maximum cruise, takeoff, and rolling takeoff conditions were 0.88, 1.772, and 1.61, respectively. The acceptable upper limits on the takeoff and rolling takeoff Mach numbers were 1.55 and 1.45. Since the initial design provided by Boeing was found to be optimum with respect to the maximum cruise condition, the sum of the peak Mach numbers at takeoff and rolling takeoff was minimized in the current study while the maximum cruise Mach number was constrained to be close to that at the existing design. With this objective, the
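The superellipse description of the lip lends itself to a compact parameterization; the exponent and radii below are arbitrary illustrations, not the design values from the study:

```python
import math

def superellipse_point(a, b, n, t):
    # point on |x/a|^n + |y/b|^n = 1 for t in [0, pi/2]; n = 2 gives an
    # ellipse, larger n gives a fuller, squarer lip contour
    return a * math.cos(t) ** (2.0 / n), b * math.sin(t) ** (2.0 / n)

# arbitrary crown-lip illustration: semi-axes 2.0 and 0.5, exponent 3
x, y = superellipse_point(2.0, 0.5, 3.0, 0.7)
```

Treating the exponent `n` and the ratio `a/b` as design variables, as the study does, lets an optimizer smoothly morph the lip between slender and blunt shapes.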

  5. Slime Factory.

    ERIC Educational Resources Information Center

    Fowler, Marilyn L.

    1992-01-01

    Describes a classroom activity using slime, a colloid that behaves like both a solid and a liquid. Explains how slime can be produced from guar gum. An activity in which students work in teams to become a slime factory is presented. (PR)

  6. Performance Management and Optimization of Semiconductor Design Projects

    NASA Astrophysics Data System (ADS)

    Hinrichs, Neele; Olbrich, Markus; Barke, Erich

    2010-06-01

    The semiconductor industry is characterized by fast technological changes and small time-to-market windows. Improving productivity is the key factor to stand up to the competitors and thus successfully persist in the market. In this paper a Performance Management System for analyzing, optimizing and evaluating chip design projects is presented. A task graph representation is used to optimize the design process regarding time, cost and workload of resources. Key Performance Indicators are defined in the main areas cost, profit, resources, process and technical output to appraise the project.

  7. D-Optimal Experimental Design for Contaminant Source Identification

    NASA Astrophysics Data System (ADS)

    Sai Baba, A. K.; Alexanderian, A.

    2016-12-01

    Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or a parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters (in our case, the sparsity of the sensors) to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain, and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient, each involving the determinant of large, dense matrices; this cost can be prohibitive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
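One standard randomized ingredient for such problems, shown here as a generic sketch rather than the authors' specific estimator, is Hutchinson's matrix-free trace estimation; the gradient of a log-determinant reduces to traces of the form tr(F⁻¹ ∂F/∂w), which can be approximated using only matrix-vector products:

```python
import numpy as np

def hutchinson_trace(matvec, n, n_samples=2000, seed=0):
    # tr(A) ~ (1/m) * sum_k z_k' A z_k with Rademacher vectors z_k;
    # unbiased, and needs only products A @ z, never the dense matrix
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / n_samples

# demo matrix with known trace 10.0 (diagonal 1.0, off-diagonal 0.1)
A = np.full((10, 10), 0.1)
np.fill_diagonal(A, 1.0)
approx = hutchinson_trace(lambda v: A @ v, 10)
```

For PDE-constrained problems the matrix-vector product is a solver application, so avoiding explicit formation of the large, dense matrix is precisely where the orders-of-magnitude savings come from.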

  8. Factorial-design optimization of gas chromatographic analysis of tetrabrominated to decabrominated diphenyl ethers. Application to domestic dust.

    PubMed

    Regueiro, Jorge; Llompart, Maria; Garcia-Jares, Carmen; Cela, Rafael

    2007-07-01

    Gas chromatographic analysis of polybrominated diphenyl ethers (PBDEs) has been evaluated in an attempt to achieve better control of the separation process, especially for highly substituted congeners. Use of a narrow-bore capillary column enabled adequate determination of tetra-, penta-, hexa-, hepta-, octa-, nona- and decaBDE congeners in only one chromatographic run while maintaining resolution power similar to that of conventional columns. A micro electron-capture detector (GC-microECD) was used. Chromatographic conditions were optimized by multifactorial experimental design, with the objective of obtaining not only high sensitivity but also good precision. In this way two different approaches to maximizing response and minimizing variability were tested, and are fully discussed. These optimum chromatographic conditions were then used to determine PBDEs extracted from domestic dust samples by microwave-assisted solvent extraction (MASE). Quantitative recovery (90-108%) was achieved for all the PBDEs and method precision (RSD < 13%) was satisfactory. Accuracy was tested by use of the standard reference material SRM 2585, and sub-ng g(-1) limits of detection were obtained for all compounds except BDE-209 (1.44 ng g(-1)). Finally, several samples of house dust were analysed by use of the proposed method and all the target PBDEs were detected in all the samples. BDE-209 was the predominant congener; amounts varied from 58 to 1615 ng g(-1), with an average contribution to the total PBDE burden of 52%. The main congeners of the octaBDE mixture (BDE-183, BDE-197, BDE-207 and BDE-196) also made an important contribution (29%) to the total. These are the first data about the presence of these compounds in European house-dust samples. The sum of the main congeners in the pentaBDE commercial mixture (BDE-47, BDE-99, and BDE-100) contributed 14% to the total. [Figure: polybrominated diphenyl ethers in house dust.]
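The multifactorial experimental design underlying such an optimization can be sketched generically; the factor names below are placeholders, not the study's actual chromatographic variables:

```python
from itertools import product

def two_level_full_factorial(factors):
    # 2**k full factorial design: one run per combination of the coded
    # levels -1 (low) and +1 (high)
    return [dict(zip(factors, levels))
            for levels in product([-1, +1], repeat=len(factors))]

# placeholder factor names, not the study's actual variables
runs = two_level_full_factorial(["oven_ramp", "carrier_flow", "inlet_temp"])
```

Running the instrument once per row and regressing the response on the coded levels yields main effects and interactions, which is how a factorial design trades a handful of runs for a full picture of the settings.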

  9. Design and optimization of interplanetary spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    McConaghy, Thomas Troy

    Scientists involved in space exploration are always looking for ways to accomplish more with their limited budgets. Mission designers can decrease operational costs by crafting trajectories with low launch costs, short time-of-flight, or low propellant requirements. Gravity-assist maneuvers and low-thrust, high-efficiency ion propulsion can be of great help. This dissertation describes advances in methods to design and optimize interplanetary spacecraft trajectories, particularly for missions using gravity-assist maneuvers or low-thrust engines (or both). The first part of this dissertation describes a new, efficient, two-step methodology to design and optimize low-thrust gravity-assist trajectories. Models for the launch vehicle, solar arrays, and engines are introduced and several examples of optimized trajectories are presented. For example, a 3.7-year Earth-Venus-Earth-Mars-Jupiter flyby trajectory with maximized final mass is described. The way that the parameterization of the optimization problem affects convergence speed and reliability is also investigated. The choice of coordinate system is shown to make a significant difference. The second part of this dissertation describes a way to construct Earth-Mars cycler trajectories: periodic orbits that repeatedly encounter Earth and Mars, yet require little or no propellant. We find that well-known cyclers, such as the Aldrin cycler, are special cases of a much larger family of cyclers. In fact, so many new cyclers are found that a comprehensive naming system (nomenclature) is proposed. One particularly promising new cycler, the "ballistic S1L1 cycler", is analyzed in greater detail.

  10. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  11. Truss topology optimization with simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Sankaranarayanan, S.; Haftka, Raphael T.; Kapania, Rakesh K.

    1992-01-01

    Strategies for topology optimization of trusses for minimum weight subject to stress and displacement constraints by Simultaneous Analysis and Design (SAND) are considered. The ground structure approach is used. A penalty function formulation of SAND is compared with an augmented Lagrangian formulation. The efficiency of SAND in handling combinations of general constraints is tested. A strategy for obtaining an optimal topology by minimizing the compliance of the truss is compared with a direct weight minimization solution to satisfy stress and displacement constraints. It is shown that for some problems, starting from the ground structure and using SAND is better than starting from a minimum compliance topology design and optimizing only the cross sections for minimum weight under stress and displacement constraints. A member elimination strategy to save CPU time is discussed.

  12. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained problem with multiple objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
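    The K-S function mentioned above is a smooth envelope over several objective or constraint values. A minimal sketch follows; the objective values and the choice of rho are illustrative, not the paper's aerodynamic or acoustic objectives.

```python
import math

def ks_function(values, rho=50.0):
    # Kreisselmeier-Steinhauser envelope: a smooth, conservative
    # approximation of max(values); larger rho tightens the bound
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

# Three normalized objective values (illustrative numbers)
objectives = [0.3, 0.8, 0.5]
ks = ks_function(objectives)
# Bound: max(objectives) <= ks <= max(objectives) + ln(n) / rho
```

    Because the envelope is differentiable, a gradient-based optimizer such as BFGS can be applied directly to the single aggregated function.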

  13. DKIST enclosure modeling and verification during factory assembly and testing

    NASA Astrophysics Data System (ADS)

    Larrakoetxea, Ibon; McBride, William; Marshall, Heather K.; Murga, Gaizka

    2014-08-01

    The enclosure of the Daniel K. Inouye Solar Telescope (DKIST, formerly the Advanced Technology Solar Telescope, ATST) is unique in that, apart from protecting the telescope and its instrumentation from the weather, it holds the entrance aperture stop and is required to position it with millimeter-level accuracy. The compliance of the Enclosure design with the requirements, as of the Final Design Review in January 2012, was supported by mathematical models and other analyses, which included structural and mechanical analyses (FEA), control models, ventilation analysis (CFD), thermal models, reliability analysis, etc. During Enclosure factory assembly and testing, compliance with the requirements has been verified using the real hardware, and the models created during the design phase have been revisited. The tests performed during shutter mechanism subsystem (crawler test stand) functional and endurance testing (completed summer 2013) and two comprehensive system-level factory acceptance testing campaigns (FAT#1 in December 2013 and FAT#2 in March 2014) included functional and performance tests on all mechanisms, off-normal mode tests, mechanism wobble tests, creation of the Enclosure pointing map, control system tests, and vibration tests. The comparison of the assumptions used during the design phase with the properties measured during the test campaign provides an interesting reference for future projects.

  14. Design optimization for permanent magnet machine with efficient slot per pole ratio

    NASA Astrophysics Data System (ADS)

    Potnuru, Upendra Kumar; Rao, P. Mallikarjuna

    2018-04-01

    This paper presents a methodology for the enhancement of a brushless direct current (BLDC) motor with 6 poles and 8 slots. In particular, it focuses on a multi-objective optimization using a Genetic Algorithm (GA) and Grey Wolf Optimization developed in MATLAB. The optimization aims to maximize the maximum output power and minimize the total losses of the motor. This paper presents an application of the MATLAB optimization algorithms to BLDC motor design, with 7 design parameters chosen to be free. The optimal design parameters of the motor derived by the GA are compared with those obtained by the Grey Wolf Optimization technique. A comparison of the two optimization approaches shows that the Grey Wolf Optimization technique has better convergence.

  15. Supernova Dust Factory in M74

    NASA Image and Video Library

    2006-06-09

    Astronomers using NASA's Spitzer Space Telescope have spotted a dust factory 30 million light-years away in the spiral galaxy M74. The factory is located at the scene of a massive star's explosive death, or supernova.

  16. Structural Optimization of a Force Balance Using a Computational Experiment Design

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2002-01-01

    This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, are undetected. The proposed method combines Modern Design of Experiments techniques to direct the exploration of the multi-dimensional design space, and a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives providing a systematic foundation for advancements in structural design.
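    The advantage of a designed experiment over the one-factor-at-a-time sweeps described above is that interactions among design variables become estimable. A toy sketch follows; the response model is invented purely to make the interaction visible.

```python
from itertools import product

def response(x1, x2):
    # Toy model with an interaction term that one-factor-at-a-time
    # sweeps (holding the other variable fixed) cannot detect
    return x1 + x2 + 3.0 * x1 * x2

# 2x2 full factorial over coded levels -1/+1
runs = [(x1, x2, response(x1, x2)) for x1, x2 in product((-1, 1), repeat=2)]

# The interaction contrast recovers the interaction coefficient exactly
interaction = sum(y * x1 * x2 for x1, x2, y in runs) / len(runs)
print(interaction)  # → 3.0
```

    In the balance optimization described above, the same contrasts are computed from finite element runs rather than a closed-form response.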

  17. Integration of prebend optimization in a holistic wind turbine design tool

    NASA Astrophysics Data System (ADS)

    Sartori, L.; Bortolotti, P.; Croce, A.; Bottasso, C. L.

    2016-09-01

    This paper considers the problem of identifying the optimal combination of blade prebend, rotor cone angle and nacelle uptilt, within an integrated aero-structural design environment. Prebend is designed to reach maximum rotor area at rated conditions, while cone and uptilt are computed together with all other design variables to minimize the cost of energy. Constraints are added to the problem formulation in order to translate various design requirements. The proposed optimization approach is applied to a conceptual 10 MW offshore wind turbine, highlighting the benefits of an optimal combination of blade curvature, cone and uptilt angles.

  18. Finite burn maneuver modeling for a generalized spacecraft trajectory design and optimization system.

    PubMed

    Ocampo, Cesar

    2004-05-01

    The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system is presented. A generalized trajectory design and optimization system is a system that uses a single unified framework that facilitates the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with the use of controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that are generally position, velocity, mass, and time dependent. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid parameter optimization that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in Copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin.

  19. Topology optimization based design of unilateral NMR for generating a remote homogeneous field.

    PubMed

    Wang, Qi; Gao, Renjing; Liu, Shutian

    2017-06-01

    This paper presents a topology optimization based design method for the design of unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is actualized by seeking out the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived, and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about two times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.

  20. Optimal color design of psychological counseling room by design of experiments and response surface methodology.

    PubMed

    Liu, Wenjuan; Ji, Jianlin; Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design, and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impression. A total of 75 patients participated: 42 in Experiment 1 and 33 in Experiment 2. Twenty-seven representative color samples were designed in Experiment 1, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the 'central point', and three color attributes were optimized to maximize the patients' satisfaction. The experimental results show that the proposed method can yield the optimal solution for color design of a counseling room.
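    Response surface methodology of the kind used here fits a low-order polynomial to the observed ratings and then locates its stationary point. A one-factor sketch with simulated scores (not the study's data) follows.

```python
import numpy as np

# Simulated satisfaction scores over one coded color attribute; the
# true surface is quadratic with its peak at x = 2 (illustrative data)
x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
y = 10.0 - (x - 2.0) ** 2

# Fit y = b0 + b1*x + b2*x^2 by least squares
X = np.column_stack([np.ones_like(x), x, x * x])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

x_opt = -b1 / (2.0 * b2)  # stationary point of the fitted surface
```

    In the study, the same idea is applied in three color attributes at once, with the 'central point' sample anchoring the design region.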

  1. Optimal Color Design of Psychological Counseling Room by Design of Experiments and Response Surface Methodology

    PubMed Central

    Chen, Hua; Ye, Chenyu

    2014-01-01

    Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design, and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients’ perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients’ impression. A total of 75 patients participated: 42 in Experiment 1 and 33 in Experiment 2. Twenty-seven representative color samples were designed in Experiment 1, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the ‘central point’, and three color attributes were optimized to maximize the patients’ satisfaction. The experimental results show that the proposed method can yield the optimal solution for color design of a counseling room. PMID:24594683

  2. The Use of a Fractional Factorial Design to Determine the Factors That Impact 1,3-Propanediol Production from Glycerol by Halanaerobium hydrogeniformans.

    PubMed

    Kalia, Shivani; Trager, Jordan; Sitton, Oliver C; Mormile, Melanie R

    2016-08-20

    In recent years, biodiesel, a substitute for fossil fuels, has led to the excessive production of crude glycerol. The resulting crude glycerol can possess a high concentration of salts and an alkaline pH. Moreover, current crude glycerol purification methods are expensive, rendering this former commodity a waste product. However, Halanaerobium hydrogeniformans, a haloalkaliphilic bacterium, possesses the metabolic capability to convert glycerol into 1,3-propanediol, a valuable commodity compound, without the need for salt dilution or pH adjustment when grown on this waste. Experiments were performed with different combinations of 24 medium components to determine their impact on the production of 1,3-propanediol by using a fractional factorial design. Tested medium components were selected based on data from the organism's genome. Analysis of HPLC data revealed enhanced production of 1,3-propanediol with additional glycerol, pH, vitamin B12, ammonium ions, sodium sulfide, cysteine, iron, and cobalt. However, the other selected components (nitrate ions, phosphate ions, sulfate ions, sodium:potassium ratio, chloride, calcium, magnesium, silicon, manganese, zinc, borate, nickel, molybdenum, tungstate, copper, and aluminum) did not enhance 1,3-propanediol production. The use of a fractional factorial design enabled the quick and efficient assessment of the impact of 24 different medium components on 1,3-propanediol production from glycerol by a haloalkaliphilic bacterium.
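    Screening 24 components in a full two-level factorial would require 2^24 runs; a fractional factorial deliberately aliases high-order interactions to shrink the run count. A minimal half-fraction sketch for four hypothetical factors:

```python
from itertools import product

# 2^(4-1) half-fraction: the fourth factor is generated as D = A*B*C,
# so 8 runs cover 4 factors (defining relation I = ABCD); the factors
# here are generic placeholders, not the 24 medium components
base = list(product((-1, 1), repeat=3))
design = [(a, b, c, a * b * c) for a, b, c in base]

print(len(design))  # → 8
```

    The same generator idea, applied with a deeper fraction, is what lets a study like this screen 24 components in a practical number of cultures.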

  3. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
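    The Gaussian quadrature step mentioned above replaces the Bayesian expectation integral with a weighted sum over nodes. A minimal sketch using NumPy's probabilists' Gauss-Hermite rule; the integrand is a placeholder, not an actual design criterion.

```python
import numpy as np

# Approximate E[f(theta)] for theta ~ N(mu, sigma^2) with Gauss-Hermite
# quadrature; hermegauss integrates against the weight exp(-x^2 / 2),
# so the result is normalized by sqrt(2*pi)
mu, sigma = 1.0, 0.5
nodes, weights = np.polynomial.hermite_e.hermegauss(10)

f = lambda t: t ** 2  # placeholder integrand (a design criterion in practice)
expect = np.sum(weights * f(mu + sigma * nodes)) / np.sqrt(2.0 * np.pi)
# Exact value for this integrand: mu^2 + sigma^2 = 1.25
```

    In the paper the expectation is taken over the prior on the model parameters, and the criterion itself (D-, A- or E-optimality) replaces the placeholder.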

  4. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  5. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.
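    The regression-approximator idea can be sketched in a few lines: sample the expensive analysis once, fit a cheap polynomial, and reuse it for reanalysis. The "expensive" function below is an invented stand-in, not the FLOPS code.

```python
import numpy as np

def expensive_analysis(x):
    # Stand-in for a costly analysis run (hypothetical, not FLOPS)
    return 0.5 * x ** 2 + 3.0 * x + 2.0

# Sample the expensive code once over the design range
xs = np.linspace(0.0, 4.0, 9)
ys = np.array([expensive_analysis(x) for x in xs])

# Regression approximator: quadratic least-squares fit
surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))

# Reanalysis at a new point now costs microseconds instead of a full run
approx = surrogate(2.5)
```

    A neural network approximator plays the same role but, as the abstract notes, tends to follow a mean path through the data rather than hugging the sample points.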

  6. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  7. Recent progress in neutrino factory and muon collider research within the Muon Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. M. Alsharoa; Charles M. Ankenbrandt; Muzaffer Atac

    2003-08-01

    We describe the status of our effort to realize a first neutrino factory and the progress made in understanding the problems associated with the collection and cooling of muons towards that end. We summarize the physics that can be done with neutrino factories as well as with intense cold beams of muons. The physics potential of muon colliders is reviewed, both as Higgs Factories and compact high energy lepton colliders. The status and timescale of our research and development effort is reviewed as well as the latest designs in cooling channels including the promise of ring coolers in achieving longitudinal and transverse cooling simultaneously. We detail the efforts being made to mount an international cooling experiment to demonstrate the ionization cooling of muons.

  8. An optimal system design process for a Mars roving vehicle

    NASA Technical Reports Server (NTRS)

    Pavarini, C.; Baker, J.; Goldberg, A.

    1971-01-01

    The problem of determining the optimal design for a Mars roving vehicle is considered. A system model is generated by consideration of the physical constraints on the design parameters and the requirement that the system be deliverable to the Mars surface. An expression which evaluates system performance relative to mission goals as a function of the design parameters only is developed. The use of nonlinear programming techniques to optimize the design is proposed and an example considering only two of the vehicle subsystems is formulated and solved.
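    The nonlinear-programming formulation the authors propose can be illustrated with a toy two-subsystem trade study; the performance model, mass budget, and penalty method below are invented for illustration, not taken from the paper.

```python
# Toy two-subsystem trade: maximize science return subject to a mass
# budget, solved with a crude penalty method and grid search standing
# in for a real nonlinear programming solver (all numbers illustrative)

def performance(m_power, m_comm):
    # Diminishing returns for mass allocated to each subsystem
    return m_power ** 0.5 + 2.0 * m_comm ** 0.5

MASS_BUDGET = 10.0

def penalized(m_power, m_comm, mu=1e3):
    excess = max(0.0, m_power + m_comm - MASS_BUDGET)
    return -performance(m_power, m_comm) + mu * excess ** 2

best = min(
    ((p / 10.0, c / 10.0) for p in range(1, 101) for c in range(1, 101)),
    key=lambda pc: penalized(*pc),
)
# First-order conditions give m_comm = 4 * m_power on the budget
# boundary, i.e. (2.0, 8.0) for this model
```

    A production NLP solver replaces the grid search, but the structure (objective, delivery constraint, penalty or Lagrangian handling) is the same.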

  9. Multidisciplinary design optimization: An emerging new engineering discipline

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1993-01-01

    This paper defines the Multidisciplinary Design Optimization (MDO) as a new field of research endeavor and as an aid in the design of engineering systems. It examines the MDO conceptual components in relation to each other and defines their functions.

  10. Optimization of factors to obtain cassava starch films with improved mechanical properties

    NASA Astrophysics Data System (ADS)

    Monteiro, Mayra; Oliveira, Victor; Santos, Francisco; Barros Neto, Eduardo; Silva, Karyn; Silva, Rayane; Henrique, João; Chibério, Abimaelle

    2017-08-01

    In this study, the optimization of the factors that significantly influence the improvement of the mechanical properties of cassava starch films was investigated through a complete 2^3 factorial design. The factors analyzed were the cassava starch, glycerol, and modified clay contents. A regression model was proposed by the factorial analysis, aiming to estimate the condition of the individual factors investigated at the optimum state of the mechanical properties of the biofilm, using the following statistical tools: desirability function and response surface. The response variable that delimits the improvement of the mechanical property of the biofilm is the tensile strength; the improvement is obtained by maximizing this response variable. The factorial analysis showed that the best combination of factor settings was 5 g of cassava starch, 10% glycerol and 5% modified clay, both percentages relative to the dry mass of starch used. In addition, the starch biofilm showing the lowest response contained 2 g of cassava starch, 0% modified clay and 30% glycerol, and was consequently considered the worst biofilm.
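    The desirability-function step used in studies like this can be sketched as follows; the responses, limits, and values are illustrative, not the film data from the study.

```python
def desirability(y, low, high):
    # Larger-is-better desirability, linear between low and high
    # (Derringer-Suich style scaling to [0, 1])
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

# Two hypothetical responses (illustrative numbers and limits)
d_strength = desirability(30.0, 10.0, 40.0)   # tensile strength
d_elongation = desirability(12.0, 5.0, 15.0)  # elongation at break

# Overall desirability: geometric mean, maximized over the design space
overall = (d_strength * d_elongation) ** 0.5
```

    The optimizer then searches the factor space for the settings that maximize `overall`, which is how a single best factor combination is reported.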

  11. Optimizing RF gun cavity geometry within an automated injector design system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alicia Hofler ,Pavel Evtushenko

    2011-03-28

    RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability because EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.

  12. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
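    The finite-difference baseline that the quasi-analytical sensitivity method replaces can be sketched in a few lines; the objective function below is illustrative, not the scramjet-afterbody model.

```python
def f(x):
    # Illustrative objective, not the CFD-based thrust objective
    return x[0] ** 2 + 3.0 * x[0] * x[1]

def central_diff_grad(func, x, h=1e-6):
    # Central differences: two full function evaluations per design
    # variable, which is what makes this "brute force" for CFD
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((func(xp) - func(xm)) / (2.0 * h))
    return grad

g = central_diff_grad(f, [1.0, 2.0])
# Analytic gradient at (1, 2): [2*x0 + 3*x1, 3*x0] = [8.0, 3.0]
```

    A quasi-analytical method obtains the same gradients from the discretized flow equations at a fraction of this cost, which is the key efficiency gain claimed in the abstract.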

  13. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of statistical design of experiments (DOE) approach as a general guideline for probe optimization and more specifically focus on design optimization of label-free hydrolysis probes that are designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: distance between primer and mediator probe cleavage site; dimer stability of MP and target sequence (influenza B virus); and dimer stability of the mediator and universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% with changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in an optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times.

  14. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by deriving inductively selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it.
The overall solution
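
    The selection rules described above amount to ranking candidate sub-problems by a quality/time tradeoff. A minimal sketch with an illustrative cost function and invented sub-problem names (none taken from the paper):

```python
# Hypothetical illustration of compiled selection rules: rank candidate
# sub-problems by a cost that trades solution quality against solve time.
# Names and numbers are invented, not taken from the paper.

def pick_subproblems(subproblems, weight=0.5):
    """Return sub-problems ordered by cost = w*(1 - quality) + (1 - w)*time."""
    def cost(sp):
        return weight * (1.0 - sp["quality"]) + (1.0 - weight) * sp["time"]
    return sorted(subproblems, key=cost)

candidates = [
    {"name": "truss_A", "quality": 0.98, "time": 0.9},  # near-optimal but slow
    {"name": "truss_B", "quality": 0.90, "time": 0.2},  # good and fast
    {"name": "truss_C", "quality": 0.70, "time": 0.1},  # crude but cheapest
]
best = pick_subproblems(candidates)[0]   # truss_B wins under equal weighting
```

    Shifting `weight` toward 1 favours solution quality over speed, mirroring the paper's tradeoff between sub-problem quality and solve time.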

  15. Designing optimal greenhouse gas monitoring networks for Australia

    NASA Astrophysics Data System (ADS)

    Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.

    2016-01-01

    Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
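
    The Bayesian network-design idea can be illustrated on a toy linear-Gaussian inverse problem: the posterior flux-error covariance is (B^-1 + H^T R^-1 H)^-1, and stations are added greedily to minimise its trace. All matrices below are synthetic stand-ins, not the LPDM sensitivities used in the study:

```python
import numpy as np

# Toy linear-Gaussian network design with synthetic sensitivities.
rng = np.random.default_rng(0)
n_flux, n_candidates = 6, 8
B = np.eye(n_flux)                               # prior flux-error covariance
H_all = rng.normal(size=(n_candidates, n_flux))  # sensitivity row per candidate site
r = 0.25                                         # observation-error variance

def posterior_trace(rows):
    """Total posterior flux uncertainty for a network using the given sites."""
    H = H_all[rows]
    P = np.linalg.inv(np.linalg.inv(B) + H.T @ H / r)
    return np.trace(P)

# Greedily add three stations, each time picking the site that most
# reduces the summed posterior flux variance.
network, remaining = [], list(range(n_candidates))
for _ in range(3):
    best = min(remaining, key=lambda s: posterior_trace(network + [s]))
    network.append(best)
    remaining.remove(best)
```

    Any added observation reduces the posterior trace below the prior trace, which is the uncertainty-reduction metric the network design optimises.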

  16. Multiple-Objective Optimal Designs for Studying the Dose Response Function and Interesting Dose Levels

    PubMed Central

    Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature, but to date we were unable to find optimal designs for three or more objectives with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not extend well to design problems with three or more objectives. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology, but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor-made multiple-objective optimal designs. PMID:26565557
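
    A common way to build such multiple-objective designs is to maximise a weighted combination of log-criteria. The sketch below uses a simple quadratic regression model and an equal-weight compound of D- and c-optimality purely to illustrate the mechanics; the paper's 4-parameter logistic setting is more involved:

```python
import numpy as np

def info_matrix(points):
    """Normalised information matrix for the quadratic model b0 + b1*x + b2*x^2."""
    X = np.array([[1.0, x, x * x] for x in points])
    return X.T @ X / len(points)

def compound_criterion(points, w=0.5, c=(0.0, 1.0, 0.0)):
    """w * log det M (D-objective) minus (1 - w) * log c'M^-1 c (c-objective)."""
    M = info_matrix(points)
    c = np.asarray(c)
    return w * np.log(np.linalg.det(M)) - (1.0 - w) * np.log(c @ np.linalg.solve(M, c))

# Grid-search the middle support point of a three-point design on [-1, 1].
grid = np.linspace(-0.9, 0.9, 181)
best_t = max(grid, key=lambda t: compound_criterion([-1.0, t, 1.0]))
```

    Raising `w` trades c-efficiency for D-efficiency; in the paper this weighting is chosen so that the more important objectives achieve user-specified minimum efficiencies.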

  18. Simulation study to determine the impact of different design features on design efficiency in discrete choice experiments

    PubMed Central

    Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana

    2016-01-01

    Objectives Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. Design and methods A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2–20), alternatives (2–5), attributes (2–20) and attribute levels (2–5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Outcome Relative d-efficiency was used to measure the optimality of each DCE design. Results DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, where convergence may not be at 100% statistical optimality. Conclusions Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes is needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products. PMID:27436671
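
    Relative D-efficiency for an effects-coded design is conventionally computed as 100 * |X'X|^(1/p) / N, where p is the number of model columns and N the number of runs. A small illustration (not the authors' code) showing that an orthogonal full factorial scores 100% while an unbalanced fraction loses efficiency:

```python
import numpy as np
from itertools import product

def d_efficiency(X):
    """Relative D-efficiency (%) of a design matrix X: 100 * |X'X|^(1/p) / N."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

levels = np.array(list(product([-1.0, 1.0], repeat=3)))  # full 2^3 factorial
full = np.column_stack([np.ones(8), levels])             # intercept + main effects
subset = full[[0, 1, 2, 4]]                              # an unbalanced 4-run fraction

print(d_efficiency(full))    # ~100: orthogonal, balanced design
print(d_efficiency(subset))  # < 100: efficiency lost to non-orthogonality
```

    This is the quantity the simulation study tracks as attributes, levels, alternatives and choice tasks vary.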

  19. Factorials of real negative and imaginary numbers - A new perspective.

    PubMed

    Thukral, Ashwani K

    2014-01-01

    Presently, factorials of real negative numbers and imaginary numbers, except for zero and negative integers, are interpolated using Euler's gamma function. In the present paper, the concept of factorials has been generalised as applicable to real and imaginary numbers, and to multifactorials. New functions based on Euler's factorial function have been proposed for the factorials of real negative and imaginary numbers. As per the present concept, the factorials of real negative numbers are complex numbers. The factorials of real negative integers have their imaginary part equal to zero and are thus real numbers. Similarly, the factorials of imaginary numbers are complex numbers. The moduli of the complex factorials of real negative numbers and of imaginary numbers are equal to the factorials of the corresponding real positive numbers. Fractional factorials and multifactorials have been defined in a new perspective. The proposed concept has also been extended to Euler's gamma function for real negative numbers and imaginary numbers, and to the beta function.
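
    The classical gamma-function interpolation that the paper takes as its starting point is easy to demonstrate; note its poles at the negative integers, which is precisely the gap the proposed complex-valued extension (not implemented here) addresses:

```python
import math

def factorial_real(x):
    """Classical interpolation x! = Gamma(x + 1), valid for real x
    except the negative integers, where the gamma function has poles."""
    return math.gamma(x + 1.0)

print(factorial_real(5))     # 120.0, agrees with 5!
print(factorial_real(0.5))   # sqrt(pi)/2, the half-integer factorial
try:
    factorial_real(-2)       # pole: the classical definition breaks down here
except ValueError:
    print("(-2)! is undefined under the gamma interpolation")
```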

  20. Surrogate assisted multidisciplinary design optimization for an all-electric GEO satellite

    NASA Astrophysics Data System (ADS)

    Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian; Yuan, Bin

    2017-09-01

    State-of-the-art all-electric geostationary earth orbit (GEO) satellites use electric thrusters to execute all propulsive duties, which significantly differ from the traditional all-chemical ones in orbit-raising, station-keeping, radiation damage protection, and power budget, etc. The design optimization task of an all-electric GEO satellite is therefore a complex multidisciplinary design optimization (MDO) problem involving unique design considerations. However, solving the all-electric GEO satellite MDO problem poses significant challenges in disciplinary modeling techniques and efficient optimization strategy. To address these challenges, we present a surrogate assisted MDO framework consisting of several modules, i.e., MDO problem definition, multidisciplinary modeling, multidisciplinary analysis (MDA), and a surrogate assisted optimizer. Based on the proposed framework, the all-electric GEO satellite MDO problem is formulated to minimize the total mass of the satellite system under a number of practical constraints. Considerable effort is then devoted to multidisciplinary modeling involving the geosynchronous transfer, GEO station-keeping, power, thermal control, attitude control, and structure disciplines. Since the orbit dynamics models and finite element structural model are computationally expensive, an optimizer based on an adaptive response surface surrogate is incorporated in the proposed framework to solve the satellite MDO problem with moderate computational cost, where the response surface surrogate is gradually refined to represent the computationally expensive MDA process. After optimization, the total mass of the studied GEO satellite is decreased by 185.3 kg (i.e., 7.3% of the total mass). Finally, the optimal design is further discussed to demonstrate the effectiveness of our proposed framework to cope with all-electric GEO satellite system design optimization problems. This proposed surrogate assisted MDO framework can also provide valuable references for other all
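
    The adaptive response surface strategy can be sketched in one dimension: fit a cheap surrogate to the samples gathered so far, minimise the surrogate, evaluate the expensive model at the surrogate optimum, and refit. The objective below is a toy stand-in for the satellite MDA, and a polynomial plays the role of the response surface:

```python
import numpy as np

def expensive(x):
    """Toy stand-in for a costly multidisciplinary analysis run."""
    return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

xs = list(np.linspace(0.0, 1.0, 4))   # initial design-of-experiments samples
ys = [expensive(x) for x in xs]

for _ in range(10):                    # adaptive refinement loop
    coeffs = np.polyfit(xs, ys, deg=3)                 # cheap response surface
    grid = np.linspace(0.0, 1.0, 501)
    x_new = grid[np.argmin(np.polyval(coeffs, grid))]  # optimise the surrogate
    xs.append(float(x_new))
    ys.append(float(expensive(x_new)))                 # refine with the true model

x_best = xs[int(np.argmin(ys))]
```

    Each iteration spends only one expensive evaluation, which is the point of surrogate assisted MDO: the optimizer works mostly on the cheap surface and consults the true model sparingly.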