Sample records for the Taguchi optimization method

  1. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance robustness and convergence speed in optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of factor levels, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining numerical simulation and vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost, and it offers good robustness and fast convergence compared with some hybrid genetic algorithms.
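The two-stage idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the objective, its target values, the step size, and the level grid are all assumptions; only the L9(3^4) orthogonal array and the screen-then-descend structure come from the abstract.

```python
# Hybrid sketch: Taguchi L9 screening followed by steepest descent refinement.
def objective(x):
    # Stand-in for the inverse elastic-constant misfit; minimum at (1, 2, 3, 4).
    return sum((xi - t) ** 2 for xi, t in zip(x, (1.0, 2.0, 3.0, 4.0)))

# Standard L9(3^4) orthogonal array: 9 runs cover 4 factors at 3 levels each.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
levels = [(0.0, 2.0, 4.0)] * 4   # assumed candidate values per factor

# Stage 1: Taguchi screening -- evaluate only the 9 array runs, keep the best.
runs = [tuple(levels[f][lv] for f, lv in enumerate(row)) for row in L9]
x = min(runs, key=objective)

# Stage 2: steepest descent from the screened point (forward-difference gradient).
step, h = 0.1, 1e-6
for _ in range(500):
    grad = [(objective(tuple(xi + (h if i == j else 0.0)
                             for j, xi in enumerate(x))) - objective(x)) / h
            for i in range(4)]
    x = tuple(xi - step * g for xi, g in zip(x, grad))

print([round(v, 3) for v in x])  # converges near the true minimum
```

The screening stage trades precision for speed (9 runs instead of 81 full-factorial runs), and the gradient stage recovers the precision locally, which is the balance the abstract describes.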

  2. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods fall into two categories: classical methods and artificial intelligence methods. The particle swarm optimization algorithm (PSO) is one of the artificial intelligence methods, and researchers have previously integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiments (DOE) method. This is done by conducting the DOE on two PSO parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, both implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared with the Ziegler-Nichols method. The physical experiments likewise show that the proposed tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in a hydraulic positioning system.

  3. Taguchi optimization of bismuth-telluride based thermoelectric cooler

    NASA Astrophysics Data System (ADS)

    Anant Kishore, Ravi; Kumar, Prashant; Sanghadasa, Mohan; Priya, Shashank

    2017-07-01

    In the last few decades, considerable effort has been made to enhance the figure-of-merit (ZT) of thermoelectric (TE) materials. However, the performance of commercial TE devices remains low because the module figure-of-merit depends not only on the material ZT but also on the operating conditions and the configuration of the TE modules. This study takes a comprehensive set of parameters into account to conduct a numerical performance analysis of a thermoelectric cooler (TEC) using the Taguchi optimization method. The Taguchi method is a statistical tool that predicts the optimal performance with far fewer experimental runs than conventional experimental techniques. The Taguchi results are also compared with the optimized parameters obtained by a full factorial optimization method, which reveals that the Taguchi method provides an optimum or near-optimum TEC configuration using only 25 experiments, against the 3125 experiments needed by the conventional method. The study also shows that environmental factors such as ambient temperature and cooling coefficient do not significantly affect the optimum geometry and optimum operating temperature of TECs. The optimum TEC configuration for simultaneous optimization of cooling capacity and coefficient of performance is also provided.
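The 25-versus-3125 run count above corresponds to an L25(5^5) array replacing a 5-factor, 5-level full factorial. The sketch below shows where those numbers come from and how a main-effects analysis picks an optimum from the reduced set; the response function and its optimum levels are invented stand-ins for the paper's TEC model, and the array is built from the standard Galois-field construction rather than copied from the paper.

```python
import itertools

# Hypothetical separable response (stand-in for cooling capacity / COP).
best = (2, 0, 3, 1, 4)                      # assumed optimum level indices
def response(run):
    return -sum((l - b) ** 2 for l, b in zip(run, best))

# L25(5^5) via the GF(5) construction: columns a, b, a+b, a+2b, a+3b (mod 5).
# Any two columns contain every level pair exactly once (strength-2 array).
L25 = [(a, b, (a + b) % 5, (a + 2 * b) % 5, (a + 3 * b) % 5)
       for a in range(5) for b in range(5)]

full = list(itertools.product(range(5), repeat=5))
print(len(L25), "Taguchi runs vs", len(full), "full-factorial runs")  # 25 vs 3125

# Main-effects analysis: mean response at each level of each factor over the
# 25 runs, then pick the best level per factor independently.
picked = []
for f in range(5):
    means = [sum(response(r) for r in L25 if r[f] == lv) / 5 for lv in range(5)]
    picked.append(max(range(5), key=lambda lv: means[lv]))
print("predicted optimum levels:", tuple(picked))
```

Because the array has strength 2, fixing one factor's level leaves every other factor balanced across its levels, so the level means isolate each factor's own effect; for a separable response this recovers the true optimum from 25 runs.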

  4. Optimization of radial-type superconducting magnetic bearing using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Ai, Liwang; Zhang, Guomin; Li, Wanjie; Liu, Guole; Liu, Qi

    2018-07-01

    Modeling and optimizing the levitation behavior of a superconducting magnetic bearing (SMB) is important and complicated, owing to the nonlinear constitutive relationships of the superconductor and ferromagnetic materials, the relative movement between the superconducting stator and the PM rotor, and the many parameters (e.g., air gap, critical current density, and remanent flux density) affecting the levitation behavior. In this paper, we present a theoretical calculation and optimization method for the levitation behavior of a radial-type SMB. A simplified levitation force model is established using a 2D finite element method with the H-formulation. In the model, the boundary condition of the superconducting stator is imposed through harmonic series expressions that describe the traveling magnetic field generated by the moving PM rotor. Experimental measurements of the levitation force validate the model. A statistical method, the Taguchi method, is adopted to optimize the load capacity of the SMB. The effects of six optimization parameters on the target characteristics are discussed, and the optimum parameter combination is determined. The results show that the levitation behavior of the SMB is greatly improved and that the Taguchi method is well suited to optimizing the SMB.

  5. Application of Taguchi methods to dual mixture ratio propulsion system optimization for SSTO vehicles

    NASA Technical Reports Server (NTRS)

    Stanley, Douglas O.; Unal, Resit; Joyner, C. R.

    1992-01-01

    The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies is presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of the parameters with each other. This parametric optimization and sensitivity study employs a Taguchi design method, an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies, as compared to traditional single-variable parametric trade studies, are also discussed.

  6. Taguchi optimization: Case study of gold recovery from amalgamation tailing by using froth flotation method

    NASA Astrophysics Data System (ADS)

    Sudibyo, Aji, B. B.; Sumardi, S.; Mufakir, F. R.; Junaidi, A.; Nurjaman, F.; Karna, Aziza, Aulia

    2017-01-01

    The gold amalgamation process has been widely used to treat gold ore. It produces a tailing, or amalgamation solid waste, that still contains gold at 8-9 ppm. Froth flotation is one of the promising methods for recovering gold from this tailing; however, the process requires optimal conditions, which depend on the type of raw material. In this study, the Taguchi method was used to determine the optimum conditions for the froth flotation process. The Taguchi optimization shows that gold recovery was most strongly influenced by particle size, with the best particle size at 150 mesh, followed by the potassium amyl xanthate concentration, pH, and pine oil concentration, at 1133.98, 4535.92, and 68.04 g/ton of amalgamation tailing, respectively.

  7. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each controllable process parameter to the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.
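When a process like resist deposition targets a specific thickness, the usual figure of merit is the nominal-the-best signal-to-noise ratio, S/N = 10·log10(mean²/variance). The sketch below computes it for a few runs; the replicate thickness values are made up for illustration and are not from the Shipley study.

```python
import math
import statistics

def sn_nominal(ys):
    # Nominal-the-best S/N ratio: rewards runs whose replicates cluster
    # tightly around their mean (high mean-to-spread ratio).
    m, s = statistics.mean(ys), statistics.stdev(ys)
    return 10 * math.log10(m ** 2 / s ** 2)

runs = {  # run label -> replicated thickness measurements (um, hypothetical)
    "run1": [5.10, 5.00, 4.90, 5.20],
    "run2": [5.60, 4.40, 5.80, 4.30],
    "run3": [5.05, 4.98, 5.01, 5.02],
}
for name, ys in runs.items():
    print(name, round(sn_nominal(ys), 2))

# The run with the highest S/N is the most robust (least relative variation).
best = max(runs, key=lambda k: sn_nominal(runs[k]))
print("most robust:", best)
```

In a full Taguchi analysis these per-run S/N values, not the raw thicknesses, are what the main-effects and ANOVA calculations are carried out on.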

  8. Taguchi's off-line method and multivariate loss function approach for quality management and optimization of process parameters - A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered an effective approach to improving product quality at relatively low cost, and the Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for a single quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss over the multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics, most of them concerned with maximizing the signal-to-noise (SN) ratios of the parameter combination. The results reveal two advantages of this approach: the optimal parameter design coincides with the traditional Taguchi method for a single quality characteristic, and the optimal design maximizes the reduction of total quality loss over multiple quality characteristics. This paper presents a literature review on solving multi-response problems with the Taguchi method and on its successful implementation in various industries.

  9. Incorporating Servqual-QFD with Taguchi Design for optimizing service quality design

    NASA Astrophysics Data System (ADS)

    Arbi Hadiyat, M.

    2018-03-01

    Deploying a good service design has become a recurring issue for service companies seeking to improve customer satisfaction, especially as measured by the level of service quality in Parasuraman's SERVQUAL. Many researchers have proposed service design methods, some of them from an engineering viewpoint, notably by implementing the QFD method or the robust Taguchi method. The QFD method finds a qualitative solution by generating the "hows", while the Taguchi method provides a more quantitative calculation for finding the best solution. This paper incorporates both QFD and Taguchi, yielding a better design process. The purpose of this research is to evaluate the incorporated methods by applying them to a case study, then to analyze the results and assess the robustness of the methods with respect to customer perception of service quality. Starting by measuring service attributes with SERVQUAL and finding improvements with QFD, the QFD solution is then deployed by defining Taguchi factor levels and calculating the signal-to-noise ratio in an orthogonal array, from which the optimized Taguchi response is found. A case study is given for designing service in a local bank. The service design obtained from this analysis was then evaluated and shown to still meet customer satisfaction. Incorporating QFD and Taguchi performed well and can be adopted and developed in further research evaluating the robustness of the results.

  10. Optimization of bone drilling parameters using Taguchi method based on finite element analysis

    NASA Astrophysics Data System (ADS)

    Rosidi, Ayip; Lenggo Ginta, Turnad; Rani, Ahmad Majdi Bin Abdul

    2017-05-01

    Thermal necrosis results in fracture problems and implant failure if the temperature exceeds 47 °C for one minute during bone drilling. To address this problem, this work studied a new thermal model using three drilling parameters: drill diameter, feed rate, and spindle speed, and examined their effects on heat generation. The drill diameters were 4 mm, 6 mm, and 6 mm; the feed rates were 80 mm/min, 100 mm/min, and 120 mm/min; and the spindle speeds were 400 rpm, 500 rpm, and 600 rpm. An optimization was then performed with the Taguchi method to determine which parameter combinations can be used to prevent thermal necrosis during bone drilling. The results showed that all parameter combinations produced temperatures below 47 °C, and that finite element analysis combined with the Taguchi method can be used to predict temperature generation and optimize bone drilling parameters prior to clinical bone drilling. All of the parameter combinations can be used by surgeons to achieve sustainable orthopaedic surgery.

  11. Optimization of porthole die geometrical variables by Taguchi method

    NASA Astrophysics Data System (ADS)

    Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.

    2017-10-01

    Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is strongly affected by the quality of the longitudinal and transversal seam welds. Accordingly, the die geometry must be designed correctly and the process parameters selected properly to achieve the desired product quality. In this study, 3D numerical simulations were created and run to investigate the role of various geometrical variables on punch load and on the maximum pressure inside the welding chamber, two important outputs affecting, respectively, the required capacity of the extrusion press and the quality of the welding lines. The Taguchi technique was used to reduce the number of numerical simulations needed to consider the influence of twelve geometric variables, and analysis of variance (ANOVA) was implemented to analyze the effect of each input parameter on the two responses individually. The methodology was then used to determine the optimal process configuration, optimizing each of the two investigated process outputs individually. Finally, the responses at the optimized parameters were verified through finite element simulations, which closely approximated the predicted values. This study shows the feasibility of the Taguchi technique for performance prediction and optimization, and therefore for improving the design of a porthole extrusion process.

  12. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
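The classroom points above, a minimal fractional factorial and its simple analysis, fit in a few lines. This sketch uses the smallest standard array, L4(2^3), with made-up response data; note that with four runs and three factors there are zero degrees of freedom left for error, exactly the limitation the abstract mentions.

```python
# L4(2^3): three two-level factors in four runs instead of the 2**3 = 8 runs
# of a full factorial (at the cost of confounding interactions with factors).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Made-up responses for a classroom materials experiment (e.g. fired density).
y = [2.1, 2.4, 3.0, 3.3]

# Main effect of each factor: mean response at level 1 minus mean at level 0.
effects = []
for f in range(3):
    hi = sum(yi for row, yi in zip(L4, y) if row[f] == 1) / 2
    lo = sum(yi for row, yi in zip(L4, y) if row[f] == 0) / 2
    effects.append(round(hi - lo, 3))
print(effects)  # factor 1 dominates in this invented data set
```

The analysis really is just averaging and subtracting, which is why the method is easy to use and teach, and the balanced array guarantees each factor's two means average over the same mix of the other factors' levels.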

  13. Experimental investigation and optimization of welding process parameters for various steel grades using NN tool and Taguchi method

    NASA Astrophysics Data System (ADS)

    Soni, Sourabh Kumar; Thomas, Benedict

    2018-04-01

    The term "weldability" describes a wide variety of characteristics of a material subjected to welding. In this analysis we perform an experimental investigation to estimate the tensile strength of the welded joint, followed by optimization of the welding process parameters using the Taguchi method and an Artificial Neural Network (ANN) tool in MINITAB and MATLAB software, respectively. The study reveals the influence of steel composition on weldability through mechanical characterization. First, samples of different grades of steel (EN8, EN19, EN24) were prepared. The samples were welded together by the metal inert gas welding process, and tensile testing on a universal testing machine (UTM) was conducted to evaluate the tensile strength of the welded steel specimens. A further comparative study was performed to find the effects of the welding parameters on weld strength by employing the Taguchi method and the neural network tool. We conclude that the Taguchi method and the neural network tool are efficient techniques for optimization.

  14. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) than traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We consider a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, the height of segmentation, the hot-side temperature, and the load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.

  15. Optimizing Cu(II) removal from aqueous solution by magnetic nanoparticles immobilized on activated carbon using Taguchi method.

    PubMed

    Ebrahimi Zarandi, Mohammad Javad; Sohrabi, Mahmoud Reza; Khosravi, Morteza; Mansouriieh, Nafiseh; Davallo, Mehran; Khosravan, Azita

    2016-01-01

    This study synthesized magnetic nanoparticles (Fe(3)O(4)) immobilized on activated carbon (AC) and used them as an effective adsorbent for Cu(II) removal from aqueous solution. The effects of three parameters, namely the concentration of Cu(II), the dosage of the Fe(3)O(4)/AC magnetic nanocomposite, and pH, on the removal of Cu(II) were studied. To examine and describe the optimum condition for each of these parameters, Taguchi's optimization method was used in a batch system, with an L9 orthogonal array for the experimental design. The removal percentage (R%) of Cu(II) and the uptake capacity (q) were transformed into a signal-to-noise ratio (S/N) for a 'larger-the-better' response. The Taguchi results, analyzed by choosing the best run from the S/N values, were statistically tested using analysis of variance; the tests showed that the main effects of all parameters were significant within a 95% confidence level. The best conditions for removal of Cu(II) were a pH of 7, a nanocomposite dosage of 0.1 g L(-1), and an initial Cu(II) concentration of 20 mg L(-1) at a constant temperature of 25 °C. Overall, the results showed that the simple Taguchi method is suitable for optimizing the Cu(II) removal experiments.
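For a removal efficiency, the 'larger-the-better' transform mentioned above is S/N = −10·log10(mean(1/y²)). The sketch below applies it to a few runs; the removal percentages and factor settings are invented for illustration, not the paper's data, though the best condition shown matches the abstract's reported optimum (pH 7, 0.1 g/L dose, 20 mg/L Cu).

```python
import math

def sn_larger_better(ys):
    # Larger-the-better S/N ratio: high, consistent responses score highest.
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

# Hypothetical runs: (pH, dose g/L, initial Cu mg/L) -> replicate removal (%).
removal = {
    (7, 0.10, 20): [91.0, 93.5],
    (5, 0.05, 40): [62.0, 60.5],
    (3, 0.10, 60): [48.0, 51.0],
}
scores = {cond: sn_larger_better(ys) for cond, ys in removal.items()}
best = max(scores, key=scores.get)
print("best (pH, dose g/L, Cu mg/L):", best)
```

Choosing the run (or level combination) with the highest S/N is exactly the "choosing the best run by examining the S/N" step the abstract describes, before ANOVA confirms which factors matter.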

  16. A comparative study of electrochemical machining process parameters by using GA and Taguchi method

    NASA Astrophysics Data System (ADS)

    Soni, S. K.; Thomas, B.

    2017-11-01

    In electrochemical machining, the quality of the machined surface strongly depends on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm in MATLAB to maximize the metal removal rate and minimize the surface roughness and overcut. A comparative study is presented for the drilling of LM6 Al/B4C composites, comparing the impact of machining process parameters such as electrolyte concentration (g/L), machining voltage (V), and frequency (Hz) on the response parameters (surface roughness, material removal rate, and overcut). A Taguchi L27 orthogonal array was chosen in Minitab 17 software for the investigation of the experimental results, and multi-objective optimization by genetic algorithm was carried out in MATLAB. Comparative results from the Taguchi method and the genetic algorithm are then presented.

  17. Optimization of the mechanical properties of coir-Luffa cylindrica filled hybrid composites by using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Krishnudu, D. Mohana; Sreeramulu, D.; Reddy, P. Venkateshwar

    2018-04-01

    In the current study, the mechanical properties of particle-filled hybrid composites have been studied. The mechanical properties of the hybrid composite depend mainly on the proportions of coir weight, Luffa weight, and filler weight. RSM along with the Taguchi method has been applied to find the optimized parameters of the hybrid composites. The study shows that the tensile strength of the composite depends more on the coir percentage than on the other two constituents.

  18. Optimization of laccase production from Marasmiellus palmivorus LA1 by Taguchi method of Design of experiments.

    PubMed

    Chenthamarakshan, Aiswarya; Parambayil, Nayana; Miziriya, Nafeesathul; Soumya, P S; Lakshmi, M S Kiran; Ramgopal, Anala; Dileep, Anuja; Nambisan, Padma

    2017-02-13

    Fungal laccase has profound applications in different fields of biotechnology due to its broad specificity and high redox potential, and any successful application of the enzyme requires large-scale production. As laccase production is highly dependent on medium components and culture conditions, their optimization is essential for efficient production. Production of laccase by the fungal strain Marasmiellus palmivorus LA1 under solid state fermentation was optimized by the Taguchi design of experiments (DOE) methodology. An orthogonal array (L8) was designed using Qualitek-4 software to study the interactions and relative influence of the seven selected factors, alongside a one-factor-at-a-time approach. The optimum conditions formulated were temperature (28 °C), pH (5), galactose (0.8% w/v), cupric sulphate (3 mM), inoculum concentration (6 mycelial agar pieces), and substrate length (0.05 m). An overall 17.6-fold yield increase was obtained after optimization. Statistical optimization led to the elimination of an insignificant medium component, ammonium dihydrogen phosphate, from the process, contributing a 1.06-fold increase in enzyme production. A final production of 667.4 ± 13 IU/mL laccase activity paves the way for industrial application of this strain. The study optimized lignin-degrading laccases from Marasmiellus palmivorus LA1; these laccases can thus be used in further applications at different scales of production once the properties of the enzyme are analyzed. The study also confirmed the usefulness of the Taguchi method for optimizing production.

  19. Application of Taguchi L32 orthogonal array design to optimize copper biosorption by using Spaghnum moss.

    PubMed

    Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil

    2014-09-01

    In this work, a Taguchi L32 experimental design was applied to optimize the biosorption of Cu(2+) ions by an easily available biosorbent, Spaghnum moss. To this end, batch biosorption tests were performed following the targeted experimental design, with five factors (concentration, pH, biosorbent dosage, temperature, and agitation time) at two levels. Optimal experimental conditions were determined from the calculated signal-to-noise ratios; a 'higher is better' approach was followed, as the aim was to obtain high metal removal efficiencies. The impact ratios of the factors were determined by the model. Within the study, Cu(2+) biosorption efficiencies were also predicted using the Taguchi method, and the experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach. Furthermore, thermodynamic, isotherm, and kinetic studies were performed to explain the biosorption mechanism. The calculated thermodynamic parameters were in good accordance with the results of the Taguchi model. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Interactive design optimization of magnetorheological-brake actuators using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Erol, Ozan; Gurocak, Hakan

    2011-10-01

    This research explored an optimization method that automates the process of designing a magnetorheological (MR) brake while keeping the designer in the loop. MR brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is highly desirable for an actuator; however, the design process is complex and time consuming due to the many parameters involved. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and choose to investigate only their interactions with the design output. The new method was applied to re-designing MR brakes, reducing the design time from a week or two down to a few minutes. Usability experiments also indicated significantly better brake designs by novice users.

  21. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations against the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. For SEI to succeed, we must design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business, and that new philosophy and technology must be employed to design and produce reliable, high-quality space systems at low cost. Recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM), a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and the use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high-quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost, and have been used successfully in Japan and the United States to design reliable, high-quality products at low cost in areas such as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, and to describe the current state of applications and their role in identifying cost-sensitive design parameters.

  22. A Comparative Analysis of Taguchi Methodology and Shainin System DoE in the Optimization of Injection Molding Process Parameters

    NASA Astrophysics Data System (ADS)

    Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik

    2017-08-01

    Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and the Shainin System (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a company manufacturing perfume bottle caps (made of acrylic), using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments brought the rejection rate down from approximately 40% to 8.57% during trial runs, representing a successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Taguchi Methods require more experiments and consume more time than the Shainin System; the Shainin System is less complicated and easy to implement, whereas Taguchi Methods are statistically more reliable for optimization of process parameters. Finally, the experimentation showed that DoE methods are robust and reliable in implementation as organizations attempt to improve quality through optimization.

  23. Using Quality Management Methods in Knowledge-Based Organizations. An Approach to the Application of the Taguchi Method to the Process of Pressing Tappets into Anchors

    NASA Astrophysics Data System (ADS)

    Ţîţu, M. A.; Pop, A. B.; Ţîţu, Ș

    2017-06-01

    This paper presents a study on using the Taguchi Method to model and optimize certain variables of the process of pressing tappets into anchors, a process conducted in an organization that promotes knowledge-based management. The paper promotes practical concepts of the Taguchi Method and describes the way in which the objective functions are obtained and used during the modelling and optimization of the process.

  4. Taguchi method for partial differential equations with application in tumor growth.

    PubMed

    Ilea, M; Turnea, M; Rotariu, M; Arotăriţei, D; Popescu, Marilena

    2014-01-01

The growth of tumors is a highly complex process, and mathematical models are needed to describe it. A variety of partial differential equation models for tumor growth have been developed and studied, most based on reaction-diffusion equations and the mass conservation law. Systems of time-dependent partial differential equations occur in many branches of applied mathematics, and the vast majority of mathematical models of tumor growth are formulated in terms of partial differential equations. We propose a mathematical model for the interactions between three cancer cell populations. Taguchi methods are widely used by quality engineering scientists to compare the effects of multiple variables, together with their interactions, with a simple and manageable experimental design; in Taguchi's design of experiments, variation is more interesting to study than the average. First, Taguchi methods are used to search for the significant factors and the optimal level combination of parameters; apart from the three levels chosen for each parameter, other factor levels are not considered. Second, cutting parameters, namely cutting speed, depth of cut, and feed rate, are designed using the Taguchi method. Finally, the adequacy of the developed mathematical model is confirmed by ANOVA, since the percentage contribution of the combined error is small. The use of MATLAB and the Taguchi method in this article illustrates the important role of informatics in mathematical modeling research. The study of tumor growth cells is an exciting and important topic in cancer research that will profit considerably from theoretical input, and these results argue for a lasting collaboration between mathematicians and medical oncologists.

  5. Study of optimal laser parameters for cutting QFN packages by Taguchi's matrix method

    NASA Astrophysics Data System (ADS)

    Li, Chen-Hao; Tsai, Ming-Jong; Yang, Ciann-Dong

    2007-06-01

This paper reports a study of optimal laser parameters for cutting QFN (Quad Flat No-lead) packages using a diode-pumped solid-state laser system (DPSSL). The QFN cutting path includes two different materials: the encapsulating epoxy and a copper lead-frame substrate. Taguchi's experimental method with an L9(3^4) orthogonal array is employed to obtain the optimal combination of parameters. A quantified mechanism was proposed for examining the laser cutting quality of a QFN package. The influences of factors such as laser current, laser frequency, and cutting speed on the cutting quality are also examined. From the experimental results, the factors affecting cutting quality, in order of decreasing significance, are (a) laser frequency, (b) cutting speed, and (c) laser driving current. The optimal parameters were a laser frequency of 2 kHz, a cutting speed of 2 mm/s, and a driving current of 29 A. Besides identifying this order of dominance, the matrix experiment also determines the best level for each control factor. The verification experiment confirms that the application of laser cutting technology to QFN packages is very successful when using the optimal laser parameters predicted from the matrix experiments.
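
    The matrix-experiment workflow this record describes (an L9(3^4) orthogonal array, per-run signal-to-noise ratios, and selection of the best level for each control factor) can be sketched in a few lines. The array below is the standard L9; the response values are illustrative, not the paper's measurements.

```python
import math

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Illustrative responses (e.g. a cut-quality defect score, smaller is
# better); each run has two repeated measurements.
responses = [
    (5.2, 5.5), (4.1, 4.3), (6.0, 5.8),
    (3.9, 4.0), (5.1, 4.9), (4.6, 4.4),
    (6.2, 6.5), (4.8, 5.0), (3.5, 3.7),
]

def sn_smaller_better(ys):
    """Taguchi S/N ratio (dB) for a smaller-the-better response."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

sn = [sn_smaller_better(ys) for ys in responses]

# Main effect of each factor: mean S/N at each of its three levels;
# the level with the highest mean S/N is predicted to be optimal.
best_levels = []
for f in range(4):
    level_means = []
    for level in (1, 2, 3):
        vals = [sn[i] for i, run in enumerate(L9) if run[f] == level]
        level_means.append(sum(vals) / len(vals))
    best_levels.append(1 + level_means.index(max(level_means)))

print(best_levels)  # predicted optimum level for each of the 4 factors
```

    Note that higher S/N is always better in Taguchi analysis, regardless of whether the raw response is of the smaller-the-better or larger-the-better type; only the S/N formula changes.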

  6. Multi response optimization of internal grinding process parameters for outer ring using Taguchi method and PCR-TOPSIS

    NASA Astrophysics Data System (ADS)

    Wisnuadi, Alief Regyan; Damayanti, Retno Wulan; Pujiyanto, Eko

    2018-02-01

Bearings are among the most widely used parts in the automotive industry, and one of the leading bearing manufacturing companies in the world is SKF Indonesia. The company must produce bearings to international standards and pursue continuous improvement in order to stay competitive. Until now, SKF Indonesia has performed quality control only in its Quality Assurance department; in other words, quality improvement has not been carried out throughout the company. The purpose of this research is to improve the quality of the outer ring product at SKF Indonesia by conducting an internal grinding process experiment on speed ratio, fine position, and spark-out grinding time. The specific purpose is to optimize quality responses such as roughness, roundness, and cycle time, all of which are smaller-the-better responses. The Taguchi method and PCR-TOPSIS are used for the optimization. The results show that the optimum condition occurs at a speed ratio of 36, a fine position of 18 µm/s, and a spark-out of 0.5 s, yielding a roughness of 0.398 µm, a roundness of 1.78 µm, and a cycle time of 8.1 s. These results are better than the previous ones and meet the standards: roughness decreased from 0.523 µm to 0.398 µm, and the average cycle time from 8.5 s to 8.1 s.

  7. Factors Affecting Optimal Surface Roughness of AISI 4140 Steel in Turning Operation Using Taguchi Experiment

    NASA Astrophysics Data System (ADS)

    Novareza, O.; Sulistiyarini, D. H.; Wiradmoko, R.

    2018-02-01

This paper presents the results of using the Taguchi method in the turning of AISI 4140 medium-carbon steel. The primary concern is to find the optimal surface roughness after turning. The Taguchi method is used to obtain the combination of factors and factor levels that yields the optimum surface roughness. Four important factors at three levels each were used in the experiment, and the resulting 27 experimental runs were carried out and analysed using analysis of variance (ANOVA). Surface finish was measured as Ra-type surface roughness. The depth of cut was found to be the most important factor for reducing the surface roughness of AISI 4140 steel. By contrast, the other factors, spindle speed and the side rake angle of the tool, were shown to have less effect on the surface finish. Interestingly, coolant composition emerged as the second most important factor for reducing roughness; further research may be needed to explain this result.

  8. Dysprosium sorption by polymeric composite bead: robust parametric optimization using Taguchi method.

    PubMed

    Yadav, Kartikey K; Dasgupta, Kinshuk; Singh, Dhruva K; Varshney, Lalit; Singh, Harvinderpal

    2015-03-06

Polyethersulfone-based beads encapsulating di-2-ethylhexyl phosphoric acid have been synthesized and evaluated for the recovery of rare earth values from aqueous media. The percentage recovery and sorption behavior of Dy(III) on these beads have been investigated under a wide range of experimental parameters. The Taguchi method with an L-18 orthogonal array has been adopted to identify the process parameters most responsible for a high degree of recovery with enhanced sorption of Dy(III) from chloride medium. Analysis of variance indicated that the feed concentration of Dy(III) is the most influential factor for equilibrium sorption capacity, whereas aqueous-phase acidity most influences the percentage recovery. The presence of polyvinyl alcohol and multiwalled carbon nanotubes modified the internal structure of the composite beads and resulted in a uniform distribution of organic extractant inside the polymeric matrix. The experiment performed under the optimum process conditions predicted by the Taguchi method yielded enhanced Dy(III) recovery and sorption capacity by the polymeric beads with minimum standard deviation. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Parameter optimization of flux-aided backing-submerged arc welding by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Pu, Juan; Yu, Shengfu; Li, Yuanyuan

    2017-07-01

Flux-aided backing-submerged arc welding has been conducted on D36 steel with a thickness of 20 mm. The effects of processing parameters such as welding current, voltage, welding speed, and groove angle on welding quality were investigated by the Taguchi method. The optimal welding parameters were predicted, and the individual importance of each parameter for welding quality was evaluated by examining the signal-to-noise ratio and analysis of variance (ANOVA) results. The order of importance of the welding parameters for weld bead quality was: welding current > welding speed > groove angle > welding voltage. Weld bead quality increased gradually with increasing welding current and welding speed and with decreasing groove angle. The optimum values of the welding current, welding speed, groove angle, and welding voltage were found to be 1050 A, 27 cm/min, 40° and 34 V, respectively.
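
    The ANOVA-based importance ranking reported here rests on decomposing the total variation of the per-run S/N ratios into per-factor sums of squares, whose percentage contributions order the factors. A minimal sketch, with an illustrative L9 column assignment and made-up S/N values (not the paper's welding data); in a saturated orthogonal design the factor contributions sum exactly to 100%:

```python
# Percentage-contribution ANOVA on per-run S/N ratios, as commonly
# paired with a Taguchi matrix experiment.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]
sn = [-14.6, -12.5, -15.4, -11.9, -14.0, -13.1, -16.1, -13.8, -11.1]

grand = sum(sn) / len(sn)
ss_total = sum((s - grand) ** 2 for s in sn)

# Sum of squares for each factor: for every level, (number of runs at
# that level) * (level mean - grand mean)^2, summed over the 3 levels.
contrib = []
for f in range(4):
    ss_f = 0.0
    for level in (1, 2, 3):
        vals = [sn[i] for i, run in enumerate(L9) if run[f] == level]
        ss_f += len(vals) * (sum(vals) / len(vals) - grand) ** 2
    contrib.append(100.0 * ss_f / ss_total)  # percentage contribution

# Factors ranked from most to least influential.
ranking = sorted(range(4), key=lambda f: -contrib[f])
print([round(c, 1) for c in contrib], ranking)
```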

  10. Developing an Optimum Protocol for Thermoluminescence Dosimetry with GR-200 Chips using Taguchi Method.

    PubMed

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-06-15

Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental, and clinical dosimetry. The annealing, storage, and reading protocols strongly affect the accuracy of the TLD response. The purpose of this study is to obtain an optimum protocol for GR-200 (LiF:Mg,Cu,P) by optimizing the effective parameters, to increase the reliability of the TLD response using the Taguchi method. The Taguchi method has been used in this study for optimization of the annealing, storage, and reading protocols of the TLDs. A total of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses and were stored, annealed, and read out by different procedures, as suggested by the Taguchi method. By comparing the signal-to-noise ratios, the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), annealing time (s), annealing-to-exposure time (d), exposure-to-readout time (d), pre-heat temperature (°C), pre-heat time (s), heating rate (°C/s), maximum readout temperature (°C), readout time (s), and storage temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol obtained by the Taguchi method, an efficient glow curve with low residual signal can be achieved and dosimetry can be performed with great accuracy. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Investigation of Structures of Microwave Microelectromechanical-System Switches by Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Lin; Lin, Chien-Hung

    2007-10-01

The optimal design of microwave microelectromechanical-system (MEMS) switches by the Taguchi method is presented. The structures of the switches are analyzed and optimized in terms of the effective stiffness constant, the maximum von Mises stress, and the natural frequency, in order to improve the reliability and performance of the MEMS switches. Four factors, each at three levels, are considered in the Taguchi method for the MEMS switches, and an L9(3^4) orthogonal array is used for the matrix experiments. The characteristics of the experiments are studied by the finite-element method and by analytical methods. The responses of the signal-to-noise (S/N) ratios of the switch characteristics are investigated, and statistical analysis of variance (ANOVA) is used to interpret the experimental results and identify the significant factors. The final optimum setting, A1B3C1D2, predicts an effective stiffness constant of 1.06 N/m, a maximum von Mises stress of 76.9 MPa, and a natural frequency of 29.331 kHz. The corresponding switching time is 34 μs, and the pull-down voltage is 9.8 V.

  12. Nitric acid treated multi-walled carbon nanotubes optimized by Taguchi method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamsuddin, Shahidah Arina; Hashim, Uda; Halim, Nur Hamidah Abdul

The electron transfer rate (ETR) of CNTs can be enhanced by increasing the amount of COOH groups on their walls and opened tips. With the aim of achieving the highest yield of COOH, the Taguchi robust design has been used for the first time to optimize the surface modification of MWCNTs by nitric acid oxidation. Three main oxidation parameters, acid concentration, treatment temperature, and treatment time, were selected as the control factors to be optimized. The amount of COOH produced was measured by FTIR spectroscopy through the absorbance intensity. From the analysis, we found that acid concentration and treatment time had the most important influence on COOH production, while treatment temperature had only an intermediate effect. The optimum amount of COOH was achieved by treatment with 8.0 M nitric acid at 120 °C for 2 hours.

  13. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order of explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.

  14. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order of explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase the classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306

  15. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    PubMed

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

Drilling of bone is a common procedure in orthopedic surgery, producing holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure raises the chance of thermal invasion of the bone, which can cause thermal osteonecrosis, resulting in longer healing times or reduced stability and strength of the fixation. Therefore, drilling of bone with minimum temperature rise is a major challenge in orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and the Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments were conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate, and cutting speed). Optimum bone drilling parameters for minimizing the temperature are determined using the Taguchi method, and the effect of the individual cutting parameters on the temperature is evaluated using analysis of variance. The fuzzy model, using triangular and trapezoidal membership functions, predicts the temperature within a maximum error of ±7%. Taguchi analysis of the results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determining the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process. © IMechE 2014.

  16. Taguchi's technique: an effective method for improving X-ray medical radiographic screen performance.

    PubMed

    Vlachogiannis, J G

    2003-01-01

Taguchi's technique is a helpful tool for achieving experimental optimization of a large number of decision variables with a small number of off-line experiments. The technique appears to be an ideal tool for improving the performance of X-ray medical radiographic screens under a noise source. Many guides are currently available for improving the efficiency of X-ray medical radiographic screens. These guides can be refined using a second-stage parameter optimization based on Taguchi's technique, selecting the optimum levels of the controllable X-ray radiographic screen factors. A real example of the proposed technique is presented with respect to certain performance criteria. The present research proposes the reinforcement of X-ray radiography by Taguchi's technique as a novel hardware mechanism.

  17. Parametric Optimization of Wire Electrical Discharge Machining of Powder Metallurgical Cold Worked Tool Steel using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sudhakara, Dara; Prasanthi, Guvvala

    2017-04-01

Wire-cut EDM is an unconventional machining process used to build components of complex shape. The current work deals mainly with the optimization of surface roughness while machining powder-metallurgy cold-worked (P/M CW) tool steel by wire-cut EDM using the Taguchi method. The process parameters of the wire-cut EDM are ON, OFF, IP, SV, WT, and WP. An L27 orthogonal array is used to design the experiments, and ANOVA analysis is employed to find the parameters affecting the surface roughness. The optimum levels for minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.

  18. Optimal design of loudspeaker arrays for robust cross-talk cancellation using the Taguchi method and the genetic algorithm.

    PubMed

    Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung

    2005-05-01

An optimal design technique for loudspeaker arrays for cross-talk cancellation, with application to three-dimensional audio, is presented. An array-focusing scheme is presented on the basis of the inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore the cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and optimal control-point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.

  19. An Efficient Taguchi Approach for the Performance Optimization of Health, Safety, Environment and Ergonomics in Generation Companies.

    PubMed

    Azadeh, Ali; Sheikhalishahi, Mohammad

    2015-06-01

A unique framework for performance optimization of generation companies (GENCOs) based on health, safety, environment, and ergonomics (HSEE) indicators is presented. To rank this sector of industry, a combination of data envelopment analysis (DEA), principal component analysis (PCA), and Taguchi methods is used for all branches of the GENCOs. These methods are applied in an integrated manner to measure GENCO performance. The preferred model among DEA, PCA, and Taguchi is selected based on sensitivity analysis and maximum correlation between rankings. To achieve the stated objectives, noise is introduced into the input data. The results show that Taguchi outperforms the other methods. Moreover, a comprehensive experiment is carried out to identify the most influential factor for ranking GENCOs. The approach developed in this study could be used for continuous assessment and improvement of GENCO performance in supplying energy with respect to HSEE factors. The results of such studies would help managers better understand weak and strong points in terms of HSEE factors.

  20. Experimental study of optimal self compacting concrete with spent foundry sand as partial replacement for M-sand using Taguchi approach

    NASA Astrophysics Data System (ADS)

    Nirmala, D. B.; Raviraj, S.

    2016-06-01

This paper presents the application of the Taguchi approach to obtain the optimal mix proportion for Self Compacting Concrete (SCC) containing spent foundry sand and M-sand, with spent foundry sand used as a partial replacement for M-sand. The SCC mix has seven control factors: coarse aggregate, M-sand with spent foundry sand, cement, fly ash, water, superplasticizer, and viscosity-modifying agent. The modified Nan Su method is used to proportion the initial SCC mix. An L18 (2^1×3^7) orthogonal array (OA) is used in the Taguchi approach, with the seven control factors assigned to the three-level columns, resulting in 18 SCC mix proportions. All mixtures are extensively tested in both fresh and hardened states to verify that they meet the practical and technical requirements of SCC. The "nominal-the-better" quality characteristic is applied to the test results to arrive at the optimal SCC mix proportion. Test results indicate that the optimal mix satisfies the requirements for the fresh and hardened properties of SCC. The study reveals the feasibility of using spent foundry sand as a partial replacement for M-sand in SCC, and that the Taguchi method is a reliable tool for arriving at the optimal mix proportion of SCC.

  1. An Efficient Taguchi Approach for the Performance Optimization of Health, Safety, Environment and Ergonomics in Generation Companies

    PubMed Central

    Azadeh, Ali; Sheikhalishahi, Mohammad

    2014-01-01

Background A unique framework for performance optimization of generation companies (GENCOs) based on health, safety, environment, and ergonomics (HSEE) indicators is presented. Methods To rank this sector of industry, a combination of data envelopment analysis (DEA), principal component analysis (PCA), and Taguchi methods is used for all branches of the GENCOs. These methods are applied in an integrated manner to measure GENCO performance. The preferred model among DEA, PCA, and Taguchi is selected based on sensitivity analysis and maximum correlation between rankings. To achieve the stated objectives, noise is introduced into the input data. Results The results show that Taguchi outperforms the other methods. Moreover, a comprehensive experiment is carried out to identify the most influential factor for ranking GENCOs. Conclusion The approach developed in this study could be used for continuous assessment and improvement of GENCO performance in supplying energy with respect to HSEE factors. The results of such studies would help managers better understand weak and strong points in terms of HSEE factors. PMID:26106505

  2. Application of Taguchi approach to optimize the sol-gel process of the quaternary Cu2ZnSnS4 with good optical properties

    NASA Astrophysics Data System (ADS)

    Nkuissi Tchognia, Joël Hervé; Hartiti, Bouchaib; Ridah, Abderraouf; Ndjaka, Jean-Marie; Thevenin, Philippe

    2016-07-01

The present research deals with the optimal deposition parameter configuration for the synthesis of Cu2ZnSnS4 (CZTS) thin films by the sol-gel method with spin coating on ordinary glass substrates, without sulfurization. A Taguchi design with an L9 (3^4) orthogonal array, the signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) are used to optimize the performance characteristic (optical band gap) of the CZTS thin films. Four deposition parameters (factors) were chosen: the annealing temperature, the annealing time, and the Cu/(Zn + Sn) and Zn/Sn ratios. To conduct the tests using the Taguchi method, three levels were chosen for each factor. The effects of the deposition parameters on the structural and optical properties are studied, and the factors of the deposition process most significant for the optical properties of the as-prepared films are determined. The results of applying the Taguchi method showed that the significant parameters are the Zn/Sn ratio and the annealing temperature.

  3. Optimization of an Optical Inspection System Based on the Taguchi Method for Quantitative Analysis of Point-of-Care Testing

    PubMed Central

    Yeh, Chia-Hsien; Zhao, Zi-Qi; Shen, Pi-Lan; Lin, Yu-Cheng

    2014-01-01

This study presents an optical inspection system for detecting a commercial point-of-care testing product, together with a new detection model extending from qualitative to quantitative analysis. Human chorionic gonadotropin (hCG) strips (the cut-off value of the commercial hCG product is 25 mIU/mL) were the detection target in our study. We used a complementary metal-oxide semiconductor (CMOS) sensor to detect the colors of the test line and control line in the strips and to reduce the observation errors of the naked eye. To achieve better linearity between grayscale and concentration, and to decrease the standard deviation (increasing the signal-to-noise ratio, S/N), the Taguchi method was used to find the optimal parameters for the optical inspection system. The pregnancy test uses the principles of the lateral flow immunoassay, and the colors of the test and control lines are produced by gold nanoparticles. Because of the sandwich immunoassay format, the color of the gold nanoparticles in the test line darkens with increasing hCG concentration. The results reveal that the S/N increased from 43.48 dB to 53.38 dB, and the hCG detection range extended from 6.25 to 50 mIU/mL with a standard deviation of less than 10%. With the optimal parameters determined by the Taguchi method to decrease the detection limit and increase linearity, the optical inspection system can be applied to various commercial rapid tests, such as those for ketamine, troponin I, and fatty acid binding protein (FABP). PMID:25256108

  4. Optimization of reactive-ion etching (RIE) parameters for fabrication of tantalum pentoxide (Ta2O5) waveguide using Taguchi method

    NASA Astrophysics Data System (ADS)

    Muttalib, M. Firdaus A.; Chen, Ruiqi Y.; Pearce, S. J.; Charlton, Martin D. B.

    2017-11-01

In this paper, we demonstrate the optimization of reactive-ion etching (RIE) parameters for the fabrication of a tantalum pentoxide (Ta2O5) waveguide with a chromium (Cr) hard mask in a commercial OIPT Plasmalab 80 RIE etcher. A design of experiments (DOE) using the Taguchi method was implemented to find the optimum RF power, CHF3/Ar gas ratio, and chamber pressure for a high etch rate, good selectivity, and a smooth waveguide sidewall. The optimized etch conditions obtained in this work were RF power = 200 W, gas ratio = 80%, and chamber pressure = 30 mTorr, giving an etch rate of 21.6 nm/min, a Ta2O5/Cr selectivity ratio of 28, and a smooth waveguide sidewall.

  5. Optimization of Injection Molding Parameters for HDPE/TiO₂ Nanocomposites Fabrication with Multiple Performance Characteristics Using the Taguchi Method and Grey Relational Analysis.

    PubMed

    Pervez, Hifsa; Mozumder, Mohammad S; Mourad, Abdel-Hamid I

    2016-08-22

The current study presents an investigation of the optimization of injection molding parameters of HDPE/TiO₂ nanocomposites using grey relational analysis with the Taguchi method. Four control factors, filler concentration (i.e., TiO₂), barrel temperature, residence time, and holding time, were chosen at three levels each. Mechanical properties, such as yield strength, Young's modulus, and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L₉ orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO₂, a barrel temperature of 225 °C, a residence time of 30 min, and a holding time of 20 s. Moreover, analysis of variance (ANOVA) was also applied to identify the most significant factor, and the percentage of TiO₂ nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO₂ nanocomposites fabricated through the injection molding process.
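
    The grey relational steps mentioned in this abstract (normalization, grey relational coefficient, grey relational grade) collapse several quality responses into a single rankable score per run. A minimal sketch with hypothetical run data (not the paper's measurements), using the conventional distinguishing coefficient of 0.5:

```python
# Each run: (yield strength [larger-better], elongation [larger-better],
#            cycle time [smaller-better]); values are illustrative.
runs = [
    (22.0, 8.0, 9.1),
    (25.5, 6.5, 8.4),
    (24.0, 9.2, 8.9),
    (21.0, 7.1, 9.5),
]
larger_better = (True, True, False)
ZETA = 0.5  # distinguishing coefficient, conventionally 0.5

n_resp = len(larger_better)
cols = list(zip(*runs))

# Step 1: normalize each response to [0, 1] so that 1 is always ideal
# (assumes each response actually varies across the runs).
norm = []
for j, col in enumerate(cols):
    lo, hi = min(col), max(col)
    if larger_better[j]:
        norm.append([(y - lo) / (hi - lo) for y in col])
    else:
        norm.append([(hi - y) / (hi - lo) for y in col])

# Step 2: grey relational coefficient against the ideal sequence (all 1s).
coeff = [[ZETA / (1.0 - x + ZETA) for x in col] for col in norm]

# Step 3: grey relational grade = mean coefficient across the responses;
# the run with the highest grade is the best compromise.
grades = [sum(coeff[j][i] for j in range(n_resp)) / n_resp
          for i in range(len(runs))]
best_run = grades.index(max(grades))
print(best_run, [round(g, 3) for g in grades])
```

    In the full Taguchi/grey procedure, the grades (rather than the raw responses) are then averaged per factor level to pick the optimal setting of each control factor.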

  6. Modified Mahalanobis Taguchi System for Imbalance Data Classification

    PubMed Central

    2017-01-01

The Mahalanobis Taguchi System (MTS) is considered one of the most promising binary classification algorithms for handling imbalanced data. Unfortunately, MTS lacks a method for determining an efficient threshold for the binary classification. In this paper, a nonlinear optimization model, named the Modified Mahalanobis Taguchi System (MMTS), is formulated based on minimizing the distance between the MTS Receiver Operating Characteristic (ROC) curve and the theoretical optimal point. To validate the classification efficacy of MMTS, it has been benchmarked against Support Vector Machines (SVMs), Naive Bayes (NB), the Probabilistic Mahalanobis Taguchi System (PTM), the Synthetic Minority Oversampling Technique (SMOTE), Adaptive Conformal Transformation (ACT), Kernel Boundary Alignment (KBA), Hidden Naive Bayes (HNB), and other improved Naive Bayes algorithms. MMTS outperforms the benchmarked algorithms, especially when the imbalance ratio is greater than 400. A real-life case study in the manufacturing sector is used to demonstrate the applicability of the proposed model and to compare its performance with a Mahalanobis Genetic Algorithm (MGA). PMID:28811820
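
    At the core of MTS (and MMTS) is the Mahalanobis distance of a sample from a reference "normal" group, with a scalar threshold on that distance performing the binary classification; MMTS's contribution is choosing that threshold from the ROC curve. A toy two-feature sketch of the distance-plus-threshold step, with illustrative data and an arbitrary threshold:

```python
# Reference (majority-class) samples, two features each; illustrative data.
normal = [(2.0, 3.0), (2.2, 2.9), (1.8, 3.2), (2.1, 3.1), (1.9, 2.8)]

n = len(normal)
mean = [sum(x[j] for x in normal) / n for j in range(2)]

def cov(a, b):
    """Entry (a, b) of the reference group's sample covariance matrix."""
    return sum((x[a] - mean[a]) * (x[b] - mean[b]) for x in normal) / (n - 1)

# Invert the 2x2 covariance matrix by hand.
c00, c01, c11 = cov(0, 0), cov(0, 1), cov(1, 1)
det = c00 * c11 - c01 * c01
inv = ((c11 / det, -c01 / det), (-c01 / det, c00 / det))

def mahalanobis_sq(p):
    """Squared Mahalanobis distance of point p from the reference group."""
    d = (p[0] - mean[0], p[1] - mean[1])
    return (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
            + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))

THRESHOLD = 9.0  # in MMTS this would be tuned from the ROC curve

test_points = [(2.05, 3.0), (5.0, 1.0)]
labels = ["normal" if mahalanobis_sq(p) < THRESHOLD else "abnormal"
          for p in test_points]
print(labels)
```

    Points resembling the reference group get small distances and points far from it get large ones, so the classifier's behavior on imbalanced data hinges entirely on where the threshold is placed, which is the gap MMTS addresses.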

  7. Optimization of Injection Molding Parameters for HDPE/TiO2 Nanocomposites Fabrication with Multiple Performance Characteristics Using the Taguchi Method and Grey Relational Analysis

    PubMed Central

    Pervez, Hifsa; Mozumder, Mohammad S.; Mourad, Abdel-Hamid I.

    2016-01-01

    The current study presents an investigation on the optimization of injection molding parameters of HDPE/TiO2 nanocomposites using grey relational analysis with the Taguchi method. Four control factors, including filler concentration (i.e., TiO2), barrel temperature, residence time and holding time, were chosen at three different levels of each. Mechanical properties, such as yield strength, Young’s modulus and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L9 orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO2, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) has also been applied to identify the most significant factor, and the percentage of TiO2 nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO2 nanocomposites fabricated through the injection molding process. PMID:28773830

  8. Surface Roughness Optimization Using Taguchi Method of High Speed End Milling For Hardened Steel D2

    NASA Astrophysics Data System (ADS)

    Hazza Faizi Al-Hazza, Muataz; Ibrahim, Nur Asmawiyah bt; Adesta, Erry T. Y.; Khan, Ahsan Ali; Abdullah Sidek, Atiah Bt.

    2017-03-01

    The main challenge for any manufacturer is to achieve higher quality in final products while maintaining minimum machining time. In this research, the final surface roughness was analysed and optimized for a maximum flank wear length of 0.3 mm. The experiment investigated the effect of cutting speed, feed rate and depth of cut on the final surface roughness, using D2 steel hardened to 52-56 HRC as the workpiece and coated carbide as the cutting tool at high cutting speeds of 120-240 mm/min. The experiment was conducted using an L9 Taguchi orthogonal array, and the results were analysed using JMP software.

  9. Application of Taguchi methods to infrared window design

    NASA Astrophysics Data System (ADS)

    Osmer, Kurt A.; Pruszynski, Charles J.

    1990-10-01

    Dr. Genichi Taguchi, a prominent quality consultant, reduced a branch of statistics known as "Design of Experiments" to a cookbook methodology that can be employed by any competent engineer. This technique has been extensively employed by Japanese manufacturers and is widely credited with helping them attain their current level of success in low-cost, high-quality product design and fabrication. Although originally put forth as a tool to streamline the determination of improved production processes, it can also be applied to a wide range of engineering problems. As part of an internal research project, this method of experimental design has been adapted to window trade studies and materials research. Two of these analyses are presented herein, chosen to illustrate the breadth of applications in which the Taguchi method can be utilized.

  10. Mixing behavior of the rhombic micromixers over a wide Reynolds number range using Taguchi method and 3D numerical simulations.

    PubMed

    Chung, C K; Shih, T R; Chen, T C; Wu, B H

    2008-10-01

    A planar micromixer with rhombic microchannels and a converging-diverging element has been systematically investigated by the Taguchi method, CFD-ACE simulations and experiments. To reduce the footprint and extend the operational range of Reynolds number, the Taguchi method was used to numerically study the performance of the micromixer in an L9 orthogonal array. Mixing efficiency is prominently influenced by geometrical parameters and Reynolds number (Re). The four factors in the L9 orthogonal array are the number of rhombi, the turning angle, the width of the rhombic channel and the width of the throat. Their sensitivity by the Taguchi method can be ranked as: number of rhombi > width of the rhombic channel > width of the throat > turning angle of the rhombic channel. Increasing the number of rhombi, reducing the widths of the rhombic channel and throat, and lowering the turning angle resulted in better fluid mixing efficiency. The optimal design of the micromixer in simulations indicates over 90% mixing efficiency at both Re ≥ 80 and Re ≤ 0.1. Experimental results for the optimal design are consistent with the simulated ones. This planar rhombic micromixer simplifies the complex fabrication process of multi-layer or three-dimensional micromixers and improves on the performance of a previous rhombic micromixer at a reduced footprint and lower Re.
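The kind of factor ranking reported above can be reproduced with a Taguchi range analysis on an L9 array: average the response at each level of each factor, then rank factors by the spread of those level means. The mixing-efficiency numbers below are invented for illustration only:

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, levels 1..3
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])

def rank_factors(array, response):
    """Range analysis: average the response at each level of each factor,
    then rank factors by the spread (max - min) of those level means."""
    y = np.asarray(response, float)
    ranges = []
    for f in range(array.shape[1]):
        level_means = [y[array[:, f] == lv].mean() for lv in (1, 2, 3)]
        ranges.append(max(level_means) - min(level_means))
    return np.argsort(ranges)[::-1]  # most influential factor first

# Hypothetical mixing-efficiency responses (%) for the nine runs
eff = [62, 70, 85, 58, 73, 80, 55, 68, 77]
order = rank_factors(L9, eff)  # e.g. factor indices ordered by influence
```

Because each level of each factor appears in exactly three runs of the L9 array, the level means are balanced and directly comparable.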

  11. Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by Taguchi grey relational theory

    NASA Astrophysics Data System (ADS)

    Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj

    2015-12-01

    The electric discharge drill machine (EDDM) performs a spark erosion process that produces micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. For performance evaluation of the electric discharge drilling machine, it is necessary to study the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generated output responses such as tool wear rate (TWR). Parameters such as pulse on-time, pulse off-time and water pressure were studied for the best machining characteristics. This investigation presents the use of the Taguchi approach for better TWR in drilling of Al-7075. A plan of experiments based on an L27 Taguchi design was selected for drilling the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 by EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.

  12. Application of Taguchi optimization on the cassava starch wastewater electrocoagulation using batch recycle method

    NASA Astrophysics Data System (ADS)

    Sudibyo, Hermida, L.; Suwardi

    2017-11-01

    Tapioca wastewater is very difficult to treat; hence, many tapioca factories cannot treat it well. One method able to overcome this problem is electrocoagulation. This process performs well when conducted as a batch recycle process with an aluminum bipolar electrode. However, the operating conditions have a significant effect on tapioca wastewater treatment using the batch recycle process. In this research, the Taguchi method was successfully applied to determine the optimum conditions and the interactions between parameters in the electrocoagulation process. The results show that current density, conductivity, electrode distance and pH have a significant effect on the turbidity removal of cassava starch wastewater.

  13. Rapid development of xylanase assay conditions using Taguchi methodology.

    PubMed

    Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Bhunia, Biswanath

    2016-11-01

    The present investigation is mainly concerned with the rapid development of extracellular xylanase assay conditions using Taguchi methodology. The extracellular xylanase was produced from Aspergillus niger (KP874102.1), a new strain isolated from a soil sample of the Baramura forest, Tripura West, India. Four physical parameters, including temperature, pH, buffer concentration and incubation time, were considered as key factors for xylanase activity and were optimized using Taguchi robust design methodology for enhanced xylanase activity. The Taguchi method recommends the use of the signal-to-noise (S/N) ratio to measure quality characteristics; based on analysis of the S/N ratio, the main effects, interaction effects and optimal levels of the process factors were determined. Analysis of variance (ANOVA) was performed to identify statistically significant process factors. ANOVA results showed that temperature contributed the maximum impact (62.58%) on xylanase activity, followed by pH (22.69%), buffer concentration (9.55%) and incubation time (5.16%). Predicted results showed that enhanced xylanase activity (81.47%) can be achieved with pH 2, temperature 50°C, buffer concentration 50 mM and incubation time 10 min.
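As a rough sketch of the two calculations named above, the larger-the-better S/N ratio and ANOVA-style percentage contributions, with hypothetical replicate activities and sums of squares (the numbers are illustrative assumptions, not the study's raw data):

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio (dB):
    S/N = -10 * log10( mean(1 / y_i^2) )."""
    return -10.0 * math.log10(sum(1.0 / v**2 for v in values) / len(values))

def percent_contribution(sum_squares):
    """ANOVA-style percentage contribution: each factor's sum of squares
    as a share of the total."""
    total = sum(sum_squares)
    return [100.0 * s / total for s in sum_squares]

# Hypothetical replicate activities (U/mL) at one factor setting
sn = sn_larger_is_better([80.0, 82.0, 78.0])

# Hypothetical sums of squares for temperature, pH, buffer, time
contrib = percent_contribution([62.58, 22.69, 9.55, 5.16])
```

A factor's optimal level is the one whose runs give the highest mean S/N ratio; the contribution list shows which factor dominates the variation.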

  14. Evaluation on the feasibility of using bamboo fillers in plastic gear manufacturing via the Taguchi optimization method

    NASA Astrophysics Data System (ADS)

    Mehat, N. M.; Kamaruddin, S.

    2017-10-01

    An increase in demand for industrial gears has driven the escalating use of plastic-matrix composites, particularly carbon- or glass-fibre-reinforced plastics, as gear materials to enhance the properties and overcome the limitations of plastic gears. However, the production of large quantities of these synthetic-fibre-reinforced composites poses a serious threat to the ecosystem. Therefore, this work studies the applicability and practicality of using bamboo fillers in plastic gear manufacturing, as opposed to synthetic fibres, via the Taguchi optimization method. The results showed that no failure mechanisms, such as gear tooth root cracking or severe tooth wear, were observed in tested gears made with 5-30 wt% bamboo filler, in comparison with the unfilled PP gear. These results indicate that bamboo can be practically and economically used as an alternative filler for plastic material reinforcement, as well as for minimizing raw material cost in general.

  15. A Taguchi study of the aeroelastic tailoring design process

    NASA Technical Reports Server (NTRS)

    Bohlmann, Jonathan D.; Scott, Robert C.

    1991-01-01

    A Taguchi study was performed to determine the important players in the aeroelastic tailoring design process and to find the best composition of the optimization's objective function. The Wing Aeroelastic Synthesis Procedure (TSO) was used to ascertain the effects that factors such as composite laminate constraints, roll effectiveness constraints, and built-in wing twist and camber have on the optimum, aeroelastically tailored wing skin design. The results show the Taguchi method to be a viable engineering tool for computational inquiries, and provide some valuable lessons about the practice of aeroelastic tailoring.

  16. Synthesis of graphene by cobalt-catalyzed decomposition of methane in plasma-enhanced CVD: Optimization of experimental parameters with Taguchi method

    NASA Astrophysics Data System (ADS)

    Mehedi, H.-A.; Baudrillart, B.; Alloyeau, D.; Mouhoub, O.; Ricolleau, C.; Pham, V. D.; Chacon, C.; Gicquel, A.; Lagoute, J.; Farhat, S.

    2016-08-01

    This article describes the significant roles of process parameters in the deposition of graphene films via cobalt-catalyzed decomposition of methane diluted in hydrogen using plasma-enhanced chemical vapor deposition (PECVD). The influence of growth temperature (700-850 °C), molar concentration of methane (2%-20%), growth time (30-90 s), and microwave power (300-400 W) on graphene thickness and defect density is investigated using Taguchi method which enables reaching the optimal parameter settings by performing reduced number of experiments. Growth temperature is found to be the most influential parameter in minimizing the number of graphene layers, whereas microwave power has the second largest effect on crystalline quality and minor role on thickness of graphene films. The structural properties of PECVD graphene obtained with optimized synthesis conditions are investigated with Raman spectroscopy and corroborated with atomic-scale characterization performed by high-resolution transmission electron microscopy and scanning tunneling microscopy, which reveals formation of continuous film consisting of 2-7 high quality graphene layers.

  17. Optimization of delignification of two Pennisetum grass species by NaOH pretreatment using Taguchi and ANN statistical approach.

    PubMed

    Mohaptra, Sonali; Dash, Preeti Krishna; Behera, Sudhanshu Shekar; Thatoi, Hrudayanath

    2016-01-01

    In the bioconversion of lignocelluloses to bioethanol, pretreatment seems to be the most important step; it improves the elimination of lignin and hemicellulose content, exposing cellulose to further hydrolysis. The present study discusses the application of dynamic statistical techniques, namely the Taguchi method and an artificial neural network (ANN), to the optimization of the pretreatment of lignocellulosic biomasses such as Hybrid Napier grass (HNG) (Pennisetum purpureum) and Denanath grass (DG) (Pennisetum pedicellatum) using alkali sodium hydroxide. Using the Taguchi method, this study determined, with a low number of experiments, a parameter combination in which both substrates can be efficiently pretreated. The optimized parameters obtained from the L16 orthogonal array are soaking time (18 and 26 h), temperature (60°C and 55°C), and alkali concentration (1%) for HNG and DG, respectively. High-performance liquid chromatography analysis of the optimized pretreated grass varieties confirmed the presence of glucan (47.94% and 46.50%), xylan (9.35% and 7.95%), arabinan (2.15% and 2.2%), and galactan/mannan (1.44% and 1.52%) for HNG and DG, respectively. Physicochemical characterization studies of native and alkali-pretreated grasses were carried out by scanning electron microscopy and Fourier transform infrared spectroscopy, which revealed some morphological differences between the native and optimized pretreated samples. Model validation by ANN showed good agreement between experimental results and the predicted responses.

  18. Multi-Response Optimization of Resin Finishing by Using a Taguchi-Based Grey Relational Analysis

    PubMed Central

    Shafiq, Faizan; Sarwar, Zahid; Jilani, Muhammad Munib; Cai, Yingjie

    2018-01-01

    In this study, the influence and optimization of the factors of a non-formaldehyde resin finishing process on cotton fabric using a Taguchi-based grey relational analysis were experimentally investigated. An L27 orthogonal array was selected for five parameters and three levels by applying Taguchi’s design of experiments. The Taguchi technique was coupled with a grey relational analysis to obtain a grey relational grade for evaluating multiple responses, i.e., crease recovery angle (CRA), tearing strength (TE), and whiteness index (WI). The optimum parameters (values) for resin finishing were the resin concentration (80 g·L−1), the polyethylene softener (40 g·L−1), the catalyst (25 g·L−1), the curing temperature (140 °C), and the curing time (2 min). The goodness-of-fit of the data was validated by an analysis of variance (ANOVA). The optimized sample was characterized by Fourier-transform infrared (FTIR) spectroscopy, thermogravimetric analysis (TGA), and scanning electron microscopy (SEM) to better understand the structural details of the resin finishing process. The results showed improved thermal stability and confirmed the presence of well-deposited resin on the optimized fabric surface. PMID:29543724

  19. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Walberg, Gerald D.

    1993-01-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  20. Optimization of Surface Roughness Parameters of Al-6351 Alloy in EDC Process: A Taguchi Coupled Fuzzy Logic Approach

    NASA Astrophysics Data System (ADS)

    Kar, Siddhartha; Chakraborty, Sujoy; Dey, Vidyut; Ghosh, Subrata Kumar

    2017-10-01

    This paper investigates the application of the Taguchi method with fuzzy logic for multi-objective optimization of roughness parameters in the electro discharge coating process of Al-6351 alloy with a powder metallurgically compacted SiC/Cu tool. A Taguchi L16 orthogonal array was employed to investigate the roughness parameters by varying tool parameters, such as composition and compaction load, and electro discharge machining parameters, such as pulse-on time and peak current. Crucial roughness parameters, such as centre line average roughness, average maximum height of the profile and mean spacing of local peaks of the profile, were measured on the coated specimens. The signal-to-noise ratios were fuzzified to optimize the roughness parameters through a single comprehensive output measure (COM). The best COM was obtained with lower values of compaction load, pulse-on time and current, and a 30:70 (SiC:Cu) tool composition. Analysis of variance was carried out, and a significant COM model was observed, with peak current yielding the highest contribution, followed by pulse-on time, compaction load and composition. The deposited layer was characterised by X-ray diffraction analysis, which confirmed the presence of tool materials on the workpiece surface.

  1. Optimization of sol-gel technique for coating of metallic substrates by hydroxyapatite using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Pourbaghi-Masouleh, M.; Asgharzadeh, H.

    2013-08-01

    In this study, the Taguchi method of design of experiments (DOE) was used to optimize hydroxyapatite (HA) coatings deposited on various metallic substrates by the sol-gel dip-coating technique. The experimental design consisted of five factors, including substrate material (A), surface preparation of substrate (B), dipping/withdrawal speed (C), number of layers (D), and calcination temperature (E), with three levels of each factor. An orthogonal array of L18 type with mixed levels of the control factors was utilized. Image processing of the micrographs of the coatings was conducted to determine the percentage of coated area (PCA). Chemical and phase composition of the HA coatings were studied by XRD, FT-IR, SEM, and EDS techniques. The analysis of variance (ANOVA) indicated that the PCA of the HA coatings was significantly affected by the calcination temperature. The optimum conditions from signal-to-noise (S/N) ratio analysis were A: pure Ti, B: polishing and etching for 24 h, C: 50 cm min⁻¹, D: 1, and E: 300 °C. In the confirmation experiment using the optimum conditions, an HA coating with a high PCA of 98.5% was obtained.
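A minimal sketch of how a percentage of coated area could be extracted from a micrograph by simple intensity thresholding; the threshold value and the synthetic image are assumptions, since the study's actual image-processing pipeline is not described:

```python
import numpy as np

def percent_coated_area(gray_img, threshold=128):
    """Estimate PCA from a grayscale micrograph by counting pixels at or
    above an intensity threshold as 'coated'. A crude stand-in for the
    image processing mentioned in the abstract."""
    img = np.asarray(gray_img)
    return 100.0 * np.count_nonzero(img >= threshold) / img.size

# Synthetic 4x4 'micrograph': 12 bright (coated) and 4 dark (bare) pixels
img = np.array([[200, 200, 200, 50],
                [200, 200, 200, 50],
                [200, 200, 200, 50],
                [200, 200, 200, 50]])
pca = percent_coated_area(img)  # 12 of 16 pixels counted as coated
```

In practice the threshold would be chosen per micrograph (e.g. by Otsu's method) rather than fixed.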

  2. Synthesis of graphene by cobalt-catalyzed decomposition of methane in plasma-enhanced CVD: Optimization of experimental parameters with Taguchi method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehedi, H.-A.; Baudrillart, B.; Gicquel, A.

    2016-08-14

    This article describes the significant roles of process parameters in the deposition of graphene films via cobalt-catalyzed decomposition of methane diluted in hydrogen using plasma-enhanced chemical vapor deposition (PECVD). The influence of growth temperature (700–850 °C), molar concentration of methane (2%–20%), growth time (30–90 s), and microwave power (300–400 W) on graphene thickness and defect density is investigated using the Taguchi method, which enables reaching the optimal parameter settings by performing a reduced number of experiments. Growth temperature is found to be the most influential parameter in minimizing the number of graphene layers, whereas microwave power has the second largest effect on crystalline quality and a minor role on the thickness of graphene films. The structural properties of PECVD graphene obtained with optimized synthesis conditions are investigated with Raman spectroscopy and corroborated with atomic-scale characterization performed by high-resolution transmission electron microscopy and scanning tunneling microscopy, which reveals the formation of a continuous film consisting of 2–7 high quality graphene layers.

  3. Optimization of tribological performance of SiC embedded composite coating via Taguchi analysis approach

    NASA Astrophysics Data System (ADS)

    Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Akma, N.

    2017-03-01

    The tungsten inert gas (TIG) torch is one of the most recently used heat sources for surface modification of engineering parts, giving results similar to the more expensive high-power laser technique. In this study, a ceramic-based embedded composite coating has been produced from precoated silicon carbide (SiC) powders on an AISI 4340 low alloy steel substrate using the TIG welding torch process. A design of experiments based on the Taguchi approach has been adopted to optimize the TIG cladding process parameters. The L9 orthogonal array and the signal-to-noise ratio were used to study the effect of TIG welding parameters, such as arc current, travelling speed, welding voltage and argon flow rate, on tribological response behaviour (wear rate, surface roughness and wear track width). The objective of the study was to identify the optimal design parameters that significantly minimize each of the surface quality characteristics. The analysis of the experimental results revealed that the argon flow rate was the most influential factor contributing to the minimum wear and surface roughness of the modified coating surface. On the other hand, the key factor in reducing wear scar is the welding voltage. Finally, the convenient and economical Taguchi approach used in this study was efficient in finding the optimal factor settings for obtaining minimum wear rate, wear scar and surface roughness responses in TIG-coated surfaces.

  4. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.

  5. Permeability Evaluation Through Chitosan Membranes Using Taguchi Design

    PubMed Central

    Sharma, Vipin; Marwaha, Rakesh Kumar; Dureja, Harish

    2010-01-01

    In the present study, chitosan membranes capable of imitating permeation characteristics of diclofenac diethylamine across animal skin were prepared using cast drying method. The effect of concentration of chitosan, concentration of cross-linking agent (NaTPP), crosslinking time was studied using Taguchi design. Taguchi design ranked concentration of chitosan as the most important factor influencing the permeation parameters of diclofenac diethylamine. The flux of the diclofenac diethylamine solution through optimized chitosan membrane (T9) was found to be comparable to that obtained across rat skin. The mathematical model developed using multilinear regression analysis can be used to formulate chitosan membranes that can mimic the desired permeation characteristics. The developed chitosan membranes can be utilized as a substitute to animal skin for in vitro permeation studies. PMID:21179329

  6. Permeability evaluation through chitosan membranes using taguchi design.

    PubMed

    Sharma, Vipin; Marwaha, Rakesh Kumar; Dureja, Harish

    2010-01-01

    In the present study, chitosan membranes capable of imitating permeation characteristics of diclofenac diethylamine across animal skin were prepared using cast drying method. The effect of concentration of chitosan, concentration of cross-linking agent (NaTPP), crosslinking time was studied using Taguchi design. Taguchi design ranked concentration of chitosan as the most important factor influencing the permeation parameters of diclofenac diethylamine. The flux of the diclofenac diethylamine solution through optimized chitosan membrane (T9) was found to be comparable to that obtained across rat skin. The mathematical model developed using multilinear regression analysis can be used to formulate chitosan membranes that can mimic the desired permeation characteristics. The developed chitosan membranes can be utilized as a substitute to animal skin for in vitro permeation studies.

  7. Optimization of Parameters for Manufacture Nanopowder Bioceramics at Machine Pulverisette 6 by Taguchi and ANOVA Method

    NASA Astrophysics Data System (ADS)

    Van Hoten, Hendri; Gunawarman; Mulyadi, Ismet Hari; Kurniawan Mainil, Afdhal; Putra, Bismantoloa dan

    2018-02-01

    This research concerns the manufacture of bioceramic nanopowders from local materials by ball milling for biomedical applications. Source materials for the manufacture of medicines are plants, animal tissues, microbial structures and engineered biomaterials, and raw medicinal materials are in powder form before being mixed. In this case, the research aim is to find sources of biomedical materials that, as nanoscale powders, can be used as raw material for medicine. One bioceramic material that can serve as such a raw material is chicken eggshell. This research develops methods for manufacturing nanopowder material from chicken eggshells by ball milling, using the Taguchi method and ANOVA. Eggshells were milled at rates of 150, 200 and 250 rpm, for milling times of 1, 2 and 3 hours, and at grinding-ball-to-eggshell-powder weight ratios (BPR) of 1:6, 1:8 and 1:10. Before milling, the eggshells were crushed and calcined at 900°C. After milling, the fine eggshell powder was characterized by SEM to determine its particle size. The optimum parameters from the Taguchi design analysis are a milling rate of 250 rpm, a milling time of 3 hours and a BPR of 1:6, giving an average eggshell powder size of 1.305 μm. Milling speed, milling time and ball-to-powder weight ratio contribute 60.82%, 30.76% and 6.64%, respectively, with an error of 1.78%.

  8. Furnace Brazing Parameters Optimized by Taguchi Method and Corrosion Behavior of Tube-Fin System of Automotive Condensers

    NASA Astrophysics Data System (ADS)

    Guía-Tello, J. C.; Pech-Canul, M. A.; Trujillo-Vázquez, E.; Pech-Canul, M. I.

    2017-08-01

    Controlled atmosphere brazing is in widespread industrial use for the production of aluminum automotive heat exchangers. Good-quality joints between the components depend on the initial condition of the materials as well as on the brazing process parameters. In this work, the Taguchi method was used to optimize the brazing parameters with respect to corrosion performance for tube-fin mini-assemblies of an automotive condenser. The experimental design consisted of five factors (micro-channel tube type, flux type, peak temperature, heating rate and dwell time), with two levels each. The corrosion behavior in acidified seawater solution at pH 2.8 was evaluated through potentiodynamic polarization and electrochemical impedance spectroscopy (EIS) measurements. Scanning electron microscopy (SEM) and energy-dispersive x-ray spectroscopy (EDS) were used to analyze the microstructural features in the joint zone. The results showed that the parameters that most significantly affect the corrosion rate are the type of flux and the peak temperature. The optimal conditions were: micro-channel tube with 4.2 g/m2 of zinc coating, standard flux, 610 °C peak temperature, 5 °C/min heating rate and 4 min dwell time. The corrosion current density value of the confirmation experiment is in excellent agreement with the predicted value. The electrochemical characterization of selected samples indicated that the brazing conditions had a more significant effect on the kinetics of the hydrogen evolution reaction than on the kinetics of the metal dissolution reaction.

  9. Taguchi experimental design to determine the taste quality characteristic of candied carrot

    NASA Astrophysics Data System (ADS)

    Ekawati, Y.; Hapsari, A. A.

    2018-03-01

    Robust parameter design is used to design products that are robust to noise factors, so that performance stays on target and quality improves. In designing and developing an innovative candied carrot product, robust parameter design is carried out using the Taguchi method to determine an optimal quality design, based on the process and the composition of product ingredients in accordance with consumer needs and requirements. According to the identification of consumer needs in previous research, the quality dimensions to be assessed are the taste and texture of the product; the quality dimension assessed in this research is limited to taste. Organoleptic testing is used for the assessment, specifically hedonic testing based on consumer preferences. Data processing uses mean and signal-to-noise ratio calculations and optimal level setting to determine the optimal process and composition of product ingredients. The optimal values are validated with confirmation experiments to show that the proposed product matches consumer needs and requirements. The result of this research is the identification of the factors that affect product taste and the optimal product quality according to the Taguchi method.

  10. Taguchi Optimization of Pulsed Current GTA Welding Parameters for Improved Corrosion Resistance of 5083 Aluminum Welds

    NASA Astrophysics Data System (ADS)

    Rastkerdar, E.; Shamanian, M.; Saatchi, A.

    2013-04-01

    In this study, the Taguchi method was used as a design of experiment (DOE) technique to optimize the pulsed current gas tungsten arc welding (GTAW) parameters for improved pitting corrosion resistance of AA5083-H18 aluminum alloy welds. An L9 (3^4) orthogonal array of the Taguchi design was used, involving nine experiments for four parameters, each at three levels: peak current (P), base current (B), percent pulse-on time (T), and pulse frequency (F). Pitting corrosion resistance in 3.5 wt.% NaCl solution was evaluated by anodic polarization tests at room temperature and by calculating the width of the passive region (ΔEpit). Analysis of variance (ANOVA) was performed on the measured data and S/N (signal-to-noise) ratios. "Bigger is better" was selected as the quality characteristic (QC). The optimum conditions were found to be 170 A, 85 A, 40%, and 6 Hz for the P, B, T, and F factors, respectively. The study showed that the percent pulse-on time has the highest influence on pitting corrosion resistance (50.48%), followed by pulse frequency (28.62%), peak current (11.05%) and base current (9.86%). The range of optimum ΔEpit at the optimum conditions, with a confidence level of 90%, was predicted to be between 174.81 and 177.74 mV (vs. SCE). Under optimum conditions, a confirmation test was carried out, and the experimental ΔEpit value of 176 mV (vs. SCE) was in agreement with the value predicted by the Taguchi model. In this regard, the model can be effectively used to predict the ΔEpit of pulsed current gas tungsten arc welded joints.
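The prediction of the response at the optimum conditions, as used in confirmation steps like the one above, typically follows Taguchi's additive model: the grand mean plus each factor's deviation at its best level. A minimal sketch with hypothetical level means (the numbers below are not the study's data):

```python
def predict_optimum(overall_mean, optimum_level_means):
    """Taguchi additive model: predicted response at the optimum setting is
    the grand mean plus the deviation of each factor's best-level mean."""
    return overall_mean + sum(m - overall_mean for m in optimum_level_means)

# Hypothetical grand mean and best-level means for P, B, T, F (arbitrary units)
grand = 150.0
best_levels = [158.0, 154.0, 165.0, 160.0]
pred = predict_optimum(grand, best_levels)
```

A confidence interval around this prediction is then computed from the ANOVA error variance, and the confirmation run is checked against that interval.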

  11. Optimization of process parameters for drilled hole quality characteristics during cortical bone drilling using Taguchi method.

    PubMed

    Singh, Gurmeet; Jain, Vivek; Gupta, Dheeraj; Ghai, Aman

    2016-09-01

    Orthopaedic surgery involves drilling of bones to fix them at their original position. The drilling process used in orthopaedic surgery closely resembles mechanical drilling, and there is every likelihood that it may harm the already damaged bone and the surrounding bone tissue and nerves; the peril does not end there. There is a real concern that recovery of the drilled region may be impeded to the extent that the repair does not last lifelong. To achieve sustainable orthopaedic surgery, a surgeon must try to control the drilling damage at the time of bone drilling. The area around the holes decides the life of the bone joint, so the area contiguous to the drilled hole must remain intact and retain its properties even after drilling. This study focuses on Taguchi optimization of drilling parameters, namely rotational speed, feed rate, and tool type at three levels each, for surface roughness and material removal rate. Confirmation experiments were also carried out, and the results were found to lie within the confidence interval. Scanning electron microscopy (SEM) images assisted in obtaining micro-level information on bone damage. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Multi-Response Optimization of Process Parameters for Imidacloprid Removal by Reverse Osmosis Using Taguchi Design.

    PubMed

    Genç, Nevim; Doğan, Esra Can; Narcı, Ali Oğuzhan; Bican, Emine

    2017-05-01

      In this study, a multi-response optimization method using Taguchi's robust design approach is proposed for imidacloprid removal by reverse osmosis. Tests were conducted with different membrane types (BW30, LFC-3, CPA-3), transmembrane pressures (TMP = 20, 25, 30 bar), volume reduction factors (VRF = 2, 3, 4), and pH values (3, 7, 11). The quality and quantity of permeate are optimized with the multi-response characteristics of the total dissolved solids (TDS), conductivity, imidacloprid, and total organic carbon (TOC) rejection ratios and the permeate flux. The optimized conditions were determined as membrane type BW30, TMP of 30 bar, VRF of 3, and pH 11. Under these conditions, the TDS, conductivity, imidacloprid, and TOC rejections and the permeate flux were 97.50, 97.41, 97.80, and 98.00% and 30.60 L/m²·h, respectively. Membrane type was found to be the most effective factor, with a contribution of 64%. The difference between the predicted and observed values of the multi-response signal-to-noise ratio (MRSN) is within the confidence interval.
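    The multi-response S/N (MRSN) combines several responses into a single score before the usual Taguchi analysis. One common formulation (not necessarily the exact one used in the paper) normalizes each larger-is-better response onto [0, 1], takes a weighted sum per run, and applies the larger-is-better S/N transform; the weights and data below are illustrative:

```python
import math

def normalize_larger_better(values):
    """Scale a larger-is-better response linearly onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def mrsn(responses, weights):
    """Multi-response S/N: weight the normalized responses of each run
    into one score, then apply the larger-is-better S/N transform
    (-10*log10(1/s^2) == 20*log10(s)). A small offset avoids log(0)."""
    normed = [normalize_larger_better(r) for r in responses]
    n_runs = len(responses[0])
    scores = [sum(w * normed[i][run] for i, w in enumerate(weights)) + 1e-6
              for run in range(n_runs)]
    return [20.0 * math.log10(s) for s in scores]

# Illustrative: 4 runs, two responses (rejection % and permeate flux), equal weights
rejection = [95.0, 96.5, 97.8, 98.0]
flux = [25.0, 28.0, 30.6, 27.0]
sn = mrsn([rejection, flux], [0.5, 0.5])
print(sn.index(max(sn)))  # run 2 balances high rejection and high flux
```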

  13. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

    This research presents the development of a design of experiment technique for quality improvement in the automotive manufacturing industry. The quality characteristic of interest is the colour shade, one of the key features of the exterior appearance of a vehicle. With a low percentage of first-time quality, the manufacturer has incurred considerable rework cost as well as longer production time. To resolve this problem permanently, the spraying conditions must be precisely optimized. Therefore, this work applies the full factorial design, multiple regression, constrained response surface optimisation methods (CRSOM), and Taguchi's method to investigate the significant factors and to determine the optimum factor levels in order to improve the quality of the paint shop. Firstly, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solids at spraying of the atomising spraying machine. The colour shade at 15 and 45 degrees was measured using a spectrophotometer. Then regression models of the colour shade at both angles were developed from the significant factors affecting each response. Consequently, both regression models were placed into the form of a linear program to maximize the colour shade subject to three main factors: the pigment, the additive solvent, and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the target mean value of colour shade; the non-volatile solids content was found to be one additional factor at this stage. Consequently, the proper levels of all factors from both experimental design methods were used to set up a confirmation experiment. It was found that the colour shades, measured visually at both the 15- and 45-degree angles of the spectrophotometer, were close to the target and the defective at

  14. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but are time, cost and labor intensive, and sometimes they are impossible to conduct due to the increasing number of factors and their levels to consider. The Taguchi design is a simple, economical and efficient statistical tool that only uses a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h⁻¹) were conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted. Further work should test the applicability of the Taguchi design to a wider range of conditions.
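    The idea of extracting a Taguchi fraction from a full factorial can be sketched directly. For three factors at five levels, rows of the form (i, j, (i + j) mod 5) give a 25-run orthogonal array in which every level pair of any two factors occurs exactly once; this Latin-square construction is standard, though the actual L25 table used in the study may order runs differently:

```python
from itertools import product

# Full factorial: 5 levels each of slope length (m), gradient (%) and
# rainfall intensity (mm/h), as in the study -> 125 runs
lengths = [0.4, 0.8, 1.2, 1.6, 2.0]
gradients = [18, 27, 36, 48, 58]
intensities = [48, 62.4, 102, 149, 170]
full = list(product(lengths, gradients, intensities))

# Orthogonal 25-run fraction via the Latin-square construction
fraction = [(lengths[i], gradients[j], intensities[(i + j) % 5])
            for i in range(5) for j in range(5)]

print(len(full), len(fraction))  # 125 25
```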

  15. Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA

    NASA Astrophysics Data System (ADS)

    Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu

    2015-04-01

    Laser brazing (LB) is widely used in the automotive industry due to its high speed, small heat-affected zone, high weld-seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a crimping butt joint of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted using a Taguchi L25 orthogonal array through the statistical design method. The input parameters considered are welding speed (WS), wire feed rate (WF), and gap (GAP), each at 5 levels. The output responses are the effective connection lengths of the left and right sides and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are fed into the BPNN to establish the relationship between the input and output variables. The predictions of the BPNN are then passed to the GA, which optimizes the process parameters subject to the objectives. The effects of WS, WF, and GAP on the summed values of the bead geometry are then discussed. Eventually, confirmation experiments are carried out to demonstrate that the optimal values are effective and reliable. On the whole, the proposed hybrid method, BPNN-GA, can be used to guide actual work and improve the efficiency and stability of the LB process.

  16. Optimization of Quenching Parameters for the Reduction of Titaniferous Magnetite Ore by Lean Grade Coal Using the Taguchi Method and Its Isothermal Kinetic Study

    NASA Astrophysics Data System (ADS)

    Sarkar, Bitan Kumar; Kumar, Nikhil; Dey, Rajib; Das, Gopes Chandra

    2018-06-01

    In the present study, a unique method is adopted to achieve higher reducibility of titaniferous magnetite lump ore (TMO). In this method, TMO is initially heated and then water quenched. The quenching process generates cracks in the dense TMO lumps due to thermal shock, which, in turn, increases the extent of reduction (EOR) when lean grade coal is used as the reductant. The optimum combination of parameters found using Taguchi's L27 orthogonal array (OA) (five factors, three levels) is −8 + 4 mm particle size (PS1), 1423 K quenching temperature (Qtemp2), 15 minutes quenching time (Qtime3), 3 quenchings {(No. of Q)3}, and 120 minutes reduction time (Rtime3) at a fixed reduction temperature of 1473 K. At the optimized parameter levels, 92.39 pct reduction is achieved. Isothermal reduction kinetics of the quenched TMO lumps at the optimized condition reveals mixed controlled mechanisms [initially contracting geometry (CG3) followed by diffusion (D3)]. The calculated activation energies are 69.895 kJ/mol for CG3 and 39.084 kJ/mol for D3.

  17. The Taguchi Method Application to Improve the Quality of a Sustainable Process

    NASA Astrophysics Data System (ADS)

    Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.

    2018-06-01

    Taguchi’s method has long been used to improve the quality of the processes and products under analysis. This research presents an unusual situation, namely the modeling of certain technical parameters in a process intended to be sustainable, improving process quality and ensuring quality by means of an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of the interaction between agricultural sustainability principles and the application of Taguchi’s method. The experimental method used in this practical study combines engineering techniques with experimental statistical modeling to achieve rapid improvement of quality costs, in effect seeking optimization of the existing processes and the main technical parameters. The paper is a purely technical study that promotes a technical experiment using the Taguchi method, considered effective because it allows 70 to 90% of the desired optimization of the technical parameters to be achieved rapidly. The missing 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered the most influential. Applying Taguchi’s method allowed the simultaneous study, in the same experiment, of the influence factors considered most important in different combinations and, at the same time, the determination of each factor’s contribution.

  18. Application of the Taguchi Method for Optimizing the Process Parameters of Producing Lightweight Aggregates by Incorporating Tile Grinding Sludge with Reservoir Sediments

    PubMed Central

    Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei

    2017-01-01

    This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments. PMID:29125576

  19. Application of the Taguchi Method for Optimizing the Process Parameters of Producing Lightweight Aggregates by Incorporating Tile Grinding Sludge with Reservoir Sediments.

    PubMed

    Chen, How-Ji; Chang, Sheng-Nan; Tang, Chao-Wei

    2017-11-10

    This study aimed to apply the Taguchi optimization technique to determine the process conditions for producing synthetic lightweight aggregate (LWA) by incorporating tile grinding sludge powder with reservoir sediments. An orthogonal array L16(4⁵) was adopted, which consisted of five controllable four-level factors (i.e., sludge content, preheat temperature, preheat time, sintering temperature, and sintering time). Moreover, the analysis of variance method was used to explore the effects of the experimental factors on the particle density, water absorption, bloating ratio, and loss on ignition of the produced LWA. Overall, the produced aggregates had particle densities ranging from 0.43 to 2.1 g/cm³ and water absorption ranging from 0.6% to 13.4%. These values are comparable to the requirements for ordinary and high-performance LWAs. The results indicated that it is considerably feasible to produce high-performance LWA by incorporating tile grinding sludge with reservoir sediments.

  20. Application of Taguchi L16 design method for comparative study of ability of 3A zeolite in removal of Rhodamine B and Malachite green from environmental water samples

    NASA Astrophysics Data System (ADS)

    Rahmani, Mashaallah; Kaykhaii, Massoud; Sasani, Mojtaba

    2018-01-01

    This study aimed to investigate the efficiency of 3A zeolite as a novel adsorbent for the removal of Rhodamine B and Malachite green dyes from water samples. To increase the removal efficiency, the parameters affecting the adsorption process were investigated and optimized by adopting the Taguchi design of experiments approach. The percentage contribution of each parameter to the removal of the Rhodamine B and Malachite green dyes was determined using ANOVA, which showed that the most effective parameters in the removal of RhB and MG by 3A zeolite are the initial dye concentration and pH, respectively. Under optimized conditions, the value predicted by the Taguchi design method and the value obtained experimentally showed good agreement (more than 94.86%). The good adsorption efficiency obtained for the proposed method indicates that 3A zeolite is capable of removing significant amounts of Rhodamine B and Malachite green from environmental water samples.

  1. Optimization of Experimental Conditions of the Pulsed Current GTAW Parameters for Mechanical Properties of SDSS UNS S32760 Welds Based on the Taguchi Design Method

    NASA Astrophysics Data System (ADS)

    Yousefieh, M.; Shamanian, M.; Saatchi, A.

    2012-09-01

    The Taguchi design method with an L9 orthogonal array was implemented to optimize the pulsed current gas tungsten arc welding parameters for the hardness and the toughness of super duplex stainless steel (SDSS, UNS S32760) welds. In this regard, the hardness and the toughness were considered as performance characteristics. Pulse current, background current, % on time, and pulse frequency were chosen as the main parameters, each varied at three levels. As a result of the pooled analysis of variance, the pulse current was found to be the most significant factor for both the hardness and the toughness of the SDSS welds, with percentage contributions of 71.81 for hardness and 78.18 for toughness. The % on time (21.99%) and the background current (17.81%) had the next most significant effects on the hardness and the toughness, respectively. The optimum conditions within the selected parameter values for hardness were found to be the first level of pulse current (100 A), third level of background current (70 A), first level of % on time (40%), and first level of pulse frequency (1 Hz), while for toughness they were the second level of pulse current (120 A), second level of background current (60 A), second level of % on time (60%), and third level of pulse frequency (5 Hz). The Taguchi method was found to be a promising tool for obtaining the optimum conditions in such studies. Finally, in order to verify the experimental results, confirmation tests were carried out at the optimum working conditions. Under these conditions, there was good agreement between the predicted and the experimental results for both the hardness and the toughness.
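    The percentage contributions reported in studies like this come from a main-effects ANOVA over the orthogonal array: each factor's sum of squares divided by the total sum of squares. A minimal sketch over a standard L9(3⁴) array with made-up response values (not the paper's measurements):

```python
def percent_contributions(design, response):
    """Main-effects ANOVA for an orthogonal design: for each factor,
    SS = sum over levels of n_level * (level mean - grand mean)^2;
    percent contribution = SS_factor / SS_total * 100."""
    n = len(response)
    grand = sum(response) / n
    ss_total = sum((y - grand) ** 2 for y in response)
    contribs = []
    for col in range(len(design[0])):
        ss = 0.0
        for level in set(row[col] for row in design):
            ys = [response[i] for i, row in enumerate(design) if row[col] == level]
            mean = sum(ys) / len(ys)
            ss += len(ys) * (mean - grand) ** 2
        contribs.append(100.0 * ss / ss_total)
    return contribs

# Standard L9(3^4) array, levels coded 0..2; hardness values are illustrative
L9 = [[0,0,0,0],[0,1,1,1],[0,2,2,2],[1,0,1,2],[1,1,2,0],
      [1,2,0,1],[2,0,2,1],[2,1,0,2],[2,2,1,0]]
hardness = [250, 248, 252, 262, 265, 260, 255, 257, 254]
print([round(x, 1) for x in percent_contributions(L9, hardness)])
```

    Because the L9 with four factors is saturated (all eight degrees of freedom are consumed by main effects), the four contributions sum to exactly 100%.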

  2. Application of Taguchi L16 design method for comparative study of ability of 3A zeolite in removal of Rhodamine B and Malachite green from environmental water samples.

    PubMed

    Rahmani, Mashaallah; Kaykhaii, Massoud; Sasani, Mojtaba

    2018-01-05

    This study aimed to investigate the efficiency of 3A zeolite as a novel adsorbent for removal of Rhodamine B and Malachite green dyes from water samples. To increase the removal efficiency, effecting parameters on adsorption process were investigated and optimized by adopting Taguchi design of experiments approach. The percentage contribution of each parameter on the removal of Rhodamine B and Malachite green dyes determined using ANOVA and showed that the most effective parameters in removal of RhB and MG by 3A zeolite are initial concentration of dye and pH, respectively. Under optimized condition, the amount predicted by Taguchi design method and the value obtained experimentally, showed good closeness (more than 94.86%). Good adsorption efficiency obtained for proposed methods indicates that, the 3A zeolite is capable to remove the significant amounts of Rhodamine B and Malachite green from environmental water samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Application of the Taguchi analytical method for optimization of effective parameters of the chemical vapor deposition process controlling the production of nanotubes/nanobeads.

    PubMed

    Sharon, Maheshwar; Apte, P R; Purandare, S C; Zacharia, Renju

    2005-02-01

    Seven variable parameters of the chemical vapor deposition system have been optimized with the help of the Taguchi analytical method to obtain a desired product, e.g., carbon nanotubes or carbon nanobeads. It is observed that almost all the selected parameters influence the growth of carbon nanotubes. Among them, however, the nature of the precursor (racemic, R, or technical grade camphor) and the carrier gas (hydrogen, argon, or an argon/hydrogen mixture) seem to be the more important parameters affecting the growth of carbon nanotubes. For the growth of nanobeads, by contrast, only two of the seven parameters, the catalyst (powder of iron, cobalt, or nickel) and the temperature (1023 K, 1123 K, or 1273 K), are the most influential. Systematic defects or islands on the substrate surface enhance nucleation of novel carbon materials. Quantitative contributions of the process parameters and the optimum factor levels are obtained by performing analysis of variance (ANOVA) and analysis of means (ANOM), respectively.

  4. Absolute variation of the mechanical characteristics of halloysite reinforced polyurethane nanocomposites complemented by Taguchi and ANOVA approaches

    NASA Astrophysics Data System (ADS)

    Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H.; Nassir, Mohamed H.; Al-Amiery, Ahmed A.

    The variation in the measured mechanical properties of halloysite nanotube (HNT) reinforced thermoplastic polyurethane (TPU) at different HNT loadings was used as a tool for analysis. The HNT-TPU nanocomposites were prepared under four controlled parameters (mixing temperature, mixing speed, mixing time, and HNT loading) at three levels each, satisfying a Taguchi L9 orthogonal array, with the aim of optimizing these parameters for the best measurements of tensile strength, Young's modulus, and tensile strain (known as responses). The maximum variation of the experimental results for each response was determined and analysed against the optimized results predicted by the Taguchi method and ANOVA. The maximum absolute variations of the three responses were found to be 69%, 352%, and 126%, respectively. The analysis showed that obtaining the optimized tensile strength requires 1 wt.% HNT loading (rather than 2 wt.% or 3 wt.%), a mixing temperature of 190 °C (rather than 200 °C or 210 °C), and a mixing speed of 30 rpm (rather than 40 rpm or 50 rpm); the mixing time of 20 min had no effect on the preparation. This analysis was supported by ANOVA, FESEM images, and DSC results. The agglomeration and distribution of the HNTs in the nanocomposite appear to play an important role in the process. The outcome of the analysis can be considered a very important step towards establishing the reliability of the Taguchi method.

  5. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2017-08-01

    Throat backup liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat backup liners are made from E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. In this method, four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacture of throat backup liners. The quality characteristic identified was the back wall temperature. Experiments were carried out using an L9 (3⁴) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed, and successfully used to achieve the minimum back wall temperature of the throat backup liners. The enhancement in the performance of the throat backup liners was observed by carrying out oxy-acetylene tests. The influence of the back wall temperature on the performance of the throat backup liners was verified by a ground firing test.
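    The smaller-the-better criterion used for the back wall temperature also has a standard form, S/N = −10·log₁₀(mean(yᵢ²)). A minimal sketch with illustrative temperatures (not the study's data):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi 'smaller is better' S/N: -10 * log10( mean(y_i^2) )."""
    n = len(values)
    return -10.0 * math.log10(sum(y * y for y in values) / n)

# Hypothetical back-wall temperatures (deg C) for two candidate settings:
# the cooler setting scores the higher (less negative) S/N
print(sn_smaller_is_better([120.0, 125.0]) > sn_smaller_is_better([150.0, 155.0]))  # True
```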

  6. Multi objective Taguchi optimization approach for resistance spot welding of cold rolled TWIP steel sheets

    NASA Astrophysics Data System (ADS)

    Tutar, Mumin; Aydin, Hakan; Bayram, Ali

    2017-08-01

    Formability and energy absorption capability of a steel sheet are highly desirable properties in manufacturing components for automotive applications. TWinning-Induced Plasticity (TWIP) steels, new-generation high-Mn alloyed steels, are attractive for the automotive industry due to their outstanding elongation (40-45%) and tensile strength (~1000 MPa), so TWIP steels provide excellent formability and energy absorption capability. Another property required of steel sheets is suitability for manufacturing methods such as welding; the use of steel sheets in automotive applications inevitably involves welding. Considering that there are 3000-5000 welded spots on a vehicle, resistance spot welding (RSW) can be regarded as one of the most important manufacturing methods for the automotive industry. In this study, TWIP steel sheets were first cold rolled to a 15% reduction in thickness. The cold rolled TWIP steel sheets were then welded with the RSW method. The welding parameters (welding current, welding time, and electrode force) were optimized for maximizing the peak tensile shear load and minimizing the indentation of the joints using a Taguchi L9 orthogonal array. The effect of the welding parameters was also evaluated by examining the signal-to-noise ratios and analysis of variance (ANOVA) results.

  7. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same type of LiPB with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation, and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  8. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2017-06-01

    High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage, and slurry from one place to another, and hence undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstand pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine the optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design-of-experiments (DoE) runs are conducted in which different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise (S/N) ratio is applied, and the optimum values of the process control parameters are ultimately obtained as a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and the results proved to be in agreement with the main experimental findings; the withstand pressure showed a significant improvement from 0.60 to 1.004 MPa.
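    The step of reading off optimum parameter values is a level-average (main effects) analysis: average the response at each level of each factor and keep the best level per factor. A minimal sketch on a hypothetical two-level L4 design (the extrusion study's actual factors and levels are not reproduced here):

```python
def level_averages(design, response):
    """Level-average (main-effects) analysis: mean response at each level
    of each factor; the optimum setting takes, for every factor, the
    level with the best average."""
    table = []
    for col in range(len(design[0])):
        means = {}
        for level in sorted(set(row[col] for row in design)):
            ys = [response[i] for i, row in enumerate(design) if row[col] == level]
            means[level] = sum(ys) / len(ys)
        table.append(means)
    return table

# Illustrative L4(2^3) run: 3 two-level factors, response = withstand pressure (MPa)
design = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
pressure = [0.60, 0.72, 0.95, 1.00]
table = level_averages(design, pressure)
optimum = [max(m, key=m.get) for m in table]  # larger pressure is better
print(optimum)  # [1, 1, 1]
```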

  9. [Development of an optimized formulation of damask marmalade with low energy level using Taguchi methodology].

    PubMed

    Villarroel, Mario; Castro, Ruth; Junod, Julio

    2003-06-01

    The goal of the present study was the development of an optimized formula for a damask marmalade low in calories, applying the Taguchi methodology to improve the quality of this product. The selection of this methodology rests on the fact that, in real-life conditions, the result of an experiment frequently depends on the influence of several variables; one expedient way to address this is to use factorial designs. The influence of acid, thickener, sweetener, and aroma additives, as well as the time of cooking, and possible interactions among some of them, were studied to find the combination of these factors that optimizes the sensory quality of an experimental formulation of dietetic damask marmalade. An orthogonal array L8 (2⁷) was applied in this experiment, and level-average analysis was carried out according to the Taguchi methodology to determine suitable working levels of the previously chosen design factors to achieve the desired product quality. A trained sensory panel analyzed the marmalade samples using a composite scoring test with a descriptive quantitative scale ranging from 1 = bad to 5 = good. It was demonstrated that the design factors sugar/aspartame, pectin, and damask aroma had a significant effect (p < 0.05) on the sensory quality of the marmalade, with an 82% contribution to the response. The optimal combination turned out to be: citric acid 0.2%; pectin 1%; 30 g sugar/16 mg aspartame per 100 g; damask aroma 0.5 ml/100 g; time of cooking 5 minutes. Regarding chemical composition, the most important results were the decrease in carbohydrate content compared with traditional marmalade, a 56% reduction in caloric value, and a dietary fiber content greater than that of similar commercial products. Storage stability assays were carried out on marmalade samples held at different temperatures in plastic bags of different densities; no perceptible sensory, microbiological, or chemical changes were observed.

  10. The Taguchi methodology as a statistical tool for biotechnological applications: a critical appraisal.

    PubMed

    Rao, Ravella Sreenivas; Kumar, C Ganesh; Prakasham, R Shetty; Hobbs, Phil J

    2008-04-01

    Success in experiments and/or technology mainly depends on a properly designed process or product. The traditional method of process optimization involves the study of one variable at a time, which requires a number of combinations of experiments that are time, cost and labor intensive. The Taguchi method of design of experiments is a simple statistical tool involving a system of tabulated designs (arrays) that allows a maximum number of main effects to be estimated in an unbiased (orthogonal) fashion with a minimum number of experimental runs. It has been applied to predict the significant contribution of the design variable(s) and the optimum combination of each variable by conducting experiments on a real-time basis. The modeling that is performed essentially relates signal-to-noise ratio to the control variables in a 'main effect only' approach. This approach enables both multiple response and dynamic problems to be studied by handling noise factors. Taguchi principles and concepts have made extensive contributions to industry by bringing focused awareness to robustness, noise and quality. This methodology has been widely applied in many industrial sectors; however, its application in biological sciences has been limited. In the present review, the application and comparison of the Taguchi methodology has been emphasized with specific case studies in the field of biotechnology, particularly in diverse areas like fermentation, food processing, molecular biology, wastewater treatment and bioremediation.
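    The "tabulated designs (arrays)" the review refers to can also be generated programmatically. A sketch constructing a two-level L8(2⁷) array from parity functions; this yields a valid orthogonal array, though the canonical Taguchi table may list its columns in a different order:

```python
def l8_array():
    """Build an L8(2^7) orthogonal array: row r encodes three base bits,
    and each of the 7 columns is the parity of r masked by a nonzero
    3-bit pattern, giving pairwise-balanced two-level columns."""
    return [[bin(r & m).count("1") % 2 for m in range(1, 8)] for r in range(8)]

oa = l8_array()
# Orthogonality: any two columns show each of the 4 level pairs exactly twice,
# which is what lets all 7 main effects be estimated without bias
for a in range(7):
    for b in range(a + 1, 7):
        pairs = [(row[a], row[b]) for row in oa]
        assert all(pairs.count(p) == 2 for p in [(0, 0), (0, 1), (1, 0), (1, 1)])
print(len(oa), len(oa[0]))  # 8 7
```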

  11. Evaluation of Listeria monocytogenes survival in ice cream mixes flavored with herbal tea using Taguchi method.

    PubMed

    Ozturk, Ismet; Golec, Adem; Karaman, Safa; Sagdic, Osman; Kayacier, Ahmed

    2010-10-01

    In this study, the effects of incorporating some herbal teas at different concentrations into ice cream mix on the population of Listeria monocytogenes were studied using the Taguchi method. Ice cream mix samples flavored with herbal teas were prepared using green tea and sage at different concentrations. Afterward, a fresh culture of L. monocytogenes was inoculated into the samples, and L. monocytogenes was counted at different storage periods. The Taguchi method was used for experimental design and analysis. In addition, some physicochemical properties of the samples were examined. Results suggested that incorporating herbal tea into the ice cream mix had a small effect on the population of L. monocytogenes. Additionally, the use of herbal tea caused a decrease in the pH values of the samples and significant changes in the color values.

  12. Study of Dimple Effect on the Friction Characteristics of a Journal Bearing using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Murthy, A. Amar; Raghunandana, Dr.

    2018-02-01

    The effect of dimples, produced by chemical etching or by machining on the bushing surface of a journal bearing, on friction reduction is investigated using the Taguchi method. The data used in the present analysis are based on results obtained from a series of experiments conducted to study the effect of dimples on the Stribeck curve. It is statistically shown that producing dimples on the bushing surface of a journal bearing has a significant effect on the friction coefficient when used with light oils. Interaction effects between speed-load and load-dimples are also observed. Hence the interaction effects, which are usually neglected, should be considered during actual experiments, as they contribute significantly to reducing friction in the mixed lubrication regime. Had the experiments been designed with the Taguchi method, the number of runs would have been reduced to half of the set actually conducted.

  13. Assessing the transferability of a hybrid Taguchi-objective function method to optimize image segmentation for detecting and counting cave roosting birds using terrestrial laser scanning data

    NASA Astrophysics Data System (ADS)

    Idrees, Mohammed Oludare; Pradhan, Biswajeet; Buchroithner, Manfred F.; Shafri, Helmi Zulhaidi Mohd; Khairunniza Bejo, Siti

    2016-07-01

    As far back as the early 15th century, during the reign of the Ming Dynasty (1368 to 1644 AD), Gomantong cave in Sabah (Malaysia) has been known as one of the largest roosting sites for very large colonies of wrinkle-lipped bats (Chaerephon plicata) and swiftlet birds (Aerodramus maximus and Aerodramus fuciphagus). Until recently, no study had been done to quantify or estimate the colony sizes of these inhabitants, in spite of the grave danger posed to this avifauna by human activities and potential habitat loss to postspeleogenetic processes. This paper evaluates the transferability of a hybrid optimization image analysis-based method developed to detect and count cave roosting birds. The method utilizes high-resolution terrestrial laser scanning intensity images. First, segmentation parameters were optimized by integrating the objective function and statistical Taguchi methods. Thereafter, the optimized parameters were used as input to the segmentation and classification processes using two images selected from Simud Hitam (lower cave) and Simud Putih (upper cave) of the Gomantong cave. The result shows that the method is capable of detecting birds (and bats) from the image for accurate population censusing. A total of 9998 swiftlet birds were counted from the first image, while 1132 individuals comprising both bats and birds were obtained from the second image. Furthermore, the transferability evaluation yielded overall accuracies of 0.93 and 0.94 (area under the receiver operating characteristic curve) for the first and second images, respectively, with a p value of <0.0001 at the 95% confidence level. The findings indicate that the method is not only efficient for detecting and counting the cave birds for which it was developed but is also useful for counting bats; thus, it can be adopted in any cave.

  14. Comparative Assessment of Cutting Inserts and Optimization during Hard Turning: Taguchi-Based Grey Relational Analysis

    NASA Astrophysics Data System (ADS)

    Venkata Subbaiah, K.; Raju, Ch.; Suresh, Ch.

    2017-08-01

    The present study aims to compare conventional cutting inserts with wiper cutting inserts during the hard turning of AISI 4340 steel at different workpiece hardness values. Type of insert, hardness, cutting speed, feed, and depth of cut are taken as process parameters. Taguchi's L18 orthogonal array was used to conduct the experimental tests. Parametric analysis was carried out in order to determine the influence of each process parameter on three important surface roughness characteristics (Ra, Rz, and Rt) and on the material removal rate. Taguchi-based Grey Relational Analysis (GRA) was used to optimize the process parameters for individual-response and multi-response outputs. Additionally, analysis of variance (ANOVA) was applied to identify the most significant factor.
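    Grey relational analysis collapses several responses into a single grade: each response is normalized toward its ideal value, converted to a grey relational coefficient, and averaged. A minimal sketch with hypothetical roughness (Ra, smaller-the-better) and material removal rate (MRR, larger-the-better) data for four runs:

```python
import numpy as np

# Hypothetical data: 4 runs, 2 responses (Ra: smaller-better, MRR: larger-better).
ra = np.array([0.8, 1.2, 0.6, 1.0])
mrr = np.array([120.0, 150.0, 100.0, 180.0])

def normalize(x, larger_is_better):
    # Map each response onto [0, 1], where 1 is the ideal value.
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

norm = np.column_stack([normalize(ra, False), normalize(mrr, True)])

# Grey relational coefficient with distinguishing coefficient zeta = 0.5.
delta = 1.0 - norm                      # deviation from the ideal sequence
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade: equal-weight mean over responses; highest grade wins.
grade = grc.mean(axis=1)
print("best run:", int(np.argmax(grade)) + 1)
```

With these invented numbers, run 4 ranks best because its large MRR outweighs its middling roughness under equal weights; in practice the weights are a design choice.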

  15. Application of Taguchi method to optimization of surface roughness during precise turning of NiTi shape memory alloy

    NASA Astrophysics Data System (ADS)

    Kowalczyk, M.

    2017-08-01

    This paper presents surface quality results after precise turning of NiTi shape memory alloy (Nitinol) with tools whose edges are made of polycrystalline diamond (PCD). Nitinol, a nearly equiatomic nickel-titanium shape memory alloy, has wide applications in the arms industry, the military, medicine, the aerospace industry, and industrial robots. Due to their specific properties, NiTi alloys are known to be difficult-to-machine materials, particularly by conventional techniques. Research trials were conducted for three independent parameters (vc, f, ap), and their effect on surface roughness was analyzed. The parameter configurations were chosen by factorial design methods using an orthogonal plan of type L9, with three control factors varied over three levels, developed by G. Taguchi. S/N ratio and ANOVA analyses were performed to identify the cutting parameters that most influence surface roughness.
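    The S/N analysis over an L9 plan can be sketched as follows. The roughness values below are hypothetical; for a smaller-the-better characteristic such as Ra, the S/N ratio is -10·log10(mean(y²)), and the recommended level of each factor is the one with the highest mean S/N:

```python
import numpy as np

# L9(3^3) plan: first three columns of the standard L9 array, levels 0/1/2
# for the three cutting parameters (vc, f, ap).
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

# Hypothetical Ra measurements (um) for the nine runs, one replicate each.
ra = np.array([0.42, 0.55, 0.71, 0.38, 0.49, 0.60, 0.35, 0.47, 0.52])

# Smaller-the-better S/N ratio per run: -10*log10(y^2).
sn = -10 * np.log10(ra ** 2)

# Optimum level for each factor = the level with the highest mean S/N.
for name, col in zip(("vc", "f", "ap"), range(3)):
    level_sn = [sn[L9[:, col] == lvl].mean() for lvl in range(3)]
    print(name, "best level:", int(np.argmax(level_sn)) + 1)
```

Each column of the L9 is balanced (each level occurs three times), so the per-level S/N means are direct main-effect estimates; an ANOVA on the same S/N values would then apportion the contribution of each factor.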

  16. Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak

    2018-03-01

    The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resulting surface roughness and kerf width characteristics. The research is based on Design of Experiment (DOE) analysis. Response Surface Methodology (RSM), one of the most practical and effective techniques for developing a process model, is used in this work. Although RSM has been used before for optimizing laser processes, this research investigates laser cutting of materials such as composite wood (veneer) to determine the best laser cutting conditions using RSM. The input parameters evaluated are focal length, power supply and cutting speed; the output responses are kerf width, surface roughness, and temperature. To efficiently optimize and customize the kerf width and surface roughness characteristics, a machine laser cutting process model using a Taguchi L9 orthogonal methodology was proposed.

  17. Application of Taguchi optimisation of electro metal - electro winning (EMEW) for nickel metal from laterite

    NASA Astrophysics Data System (ADS)

    Sudibyo, Hermida, L.; Junaedi, A.; Putra, F. A.

    2017-11-01

    Nickel and cobalt metal can be recovered from low-grade laterite using solvent extraction and electrowinning. One electrowinning method with good performance in producing pure metal is electro metal-electrowinning (EMEW). In this work, solvent extraction with Cyanex-Versatic Acid in toluene as the organic phase was used to separate nickel and cobalt. The aqueous phase from the extraction was processed using EMEW in order to deposit nickel metal on the cathode. The parameters used in this work were bath temperature, operation time, voltage, and boric acid concentration. These parameters were studied and optimized using the Taguchi design of experiments. The Taguchi analysis shows that the optimum EMEW result was obtained at a bath temperature of 60 °C, 2 V, 6 hours of operation, and 0.5 M boric acid.

  18. Design of Maternity Pillow by Using Kansei and Taguchi Methods

    NASA Astrophysics Data System (ADS)

    Ilma Rahmillah, Fety; Nanda kartika, Rachmah

    2017-06-01

    One of the considerations customers make when purchasing a product is whether it satisfies their feelings and emotions; such a product can, for instance, enhance the sleep quality of pregnant women. However, most existing products such as maternity pillows are still designed from the company's perspective. This study aims to capture the desires of pregnant women toward a maternity pillow using kansei words and to analyze the optimal design with the Taguchi method. The eight collected kansei words were durable, aesthetic, comfortable, portable, simple, multifunctional, attractive motif, and easy to maintain. An L16 orthogonal array was used because there are three variables with two levels and four variables with four levels. It can be concluded that the maternity pillow that best satisfies customers is designed by combining D1-E2-F2-G2-C1-B2-A2, meaning a U-shaped model, flowery motif, medium color, bag model B, cotton pillow cover, silicone filling, and a double zipper. However, it is also possible to use the combination D1-E2-F2-G2-C1-B1-A1 when cost is considered, which switches to a single zipper and a dacron filling. In addition, the total percentage of contribution by ANOVA reaches 95%.

  19. Parametric Optimization Of Gas Metal Arc Welding Process By Using Grey Based Taguchi Method On Aisi 409 Ferritic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Ghosh, Nabendu; Kumar, Pradip; Nandi, Goutam

    2016-10-01

    Welding input process parameters play a very significant role in determining the quality of the welded joint; only by properly controlling every element of the process can product quality be controlled. Better quality in MIG welding of ferritic stainless steel AISI 409 requires precise control of the process parameters, parametric optimization, and prediction and control of the desired responses (quality indices), which in turn call for continued and elaborate experiments, analysis and modeling. A knowledge base may thus be generated which practicing engineers and technicians can utilize to produce good-quality welds more precisely, reliably and predictably. In the present work, X-ray radiographic tests were conducted in order to detect surface and sub-surface defects of weld specimens made of ferritic stainless steel. The quality of the weld was evaluated in terms of yield strength, ultimate tensile strength and percentage elongation of the welded specimens. The observed data were interpreted, discussed and analyzed by considering ultimate tensile strength, yield strength and percentage elongation combined, using the Grey-Taguchi methodology.

  20. Near Field and Far Field Effects in the Taguchi-Optimized Design of AN InP/GaAs-BASED Double Wafer-Fused Mqw Long-Wavelength Vertical-Cavity Surface-Emitting Laser

    NASA Astrophysics Data System (ADS)

    Menon, P. S.; Kandiah, K.; Mandeep, J. S.; Shaari, S.; Apte, P. R.

    Long-wavelength VCSELs (LW-VCSEL) operating in the 1.55 μm wavelength regime offer the advantages of low dispersion and optical loss in fiber optic transmission systems which are crucial in increasing data transmission speed and reducing implementation cost of fiber-to-the-home (FTTH) access networks. LW-VCSELs are attractive light sources because they offer unique features such as low power consumption, narrow beam divergence and ease of fabrication for two-dimensional arrays. This paper compares the near field and far field effects of the numerically investigated LW-VCSEL for various design parameters of the device. The optical intensity profile far from the device surface, in the Fraunhofer region, is important for the optical coupling of the laser with other optical components. The near field pattern is obtained from the structure output whereas the far-field pattern is essentially a two-dimensional fast Fourier Transform (FFT) of the near-field pattern. Design parameters such as the number of wells in the multi-quantum-well (MQW) region, the thickness of the MQW and the effect of using Taguchi's orthogonal array method to optimize the device design parameters on the near/far field patterns are evaluated in this paper. We have successfully increased the peak lasing power from an initial 4.84 mW to 12.38 mW at a bias voltage of 2 V and optical wavelength of 1.55 μm using Taguchi's orthogonal array. As a result of the Taguchi optimization and fine tuning, the device threshold current is found to increase along with a slight decrease in the modulation speed due to increased device widths.
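    The far-field computation described above, essentially a two-dimensional FFT of the near-field pattern, can be sketched as follows. The Gaussian near-field profile, grid size and mode radius are all hypothetical stand-ins for the simulated LW-VCSEL output:

```python
import numpy as np

# Hypothetical Gaussian near-field amplitude sampled on a 128x128 grid.
n = 128
x = np.linspace(-10e-6, 10e-6, n)            # 20 um observation window
X, Y = np.meshgrid(x, x)
w0 = 2e-6                                     # assumed 2 um mode radius
near = np.exp(-(X ** 2 + Y ** 2) / w0 ** 2)   # near-field amplitude

# Far field (Fraunhofer region) ~ 2-D Fourier transform of the near field;
# fftshift moves the zero spatial frequency to the center of the array.
far = np.fft.fftshift(np.fft.fft2(near))
far_intensity = np.abs(far) ** 2
far_intensity /= far_intensity.max()          # normalize peak intensity to 1
```

The spatial-frequency axes of `far_intensity` map to far-field angles, so a narrower near field (smaller w0) spreads the far-field pattern, which is the divergence trade-off the design parameters control.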

  1. Comparative study of coated and uncoated tool inserts with dry machining of EN47 steel using Taguchi L9 optimization technique

    NASA Astrophysics Data System (ADS)

    Vasu, M.; Shivananda, Nayaka H.

    2018-04-01

    EN47 steel samples were machined on a self-centered lathe using chemical vapor deposition coated TiCN/Al2O3/TiN and uncoated tungsten carbide tool inserts with a nose radius of 0.8 mm. Results were compared with each other and optimized using statistical tools. The input (cutting) parameters considered in this work are feed rate (f), cutting speed (Vc), and depth of cut (ap); the optimization criteria are based on the Taguchi L9 orthogonal array. The ANOVA method was adopted to evaluate the statistical significance and the percentage contribution of each model. Multiple response characteristics, namely cutting force (Fz), tool tip temperature (T) and surface roughness (Ra), were evaluated. The results revealed that the coated tool insert (TiCN/Al2O3/TiN) performed 1.27 and 1.29 times better than the uncoated tool insert for tool tip temperature and surface roughness, respectively. A slight increase in cutting force was observed for coated tools.

  2. Optimization of laccase production by Pleurotus ostreatus IMI 395545 using the Taguchi DOE methodology.

    PubMed

    Periasamy, Rathinasamy; Palvannan, Thayumanavan

    2010-12-01

    Production of laccase using a submerged culture of Pleurotus ostreatus IMI 395545 was optimized by the Taguchi orthogonal array (OA) design of experiments (DOE) methodology. This approach facilitates the study of the interactions of a large number of variables spanned by factors and their settings, with a small number of experiments, leading to considerable savings in time and cost for process optimization. The methodology identifies the most impactful factors and enables calculation of their interactions in the production of industrial enzymes. Eight factors, viz. glucose, yeast extract, malt extract, inoculum, mineral solution, inducer (1 mM CuSO₄) and amino acid (l-asparagine) at three levels and pH at two levels, with an OA layout of L18 (2¹ × 3⁷), were selected for the proposed experimental design. The laccase yield obtained from the 18 sets of fermentation experiments performed with the selected factors and levels was further processed with Qualitek-4 software. The optimized conditions showed an enhanced laccase expression of 86.8% (from 485.0 to 906.3 U). The combination of factors was further validated for laccase production and reactive blue 221 decolorization. The results revealed an enhanced laccase yield of 32.6% and dye decolorization up to 84.6%. This methodology allows the complete evaluation of main and interaction factors. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

  3. Use of Taguchi methodology to enhance the yield of caffeine removal with growing cultures of Pseudomonas pseudoalcaligenes.

    PubMed

    Ashengroph, Morahem; Ababaf, Sajad

    2014-12-01

    Microbial caffeine removal is a green solution for the treatment of caffeinated products and agro-industrial effluents. This investigation was directed at optimizing a bio-decaffeination process with growing cultures of Pseudomonas pseudoalcaligenes through the Taguchi methodology, a structured statistical approach that can lower variation in a process through Design of Experiments (DOE). Five parameters, i.e. initial fructose, tryptone, Zn(+2) ion and caffeine concentrations, and incubation time, were selected, and an L16 orthogonal array was applied to design experiments with four 4-level factors and one 3-level factor (4(4) × 1(3)). Data analysis was performed using the statistical analysis of variance (ANOVA) method. Furthermore, the optimal conditions were determined by combining the optimal levels of the significant factors and were verified by a confirming experiment. Residual caffeine concentration in the reaction mixture was measured using high-performance liquid chromatography (HPLC). Use of the Taguchi methodology for optimization of design parameters resulted in about 86.14% reduction of caffeine in 48 h of incubation when 5 g/l fructose, 3 mM Zn(+2) ion and 4.5 g/l caffeine were present in the designed medium. Under the optimized conditions, the yield of degradation of caffeine (4.5 g/l) by the native strain of Pseudomonas pseudoalcaligenes TPS8 increased from 15.8% to 86.14%, which is 5.4-fold higher than the normal yield. According to the experimental results, the Taguchi methodology provides a powerful tool for identifying the parameters favorable to caffeine removal using strain TPS8, which suggests that the approach also has potential application with similar strains to improve the yield of caffeine removal from caffeine-containing solutions.

  4. Thermal design, rating and second law analysis of shell and tube condensers based on Taguchi optimization for waste heat recovery based thermal desalination plants

    NASA Astrophysics Data System (ADS)

    Chandrakanth, Balaji; Venkatesan, G; Prakash Kumar, L. S. S; Jalihal, Purnima; Iniyan, S

    2018-03-01

    The present work discusses the design and selection of a shell and tube condenser used in Low Temperature Thermal Desalination (LTTD). To optimize the key geometrical and process parameters of the condenser, with multiple parameters and levels, a design-of-experiments approach using the Taguchi method was chosen. An orthogonal array (OA) of 25 designs was selected for this study. The condenser was designed and analysed using HTRI software, and the heat transfer area and the corresponding tube-side pressure drop were computed with the same tool, as these two objective functions determine the capital and running cost of the condenser. There was a complex trade-off between heat transfer area and pressure drop in the analysis; hence a second law analysis was worked out to determine the optimal heat transfer area versus pressure drop for condensing the required heat load.

  5. Multiresponse Optimization of Process Parameters in Turning of GFRP Using TOPSIS Method

    PubMed Central

    Parida, Arun Kumar; Routara, Bharat Chandra

    2014-01-01

    Taguchi's design of experiments is utilized to optimize the process parameters in a turning operation in a dry environment. Three parameters, cutting speed (v), feed (f), and depth of cut (d), each with three levels, are taken for the responses material removal rate (MRR) and surface roughness (Ra). The machining is conducted with a Taguchi L9 orthogonal array, and based on the S/N analysis, the optimal process parameters for surface roughness and MRR are calculated separately. Considering the larger-the-better approach, the optimal process parameters for material removal rate are cutting speed at level 3, feed at level 2, and depth of cut at level 3, that is, v3-f2-d3. Similarly for surface roughness, considering the smaller-the-better approach, the optimal process parameters are cutting speed at level 1, feed at level 1, and depth of cut at level 3, that is, v1-f1-d3. The main effects plot indicates that depth of cut is the most influential parameter for MRR, cutting speed is the most influential parameter for surface roughness, and feed is the least influential parameter for both responses. Confirmation tests are conducted for MRR and surface roughness separately. Finally, an attempt has been made to optimize the multiple responses using the technique for order preference by similarity to ideal solution (TOPSIS) with the Taguchi approach. PMID:27437503
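    The TOPSIS step used here for the multi-response trade-off can be sketched as follows: vector-normalize and weight the decision matrix, locate the ideal and anti-ideal solutions, and rank runs by their closeness coefficient. The decision matrix and equal weights below are hypothetical:

```python
import numpy as np

# Hypothetical decision matrix: 4 runs x 2 criteria (MRR: benefit, Ra: cost).
M = np.array([[120.0, 0.8],
              [150.0, 1.2],
              [100.0, 0.6],
              [180.0, 1.0]])
benefit = np.array([True, False])   # which criteria are larger-the-better
w = np.array([0.5, 0.5])            # assumed equal criteria weights

# 1. Vector-normalize each criterion column, then apply the weights.
V = w * M / np.linalg.norm(M, axis=0)

# 2. Ideal and anti-ideal solutions per criterion.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Closeness coefficient: distance to the anti-ideal over total distance.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("ranking (best first):", np.argsort(-closeness) + 1)
```

A closeness coefficient near 1 means a run is close to the ideal on all weighted criteria at once, which is why TOPSIS suits the MRR-versus-roughness compromise in this study.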

  6. Weight optimization of an aerobrake structural concept for a lunar transfer vehicle

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.; Unal, Resit; Rowell, Lawrence F.; Rehder, John J.

    1992-01-01

    An aerobrake structural concept for a lunar transfer vehicle was weight optimized through the use of the Taguchi design method, finite element analyses, and element sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter-depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The aerobrake structural configuration with the minimum weight was 44 percent less than the average weight of all the remaining satisfactory experimental configurations. In addition, the results of this study have served to bolster the advocacy of the Taguchi method for aerospace vehicle design. Both reduced analysis time and an optimized design demonstrated the applicability of the Taguchi method to aerospace vehicle design.

  7. Evaluation of B. subtilis SPB1 biosurfactants' potency for diesel-contaminated soil washing: optimization of oil desorption using Taguchi design.

    PubMed

    Mnif, Inès; Sahnoun, Rihab; Ellouze-Chaabouni, Semia; Ghribi, Dhouha

    2014-01-01

    Low solubility of certain hydrophobic soil contaminants limits the remediation process. Surface-active compounds can improve the solubility and removal of hydrophobic compounds from contaminated soils and, consequently, their biodegradation. Hence, this paper studies the efficiency of the SPB1 lipopeptide biosurfactant in desorbing oil from soil. The effect of different physicochemical parameters on desorption potency was assessed. The Taguchi experimental design method was applied in order to enhance the desorption capacity and establish the best washing parameters. Mobilization potency was compared to that of chemical surfactants under the newly defined conditions. Better desorption capacity was obtained using a 0.1% biosurfactant solution, and the mobilization potency shows great tolerance to acidic and alkaline pH values and to salinity. Results show an optimum value of oil removal from diesel-contaminated soil of about 87%. The optimum washing conditions for surfactant solution volume, biosurfactant concentration, agitation speed, temperature, and time were found to be 12 ml/g of soil, 0.1% biosurfactant, 200 rpm, 30 °C, and 24 h, respectively. The obtained results were compared to those of SDS and Tween 80 at the optimal conditions described above, and the study reveals an effectiveness of the SPB1 biosurfactant comparable to the reported chemical emulsifiers. (1) The findings suggest (a) the competence of the Bacillus subtilis biosurfactant in promoting diesel desorption from soil relative to chemical surfactants and (b) the applicability of this method for decontaminating crude oil-contaminated soil and, therefore, improving the bioavailability of hydrophobic compounds. (2) The findings also suggest the adequacy of the Taguchi design in promoting process efficiency. Our findings suggest that a preoptimized desorption process using a microbial-derived emulsifier can contribute significantly to the enhancement of hydrophobic pollutants' bioavailability. This study can be

  8. Wear behavior of electroless Ni-P-W coating under lubricated condition - a Taguchi based approach

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Arkadeb; Duari, Santanu; Barman, Tapan Kumar; Sahoo, Prasanta

    2016-09-01

    The present study aims to investigate the tribological behavior of electroless Ni-P-W coating under engine oil lubricated conditions to ascertain its suitability for automotive applications. The coating is deposited onto mild steel specimens by the electroless method. The experiments are carried out on a pin-on-disc type tribo-tester under lubrication. Three tribo-testing parameters, namely the applied normal load, sliding speed and sliding duration, are varied at three levels each, and their effects on the wear depth of the deposits are studied. The experiments are based on the combinations in Taguchi's L27 orthogonal array (OA). Optimization of the tribo-testing parameters is carried out using Taguchi's S/N ratio method to minimize the wear depth. Analysis of variance at a confidence level of 99% indicates that sliding speed is the most significant parameter in controlling the wear behavior of the deposits. Coating characterization is done using scanning electron microscopy, energy-dispersive X-ray analysis and X-ray diffraction techniques. The wear mechanism under lubricated conditions is seen to be abrasive in nature.

  9. Optimization of temperature and time for drying and carbonization to increase calorific value of coconut shell using Taguchi method

    NASA Astrophysics Data System (ADS)

    Musabbikhah, Saptoadi, H.; Subarmono, Wibisono, M. A.

    2016-03-01

    Fossil fuel has dominated energy needs in Indonesia for the past few years. The increasing scarcity of oil and gas from non-renewable sources results in an energy crisis, a serious problem for society which demands an immediate solution. One effort to overcome this problem is the utilization and processing of biomass as renewable energy by means of carbonization, so that it can be used as a qualified raw material for briquette production. In this research, coconut shell is used as the carbonized waste. The research aims at improving the quality of coconut shell as a material for making briquettes as a cheap and eco-friendly renewable energy, and ultimately at decreasing dependence on oil and gas. The research variables are drying temperature and time, and carbonization temperature and time. The dependent variable is the calorific value of the coconut shell. The method used in this research is the Taguchi method. The results show that these variables have a significant contribution to the increase of the coconut shell's calorific value: the higher these variables, the higher the calorific value. Before carbonization, the average calorific value of the coconut shell reaches 4,667 cal/g, and a significant increase is notable after carbonization. The optimum parameter setting is A2B3C3D3, which means a drying temperature of 105 °C, a drying time of 24 hours, a carbonization temperature of 650 °C and a carbonization time of 120 minutes, giving an average calorific value of approximately 7,744 cal/g. The increase of the coconut shell's calorific value after carbonization is therefore 3,077 cal/g, or approximately 60%. The charcoal of carbonized coconut shell meets the SNI requirement, so it can be used as a raw material for briquettes, which can eventually serve as a cheap and environmentally friendly fuel.

  10. Investigating the effects of PDC cutters geometry on ROP using the Taguchi technique

    NASA Astrophysics Data System (ADS)

    Jamaludin, A. A.; Mehat, N. M.; Kamaruddin, S.

    2017-10-01

    At times, the performance of a polycrystalline diamond compact (PDC) bit drops and affects the rate of penetration (ROP). The objective of this project is to investigate the effect of PDC cutter geometry and to optimize it; an intensive study of cutter geometry can further enhance ROP performance. A relatively extended analysis was carried out, and four significant geometry factors that directly improve ROP were identified: cutter size, back rake angle, side rake angle and chamfer angle. An appropriate optimization technique that effectively controls all influential geometry factors during cutter manufacturing is introduced and adopted in this project. By adopting an L9 Taguchi OA, a simulation experiment is conducted using explicit dynamics finite element analysis. Through a structured Taguchi analysis, ANOVA confirms that the most significant geometry factor for improving ROP is cutter size (99.16% percentage contribution). The optimized cutter is expected to drill with a high ROP that can reduce rig time, which in turn may reduce the total drilling cost.

  11. Laccase production by Coriolopsis caperata RCK2011: Optimization under solid state fermentation by Taguchi DOE methodology

    PubMed Central

    Nandal, Preeti; Ravella, Sreenivas Rao; Kuhad, Ramesh Chander

    2013-01-01

    Laccase production by Coriolopsis caperata RCK2011 under solid state fermentation was optimized following the Taguchi design of experiments. An orthogonal array layout of L18 (2¹ × 3⁷) was constructed using Qualitek-4 software with the eight factors most influential on laccase production. At the individual level, pH contributed the highest influence, whereas corn steep liquor (CSL) accounted for more than 50% of the severity index with biotin and KH2PO4 at the interactive level. The optimum conditions derived were: temperature 30°C, pH 5.0, wheat bran 5.0 g, inoculum size 0.5 ml (fungal cell mass = 0.015 g dry wt.), biotin 0.5% w/v, KH2PO4 0.013% w/v, CSL 0.1% v/v and 0.5 mM xylidine as an inducer. Validation experiments using the optimized conditions confirmed an improvement in enzyme production of 58.01%. The laccase production level of 1623.55 Ugds−1 indicates that the fungus C. caperata RCK2011 has commercial potential for laccase. PMID:23463372

  12. Application of Taguchi Design and Response Surface Methodology for Improving Conversion of Isoeugenol into Vanillin by Resting Cells of Psychrobacter sp. CSW4.

    PubMed

    Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir

    2013-01-01

    For all industrial processes, modelling, optimisation and control are the keys to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and a Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed for screening the significant variables in the bioconversion medium. Sequentially, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (initial isoeugenol, NaCl, biomass and Tween 80 concentrations), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. With the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial Tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized factor values were validated in a triplicate shake-flask study, and an average of 2.19 g/L vanillin, corresponding to a molar yield of 36.3% after a 24 h bioconversion, was obtained. The present work is the first to report the application of Taguchi design and response surface methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions.

  13. An integrated Taguchi and response surface methodological approach for the optimization of an HPLC method to determine glimepiride in a supersaturatable self-nanoemulsifying formulation.

    PubMed

    Dash, Rajendra Narayan; Mohammed, Habibuddin; Humaira, Touseef

    2016-01-01

    We studied the application of Taguchi orthogonal array (TOA) design during the development of an isocratic stability-indicating HPLC method for glimepiride. As per the TOA design, twenty-seven experiments were conducted by varying six chromatographic factors. The percentage of organic phase had the most significant effect (p < 0.001) on retention time, while buffer pH had the most significant effect (p < 0.001) on tailing factor and theoretical plates. The TOA design has a shortcoming: it identifies only linear effects, ignoring quadratic and interaction effects. Hence, a response surface model for each response was created including the linear, quadratic and interaction terms. The developed models for each response were found to be well predictive, bearing acceptable adjusted correlation coefficients (0.9152 for retention time, 0.8985 for tailing factor and 0.8679 for theoretical plates). The models were found to be significant (p < 0.001), having a high F value for each response (15.76 for retention time, 13.12 for tailing factor and 9.99 for theoretical plates). The optimal chromatographic condition uses acetonitrile - potassium dihydrogen phosphate (pH 4.0; 30 mM) (50:50, v/v) as the mobile phase. The temperature, flow rate and injection volume were selected as 35 ± 2 °C, 1.0 mL min(-1) and 20 μL, respectively. The method was validated as per ICH guidelines and was found to be specific for analyzing glimepiride from a novel supersaturatable self-nanoemulsifying formulation.
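
    Response surface models of the kind described in this record (linear, quadratic and interaction terms fitted by regression) can be sketched generically. The code below is illustrative only and not taken from the paper: it fits a full quadratic model for two coded factors by ordinary least squares, using a made-up noise-free response so the generating coefficients are recovered exactly.

```python
import numpy as np

# Coded 3^2 factorial settings for two hypothetical factors (illustrative).
X = np.array([[x1, x2] for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)], dtype=float)

# Made-up noise-free response generated from a known quadratic surface.
y = (5.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1]
     + 0.8 * X[:, 0] * X[:, 1] - 1.2 * X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2)

def quadratic_design_matrix(X):
    """Columns: intercept, x1, x2, x1*x2 (interaction), x1^2, x2^2 (quadratic)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Ordinary least squares fit of the full second-order model.
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(np.round(beta, 3))  # recovers [5.0, 1.5, -2.0, 0.8, -1.2, 0.5]
```

    In a real study the fitted coefficients (and their p-values and adjusted R²) would come from noisy experimental responses rather than a known surface.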

  14. Vertically aligned N-doped CNTs growth using Taguchi experimental design

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo M.; Fernandes, António J. S.; Ferro, Marta C.; Pinna, Nicola; Silva, Rui F.

    2015-07-01

    The Taguchi method with a parameter design L9 orthogonal array was implemented for optimizing the nitrogen incorporation in the structure of vertically aligned N-doped CNTs grown by thermal chemical vapour deposition (TCVD). The maximization of the ID/IG ratio of the Raman spectra was selected as the target value. As a result, the optimal deposition configuration was NH3 = 90 sccm, growth temperature = 825 °C and catalyst pretreatment time of 2 min, with the first parameter having the main effect on nitrogen incorporation. A confirmation experiment with these values was performed, confirming the predicted ID/IG ratio of 1.42. Scanning electron microscopy (SEM) characterization revealed a uniform, completely vertically aligned array of multiwalled CNTs, which individually exhibit a bamboo-like structure consisting of periodically curved graphitic layers, as depicted by high resolution transmission electron microscopy (HRTEM). The X-ray photoelectron spectroscopy (XPS) results indicated 2.00 at.% of N incorporation in the CNTs, with pyridine-like and graphite-like nitrogen as the predominant species.
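
    The workflow in this record (an L9 orthogonal array plus main-effects analysis to maximize a response) can be sketched generically. The layout below is the standard L9 (3^4) array; the ID/IG values are invented for illustration and are not the paper's data.

```python
import numpy as np

# Standard L9 (3^4) orthogonal array: 9 runs, up to 4 three-level factors.
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

def optimal_levels(design, response):
    """For each factor, pick the level with the highest mean response
    (larger-is-better target, e.g. an ID/IG ratio)."""
    best = []
    for col in design.T:
        means = [response[col == lvl].mean() for lvl in range(3)]
        best.append(int(np.argmax(means)))
    return best

# Invented ID/IG ratios for the 9 runs (illustrative only).
y = np.array([1.1, 1.2, 1.0, 1.3, 1.42, 1.2, 1.25, 1.3, 1.15])
print(optimal_levels(L9, y))  # -> [1, 1, 2, 0]
```

    The predicted optimum combination typically does not appear among the 9 runs themselves, which is why a confirmation experiment (as in this record) is part of the standard Taguchi procedure.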

  15. Taguchi approach for co-gasification optimization of torrefied biomass and coal.

    PubMed

    Chen, Wei-Hsin; Chen, Chih-Jung; Hung, Chen-I

    2013-09-01

    This study employs the Taguchi method to approach the optimum co-gasification operation of torrefied biomass (eucalyptus) and coal in an entrained flow gasifier. The cold gas efficiency is adopted as the performance index of co-gasification. The influences of six parameters, namely, the biomass blending ratio, oxygen-to-fuel mass ratio (O/F ratio), biomass torrefaction temperature, gasification pressure, steam-to-fuel mass ratio (S/F ratio), and inlet temperature of the carrier gas, on the performance of co-gasification are considered. The analysis of the signal-to-noise ratio suggests that the O/F ratio is the most important factor in determining the performance, and the appropriate O/F ratio is 0.7. The performance is also significantly affected by biomass torrefaction, where a torrefaction temperature of 300°C is sufficient to upgrade eucalyptus. According to the recommended operating conditions, the values of cold gas efficiency and carbon conversion at the optimum co-gasification are 80.99% and 94.51%, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.
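
    The signal-to-noise analysis used in this record treats cold gas efficiency as a larger-the-better characteristic. A minimal sketch of the standard Taguchi larger-the-better S/N ratio, with invented replicate values:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio in dB:
    S/N = -10 * log10(mean(1 / y_i^2)); higher is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Invented cold-gas-efficiency replicates (%) for one factor combination.
print(round(sn_larger_is_better([80.99, 79.5, 81.2]), 2))
```

    Computing this ratio for every row of the orthogonal array and then averaging it per factor level is what ranks a factor (here the O/F ratio) as dominant in analyses of this kind.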

  16. An Experimental Investigation into the Optimal Processing Conditions for the CO2 Laser Cladding of 20 MnCr5 Steel Using Taguchi Method and ANN

    NASA Astrophysics Data System (ADS)

    Mondal, Subrata; Bandyopadhyay, Asish; Pal, Pradip Kumar

    2010-10-01

    This paper presents the prediction and evaluation of the laser clad profile formed by means of a CO2 laser, applying the Taguchi method and an artificial neural network (ANN). Laser cladding is one of the surface modifying technologies by which desired surface characteristics of a component, such as good corrosion resistance, wear resistance and hardness, can be achieved. The laser is used as a heat source to melt the anti-corrosive Inconel-625 (superalloy) powder and deposit a coating on a 20 MnCr5 substrate. A parametric study of this technique is also attempted here. The data obtained from experiments have been used to develop linear regression equations and then to develop the neural network model. Moreover, the data obtained from the regression equations have also been used as supporting data to train the neural network. The ANN is used to establish the relationship between the input and output parameters of the process. The established ANN model is then indirectly integrated with the optimization technique. The developed neural network model shows a good degree of approximation to the experimental data. In order to obtain the combination of process parameters, such as laser power, scan speed and powder feed rate, for which the output parameters become optimum, the experimental data have been used to develop response surfaces.

  17. Anaerobic treatment of complex chemical wastewater in a sequencing batch biofilm reactor: process optimization and evaluation of factor interactions using the Taguchi dynamic DOE methodology.

    PubMed

    Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N

    2005-06-20

    The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). For optimizing the process as well as evaluating the influence of different factors on it, the uncontrollable (noise) factors have been considered. The Taguchi methodology adopting the dynamic approach is the first of its kind for studying anaerobic process evaluation and optimization. The designed experimental methodology consisted of four phases--planning, conducting, analysis, and validation--connected sequence-wise to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, at two levels were considered for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effect of individual factors, the interaction between the individual factors, and signal-to-noise (S/N) ratio analysis. Attempts were also made to achieve optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In multiple factor interaction studies, biodegradability with other factors, such as temperature, pH, and sulfate, showed maximum influence over the process performance. The optimum conditions for the efficient performance of the anaerobic system in treating complex wastewater by considering dynamic (noise) factors obtained are a higher organic loading rate of 3.5 kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), along with mesophilic temperature range (40 degrees C), and

  18. Optimization of Recycled Glass Fibre-Reinforced Plastics Gear via Integration of the Taguchi Method and Grey Relational Analysis

    NASA Astrophysics Data System (ADS)

    Mizamzul Mehat, Nik; Syuhada Zakarria, Noor; Kamaruddin, Shahrul

    2018-03-01

    The increase in demand for industrial gears has resulted in increased usage of plastic-matrix composites, particularly glass fibre-reinforced plastics, as gear materials. These synthetic fibres are used to enhance the mechanical strength and thermal resistance of plastic gears. Nevertheless, the production of large quantities of these synthetic fibre-reinforced composites poses a serious threat to the ecosystem. In view of this fact, the present work investigated the effects of incorporating recycled glass fibre-reinforced plastics in various compositions, particularly on the dimensional stability and mechanical properties of gears produced with diverse injection moulding processing parameter settings. The integration of grey relational analysis (GRA) and the Taguchi method was adopted to evaluate the influence of recycled glass fibre-reinforced plastics and variation in processing parameters on gear quality. From the experimental results, the blending ratio was found to be the most influential parameter, with a 56.0% contribution to both improving tensile properties and minimizing shrinkage, followed by mould temperature with a 24.1% contribution and cooling time with a 10.6% contribution. The results obtained from the aforementioned work are expected to contribute to assessing the feasibility of using recycled glass fibre-reinforced plastics, especially for gear applications.

  19. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy saving and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) materials. To strengthen a UHSS material such as boron steel, it needs to undergo a hot stamping process with heating at a certain temperature and time. In this paper, the Taguchi method is applied to determine the appropriate parameters of thickness, heating temperature and heating time to achieve optimum strength of boron steel. The experiment is conducted using a flat square hot stamping tool with a tensile dog-bone as the blank product. Then, the tensile strength and hardness are measured as responses. The results showed that lower thickness, higher heating temperature and longer heating time give higher strength and hardness for the final product. In conclusion, boron steel blanks are able to achieve up to 1200 MPa tensile strength and 650 HV hardness.

  20. Design of a robust fuzzy controller for the arc stability of CO(2) welding process using the Taguchi method.

    PubMed

    Kim, Dongcheol; Rhee, Sehun

    2002-01-01

    CO(2) welding is a complex process, and weld quality depends on arc stability and on minimizing the effects of disturbances or changes in the operating conditions that commonly occur during welding. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used to stabilize the arc during CO(2) welding. The input variable of the controller was the Mita index, which quantitatively estimates the arc stability influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without a mathematical model of the controlled process. As a solution, the Taguchi method was used to determine the optimal control parameter settings of the fuzzy controller, making the control performance robust and insensitive to changes in the operating conditions.

  1. Improved Stress Corrosion Cracking Resistance and Strength of a Two-Step Aged Al-Zn-Mg-Cu Alloy Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lin, Lianghua; Liu, Zhiyi; Ying, Puyou; Liu, Meng

    2015-12-01

    Multi-step heat treatment effectively enhances the stress corrosion cracking (SCC) resistance but usually degrades the mechanical properties of Al-Zn-Mg-Cu alloys. With the aim of enhancing the SCC resistance as well as the strength of Al-Zn-Mg-Cu alloys, we have optimized the process parameters during two-step aging of an Al-6.1Zn-2.8Mg-1.9Cu alloy using Taguchi's L9 orthogonal array. In this work, analysis of variance (ANOVA) was performed to find out the significant heat treatment parameters. Slow strain rate testing combined with scanning electron microscopy and transmission electron microscopy was employed to study the SCC behaviour of the alloy. Results showed that the contour map produced by ANOVA offered a reliable reference for the selection of optimum heat treatment parameters. By using this method, a desired combination of mechanical performance and SCC resistance was obtained.

  2. Processing of ultra-high molecular weight polyethylene/graphite composites by ultrasonic injection moulding: Taguchi optimization.

    PubMed

    Sánchez-Sánchez, Xavier; Elias-Zuñiga, Alex; Hernández-Avila, Marcelo

    2018-06-01

    Ultrasonic injection moulding was confirmed as an efficient processing technique for manufacturing ultra-high molecular weight polyethylene (UHMWPE)/graphite composites. Graphite contents of 1 wt%, 5 wt%, and 7 wt% were mechanically pre-mixed with UHMWPE powder, and each mixture was pressed at 135 °C. A precise quantity of the pre-composite mixtures, cut into irregularly shaped small pieces, was subjected to ultrasonic injection moulding to fabricate small tensile specimens. The Taguchi method was applied to find the optimal levels of the ultrasonic moulding parameters and to maximize the tensile strength of the composites; the results showed that mould temperature was the most significant parameter, followed by the graphite content and the plunger profile. The observed improvement in tensile strength in the specimen with 1 wt% graphite was 8.8%, and all composites showed an increase in tensile modulus. Even though the presence of graphite produced a decrease in the crystallinity of all the samples, their thermal stability was considerably higher than that of pure UHMWPE. X-ray diffraction and scanning electron microscopy confirmed the exfoliation and dispersion of the graphite as a function of the ultrasonic processing. Fourier transform infrared spectra showed that the addition of graphite did not influence the molecular structure of the polymer matrix. Further, the ultrasonic energy led to oxidative degradation and chain scission in the polymer. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Application of Taguchi-grey method to optimize drilling of EMS 45 steel using minimum quantity lubrication (MQL) with multiple performance characteristics

    NASA Astrophysics Data System (ADS)

    Soepangkat, Bobby O. P.; Suhardjono; Pramujati, Bambang

    2017-06-01

    Machining under minimum quantity lubrication (MQL) has drawn the attention of researchers as an alternative to the traditionally used wet and dry machining conditions, with the purpose of minimizing the cooling and lubricating cost as well as reducing cutting zone temperature, tool wear, and hole surface roughness. Drilling is one of the important operations in assembling machine components. The objective of this study was to optimize drilling parameters such as cutting feed, cutting speed, drill type and drill point angle with respect to the thrust force, torque, hole surface roughness and tool flank wear in drilling EMS 45 tool steel using MQL. Experiments were carried out as per the Taguchi design of experiments, and an L18 orthogonal array was used to study the influence of various combinations of drilling parameters and tool geometries on the thrust force, torque, hole surface roughness and tool flank wear. The optimum drilling parameters were determined by using the grey relational grade obtained from grey relational analysis for multiple performance characteristics. The drilling experiments were carried out using twist drills and a CNC machining center. This work is useful for selecting optimum values of the various drilling parameters and tool geometries that would not only minimize the thrust force and torque but also reduce hole surface roughness and tool flank wear.

  4. Modelling the Cast Component Weight in Hot Chamber Die Casting using Combined Taguchi and Buckingham's π Approach

    NASA Astrophysics Data System (ADS)

    Singh, Rupinder

    2018-02-01

    The hot chamber (HC) die casting process is one of the most widely used commercial processes for the casting of low temperature metals and alloys. This process gives a near-net shape product with high dimensional accuracy. However, in the actual field environment, the best settings of input parameters are often conflicting, as the shape and size of the casting change and one has to trade off among various output parameters like hardness, dimensional accuracy, casting defects, microstructure etc. So, for online inspection of the cast components' properties (without affecting the production line), weight measurement has been established as a cost effective method in the field environment (as the difference in weight between sound and unsound castings reflects the possible casting defects). In the present work, at the first stage, the effect of three input process parameters (namely: pressure at the 2nd phase in HC die casting, metal pouring temperature and die opening time) has been studied for optimizing the cast component weight `W' as the output parameter in the form of a macro model based upon a Taguchi L9 OA. After this, Buckingham's π approach has been applied to the Taguchi-based macro model for the development of a micro model. This study highlights the combined Taguchi-Buckingham approach as a case study (for conversion of the macro model into a micro model) through identification of the optimum levels of input parameters (based on the Taguchi approach) and development of a mathematical model (based on Buckingham's π approach). The developed mathematical model can then be used for predicting W in the HC die casting process with more flexibility. The results of the study highlight a second degree polynomial equation for predicting cast component weight in HC die casting and suggest that pressure at the 2nd stage is one of the most contributing factors for controlling the casting defects/weight of the casting.

  5. Optimization of multi response in end milling process of ASSAB XW-42 tool steel with liquid nitrogen cooling using Taguchi-grey relational analysis

    NASA Astrophysics Data System (ADS)

    Norcahyo, Rachmadi; Soepangkat, Bobby O. P.

    2017-06-01

    Research was conducted on the optimization of the end milling process of ASSAB XW-42 tool steel with multiple performance characteristics based on an orthogonal array with the Taguchi-grey relational analysis method. Liquid nitrogen was applied as a coolant. The experimental studies were conducted varying the liquid nitrogen cooling flow rate (FL) and the end milling process variables, i.e., cutting speed (Vc), feeding speed (Vf), and axial depth of cut (Aa). The optimized multiple performance characteristics were surface roughness (SR), flank wear (VB), and material removal rate (MRR). An orthogonal array, the signal-to-noise (S/N) ratio, grey relational analysis, the grey relational grade, and analysis of variance were employed to study the multiple performance characteristics. Experimental results showed that flow rate gave the highest contribution to reducing the total variation of the multiple responses, followed by cutting speed, feeding speed, and axial depth of cut. The minimum surface roughness, minimum flank wear, and maximum material removal rate could be obtained by using flow rate, cutting speed, feeding speed, and axial depth of cut values of 0.5 l/minute, 109.9 m/minute, 440 mm/minute, and 0.9 mm, respectively.
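
    Grey relational analysis, as used in this record to merge surface roughness, flank wear and MRR into a single grade, can be sketched generically. The run data below are invented; only the normalization, coefficient and grade formulas follow the standard GRA recipe (distinguishing coefficient ζ = 0.5, equal response weights).

```python
import numpy as np

def grey_relational_grade(data, larger_better, zeta=0.5):
    """Standard grey relational grade for multi-response optimization.
    data: runs x responses matrix; larger_better: one flag per response."""
    data = np.asarray(data, dtype=float)
    norm = np.empty_like(data)
    for j, lb in enumerate(larger_better):
        col = data[:, j]
        lo, hi = col.min(), col.max()
        norm[:, j] = (col - lo) / (hi - lo) if lb else (hi - col) / (hi - lo)
    delta = 1.0 - norm  # deviation from the ideal (normalized = 1) sequence
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)  # equal-weight grade per run

# Invented runs: surface roughness (min), flank wear (min), MRR (max).
runs = [[1.8, 0.20, 120.0],
        [1.2, 0.15, 150.0],
        [2.4, 0.30,  90.0]]
grade = grey_relational_grade(runs, larger_better=[False, False, True])
print(int(grade.argmax()))  # index of the run with the best overall grade
```

    The run with the highest grade is the best compromise across all responses; in a full Taguchi-GRA study the grades, rather than the raw responses, feed the main-effects and ANOVA steps.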

  6. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method

    NASA Astrophysics Data System (ADS)

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-06-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using Taguchi orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrate release from green macroalgae of the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that, within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest yields of conversion: 54.134-57.500 g ethanol kg-1 Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis.

  7. Taguchi Based Performance and Reliability Improvement of an Ion Chamber Amplifier for Enhanced Nuclear Reactor Safety

    NASA Astrophysics Data System (ADS)

    Kulkarni, R. D.; Agarwal, Vivek

    2008-08-01

    An ion chamber amplifier (ICA) is used as a safety device for neutronic power (flux) measurement in the regulation and protection systems of nuclear reactors. Therefore, the performance reliability of an ICA is an important issue, and appropriate quality engineering is essential to achieve a robust design and performance of the ICA circuit. It is observed that the low input bias current operational amplifiers used in the input stage of the ICA circuit are the most critical devices for its proper functioning. They are very sensitive to gamma radiation present in their close vicinity. Therefore, the response of the ICA deteriorates with exposure to gamma radiation, resulting in a decrease in overall reliability, unless the desired performance is ensured under all conditions. This paper presents a performance enhancement scheme for an ICA operated in the nuclear environment. The Taguchi method, which is a proven technique for reliability enhancement, has been used in this work. It is demonstrated that if a statistical, optimal design approach like the Taguchi method is used, the cost of high quality and reliability may be brought down drastically. The complete methodology and statistical calculations involved are presented, as are the experimental and simulation results, to arrive at a robust design of the ICA.

  8. The parameters effect on the structural performance of damaged steel box beam using Taguchi method

    NASA Astrophysics Data System (ADS)

    El-taly, Boshra A.; Abd El Hameed, Mohamed F.

    2018-03-01

    In the current study, the influence of notch (opening) parameters and the positions of the applied load on the structural performance of steel box beams up to failure was investigated using the finite element analysis program ANSYS. The Taguchi-based design of experiments technique was used to plan the study. The plan included 12 steel box beams: three intact beams and nine damaged beams (with openings in the beam webs). The numerical studies were conducted varying the spacing between the two concentrated point loads (the location of the applied loads), the notch (opening) position, and the ratio between the depth and width of the notch at a constant notch area. According to the Taguchi analysis, factor X (the location of the applied loads) was found to be the highest contributing parameter for the variation of the ultimate load, vertical deformation, shear stresses, and compressive normal stresses.

  9. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design.

    PubMed

    Emami, J; Mohiti, H; Hamishehkar, H; Varshosaz, J

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery, with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous phase volumes, and sonication time, was assessed on the particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A Taguchi design with 12 formulations along with a Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using a scanning electron microscope. Based on the optimization made by Design Expert 7(®) software, a formulation made of glycerol monostearate, 1.2 % polyvinyl alcohol (PVA), a lipid/drug weight ratio of 10 and a sonication time of 90 s was selected. The particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52 %, 5.8 ± 0.3 %, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with a low error percent, the modeling and optimization were justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. Our results provide fundamental data for the

  10. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design

    PubMed Central

    Emami, J.; Mohiti, H.; Hamishehkar, H.; Varshosaz, J.

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery, with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous phase volumes, and sonication time, was assessed on the particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A Taguchi design with 12 formulations along with a Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using a scanning electron microscope. Based on the optimization made by Design Expert 7® software, a formulation made of glycerol monostearate, 1.2 % polyvinyl alcohol (PVA), a lipid/drug weight ratio of 10 and a sonication time of 90 s was selected. The particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52 %, 5.8 ± 0.3 %, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with a low error percent, the modeling and optimization were justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. Our results provide fundamental data for the

  11. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method

    PubMed Central

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-01-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using Taguchi orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrate release from green macroalgae of the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that, within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest yields of conversion: 54.134–57.500 g ethanol kg−1 Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis. PMID:27291594

  12. Optimized selection of benchmark test parameters for image watermark algorithms based on Taguchi methods and corresponding influence on design decisions for real-world applications

    NASA Astrophysics Data System (ADS)

    Rodriguez, Tony F.; Cushman, David A.

    2003-06-01

    With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion of the practical application of such systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design of experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
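
    The Taguchi loss function proposed in this record is, in its simplest nominal-the-best form, a quadratic penalty on deviation from a target. A minimal sketch with invented target, deviation and cost values (the paper's actual loss function and constants are not given here):

```python
def taguchi_loss(y, target, k):
    """Nominal-the-best quadratic loss: L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

# Calibrate k from a known cost: a deviation of 0.5 from target costs 100 units.
k = 100.0 / 0.5 ** 2  # k = 400.0
print(taguchi_loss(1.25, 1.0, k))  # 400 * 0.25^2 = 25.0
```

    The key property exploited in benchmarking is that loss grows continuously with deviation from the target, rather than jumping at a pass/fail specification limit.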

  13. Taguchi Optimization of Cutting Parameters in Turning AISI 1020 MS with M2 HSS Tool

    NASA Astrophysics Data System (ADS)

    Sonowal, Dharindom; Sarma, Dhrupad; Bakul Barua, Parimal; Nath, Thuleswar

    2017-08-01

    In this paper the effect of three cutting parameters, viz. spindle speed, feed and depth of cut, on the surface roughness of an AISI 1020 mild steel bar in turning was investigated and optimized to obtain minimum surface roughness. All experiments were conducted on an HMT LB25 lathe using an M2 HSS cutting tool. Ranges of the parameters of interest were decided through preliminary One Factor At a Time experiments. Finally, a combined experiment was carried out using Taguchi's L27 Orthogonal Array (OA) to study the main effects and interaction effects of all three parameters. The experimental results were analyzed with ANOVA (Analysis of Variance) on both raw data and S/N (signal-to-noise) ratio data. Results show that spindle speed, feed and depth of cut have significant effects on both the mean and the variation of surface roughness in turning AISI 1020 mild steel. Mild two-factor interactions are observed among the aforesaid factors, with significant effects only on the mean of the output variable. From the Taguchi parameter optimization, the optimum factor combination is found to be 630 rpm spindle speed, 0.05 mm/rev feed and 1.25 mm depth of cut, with an estimated surface roughness of 2.358 ± 0.970 µm. A confirmatory experiment was conducted with the optimum factor combination to verify the results. In the confirmatory experiment the average surface roughness is found to be 2.408 µm, which is well within the predicted range (0.418 µm to 4.299 µm).
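
    For a smaller-the-better response such as surface roughness, the S/N ratio analyzed above is conventionally S/N = −10·log10((1/n)·Σy²) over the n replicates of a run. A minimal sketch with hypothetical replicate readings (not the paper's measurements):

```python
import math

def sn_smaller_the_better(values):
    """Taguchi signal-to-noise ratio for a smaller-the-better response (dB)."""
    msd = sum(v ** 2 for v in values) / len(values)  # mean squared deviation from 0
    return -10.0 * math.log10(msd)

# Hypothetical replicate roughness readings (µm) for one L27 run.
run = [2.41, 2.35, 2.52]
print(sn_smaller_the_better(run))
```

    The optimum level of each factor is then the one maximizing the mean S/N ratio, which simultaneously rewards a low mean and low variation.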

  14. Taguchi Experimental Design for Optimization of Recombinant Human Growth Hormone Production in CHO Cell Lines and Comparing its Biological Activity with Prokaryotic Growth Hormone.

    PubMed

    Aghili, Zahra Sadat; Zarkesh-Esfahani, Sayyed Hamid

    2018-02-01

    Growth hormone deficiency results in growth retardation in children and the GH deficiency syndrome in adults; both groups need to receive recombinant GH in order to rectify the GH deficiency symptoms. Mammalian cells have become the favored system for production of recombinant proteins for clinical application, compared to prokaryotic systems, because of their capability for appropriate protein folding, assembly, post-translational modification and proper signal processing. However, production levels in mammalian cells are generally low compared to prokaryotic hosts. Taguchi established orthogonal arrays to describe a large number of experimental situations, mainly to reduce experimental errors and to enhance the efficiency and reproducibility of laboratory experiments. In the present study, rhGH was produced in CHO cells and its production was assessed using dot blotting, western blotting and ELISA. For optimization of rhGH production in CHO cells using the Taguchi method, an M16 orthogonal experimental design was used to investigate four different culture components. The biological activity of rhGH was assessed using an LHRE-TK-Luciferase reporter gene system in HEK-293 cells and compared to the biological activity of prokaryotic rhGH. Maximal productivity of rhGH was reached under the conditions of 1% DMSO, 1% glycerol, 25 µM ZnSO4 and 0 mM NaBu. Our findings indicate that control of culture conditions, such as the addition of chemical components, helps to develop an efficient large-scale and industrial process for the production of rhGH in CHO cells. Results of the bioassay indicated that rhGH produced by CHO cells is able to induce GH-mediated intracellular cell signaling and showed higher bioactivity than prokaryotic GH at the same concentrations. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Taguchi-generalized regression neural network micro-screening for physical and sensory characteristics of bread.

    PubMed

    Besseris, George J

    2018-03-01

    Generalized regression neural networks (GRNN) may act as crowdsourcing cognitive agents to screen small, dense and complex datasets. The concurrent screening and optimization of several complex physical and sensory traits of bread is developed using a structured Taguchi-type micro-mining technique. A novel product outlook is offered to industrial operations to cover separate aspects of smart product design, engineering and marketing. Four controlling factors were selected to be modulated directly on a modern production line: 1) the dough weight, 2) the proofing time, 3) the baking time, and 4) the oven zone temperatures. Concentrated experimental recipes were programmed using the Taguchi-type L9 (3^4) OA-sampler to detect potentially non-linear multi-response tendencies. The fused behavior of the master-ranked bread characteristics was smart-sampled with GRNN-crowdsourcing and robust analysis. The combination of the oven zone temperatures was found to play a highly influential role in all investigated scenarios. Moreover, the oven zone temperatures and the dough weight appeared to be instrumental when attempting to adjust all four physical characteristics synchronously. The optimal oven-zone temperature setting for concurrent screening-and-optimization was found to be 270-240 °C. The optimized (median) responses for loaf weight, moisture, height, width, color, flavor, crumb structure, softness, and elasticity are: 782 g, 34.8%, 9.36 cm, 10.41 cm, 6.6, 7.2, 7.6, 7.3, and 7.0, respectively.
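
    At its core, a GRNN prediction is a Gaussian-kernel-weighted average of the training responses (Nadaraya-Watson regression), which is why it can interpolate from the few, dense runs an orthogonal array provides. A minimal sketch; the factor settings and sensory score below are hypothetical, not the bread data:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.1):
    """GRNN regression: kernel-weighted average of training targets."""
    d2 = ((X_train - x) ** 2).sum(axis=1)       # squared distances to query point
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian pattern-layer weights
    return (w @ y_train) / w.sum()              # normalized weighted average

# Hypothetical normalized factor settings and a single sensory score per run.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([6.0, 7.0, 6.5, 7.5])
print(grnn_predict(X, y, np.array([0.0, 1.0]), sigma=0.2))
```

    The single smoothing parameter sigma controls how local the averaging is; with a small sigma the prediction at a training point approaches that point's own response.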

  16. Integration of Mahalanobis-Taguchi system and traditional cost accounting for remanufacturing crankshaft

    NASA Astrophysics Data System (ADS)

    Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd

    2018-04-01

    Remanufacturing is a sustainable strategic approach that restores end-of-life products to as-new performance, with a warranty the same as or better than that of the original product. In order to quantify the advantages of this strategy, every process must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of end-of-life crankshafts based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting, considering the critical parameters. By implementing the optimization, the remanufacturer produced lower cost and waste during production, with a higher potential to gain profit. The Mahalanobis-Taguchi System was proven to be a powerful optimization method that revealed the criticality of the parameters. When the method was applied to the MAN engine model, 5 out of 6 crankpins were found to be critical and required grinding, while no changes were needed for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR1401.29 to MYR1251.29, while the Caterpillar engine model saw no change because no parameters were found to be critical. Therefore, by integrating optimization and costing through the remanufacturing process, a better decision can be achieved after observing the potential profit to be gained. The significance of the output is demonstrated through promoting sustainability by reducing re-melting of damaged parts, ensuring a consistent benefit from returned cores.
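
    The Mahalanobis-Taguchi System judges how abnormal an item is by its scaled Mahalanobis distance from a reference ("normal") group. The sketch below is illustrative only; the measurement columns and crankpin values are hypothetical, not the paper's data:

```python
import numpy as np

def mahalanobis_distances(normal_group, samples):
    """Scaled Mahalanobis distance of samples from a healthy reference group."""
    mean = normal_group.mean(axis=0)
    cov = np.cov(normal_group, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    diff = samples - mean
    # MD^2 / k (k = number of variables), the scaling used in the
    # Mahalanobis-Taguchi system so that normal items average around 1.
    k = normal_group.shape[1]
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff) / k

# Hypothetical crankpin measurements: columns = diameter wear, ovality (mm).
rng = np.random.default_rng(0)
normal = rng.normal([0.0, 0.0], [0.01, 0.005], size=(30, 2))
crankpins = np.array([[0.005, 0.002],   # near-normal pin
                      [0.100, 0.050]])  # heavily worn pin
md = mahalanobis_distances(normal, crankpins)
print(md)
```

    Items whose distance exceeds a chosen threshold are flagged as critical (here, needing grinding); the orthogonal-array step of MTS then prunes the variables that do not help separate normal from abnormal.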

  17. Investigation and Taguchi Optimization of Microbial Fuel Cell Salt Bridge Dimensional Parameters

    NASA Astrophysics Data System (ADS)

    Sarma, Dhrupad; Barua, Parimal Bakul; Dey, Nabendu; Nath, Sumitro; Thakuria, Mrinmay; Mallick, Synthia

    2018-01-01

    One major problem of two-chamber salt bridge microbial fuel cells (MFCs) is the high resistance offered by the salt bridge to anion flow. Many researchers who have studied and optimized various parameters related to salt bridge MFCs have not shed much light on the effect of salt bridge dimensional parameters on MFC performance. Therefore, the main objective of this research is to investigate the effect of the length and cross-sectional area of the salt bridge, and of solar radiation and atmospheric temperature, on MFC current output. An experiment was designed using a Taguchi L9 orthogonal array, taking the length and cross-sectional area of the salt bridge as factors, each at three levels. Nine MFCs were fabricated as per the nine trial conditions. Trials were conducted for 3 days, and the output current of each MFC, along with solar insolation and atmospheric temperature, was recorded. Analysis of variance shows that salt bridge length has a significant effect on both the mean (with 53.90% contribution at 95% CL) and the variance (with 56.46% contribution at 87% CL), whereas the effects of the cross-sectional area of the salt bridge and the interaction of the two factors are significant on the mean only (at 95% CL). The optimum combination was found at 260 mm salt bridge length and 506.7 mm² cross-sectional area, with 4.75 mA mean output current. The temperature and solar insolation data, when correlated with each MFC's average output current, revealed that both external factors have a significant impact on MFC current output, but the correlation coefficient varies from MFC to MFC depending on the salt bridge dimensional parameters.
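
    The percent contributions reported by such an ANOVA are each factor's between-level sum of squares expressed as a share of the total sum of squares. A minimal sketch; the trial currents and level assignments below are hypothetical, not the study's data:

```python
import numpy as np

def percent_contribution(y, factor_levels):
    """Between-level sum of squares of one factor as a share of total SS (%)."""
    y = np.asarray(y, dtype=float)
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_factor = sum(
        y[factor_levels == lvl].size * (y[factor_levels == lvl].mean() - grand) ** 2
        for lvl in np.unique(factor_levels)
    )
    return 100.0 * ss_factor / ss_total

# Hypothetical L9 trial outputs (mA) and the length level (0/1/2) of each trial.
current = np.array([3.1, 3.3, 3.0, 4.2, 4.5, 4.1, 5.0, 4.8, 5.2])
length_level = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
print(percent_contribution(current, length_level))
```

    In a full Taguchi ANOVA the same decomposition is applied to every factor column of the array, with the residual sum of squares serving as the error term for the F-tests.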

  18. Supercritical CO2 extraction of candlenut oil: process optimization using Taguchi orthogonal array and physicochemical properties of the oil.

    PubMed

    Subroto, Erna; Widjojokusumo, Edward; Veriansyah, Bambang; Tjandrawinata, Raymond R

    2017-04-01

    A series of experiments was conducted to determine optimum conditions for supercritical carbon dioxide extraction of candlenut oil. A Taguchi experimental design with an L9 orthogonal array (four factors at three levels) was employed to evaluate the effects of pressure (25-35 MPa), temperature (40-60 °C), CO2 flow rate (10-20 g/min) and particle size (0.3-0.8 mm) on oil solubility. The results showed that increases in particle size, pressure and temperature improved the oil solubility. Supercritical carbon dioxide extraction at the optimized parameters resulted in an oil extraction yield of 61.4% at a solubility of 9.6 g oil/kg CO2. The candlenut oil obtained by supercritical carbon dioxide extraction was of better quality than oil extracted by Soxhlet extraction using n-hexane. The oil contains a high proportion of unsaturated fatty acids (linoleic and linolenic acid), which have many beneficial effects on human health.

  19. Preparation of nanocellulose from Imperata brasiliensis grass using Taguchi method.

    PubMed

    Benini, Kelly Cristina Coelho de Carvalho; Voorwald, Herman Jacobus Cornelis; Cioffi, Maria Odila Hilário; Rezende, Mirabel Cerqueira; Arantes, Valdeir

    2018-07-15

    Cellulose nanoparticles (CNs) were prepared by acid hydrolysis of cellulose pulp extracted from the Brazilian satintail (Imperata brasiliensis) plant using a conventional and a totally chlorine-free method. Initially, a statistical design of experiments was carried out using a Taguchi orthogonal array to study the hydrolysis parameters and the main properties (crystallinity, thermal stability, morphology, and size) of the nanocellulose. X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), field-emission scanning electron microscopy (FE-SEM), dynamic light scattering (DLS), zeta potential and thermogravimetric analysis (TGA) were carried out to characterize the physicochemical properties of the CNs obtained. Cellulose nanoparticles with diameters ranging from 10 to 60 nm and lengths between 150 and 250 nm were successfully obtained at a sulfuric acid concentration of 64% (m/m), a temperature of 35 °C, a reaction time of 75 min, and a 1:20 (g/mL) pulp-to-solution ratio. Under these conditions, the Imperata brasiliensis CNs showed good stability in suspension, a crystallinity index of 65%, and a cellulose degradation temperature of about 117 °C. Considering that these properties are similar to those of nanocelluloses from other lignocellulosic feedstocks, Imperata grass also seems to be a suitable source for nanocellulose production. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Bioremediation of chlorpyrifos contaminated soil by two phase bioslurry reactor: Processes evaluation and optimization by Taguchi's design of experimental (DOE) methodology.

    PubMed

    Pant, Apourv; Rai, J P N

    2018-04-15

    A two-phase bioslurry reactor was designed, constructed and developed to evaluate chlorpyrifos remediation. Six biotic and abiotic factors (substrate loading rate, slurry phase pH, slurry phase dissolved oxygen (DO), soil-to-water ratio, temperature and soil microflora load) were evaluated by design of experiments (DOE) methodology employing Taguchi's orthogonal array (OA). The six selected factors were considered at two levels in an L-8 array (2^7; 15 experiments) in the experimental design. The optimum operating conditions obtained from the methodology enhanced chlorpyrifos degradation from 283.86 µg/g to 955.364 µg/g, an overall enhancement of 70.34%. In the present study, with the help of a few well-defined experimental parameters, a mathematical model was constructed to understand the complex bioremediation process and optimize the parameters to a high accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Improvement of the Mechanical Properties of 1022 Carbon Steel Coil by Using the Taguchi Method to Optimize Spheroidized Annealing Conditions.

    PubMed

    Yang, Chih-Cheng; Liu, Chang-Lun

    2016-08-12

    Cold forging is often applied in the fastener industry. Wires in coil form are used as semi-finished products for the production of billets. This process usually requires preliminary drawing of the wire coil in order to reduce the diameter of the product, and the wire usually has to be annealed to improve its cold formability. The quality of the spheroidizing-annealed wire affects the forming quality of screws. In the fastener industry, most companies use a subcritical process for spheroidized annealing. Various parameters affect the spheroidized annealing quality of steel wire, such as the spheroidized annealing temperature, prolonged heating time, furnace cooling time and flow rate of nitrogen (protective atmosphere). These spheroidized annealing parameters in turn affect the quality characteristics of the steel wire, such as its tensile strength and hardness. A series of experimental tests on AISI 1022 low carbon steel wire was carried out, and the Taguchi method was used to obtain optimum spheroidized annealing conditions to improve the mechanical properties of steel wires for cold forming. The results show that the spheroidized annealing temperature and prolonged heating time have the greatest effect on the mechanical properties of the steel wires. A comparison between the results obtained using the optimum spheroidizing conditions and those obtained using the original settings shows that the new parameter settings effectively improve the performance measures. The results presented in this paper could serve as a reference for wire manufacturers.

  2. Anatomical Thin Titanium Mesh Plate Structural Optimization for Zygomatic-Maxillary Complex Fracture under Fatigue Testing.

    PubMed

    Wang, Yu-Tzu; Huang, Shao-Fu; Fang, Yu-Ting; Huang, Shou-Chieh; Cheng, Hwei-Fang; Chen, Chih-Hao; Wang, Po-Fang; Lin, Chun-Li

    2018-01-01

    This study performs a structural optimization of an anatomical thin titanium mesh (ATTM) plate, and the optimally designed ATTM plate was fabricated using additive manufacturing (AM) to verify its stabilization under fatigue testing. Finite element (FE) analysis was used to simulate the structural bending resistance of a regular ATTM plate. The Taguchi method was employed to identify the significance of each design factor in controlling the deflection and to determine an optimal combination of design factors. The optimally designed ATTM plate with a patient-matched facial contour was fabricated using AM and applied to a ZMC comminuted fracture to evaluate the resting maxillary micromotion/strain under fatigue testing. The Taguchi analysis found that the ATTM plate required an internal hole distance of 0.9 mm, an internal hole diameter of 1 mm, a plate thickness of 0.8 mm, and a plate height of 10 mm. The plate thickness factor primarily dominated the bending resistance, with an importance of up to 78%. The averaged micromotion (displacement) and strain of the maxillary bone were significantly higher for ZMC fracture fixation using the miniplate than for the AM optimally designed ATTM plate. This study concluded that an optimally designed ATTM plate with enough strength to resist bending can be obtained by combining FE and Taguchi analyses, and that the optimally designed ATTM plate with a patient-matched facial contour fabricated using AM provides superior stabilization for ZMC comminuted fractured bone segments.

  3. Nickel-Cadmium Battery Operation Management Optimization Using Robust Design

    NASA Technical Reports Server (NTRS)

    Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador

    1996-01-01

    In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries was very important in order to maintain nominal space flight battery performance. The optimization of existing flight battery operational performance was a novel application of Taguchi Methods.

  4. Optimization of a chemical identification algorithm

    NASA Astrophysics Data System (ADS)

    Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren

    2010-04-01

    A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
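
    The Matthews Correlation Coefficient compared above can be computed directly from the 2-category confusion matrix of a detection run. A minimal sketch; the detector tallies below are hypothetical, not the JCSD results:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews Correlation Coefficient for a 2-class confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical tallies: 90 true detections, 95 correct rejections,
# 5 false alarms, 10 missed targets.
print(matthews_corrcoef(90, 95, 5, 10))
```

    Unlike raw accuracy, MCC stays informative when targets are rare relative to backgrounds, which is why it is attractive as a single figure of merit for trading off false alarms against detection probability.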

  5. A Review of Metal Injection Molding- Process, Optimization, Defects and Microwave Sintering on WC-Co Cemented Carbide

    NASA Astrophysics Data System (ADS)

    Shahbudin, S. N. A.; Othman, M. H.; Amin, Sri Yulis M.; Ibrahim, M. H. I.

    2017-08-01

    This article reviews the optimization of the metal injection molding process and the use of microwave sintering for tungsten cemented carbide produced by metal injection molding. In the studies reviewed, the process parameters for metal injection molding were optimized using the Taguchi method. Taguchi methods have been used widely in engineering analysis to optimize performance characteristics through the setting of design parameters. Microwave sintering is generally used in powder metallurgy in preference to the conventional method. It has characteristics such as an accelerated heating rate, a shortened processing cycle, high energy efficiency, and a fine and homogeneous microstructure with enhanced mechanical performance, which are beneficial for preparing nanostructured cemented carbides in metal injection molding. Metal injection molding, an advanced and promising technology, has proven capable of producing cemented carbides. Cemented tungsten carbide hard metal is used widely in various applications due to its desirable combination of mechanical, physical, and chemical properties. Common defects in metal injection molding and applications of microwave sintering are also discussed in this paper.

  6. Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.

    2018-01-01

    This article describes the adoption of the Taguchi method in the selective laser melting of a combustion chamber sector, using numerical and physical experiments, to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, plus two factors compensating for the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factor variation at three levels (L9). As quality criteria, the distortion values for 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, grey relational analysis was used to solve the optimization problem for multiple quality parameters. Finally, according to the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.
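
    Grey relational analysis collapses several multidirectional quality responses into a single grade per run, so a standard single-response Taguchi analysis can then be applied. A schematic sketch; the distortion and strength numbers below are invented, not the paper's data:

```python
import numpy as np

def grey_relational_grades(responses, smaller_better, zeta=0.5):
    """Combine multiple quality responses into one grey relational grade per run."""
    r = np.asarray(responses, dtype=float)
    # Normalize each response column to [0, 1], direction-aware.
    lo, hi = r.min(axis=0), r.max(axis=0)
    norm = np.where(smaller_better, (hi - r) / (hi - lo), (r - lo) / (hi - lo))
    # Grey relational coefficient against the ideal sequence (all ones).
    delta = 1.0 - norm
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)  # equal-weight grade per experimental run

# Hypothetical outcomes: column 0 = max distortion (mm, smaller is better),
# column 1 = material strength (MPa, larger is better).
runs = [[0.42, 610], [0.35, 640], [0.50, 590], [0.30, 655]]
grades = grey_relational_grades(runs, smaller_better=[True, False])
print(grades.argmax())
```

    The distinguishing coefficient zeta (conventionally 0.5) damps the influence of the worst deviations; weighted rather than equal-weight averaging can be used when some responses matter more.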

  7. Ultrasonically assisted hydrothermal synthesis of activated carbon-HKUST-1-MOF hybrid for efficient simultaneous ultrasound-assisted removal of ternary organic dyes and antibacterial investigation: Taguchi optimization.

    PubMed

    Azad, F Nasiri; Ghaedi, M; Dashtian, K; Hajati, S; Pezeshkpour, V

    2016-07-01

    An activated carbon (AC) composite with the HKUST-1 metal organic framework (AC-HKUST-1 MOF) was prepared by an ultrasonically assisted hydrothermal method, characterized by FTIR, SEM and XRD analysis, and subsequently applied to the simultaneous ultrasound-assisted removal of crystal violet (CV), disulfine blue (DSB) and quinoline yellow (QY) dyes from their ternary solution. In addition, this material was screened in vitro for antibacterial activity against Methicillin-resistant Staphylococcus aureus (MRSA) and Pseudomonas aeruginosa (PAO1) bacteria. In the dye removal process, the effects of important variables such as the initial concentration of dyes, adsorbent mass, pH and sonication time on the adsorption process were optimized by the Taguchi approach. Optimum values of 4, 0.02 g, 4 min and 10 mg L(-1) were obtained for pH, AC-HKUST-1 MOF mass, sonication time and the concentration of each dye, respectively. At the optimized conditions, the removal percentages of CV, DSB and QY were found to be 99.76%, 91.10%, and 90.75%, respectively, with a desirability of 0.989. The kinetics of the adsorption processes follow a pseudo-second-order model. The Langmuir model best represented the experimental data, with maximum monolayer adsorption capacities for CV, DSB and QY on AC-HKUST-1 estimated to be 133.33, 129.87 and 65.37 mg g(-1), significantly higher than those of HKUST-1 as the sole material, with Qm equal to 59.45, 57.14 and 38.80 mg g(-1), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
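
    Monolayer capacities like the Qm values quoted above come from fitting the Langmuir isotherm qe = Qm·KL·Ce/(1 + KL·Ce); a common route is the linearized form Ce/qe = Ce/Qm + 1/(Qm·KL). A sketch on synthetic equilibrium data (not the paper's measurements), generated so the fit should recover the known constants:

```python
import numpy as np

def langmuir_fit(Ce, qe):
    """Fit the linearized Langmuir isotherm: Ce/qe = Ce/Qm + 1/(Qm*KL)."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    Qm = 1.0 / slope          # maximum monolayer capacity (mg/g)
    KL = slope / intercept    # Langmuir constant (L/mg)
    return Qm, KL

# Synthetic equilibrium data (mg/L, mg/g) following Qm = 130, KL = 0.5 exactly.
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
qe = 130 * 0.5 * Ce / (1 + 0.5 * Ce)
Qm, KL = langmuir_fit(Ce, qe)
print(Qm, KL)
```

    With real, noisy data a nonlinear least-squares fit of the original isotherm is often preferred, since the linearization distorts the error structure.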

  8. Rolling bearing fault diagnosis and health assessment using EEMD and the adjustment Mahalanobis-Taguchi system

    NASA Astrophysics Data System (ADS)

    Chen, Junxun; Cheng, Longsheng; Yu, Hui; Hu, Shaolin

    2018-01-01

    ABSTRACTSFor the timely identification of the potential faults of a rolling bearing and to observe its health condition intuitively and accurately, a novel fault diagnosis and health assessment model for a rolling bearing based on the ensemble empirical mode decomposition (EEMD) <span class="hlt">method</span> and the adjustment Mahalanobis-<span class="hlt">Taguchi</span> system (AMTS) <span class="hlt">method</span> is proposed. The specific steps are as follows: First, the vibration signal of a rolling bearing is decomposed by EEMD, and the extracted features are used as the input vectors of AMTS. Then, the AMTS <span class="hlt">method</span>, which is designed to overcome the shortcomings of the traditional Mahalanobis-<span class="hlt">Taguchi</span> system and to extract the key features, is proposed for fault diagnosis. Finally, a type of HI concept is proposed according to the results of the fault diagnosis to accomplish the health assessment of a bearing in its life cycle. To validate the superiority of the developed <span class="hlt">method</span> proposed approach, it is compared with other recent <span class="hlt">method</span> and proposed methodology is successfully validated on a vibration data-set acquired from seeded defects and from an accelerated life test. 
The results show that this <span class="hlt">method</span> represents the actual situation well and is able to accurately and effectively identify the fault type.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21508553','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21508553"><span>Oily wastewater treatment by ultrafiltration using <span class="hlt">Taguchi</span> experimental design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Salahi, A; Mohammadi, T</p> <p>2011-01-01</p> <p>In this research, results of an experimental investigation on separation of oil from a real oily wastewater using an ultrafiltration (UF) polymeric membrane are presented. In order to enhance the performance of UF in API separator effluent treatment and to get more permeation flux (PF), effects of operating factors on the yield of PF were studied. Five factors at four levels were investigated: trans-membrane pressure (TMP), temperature (T), cross flow velocity (CFV), pH and salt concentration (SC). <span class="hlt">Taguchi</span> <span class="hlt">method</span> (L(16) orthogonal array (OA)) was used. Analysis of variance (ANOVA) was applied to calculate sum of square, variance, error variance and contribution percentage of each factor on response. The <span class="hlt">optimal</span> levels thus determined for the four influential factors were: TMP, 3 bar; T, 40˚C; CFV, 1.0 m/s; SC, 25 g/L and pH, 8. The results showed that CFV and SC are the most and the least effective factors on PF, respectively. Increasing CFV, TMP, T and pH caused the better performance of UF membrane process due to enhancement of driving force and fouling residence. Also, effects of oil concentration (OC) in the wastewater on PF and total organic carbon (TOC) rejection were investigated. 
Finally, the highest TOC rejection was found to be 85%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1931c0014A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1931c0014A"><span><span class="hlt">Optimal</span> power flow with <span class="hlt">optimal</span> placement TCSC device on 500 kV Java-Bali electrical power system using genetic Algorithm-<span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Apribowo, Chico Hermanu Brillianto; Ibrahim, Muhammad Hamka; Wicaksono, F. X. Rian</p> <p>2018-02-01</p> <p>The growing burden of the load and the complexity of the power system has had an impact on the need for <span class="hlt">optimization</span> of power system operation. <span class="hlt">Optimal</span> power flow (OPF) with <span class="hlt">optimal</span> location placement and rating of thyristor controlled series capacitor (TCSC) is an effective solution used to determine the economic cost of operating the plant and regulate the power flow in the power system. The purpose of this study is to minimize the total cost of generation by placing the location and the <span class="hlt">optimal</span> rating of TCSC using genetic algorithm-design of experiment techniques (GA-DOE). 
Simulation on Java-Bali system 500 kV with the amount of TCSC used by 5 compensator, the proposed <span class="hlt">method</span> can reduce the generation cost by 0.89% compared to OPF without using TCSC.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..310a2108C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..310a2108C"><span><span class="hlt">Optimization</span> of Robotic Spray Painting process Parameters using <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar</p> <p>2018-02-01</p> <p>Automated spray painting process is gaining interest in industry and research recently due to extensive application of spray painting in automobile industries. Automating spray painting process has advantages of improved quality, productivity, reduced labor, clean environment and particularly cost effectiveness. This study investigates the performance characteristics of an industrial robot Fanuc 250ib for an automated painting process using statistical tool Taguchi’s Design of Experiment technique. The experiment is designed using Taguchi’s L25 orthogonal array by considering three factors and five levels for each factor. The objective of this work is to explore the major control parameters and to <span class="hlt">optimize</span> the same for the improved quality of the paint coating measured in terms of Dry Film thickness(DFT), which also results in reduced rejection. Further Analysis of Variance (ANOVA) is performed to know the influence of individual factors on DFT. It is observed that shaping air and paint flow are the most influencing parameters. Multiple regression model is formulated for estimating predicted values of DFT. 
Confirmation test is then conducted and comparison results show that error is within acceptable level.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27411334','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27411334"><span>Improved production of tannase by Klebsiella pneumoniae using Indian gooseberry leaves under submerged fermentation using <span class="hlt">Taguchi</span> approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar</p> <p>2016-12-01</p> <p>Tannase (tannin acyl hydrolase E.C 3.1.1.20) is an inducible, largely extracellular enzyme that causes the hydrolysis of ester and depside bonds present in various substrates. Large scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin rich agro-residues like Indian gooseberry leaves (Phyllanthus emblica), Black plum leaves (Syzygium cumini), Eucalyptus leaves (Eucalyptus glogus) and Babul leaves (Acacia nilotica). Among all agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. Sequential <span class="hlt">optimization</span> approach using <span class="hlt">Taguchi</span> orthogonal array screening and response surface methodology was adopted to <span class="hlt">optimize</span> the fermentation variables in order to enhance the enzyme production. Eleven medium components were screened primarily by <span class="hlt">Taguchi</span> orthogonal array design to identify the most contributing factors towards the enzyme production. 
The four most significant contributing variables affecting tannase production were found to be pH (23.62 %), tannin extract (20.70 %), temperature (20.33 %) and incubation time (14.99 %). These factors were further <span class="hlt">optimized</span> with a central composite design using response surface methodology. Maximum tannase production was observed at pH 5.52, a temperature of 39.72 °C, an incubation time of 91.82 h and a tannin content of 2.17 %. The enzyme activity was enhanced 1.26-fold under these <span class="hlt">optimized</span> conditions. The present study emphasizes the use of agro-residues as a potential substrate with the aim of lowering the input costs of tannase production so that the enzyme could be used proficiently for commercial purposes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..184a2048H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..184a2048H"><span>Flank wear analysing of high speed end milling for hardened steel D2 using <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hazza Faizi Al-Hazza, Muataz; Ibrahim, Nur Asmawiyah bt; Adesta, Erry T. Y.; Khan, Ahsan Ali; Abdullah Sidek, Atiah Bt.</p> <p>2017-03-01</p> <p>One of the main challenges for any manufacturer is how to decrease the machining cost without affecting the final quality of the product. One of the new advanced machining processes in industry is high speed hard end milling, which merges three advanced machining processes: high speed milling, hard milling and dry milling. However, one of the most important challenges in this process is to control the flank wear rate. 
Therefore, the flank wear rate during machining should be analyzed in order to determine the best cutting levels that will not affect the final quality of the product. In this research the <span class="hlt">Taguchi</span> <span class="hlt">method</span> has been used to investigate the effect of cutting speed, feed rate and depth of cut, and to determine the best levels to minimize the flank wear rate up to a total wear length of 0.3 mm, based on the ISO standard, to maintain the finishing requirements.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29797204','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29797204"><span>Experimental analysis of performance and emission on DI diesel engine fueled with diesel-palm kernel methyl ester-triacetin blends: a <span class="hlt">Taguchi</span> fuzzy-based <span class="hlt">optimization</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Panda, Jibitesh Kumar; Sastry, Gadepalli Ravi Kiran; Rai, Ram Naresh</p> <p>2018-05-25</p> <p>The energy situation and concerns about global warming have ignited research interest in non-conventional and alternative fuel resources to decrease emissions and the continued dependency on fossil fuels, particularly in sectors like power generation, transportation, and agriculture. In the present work, the research is focused on evaluating the performance, emission characteristics, and combustion of a biodiesel such as palm kernel methyl ester with the diesel additive "triacetin" added to it. A timed manifold injection (TMI) system was employed to examine the influence of the induction durations of several blends on the emission and performance characteristics as compared to normal diesel operation. 
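For a minimization target such as the flank wear rate above, the Taguchi analysis uses the smaller-the-better S/N ratio, -10*log10(mean(y^2)), so that the condition with the least wear receives the highest score. A short sketch with invented wear readings (not the study's measurements; the condition labels are hypothetical):

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better S/N ratio in dB: -10*log10(mean(y^2)).
    A larger S/N means a smaller (better) response, e.g. flank wear."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Illustrative flank-wear readings (mm) for three cutting conditions --
# invented numbers for the sketch, not measured data.
conditions = {
    "low speed / low feed":   [0.08, 0.09, 0.10],
    "mid speed / mid feed":   [0.15, 0.14, 0.16],
    "high speed / high feed": [0.27, 0.30, 0.29],
}

scores = {name: sn_smaller_the_better(w) for name, w in conditions.items()}
best = max(scores, key=scores.get)
print(best)  # the condition with the least wear has the highest S/N
```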
This experimental study shows better performance and lower emissions compared with mineral diesel, indicating in turn that high performance and low emissions are promising in PKME-triacetin fuel operation. This analysis also describes the application of fuzzy logic-based <span class="hlt">Taguchi</span> analysis to <span class="hlt">optimize</span> the emission and performance parameters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..310a2008M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..310a2008M"><span><span class="hlt">Optimization</span> of MR fluid Yield stress using <span class="hlt">Taguchi</span> <span class="hlt">Method</span> and Response Surface Methodology Techniques</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mangal, S. K.; Sharma, Vivek</p> <p>2018-02-01</p> <p>Magneto-rheological fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, <span class="hlt">optimization</span> of MR fluid constituents is carried out with on-state yield stress as the response parameter. For this, 18 samples of MR fluids are prepared using an L-18 orthogonal array. These samples are experimentally tested on a developed and fabricated electromagnet setup. It has been found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used in it. The <span class="hlt">optimal</span> combination of the input parameters for the fluid is found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5% and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. 
This <span class="hlt">optimal</span> combination of input parameters gives a numerically predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the <span class="hlt">optimized</span> MR fluid sample was then carried out, and the response parameter thus obtained was found to match the numerically obtained value quite well (less than 1% error).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5457251','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5457251"><span><span class="hlt">Optimizing</span> Injection Molding Parameters of Different Halloysites Type-Reinforced Thermoplastic Polyurethane Nanocomposites via <span class="hlt">Taguchi</span> Complemented with ANOVA</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H.; Nassir, Mohamed H.; Al-Amiery, Ahmed A.</p> <p>2016-01-01</p> <p>Halloysite nanotubes-thermoplastic polyurethane (HNTs-TPU) nanocomposites are attractive products due to increasing demands for specialized materials. This study attempts to <span class="hlt">optimize</span> the parameters for injection just before marketing. The study shows the importance of the preparation of the samples and how well these parameters play their roles in the injection. The control parameters for injection are carefully determined to examine the mechanical properties and the density of the HNTs-TPU nanocomposites. Three types of modified HNTs were used: untreated HNTs (uHNTs), sulfuric acid treated (aHNTs) and a combined polyvinyl alcohol (PVA)-sodium dodecyl sulfate (SDS)-malonic acid (MA) treatment (mHNTs). It was found that mHNTs have the most influential effect in producing HNTs-TPU nanocomposites with the best qualities. 
One possible reason for this extraordinary result is the effect of SDS as a disperser and MA as a crosslinker between HNTs and PVA. For the highest tensile strength, the control parameters are demonstrated at 150 °C (injection temperature), 8 bar (injection pressure), 30 °C (mold temperature), 8 min (injection time), 2 wt % (HNTs loading) and mHNT (HNTs type). Meanwhile, the <span class="hlt">optimized</span> combination of the levels for all six control parameters that provide the highest Young’s modulus and highest density was found to be 150 °C (injection temperature), 8 bar (injection pressure), 32 °C (mold temperature), 8 min (injection time), 3 wt % (HNTs loading) and mHNT (HNTs type). For the best tensile strain, the six control parameters are found to be 160 °C (injection temperature), 8 bar (injection pressure), 32 °C (mold temperature), 8 min (injection time), 2 wt % (HNTs loading) and mHNT (HNTs type). For the highest hardness, the best parameters are 140 °C (injection temperature), 6 bar (injection pressure), 30 °C (mold temperature), 8 min (injection time), 2 wt % (HNTs loading) and mHNT (HNTs type). The analyses are carried out by</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28774069','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28774069"><span><span class="hlt">Optimizing</span> Injection Molding Parameters of Different Halloysites Type-Reinforced Thermoplastic Polyurethane Nanocomposites via <span class="hlt">Taguchi</span> Complemented with ANOVA.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gaaz, Tayser Sumer; Sulong, Abu Bakar; Kadhum, Abdul Amir H; Nassir, Mohamed H; Al-Amiery, Ahmed A</p> <p>2016-11-22</p> <p>Halloysite nanotubes-thermoplastic polyurethane (HNTs-TPU) nanocomposites are attractive products due to increasing demands for specialized materials. 
This study attempts to <span class="hlt">optimize</span> the parameters for injection just before marketing. The study shows the importance of the preparation of the samples and how well these parameters play their roles in the injection. The control parameters for injection are carefully determined to examine the mechanical properties and the density of the HNTs-TPU nanocomposites. Three types of modified HNTs were used: untreated HNTs (uHNTs), sulfuric acid treated (aHNTs) and a combined polyvinyl alcohol (PVA)-sodium dodecyl sulfate (SDS)-malonic acid (MA) treatment (mHNTs). It was found that mHNTs have the most influential effect in producing HNTs-TPU nanocomposites with the best qualities. One possible reason for this extraordinary result is the effect of SDS as a disperser and MA as a crosslinker between HNTs and PVA. For the highest tensile strength, the control parameters are demonstrated at 150 °C (injection temperature), 8 bar (injection pressure), 30 °C (mold temperature), 8 min (injection time), 2 wt % (HNTs loading) and mHNT (HNTs type). Meanwhile, the <span class="hlt">optimized</span> combination of the levels for all six control parameters that provides the highest Young's modulus and highest density was found to be 150 °C (injection temperature), 8 bar (injection pressure), 32 °C (mold temperature), 8 min (injection time), 3 wt % (HNTs loading) and mHNT (HNTs type). For the best tensile strain, the six control parameters are found to be 160 °C (injection temperature), 8 bar (injection pressure), 32 °C (mold temperature), 8 min (injection time), 2 wt % (HNTs loading) and mHNT (HNTs type). For the highest hardness, the best parameters are 140 °C (injection temperature), 6 bar (injection pressure), 30 °C (mold temperature), 8 min (injection time), 2 wt % (HNTs loading) and mHNT (HNTs type). 
The analyses are carried</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JIEIC..97..547K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JIEIC..97..547K"><span>Wear Evaluation of AISI 4140 Alloy Steel with WC/C Lamellar Coatings Sliding Against EN 8 Using <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kadam, Nikhil Rajendra; Karthikeyan, Ganesarethinam</p> <p>2016-10-01</p> <p>The purpose of the experiments in this paper is to use <span class="hlt">Taguchi</span> <span class="hlt">methods</span> to investigate the wear of WC/C coated nitrided AISI 4140 alloy steel. Lamellar WC/C coatings were deposited by physical vapor deposition on nitrided AISI 4140 alloy steel. The investigation includes wear evaluation using a pin-on-disk configuration. When WC/C coated AISI 4140 alloy steel slides against EN 8 steel, it was found that carbon-rich coatings show much lower wear of the countersurface than nitrogen-rich coatings. 
The results were correlated with the properties determined from tribological and mechanical characterization; by selecting the proper processing parameters, the deposition of a WC/C coating can therefore decrease the wear rate of the substrate, which shows its potential for tribological applications.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ResPh...9..987G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ResPh...9..987G"><span>Effect of injection parameters on mechanical and physical properties of super ultra-thin wall propylene packaging by <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ginghtong, Thatchanok; Nakpathomkun, Natthapon; Pechyen, Chiravoot</p> <p>2018-06-01</p> <p>The parameters of the plastic injection molding process have been investigated for the manufacture of a 64 oz. ultra-thin polypropylene bucket. Three main parameters (injection speed, melting temperature and holding pressure) were investigated to study their effect on the physical appearance and compressive strength. The orthogonal array of Taguchi's L9 (3^3) was used to carry out the experimental plan. The physical properties were measured and the compressive strength was determined using linear regression analysis. A differential scanning calorimeter (DSC) was used to analyze the crystalline structure of the product. The <span class="hlt">optimization</span> results show that the proposed approach can help engineers identify <span class="hlt">optimal</span> process parameters and achieve competitive advantages in energy consumption and product quality. 
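The packaging entry above determines compressive strength via linear regression on the process settings. The sketch below shows that kind of fit using ordinary least squares; the settings follow an L9-style plan and the responses are generated from an assumed linear relation (y = 100 + 3*speed + 1.5*temp + 0.1*pressure), so none of the numbers are the paper's data.

```python
import numpy as np

# Hypothetical process settings from an L9-style plan:
# columns = injection speed (mm/s), melt temperature (C), holding pressure (kgf).
X = np.array([
    [80, 160, 600], [80, 170, 700], [80, 180, 800],
    [90, 160, 700], [90, 170, 800], [90, 180, 600],
    [100, 160, 800], [100, 170, 600], [100, 180, 700],
], dtype=float)

# Illustrative compressive-strength responses (N), generated from
# y = 100 + 3*speed + 1.5*temp + 0.1*pressure -- not measured data.
y = np.array([640, 665, 690, 680, 705, 700, 720, 715, 740], dtype=float)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(speed, temp, pressure):
    """Predicted compressive strength at a given setting."""
    return float(coef @ np.array([1.0, speed, temp, pressure]))

print("coefficients:", np.round(coef, 3))
print("prediction at (100, 180, 700):", round(predict(100, 180, 700), 1))
```

Because the responses here are exactly linear, the fit recovers the generating coefficients; with real measurements the residuals would quantify how much of the strength variation the linear model explains.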
In addition, the injection molding settings for the product, 24 mm shot stroke, 1.47 mm transfer position, 268 rpm screw speed, 100 mm/s injection speed, 172 ton clamping force, 800 kgf holding pressure, 0.9 s holding time and 1.4 s cooling time, make the shape and proportions of the product satisfactory. The parameters' influences are injection speed (71.07%), melting temperature (23.31%) and holding pressure (5.62%), respectively. The product was able to withstand a compressive load of up to 839 N before deforming plastically. The low melting temperature was caused by the superior crystalline structure of the super-ultra-thin wall product, which leads to a lower compressive strength.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3995665','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3995665"><span>Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>Dyes released into the environment pose a serious threat to natural ecosystems and aquatic life because they are stable under heat, light, chemical and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is meaningful work for environmental protection and maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. 
The biosorption process followed the pseudo-second-order kinetic model. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to reduce the number of experiments needed to determine the significance of factors and the optimum levels of the experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions. PMID:24620852</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_8 --> <div id="page_9" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> 
</div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="161"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24620852','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24620852"><span>Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Zhengsuo; Deng, Hongbo; Chen, Can; Yang, Ying; Xu, Heng</p> <p>2014-03-12</p> <p>Dyes released into the environment pose a serious threat to natural ecosystems and aquatic life because they are stable under heat, light, chemical and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is meaningful work for environmental protection and maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to reduce the number of experiments needed to determine the significance of factors and the optimum levels of the experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. 
Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18754384','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18754384"><span>Determining <span class="hlt">optimal</span> operation parameters for reducing PCDD/F emissions (I-TEQ values) from the iron ore sintering process by using the <span class="hlt">Taguchi</span> experimental design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh</p> <p>2008-07-15</p> <p>This study is the first to use the <span class="hlt">Taguchi</span> experimental design to identify the <span class="hlt">optimal</span> operating condition for reducing polychlorinated dibenzo-p-dioxin and dibenzofuran (PCDD/F) formations during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt %), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (including sinter, hematite, and limonite), were selected for conducting experiments in a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant <span class="hlt">optimal</span> combination (Wc = 6.5 wt%, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% by reference to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt %, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). 
Through the ANOVA analysis, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs) (accounting for 74.7% of the total contribution of the four selected parameters). The resultant <span class="hlt">optimal</span> combination could also slightly enhance both sinter productivity and sinter strength (30.3 t/m2/day and 72.4%, respectively) by reference to those obtained from the reference operating condition (29.9 t/m2/day and 72.2%, respectively). The above results further ensure the applicability of the obtained <span class="hlt">optimal</span> combination for real-scale sinter production without interfering with its sinter productivity and sinter strength.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5758948','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5758948"><span>Modeling and Multiresponse <span class="hlt">Optimization</span> for Anaerobic Codigestion of Oil Refinery Wastewater and Chicken Manure by Using Artificial Neural Network and the <span class="hlt">Taguchi</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hemmat, Abbas; Kafashan, Jalal; Huang, Hongying</p> <p>2017-01-01</p> <p>To study the optimum process conditions for pretreatments and anaerobic codigestion of oil refinery wastewater (ORWW) with chicken manure, Taguchi's L9 (3^4) orthogonal array was applied. The biogas production (BGP), biomethane content (BMP), and chemical oxygen demand solubilization (CODS) stabilization rate were evaluated as the process outputs. The optimum conditions were obtained by using Design Expert software (Version 7.0.0). 
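Percent-contribution figures like the 74.7% quoted for water content above come from an ANOVA decomposition: each factor's between-level sum of squares divided by the total sum of squares. A minimal sketch with invented numbers (two 3-level factors from an L9-style plan, not any study's data):

```python
import numpy as np

# Illustrative level codes (two 3-level factors) and responses -- invented.
levels = np.array([
    [0, 0], [0, 1], [0, 2],
    [1, 0], [1, 1], [1, 2],
    [2, 0], [2, 1], [2, 2],
])
y = np.array([10.0, 12.0, 11.0, 15.0, 17.0, 16.0, 20.0, 22.0, 21.0])

grand = y.mean()
total_ss = ((y - grand) ** 2).sum()

def factor_ss(col):
    """Between-level sum of squares for one factor column."""
    ss = 0.0
    for lvl in range(3):
        grp = y[levels[:, col] == lvl]
        ss += len(grp) * (grp.mean() - grand) ** 2
    return ss

# Percent contribution of each factor to the total variation.
contrib = [100.0 * factor_ss(c) / total_ss for c in range(2)]
print([round(c, 1) for c in contrib])
```

With this invented additive data the first factor dominates; whatever the factor sums of squares fail to cover would be attributed to error or interactions in a full ANOVA table.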
The results indicated that the optimum conditions could be achieved with 44% ORWW, 36°C temperature, 30 min sonication, and 6% TS in the digester. The optimum BGP, BMP, and CODS removal rates under these conditions were 294.76 mL/gVS, 151.95 mL/gVS, and 70.22%, respectively, as concluded from the experimental results. In addition, the artificial neural network (ANN) technique was implemented to develop an ANN model for predicting BGP yield and BMP content. The Levenberg-Marquardt algorithm was utilized to train the ANN, and an architecture of 9-19-2 for the ANN model was obtained. PMID:29441352</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..149a2002K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..149a2002K"><span><span class="hlt">Optimization</span> of Gas Metal Arc Welding Process Parameters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.</p> <p>2016-09-01</p> <p>This study presents the application of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> combined with grey relational analysis to <span class="hlt">optimize</span> the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steel for multiple quality characteristics (bead width, bead height, weld penetration and heat-affected zone). An L9 orthogonal array has been implemented for fabrication of the joints. The experiments have been conducted according to combinations of voltage (V), current (A) and welding speed (Ws). The results revealed that welding speed is the most significant process parameter. By analyzing the grey relational grades, <span class="hlt">optimal</span> parameters are obtained, and the significant factors are identified using ANOVA analysis. 
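The GMAW entry above collapses several quality characteristics into a single grey relational grade per run so that one ranking covers all responses. A compact sketch of that procedure (normalization, grey relational coefficients with distinguishing coefficient 0.5, equal-weight grades); the response matrix is invented and all responses are assumed smaller-the-better for simplicity:

```python
import numpy as np

# Illustrative responses for 4 runs x 3 quality characteristics -- invented.
# All three are treated as smaller-the-better (e.g. bead width, HAZ size).
R = np.array([
    [5.2, 1.1, 3.0],
    [4.8, 1.4, 2.6],
    [5.6, 0.9, 3.3],
    [4.5, 1.2, 2.8],
])

# Normalize each column to [0, 1] with the smaller-the-better convention.
norm = (R.max(axis=0) - R) / (R.max(axis=0) - R.min(axis=0))

# Grey relational coefficient: deviation from the ideal (norm = 1),
# smoothed by the distinguishing coefficient zeta = 0.5.
delta = 1.0 - norm
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade: equal-weight mean across responses; rank the runs.
grade = grc.mean(axis=1)
best_run = int(np.argmax(grade))
print("grades:", grade.round(3), "best run:", best_run)
```

Unequal response weights, when one characteristic matters more, would simply replace the equal-weight mean with a weighted average.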
The welding parameters such as speed, welding current and voltage have been <span class="hlt">optimized</span> for material AISI 1020 using the GMAW process. To confirm the robustness of the experimental design, a confirmation test was performed at the selected <span class="hlt">optimal</span> process parameter setting. Observations from this <span class="hlt">method</span> may be useful for automotive sub-assemblies, shipbuilding and vessel fabricators and operators to obtain <span class="hlt">optimal</span> welding conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27865731','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27865731"><span>Physicochemical characterization, modelling and <span class="hlt">optimization</span> of ultrasono-assisted acid pretreatment of two Pennisetum sp. using <span class="hlt">Taguchi</span> and artificial neural networking for enhanced delignification.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mohapatra, Sonali; Dandapat, Snigdha Jyotsna; Thatoi, Hrudayanath</p> <p>2017-02-01</p> <p>Acid as well as ultrasono-assisted acid pretreatment of lignocellulosic biomass of two Pennisetum sp., Denanath grass (DG) and Hybrid Napier grass (HNG), has been investigated for enhanced delignification and maximum exposure of cellulose for production of bioethanol. Pretreatment with different acids such as H2SO4, HCl, H3PO4 and HNO3 was screened and <span class="hlt">optimized</span> for different temperatures, soaking times and acid concentrations using a <span class="hlt">Taguchi</span> orthogonal array, and the data obtained were statistically validated using artificial neural networking. HCl was found to be the most effective acid for pretreatment of both the Pennisetum sp. 
The <span class="hlt">optimized</span> conditions of HCl pretreatment were acid concentrations of 1% and 1.5%, soaking times of 130 and 50 min, and temperatures of 121 °C and 110 °C, which yielded maximum delignification of 33.0% and 33.8% for DG and HNG, respectively. Further, ultrasono-assisted HCl pretreatment with a power supply of 100 W, a temperature of 353 K, and a duty cycle of 70% resulted in significantly higher delignification, 80.4% and 82.1% for DG and HNG respectively, than acid pretreatment alone. Investigation using SEM, FTIR and autofluorescence microscopy of biomass from both acid and ultrasono-assisted acid pretreatment revealed conformational changes of the pretreated lignocellulosic biomass, with decreased lignin content and increased exposure of cellulose, and with greater effectiveness in the case of the ultrasono-assisted acid pretreatment condition. Copyright © 2016. Published by Elsevier Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..352a2002Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..352a2002Y"><span>Application of <span class="hlt">taguchi</span> <span class="hlt">method</span> for selection parameter bleaching treatments against mechanical and physical properties of agave cantala fiber</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yudhanto, F.; Jamasri; Rochardjo, Heru S. B.</p> <p>2018-05-01</p> <p>The agave cantala fiber characterized in this research, which came from Sumenep, Madura, Indonesia, was chemically processed using sodium hydroxide (NaOH) and hydrogen peroxide (H2O2) solutions. The treatment with both solutions is called the bleaching process. 
Single-fiber tensile strength tests were used to obtain the mechanical properties for selecting among the process parameters, temperature, pH and H2O2 concentration, arranged in an L9 orthogonal array by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. The results indicate that pH is the most significant parameter influencing the tensile strength, followed by temperature and H2O2 concentration. The bleaching treatment increased the crystallinity index of the fiber by 21%. The loss of the hemicellulose and lignin layers of the fiber can be seen from the changes in the FTIR bands at 1735 (C=O), 1627 (OH), 1319 (CH2) and 1250 (C-O). SEM photographs showed that bleached fibers are rougher and cleaner than untreated fibers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26719136','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26719136"><span>Improving the Glucose Meter Error Grid With the <span class="hlt">Taguchi</span> Loss Function.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Krouwer, Jan S</p> <p>2016-07-01</p> <p>Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics such as mean absolute relative deviation (MARD) are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the <span class="hlt">Taguchi</span> loss function. Applying the <span class="hlt">Taguchi</span> loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. 
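The glucose-meter entry above scales each meter-versus-reference error to a loss between 0 (no error) and 1 (at the A-zone edge) and averages over all readings. The sketch below assumes a quadratic loss and a 15 mg/dL (below 100 mg/dL) or 15% A-zone half-width; both assumptions are illustrative, not taken from the paper, as are the paired readings.

```python
def taguchi_loss(measured, reference, limit_abs=15.0, limit_rel=0.15):
    """Quadratic Taguchi-style loss scaled to [0, 1] at the A-zone edge.
    The A-zone half-width (15 mg/dL below 100 mg/dL, else 15% of the
    reference) is an assumption for illustration."""
    half_width = limit_abs if reference < 100 else limit_rel * reference
    ratio = abs(measured - reference) / half_width
    return min(1.0, ratio * ratio)

# Average loss over paired meter/reference readings (mg/dL) -- invented data.
pairs = [(95, 100), (118, 110), (160, 150), (210, 200)]
avg_loss = sum(taguchi_loss(m, r) for m, r in pairs) / len(pairs)
print(round(avg_loss, 3))
```

Unlike a flat "in A zone / not in A zone" count, the average loss still distinguishes a meter whose errors hug the zone edge from one whose errors cluster near zero.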
This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..342a2006A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..342a2006A"><span>Costing improvement of remanufacturing crankshaft by integrating Mahalanobis-<span class="hlt">Taguchi</span> System and Activity based Costing</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abu, M. Y.; Nor, E. E. Mohd; Rahman, M. S. Abd</p> <p>2018-04-01</p> <p>Integration between the quality and costing systems is crucial in order to achieve an accurate product cost and profit. In current practice, most remanufacturers still lack optimization during the remanufacturing process, which contributes to incorrect variable consideration in the costing system. Meanwhile, the traditional cost accounting being practiced distorts unit costs, leading to inaccurate product costs. The aim of this work is to identify the critical and non-critical variables during the remanufacturing process using the Mahalanobis-<span class="hlt">Taguchi</span> System and simultaneously estimate the cost using the Activity Based Costing <span class="hlt">method</span>. An orthogonal array was applied to indicate the contribution of variables in the factorial effect graph, and the critical variables were considered together with the overhead costs of the activities that actually demand them. This work improves quality inspection together with the costing system to produce accurate profitability information. 
As a result, the cost per unit of a remanufactured crankshaft for the MAN engine model with 5 critical crankpins is MYR609.50, while for the Detroit engine model with 4 critical crankpins it is MYR1254.80. The significance of this output is its promotion of green practice: reducing the re-melting of damaged parts ensures a consistent benefit from returned cores.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JIEI....9....1B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JIEI....9....1B"><span>A neuro-data envelopment analysis approach for <span class="hlt">optimization</span> of uncorrelated multiple response problems with smaller the better type controllable factors</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bashiri, Mahdi; Farshbaf-Geranmayeh, Amir; Mogouie, Hamed</p> <p>2013-11-01</p> <p>In this paper, a new <span class="hlt">method</span> is proposed to <span class="hlt">optimize</span> a multi-response <span class="hlt">optimization</span> problem based on the <span class="hlt">Taguchi</span> <span class="hlt">method</span> for processes where the controllable factors are smaller-the-better (STB)-type variables and the analyst desires an <span class="hlt">optimal</span> solution with smaller amounts of the controllable factors. In such processes, the overall output quality of the product should be maximized while the usage of the process inputs, the controllable factors, should be minimized. Since not all possible combinations of factor levels are considered in the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, the response values of the unpracticed treatments are estimated using an artificial neural network (ANN). The neural network is tuned by central composite design (CCD) and a genetic algorithm (GA).
Then data envelopment analysis (DEA) is applied to determine the efficiency of each treatment. An important issue in implementing DEA is its philosophy, maximization of outputs versus minimization of inputs, which has been neglected in previous similar studies of multi-response problems. Finally, the most efficient treatment is determined using the maximin weight model approach. The performance of the proposed <span class="hlt">method</span> is verified in a plastic molding process. Moreover, a sensitivity analysis has been carried out with an efficiency-estimator neural network. The results show the efficiency of the proposed approach.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..184a2018M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..184a2018M"><span>Abrasive wear response of TIG-melted TiC composite coating: <span class="hlt">Taguchi</span> approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Dube, A.</p> <p>2017-03-01</p> <p>In this study, the <span class="hlt">Taguchi</span> design of experiment approach has been applied to assess the wear behaviour of TiC composite coatings deposited on AISI 4340 steel substrates by novel powder preplacement and TIG torch melting processes. To study the abrasive wear behaviour of these coatings against an alumina ball at 600 °C, a Taguchi orthogonal array is used to acquire the wear test data for determining <span class="hlt">optimal</span> parameters that minimize the wear rate. Composite coatings are developed based on a Taguchi L-16 orthogonal array experiment with four process parameters (welding current, welding speed, welding voltage and shielding gas flow rate) at four levels.
In this technique, mean response and signal-to-noise ratio are used to evaluate the influence of the TIG process parameters on the wear rate of the composite-coated surfaces. The results reveal that welding voltage is the most significant control parameter for minimizing the wear rate, while the current contributes least to wear rate reduction. The study also shows that the <span class="hlt">optimal</span> condition is A3 (90 A), B4 (2.5 mm/s), C3 (30 V) and D3 (20 L/min), which gives the minimum wear rate in TiC-embedded coatings. Finally, a confirmatory experiment has been conducted to verify the <span class="hlt">optimized</span> result; the error between the predicted values and the experimental observation at the <span class="hlt">optimal</span> condition lies within 4.7%. Thus, the validity of the optimum condition for the coatings is established.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AIPC.1315..993G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AIPC.1315..993G"><span>Application of <span class="hlt">Taguchi</span> <span class="hlt">Method</span> for Analyzing Factors Affecting the Performance of Coated Carbide Tool When Turning FCD700 in Dry Cutting Condition</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghani, Jaharah A.; Mohd Rodzi, Mohd Nor Azmi; Zaki Nuawi, Mohd; Othman, Kamal; Rahman, Mohd. Nizam Ab.; Haron, Che Hassan Che; Deros, Baba Md</p> <p>2011-01-01</p> <p>Machining is one of the most important manufacturing processes in modern industries, especially for finishing an automotive component after primary manufacturing processes such as casting and forging.
In this study, turning under different dry cutting environments (without air, normal air and chilled air) at various cutting speeds and feed rates is evaluated using a <span class="hlt">Taguchi</span> <span class="hlt">optimization</span> methodology. An L27 (3^13) orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to analyze the effect of these turning parameters on the performance of a coated carbide tool. The results show that tool life is affected by the cutting speed, feed rate and cutting environment with contributions of 38%, 32% and 27%, respectively. For surface roughness, the feed rate dominates with a 77% contribution, followed by the cutting environment at 19%; the cutting speed is found to be insignificant in controlling the machined surface produced. The study shows that the cutting environment should be considered in order to obtain longer tool life as well as a good machined surface.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28390015','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28390015"><span>Approaches towards the enhanced production of Rapamycin by Streptomyces hygroscopicus MTCC 4003 through mutagenesis and <span class="hlt">optimization</span> of process parameters by <span class="hlt">Taguchi</span> orthogonal array methodology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dutta, Subhasish; Basak, Bikram; Bhunia, Biswanath; Sinha, Ankan; Dey, Apurba</p> <p>2017-05-01</p> <p>The present research was conducted to define the approaches for enhanced production of rapamycin (Rap) by Streptomyces hygroscopicus microbial type culture collection (MTCC) 4003.
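Several of the studies above (the TIG coating and dry-turning experiments) rank factors using the smaller-the-better signal-to-noise ratio, SN = -10·log10(mean(y²)), where a larger SN is better. A minimal sketch with made-up wear measurements, purely illustrative:

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    SN = -10 * log10(mean(y_i^2)). Larger SN means less wear/roughness."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical replicate measurements (e.g. flank wear in mm) for two trials
trial_a = [0.10, 0.12, 0.11]
trial_b = [0.20, 0.18, 0.22]
print(sn_smaller_is_better(trial_a))  # larger SN: the better trial
print(sn_smaller_is_better(trial_b))
```

Averaging these SN values per factor level, as in the abstracts above, identifies the level with the highest mean SN as the preferred setting for that factor.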
Both physical mutagenesis by ultraviolet (UV) irradiation and chemical mutagenesis by N-methyl-N'-nitro-N-nitrosoguanidine (NTG) were applied successfully to improve Rap production. Enhancing the Rap yield by a sequential UV mutagenesis technique followed by fermentation makes a significant difference in obtaining an economically scalable amount of this industrially important macrolide compound. The mutant obtained through NTG mutagenesis (NTG-30-27) was found to be superior to the others, initially producing 67% more Rap than the wild type. Statistical <span class="hlt">optimization</span> of nutritional and physiochemical parameters was carried out using the <span class="hlt">Taguchi</span> orthogonal array approach to find the factors most responsible for the enhanced Rap yield of NTG-30-27. Around 72% enhanced production was achieved with the nutritional factors at their assigned levels at 23 °C, 120 rpm and pH 7.6. Results were analysed in triplicate, and validation and purification were carried out using high-performance liquid chromatography.
The stability and potency of the extracted Rap were confirmed by a turbidimetric assay using Candida albicans MTCC 227 as the test organism.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920012025','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920012025"><span>An Exploratory Exercise in <span class="hlt">Taguchi</span> Analysis of Design Parameters: Application to a Shuttle-to-space Station Automated Approach Control System</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Deal, Don E.</p> <p>1991-01-01</p> <p>The chief goals of the summer project have been twofold: first, for my host group and myself to learn as much of the working details of <span class="hlt">Taguchi</span> analysis as possible in the time allotted, and, secondly, to apply the methodology to a design problem with the intention of establishing a preliminary set of near-<span class="hlt">optimal</span> (in the sense of producing a desired response) design parameter values from among a large number of candidate factor combinations. The selected problem is concerned with determining design factor settings for an automated approach program which is to have the capability of guiding the Shuttle into the docking port of the Space Station under controlled conditions so as to meet and/or <span class="hlt">optimize</span> certain target criteria. The candidate design parameters under study were glide path (i.e., approach) angle, path intercept and approach gains, and minimum impulse bit mode (a parameter which defines how Shuttle jets shall be fired).
Several performance criteria were of concern: terminal relative velocity at the instant the two spacecraft are mated; docking offset; number of Shuttle jet firings in certain specified directions (of interest due to possible plume impingement on the Station's solar arrays), and total RCS (a measure of the energy expended in performing the approach/docking maneuver). In the material discussed here, we have focused on a single performance criterion: total RCS. An analysis of the possibility of employing a multiobjective function composed of a weighted sum of the various individual criteria has been undertaken, but is, at this writing, incomplete. Results from the <span class="hlt">Taguchi</span> statistical analysis indicate that only three of the original four posited factors are significant in affecting RCS response. A comparison of model simulation output (via Monte Carlo) with predictions based on estimated factor effects inferred through the <span class="hlt">Taguchi</span> experiment array data suggested acceptable or close agreement between the two except at the predicted optimum</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016E%26ES...36a2049W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016E%26ES...36a2049W"><span>2-[(Hydroxymethyl)amino]ethanol in water as a preservative: Study of formaldehyde released by <span class="hlt">Taguchi</span>'s <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wisessirikul, W.; Loykulnant, S.; Montha, S.; Fhulua, T.; Prapainainar, P.</p> <p>2016-06-01</p> <p>This research studied the quantity of free formaldehyde released from 2-[(hydroxymethyl)amino]ethanol (HAE) in DI water and natural rubber latex mixture using the high-performance liquid chromatography (HPLC) technique.
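The Shuttle study above compares Monte Carlo simulation output against predictions built from estimated factor effects. In a Taguchi analysis such predictions typically come from an additive model: grand mean plus the per-factor level effects. A toy sketch with a hypothetical L4(2^3) array and invented responses, not the study's data:

```python
from collections import defaultdict

def main_effects(runs, responses):
    """Estimate per-(factor, level) effects from orthogonal-array data.
    runs: list of tuples of level indices; responses: list of floats.
    Returns (grand mean, {(factor, level): level mean - grand mean})."""
    grand = sum(responses) / len(responses)
    buckets = defaultdict(list)
    for levels, y in zip(runs, responses):
        for f, lvl in enumerate(levels):
            buckets[(f, lvl)].append(y)
    return grand, {k: sum(v) / len(v) - grand for k, v in buckets.items()}

def predict(combo, grand, effects):
    """Additive-model prediction: grand mean plus summed level effects."""
    return grand + sum(effects[(f, lvl)] for f, lvl in enumerate(combo))

# Toy L4(2^3) array with made-up RCS-like responses
runs = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
resp = [10.0, 14.0, 12.0, 16.0]
g, eff = main_effects(runs, resp)
print(predict((0, 0, 0), g, eff))
```

When interactions matter, as the abstract's disagreement "at the predicted optimum" hints, this additive prediction degrades, which is exactly what a confirmation run is meant to catch.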
The quantity of formaldehyde retained in the solution was cross-checked using a titration technique. The investigated factors were the concentration of the preservative (HAE), pH, and temperature. <span class="hlt">Taguchi</span>'s <span class="hlt">method</span> was used to design the experiments: an orthogonal array (3 factors at 4 levels each) reduced the number of experiments from all 64 possible combinations to 16. The Minitab program was used as a tool for statistical calculation and for finding a suitable condition for the preservative system. HPLC studies showed that higher temperature and higher preservative concentration increase the amount of formaldehyde released. The lowest formaldehyde release was found at 1.6% w/v HAE, 4 to 40 °C, and the original pH. Nevertheless, the pH value of NR latex should be above 10 (a suitable pH value was found to be 13). This preservative can replace current preservative systems and can maintain the quality of latex in long-term storage. The proposed preservative system was also shown to have a reduced environmental toxicity impact.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..197a2003A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..197a2003A"><span><span class="hlt">Optimization</span> of Profile and Material of Abrasive Water Jet Nozzle</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anand Bala Selwin, K. P.; Ramachandran, S.</p> <p>2017-05-01</p> <p>The objective of this work is to study the behaviour of the abrasive water jet nozzle with different profiles and materials.
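The preservative study above uses an orthogonal array to cut the 64 possible combinations of 3 four-level factors down to 16 runs. One classical construction of such an array (a generic OA(16, 3, 4, 2), not necessarily the array Minitab generated) uses mod-4 addition of two index columns:

```python
from itertools import product

def oa16_4level():
    """Build a 16-run orthogonal array for 3 four-level factors:
    columns (a, b, (a + b) % 4). Any two columns contain each of the
    16 possible level pairs exactly once (strength-2 orthogonality)."""
    return [(a, b, (a + b) % 4) for a, b in product(range(4), repeat=2)]

runs = oa16_4level()
print(runs[:4])  # first four runs, factor A held at level 0
```

Strength-2 orthogonality is what lets the main effect of each factor be estimated independently: every level of one factor is paired equally often with every level of any other.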
<span class="hlt">Taguchi</span>-Grey relational analysis is used to <span class="hlt">optimize</span> the nozzle across different materials and profiles. Initially, 3D models of the nozzle are built with different profiles by changing the tapered inlet angle of the nozzle. The different profile models are analysed with different materials and the results are <span class="hlt">optimized</span>. The <span class="hlt">optimized</span> result identifies the profile and material with the better wear and machining behaviour of the nozzle.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ApSS..422..787M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ApSS..422..787M"><span>Fabrication of flower-like micro/nano dual scale structured copper oxide surfaces: <span class="hlt">Optimization</span> of self-cleaning properties via <span class="hlt">Taguchi</span> design</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Moosavi, Saeideh Sadat; Norouzbeigi, Reza; Velayi, Elmira</p> <p>2017-11-01</p> <p>In the present work, a copper oxide superhydrophobic surface is fabricated on a copper foil via the chemical bath deposition (CBD) <span class="hlt">method</span>. The effects of influential factors such as the initial concentrations of Cu (II) ions and the surface energy modifier, solution pH, and the reaction and modification step times on the wettability of the copper oxide surface were evaluated using a <span class="hlt">Taguchi</span> L16 experimental design. Results showed that the initial concentration of Cu (II) has the most significant impact on the water contact angle and wettability characteristics. The XRD, SEM, AFM and FTIR analyses were used to characterize the copper oxide surfaces. The water contact angle (WCA) and contact angle hysteresis (CAH) were also measured.
The SEM results indicated the formation of a flower-like micro/nano dual-scale structure of copper oxide on the substrate, composed of numerous nano-petals with a thickness of about 50 nm. As a result, a copper oxide hierarchical surface with a WCA of 168.4° ± 3.5° and a CAH of 2.73° exhibited the best superhydrophobicity under the proposed optimum condition, obtained with only a 10 min hydrolysis reaction. This surface also showed good stability under acidic and saline conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28886524','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28886524"><span>Simultaneous quantification of arginine, alanine, methionine and cysteine amino acids in supplements using a novel bioelectro-nanosensor based on CdSe quantum dot/modified carbon nanotube hollow fiber pencil graphite electrode via <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hooshmand, Sara; Es'haghi, Zarrin</p> <p>2017-11-30</p> <p>Four amino acids have been simultaneously determined at a CdSe quantum dot-modified/multi-walled carbon nanotube hollow fiber pencil graphite electrode in different bodybuilding supplements. CdSe quantum dots were synthesized and applied to construct the modified carbon nanotube hollow fiber pencil graphite electrode. FT-IR, TEM, XRD and EDAX <span class="hlt">methods</span> were applied for characterization of the synthesized CdSe QDs. The electro-oxidation of arginine (Arg), alanine (Ala), methionine (Met) and cysteine (Cys) at the surface of the modified electrode was studied. <span class="hlt">Taguchi</span>'s <span class="hlt">method</span> was then applied using MINITAB 17 software to find the optimum conditions for the amino acid determination.
Under the <span class="hlt">optimized</span> conditions, the differential pulse (DP) voltammetric peak currents of Arg, Ala, Met and Cys increased linearly with their concentrations in the range of 0.287-33670 μM, and detection limits of 0.081, 0.158, 0.094 and 0.116 μM were obtained, respectively. Satisfactory results were achieved for the calibration and validation sets. The prepared modified electrode gives very good resolution between the voltammetric peaks of the four amino acids, which makes it suitable for detecting each in the presence of the others in real samples. Copyright © 2017. Published by Elsevier B.V.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5690196','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5690196"><span>Quantification of dental prostheses on cone‐beam CT images by the <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kuo, Rong‐Fu; Fang, Kwang‐Ming; TY, Wong</p> <p>2016-01-01</p> <p>The gray-value accuracy of dental cone-beam computed tomography (CBCT) is affected by dental metal prostheses. The distortion of dental CBCT gray values could lead to inaccuracies in orthodontic and implant treatment. The aim of this study was to quantify the effect of scanning parameters and dental metal prostheses on the accuracy of dental CBCT gray values using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. Eight dental model casts of an upper jaw including prostheses, and a ninth prosthesis-free dental model cast, were scanned by two dental CBCT devices.
The mean gray values of selected circular regions of interest (ROIs) were measured on the dental CBCT images of the eight dental model casts and compared with those measured on CBCT images of the prosthesis-free dental model cast. For each image set, four consecutive slices of gingiva were selected. Seven factors (CBCT device, occlusal plane canting, implant connection, prosthesis position, coping material, coping thickness, and type of dental restoration) were used to evaluate the effects of scanning parameters and dental prostheses. The statistical <span class="hlt">methods</span> of signal-to-noise ratio (S/N) and analysis of variance (ANOVA) at 95% confidence were applied to quantify the effects of scanning parameters and dental prostheses on the accuracy of dental CBCT gray values. For ROIs surrounding dental prostheses, the accuracy of CBCT gray values was affected primarily by implant connection (42%), followed by type of restoration (29%), prosthesis position (19%), coping material (4%), and coping thickness (4%). For a single crown prosthesis (without implant support) placed in the dental model casts, gray value differences for ROIs 1–9 were below 12% and gray value differences for ROIs 13–18, away from the prostheses, were below 10%.
We found gray value differences between 7% and 8% for regions next to a single implant‐supported titanium prosthesis, and between 46% and 59% for regions between double implant</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JIEI....9...18M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JIEI....9...18M"><span>Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal</p> <p>2013-07-01</p> <p>The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are <span class="hlt">optimized</span> by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>.
Although the defects are substantially reduced by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, a genetic algorithm is applied to the <span class="hlt">optimized</span> parameters obtained by the <span class="hlt">Taguchi</span> <span class="hlt">method</span> in order to achieve zero defects during the processes.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MRE.....4i5301K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MRE.....4i5301K"><span>Multi-response <span class="hlt">optimization</span> of T300/epoxy prepreg tape-wound cylinder by grey relational analysis coupled with the response surface <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kang, Chao; Shi, Yaoyao; He, Xiaodong; Yu, Tao; Deng, Bo; Zhang, Hongji; Sun, Pengcheng; Zhang, Wenbin</p> <p>2017-09-01</p> <p>This study investigates the multi-objective <span class="hlt">optimization</span> of quality characteristics for a T300/epoxy prepreg tape-wound cylinder. The <span class="hlt">method</span> integrates the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, grey relational analysis (GRA) and response surface methodology, and is adopted to improve tensile strength and reduce residual stress. In the winding process, the main process parameters, winding tension, pressure, temperature and speed, are selected to evaluate the parametric influences on tensile strength and residual stress. Experiments are conducted using the Box-Behnken design. Based on principal component analysis, the grey relational grades are properly established to convert the multiple responses into an individual objective problem. Then the response surface <span class="hlt">method</span> is used to build a second-order model of the grey relational grade and predict the optimum parameters.
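Grey relational analysis, used in the winding study above, collapses several responses into one grade: normalize each response toward its ideal, compute grey relational coefficients with a distinguishing coefficient ζ (commonly 0.5), and average them. A minimal sketch with invented tensile-strength and residual-stress values (it assumes each response varies across experiments, so the min-max normalization is well defined):

```python
def grey_relational_grades(rows, larger_better, zeta=0.5):
    """rows: one response vector per experiment; larger_better: per-response
    flags (True = larger is better). Returns one grade per experiment."""
    cols = list(zip(*rows))
    norm = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        # normalize toward the ideal value 1.0 (assumes hi > lo)
        norm.append([(v - lo) / (hi - lo) if lb else (hi - v) / (hi - lo)
                     for v in col])
    # deviation from the ideal, then grey relational coefficient;
    # after normalization delta_min = 0 and delta_max = 1
    coeffs = [[zeta / ((1.0 - v) + zeta) for v in col] for col in norm]
    n_resp = len(cols)
    return [sum(coeffs[j][i] for j in range(n_resp)) / n_resp
            for i in range(len(rows))]

# Invented (tensile strength MPa, residual stress MPa) per experiment
grades = grey_relational_grades(
    [(800.0, 30.0), (900.0, 20.0), (850.0, 25.0)], [True, False])
print(grades)
```

Here the second experiment dominates both responses, so it receives the highest grade; equal weights are used, whereas the study above derives weights from principal component analysis.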
The predictive accuracy of the developed model is proved by two test experiments with a low prediction error of less than 7%. The following process parameters, namely winding tension 124.29 N, pressure 2000 N, temperature 40 °C and speed 10.65 rpm, have the highest grey relational grade and give better quality characteristics in terms of tensile strength and residual stress. The confirmation experiment shows that better results are obtained with GRA improved by the proposed <span class="hlt">method</span> than with ordinary GRA. The proposed <span class="hlt">method</span> is proved to be feasible and can be applied to <span class="hlt">optimize</span> the multi-objective problem in the filament winding process.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_9 --> <div id="page_10" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="181"> <li> <p><a target="_blank" 
onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..319a2035H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..319a2035H"><span><span class="hlt">Optimization</span> of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using <span class="hlt">Taguchi</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir</p> <p>2018-03-01</p> <p>Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localized deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability, and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate, and step size. The effects of wall angle, feed rate, and step size on the surface roughness and thickness uniformity of the aluminium sheet were also investigated. From the results, it was observed that surface roughness and thickness uniformity vary inversely due to the formation of surface waviness. Increasing the feed rate and decreasing the step size produce a lower surface roughness, while a uniform thickness reduction is obtained by reducing the wall angle and step size. Using <span class="hlt">Taguchi</span> analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of the aluminium sheet were determined.
The findings of this project help to reduce the time needed to optimise the surface roughness and thickness uniformity in incremental sheet forming.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..295a2011Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..295a2011Z"><span>Experimental Research and Mathematical Modeling of Parameters Effecting on Cutting Force and SurfaceRoughness in CNC Turning Process</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zeqiri, F.; Alkan, M.; Kaya, B.; Toros, S.</p> <p>2018-01-01</p> <p>In this paper, the effects of cutting parameters on cutting forces and surface roughness are determined based on the <span class="hlt">Taguchi</span> experimental design <span class="hlt">method</span>. A <span class="hlt">Taguchi</span> L9 orthogonal array is used to investigate the effects of the machining parameters. <span class="hlt">Optimal</span> cutting conditions are determined using the signal-to-noise (S/N) ratio, which is calculated from average surface roughness and cutting force. Using the results of the analysis, the effects of the parameters on both average surface roughness and cutting forces are calculated in Minitab 17 using the ANOVA <span class="hlt">method</span>. The investigated material is Inconel 625, in two conditions: with and without heat treatment. The predicted values and the measured values are very close to each other.
A confirmation test showed that the <span class="hlt">Taguchi</span> <span class="hlt">method</span> was very successful in the <span class="hlt">optimization</span> of machining parameters for surface roughness and cutting forces in the CNC turning process.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4227375','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4227375"><span><span class="hlt">Optimization</span> of Biosorptive Removal of Dye from Aqueous System by Cone Shell of Calabrian Pine</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Deniz, Fatih</p> <p>2014-01-01</p> <p>The biosorption performance of raw cone shell of Calabrian pine for C.I. Basic Red 46, a model azo dye, in an aqueous system was <span class="hlt">optimized</span> using the <span class="hlt">Taguchi</span> experimental design methodology. An L9 (3^3) orthogonal array was used to <span class="hlt">optimize</span> the dye biosorption by the pine cone shell. The selected factors and their levels were biosorbent particle size, dye concentration, and contact time. The dye biosorption capacity of the pine cone shell predicted from the <span class="hlt">Taguchi</span> design was 71.770 mg g−1 under the <span class="hlt">optimized</span> biosorption conditions. This experimental design provided reasonable predictive performance of dye biosorption by the biosorbent (R2: 0.9961). The Langmuir model fitted the biosorption equilibrium data better than the Freundlich model, indicating monolayer coverage of dye molecules on the biosorbent surface. The Dubinin-Radushkevich model and the standard Gibbs free energy change indicated physical biosorption as the predominant mechanism. The logistic function presented the best fit to the biosorption kinetics data.
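A Langmuir fit like the one reported above is commonly done through the linearization C/q = C/qm + 1/(qm·b), fitted by ordinary least squares. A sketch on synthetic data, with qm chosen near the capacity reported above (the study's real equilibrium data are not given here):

```python
def fit_langmuir(c_eq, q_eq):
    """Fit the linearized Langmuir isotherm C/q = C/qm + 1/(qm*b)
    by ordinary least squares; returns (qm, b)."""
    xs = c_eq
    ys = [c / q for c, q in zip(c_eq, q_eq)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qm = 1.0 / slope          # slope = 1/qm
    b = slope / intercept     # intercept = 1/(qm*b)
    return qm, b

# Synthetic equilibrium data generated from qm = 72 mg/g, b = 0.2 L/mg
qm0, b0 = 72.0, 0.2
cs = [5.0, 10.0, 20.0, 40.0, 80.0]
qs = [qm0 * b0 * c / (1.0 + b0 * c) for c in cs]
qm, b = fit_langmuir(cs, qs)
print(round(qm, 1), round(b, 3))
```

With noise-free data the fit recovers the generating parameters exactly, which makes this a convenient self-check before applying the routine to measured isotherm points.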
The kinetic parameters reflecting biosorption performance were also evaluated. The <span class="hlt">optimization</span> study revealed that the pine cone shell can be an effective and economically feasible biosorbent for the removal of dye. PMID:25405213</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27343435','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27343435"><span>Effect of olive mill waste addition on the properties of porous fired clay bricks using <span class="hlt">Taguchi</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sutcu, Mucahit; Ozturk, Savas; Yalamac, Emre; Gencel, Osman</p> <p>2016-10-01</p> <p>Production of porous clay bricks lightened by adding olive mill waste as a pore-making additive was investigated. Factors influencing the brick manufacturing process were analyzed by an experimental design, the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, to find the most favorable conditions for the production of bricks. The optimum process conditions for brick preparation were investigated by studying the effects of mixture ratio (0, 5 and 10 wt%) and firing temperature (850, 950 and 1050 °C) on the physical, thermal and mechanical properties of the bricks. Apparent density, bulk density, apparent porosity, water absorption, compressive strength, thermal conductivity, microstructure and crystalline phase formation of the fired brick samples were measured. It was found that a 10% waste addition reduced the bulk density of the samples to 1.45 g/cm3. As the porosity increased from 30.8 to 47.0%, the compressive strength decreased from 36.9 to 10.26 MPa at a firing temperature of 950 °C.
The thermal conductivities of samples fired at the same temperature showed a decrease of 31% from 0.638 to 0.436 W/mK, which is promising for heat insulation in buildings. Increasing the firing temperature also affected the mechanical and physical properties of the bricks. This study showed that olive mill waste can be used as a pore maker in brick production. Copyright © 2016 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2663602','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2663602"><span>Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex <span class="hlt">Optimization</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.</p> <p>2008-01-01</p> <p>The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and <span class="hlt">optimize</span> new nanoparticles, experimental design was performed combining <span class="hlt">Taguchi</span> array and sequential simplex <span class="hlt">optimization</span>. The combination of <span class="hlt">Taguchi</span> array and sequential simplex <span class="hlt">optimization</span> efficiently directed the design of paclitaxel nanoparticles. Two <span class="hlt">optimized</span> paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78 and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). 
Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes below 200 nm and entrapment efficiencies above 85%. These novel paclitaxel nanoparticles were physically stable at 4°C for three months and in PBS at 37°C for 102 hours. Release of paclitaxel was slow and sustained without initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have anticancer activities similar to those of Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder, comprising only PX BTM NPs, could be rapidly rehydrated in water with complete retention of original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential Simplex <span class="hlt">Optimization</span> has been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016RaSc...51.1377L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016RaSc...51.1377L"><span>Comparison of evolutionary algorithms for LPDA antenna <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.</p> <p>2016-08-01</p> <p>A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the <span class="hlt">optimal</span> design of wire log-periodic dipole arrays (LPDA) using Numerical Electromagnetics Code. 
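One of the evolutionary algorithms compared in studies of this kind, differential evolution, can be sketched in its basic DE/rand/1/bin form. The objective below is a stand-in test function, not the NEC-based antenna cost, and all parameter values are illustrative:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           iters=200, seed=1):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target index i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Toy objective standing in for the antenna cost (e.g., a gain-flatness penalty).
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
print(f_best)
```

The same loop structure (mutate, cross over, greedily select) underlies the other population-based methods compared here; they differ mainly in how the trial vectors are generated.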
The target is to achieve an <span class="hlt">optimal</span> antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The LPDA parameters <span class="hlt">optimized</span> are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are Differential Evolution (DE), Particle Swarm Optimization (PSO), <span class="hlt">Taguchi</span>, Invasive Weed Optimization (IWO), and Adaptive Invasive Weed <span class="hlt">Optimization</span> (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..310a2154V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..310a2154V"><span>Application of dragonfly algorithm for <span class="hlt">optimal</span> performance analysis of process parameters in turn-mill operations- A case study</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT</p> <p>2018-02-01</p> <p>Meta-heuristic multi-response <span class="hlt">optimization</span> <span class="hlt">methods</span> are widely used to solve multi-objective problems and obtain Pareto <span class="hlt">optimal</span> solutions. This work focuses on <span class="hlt">optimal</span> multi-response evaluation of process parameters in generating responses like surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) while performing operations like tangential and orthogonal turn-mill processes on an A-axis Computer Numerical Control vertical milling center. 
Tool speed, feed rate and depth of cut are considered as process parameters for machining brass under dry conditions with high-speed steel end milling cutters, using a <span class="hlt">Taguchi</span> design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to <span class="hlt">optimize</span> the objectives ‘Ra’, ‘H’ and ‘Vib’ and identify the <span class="hlt">optimal</span> multi-response process parameter combination. Later, the results obtained from the multi-objective dragonfly algorithm (MODA) are compared with another multi-response <span class="hlt">optimization</span> technique, viz. grey relational analysis (GRA).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPCS..110..409K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPCS..110..409K"><span>Tribological behaviour predictions of r-GO reinforced Mg composite using ANN coupled <span class="hlt">Taguchi</span> approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kavimani, V.; Prakash, K. Soorya</p> <p>2017-11-01</p> <p>This paper deals with the fabrication of reduced graphene oxide (r-GO) reinforced Magnesium Metal Matrix Composite (MMC) through a novel solvent-based powder metallurgy route. Investigations of the basic and functional properties of the developed MMC reveal that the addition of r-GO improves the microhardness up to 64 HV; however, a decrement in specific wear rate is also noted. SEM images of the worn-out surfaces clearly show the occurrence of plastic deformation and the presence of wear debris caused by ploughing action. 
A <span class="hlt">Taguchi</span>-coupled Artificial Neural Network (ANN) technique is adopted to arrive at <span class="hlt">optimal</span> values of the input parameters, namely load, reinforcement weight percentage, sliding distance and sliding velocity, and thereby minimize the target output, viz. the specific wear rate. ANOVA on the influence of each input parameter over the specific wear rate reveals that the load acting on the pin has the major influence (38.85%), followed by r-GO wt.% (25.82%). The ANN model developed to predict the specific wear rate from the input parameters gives better predictability, with an R-value of 98.4%, than the regression model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26652099','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26652099"><span>Workspace design for crane cabins applying a combined traditional approach and the <span class="hlt">Taguchi</span> <span class="hlt">method</span> for design of experiments.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Spasojević Brkić, Vesna K; Veljković, Zorica A; Golubović, Tamara; Brkić, Aleksandar Dj; Kosić Šotić, Ivana</p> <p>2016-01-01</p> <p>Procedures in the development process of crane cabins are arbitrary and subjective. Since approximately 42% of incidents in the construction industry are linked to them, there is a need to collect fresh anthropometric data and provide additional recommendations for design. In this paper, dimensioning of the crane cabin interior space was carried out using a sample of 64 crane operators' anthropometric measurements, in the Republic of Serbia, by describing the workspace with 10 parameters using nine anthropometric measurements from each crane operator. 
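ANOVA percent contributions like those quoted above for the r-GO composite (38.85% for load, 25.82% for r-GO content) are each factor's share of the total sum of squares. A minimal sketch; the two leading sums of squares are chosen so the shares echo those figures, while the remaining terms are hypothetical:

```python
# Taguchi-style ANOVA percent contribution: each factor's sum of squares
# divided by the total (factors plus error). Numbers are illustrative only.
def percent_contributions(ss_factors, ss_error):
    ss_total = sum(ss_factors.values()) + ss_error
    return {name: 100.0 * ss / ss_total for name, ss in ss_factors.items()}

ss = {"load": 38.85, "r-GO wt.%": 25.82,
      "sliding distance": 18.00, "sliding velocity": 10.00}
contrib = percent_contributions(ss, ss_error=7.33)
print(max(contrib, key=contrib.get))  # dominant factor: load
```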
This paper applies experiments run via full factorial designs, using a combined traditional and <span class="hlt">Taguchi</span> approach. The experiments indicated which design parameters are influenced by which anthropometric measurements and to what degree. The results are expected to be of use for crane cabin designers and should assist them in designing a cabin that may lead to less strenuous sitting postures and fatigue for operators, thus improving safety and accident prevention.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JThSc..27...89N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JThSc..27...89N"><span><span class="hlt">Optimization</span> of performance and emission characteristics of PPCCI engine fuelled with ethanol and diesel blends using grey-<span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Natarajan, S.; Pitchandi, K.; Mahalakshmi, N. V.</p> <p>2018-02-01</p> <p>The performance and emission characteristics of a PPCCI engine fuelled with ethanol and diesel blends were investigated on a single-cylinder air-cooled CI engine. In order to achieve the <span class="hlt">optimal</span> process response with a limited number of experimental cycles, multi-objective grey relational analysis was applied to solve the multiple-response <span class="hlt">optimization</span> problem. Using the grey relational grade and the signal-to-noise ratio as performance indices, a combination of input parameters was identified so as to achieve optimum response characteristics. 
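The grey relational grade used in analyses like this collapses several responses into a single index per trial. A minimal sketch under the simplifying assumptions that every response is larger-the-better and is normalized to [0, 1] (so the deviation extremes in the usual coefficient formula are 0 and 1); the trial data are made up, not the engine measurements:

```python
# Grey relational grade for multi-response optimization (illustrative sketch).
def grey_relational_grades(trials, zeta=0.5):
    n_resp = len(trials[0])
    cols = list(zip(*trials))
    # Normalize each response column to [0, 1], larger-is-better.
    # Assumes each response actually varies across trials (max != min).
    norm = [[(x - min(c)) / (max(c) - min(c)) for x in c] for c in cols]
    deltas = [[1.0 - x for x in col] for col in norm]  # deviation from ideal
    grades = []
    for i in range(len(trials)):
        # Grey relational coefficient with delta_min = 0 and delta_max = 1.
        coeffs = [zeta / (deltas[j][i] + zeta) for j in range(n_resp)]
        grades.append(sum(coeffs) / n_resp)
    return grades

# Three hypothetical trials with two responses (e.g., efficiency, smoothness).
g = grey_relational_grades([(30.1, 0.82), (33.9, 0.95), (28.7, 0.88)])
print(g.index(max(g)))  # index of the best trial: 1
```

A trial that is best on every response gets deviations of zero and hence a grade of exactly 1.0, which is why ranking by grade picks the second trial here.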
It was observed that a 20% premixed ratio of the blend was most suitable for use in a PPCCI engine without significantly affecting the engine performance and emissions characteristics.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/21513168-optimization-micro-metal-injection-molding-using-grey-relational-grade','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21513168-optimization-micro-metal-injection-molding-using-grey-relational-grade"><span><span class="hlt">Optimization</span> of Micro Metal Injection Molding By Using Grey Relational Grade</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ibrahim, M. H. I.; Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia; Muhamad, N.</p> <p>2011-01-17</p> <p>Micro metal injection molding (μMIM), a variant of the MIM process, is a promising <span class="hlt">method</span> for producing near net-shape metallic micro components of complex geometry. In this paper, μMIM is applied to produce 316L stainless steel micro components. Due to the highly stringent requirements on μMIM properties, the study emphasizes <span class="hlt">optimization</span> of the process parameters, where the <span class="hlt">Taguchi</span> <span class="hlt">method</span> associated with Grey Relational Analysis (GRA) is implemented as a novel approach to the investigation of multiple performance characteristics. The basic idea of GRA is to find a grey relational grade (GRG) which can be used for the <span class="hlt">optimization</span> conversion from the multi-objective case (density and strength) to a single-objective case. 
Considering the 'larger-the-better' form, results show that the injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C) and injection temperature (B). Analysis of variance (ANOVA) is also employed to confirm the significance of each parameter involved in this study.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..149a2134V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..149a2134V"><span>Modelling and multi objective <span class="hlt">optimization</span> of WEDM of commercially Monel super alloy using evolutionary algorithms</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu</p> <p>2016-09-01</p> <p>In this research work, development of a multi response <span class="hlt">optimization</span> technique has been undertaken, using traditional desirability analysis and non-traditional particle swarm <span class="hlt">optimization</span> techniques (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 has been selected as the work material for experimentation. The effects of key process parameters such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP) and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses such as MRR and SR were modelled empirically through regression analysis. The developed models can be used by the machinists to predict the MRR and SR over a wide range of input parameters. 
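Empirical response models of this kind regress each response on the process parameters by least squares. A sketch with synthetic, exactly linear trial data standing in for the paper's measurements; TON, TOFF, IP and WF are the assumed regressors and the coefficients are invented:

```python
import numpy as np

# Least-squares fit of an empirical response model
#   MRR = b0 + b1*TON + b2*TOFF + b3*IP + b4*WF
# The six machining trials below are synthetic stand-ins, not the paper's data.
trials = np.array([
    # TON, TOFF, IP, WF, MRR (hypothetical)
    [105, 40, 11, 4, 7.75],
    [110, 45, 12, 6, 8.60],
    [115, 50, 11, 8, 8.85],
    [105, 50, 12, 6, 8.45],
    [110, 40, 11, 8, 8.40],
    [115, 45, 12, 4, 8.65],
])
X = np.column_stack([np.ones(len(trials)), trials[:, :4]])  # intercept column
coef, *_ = np.linalg.lstsq(X, trials[:, 4], rcond=None)

def predict_mrr(ton, toff, ip, wf):
    return float(coef @ [1.0, ton, toff, ip, wf])

print(round(predict_mrr(110, 45, 12, 6), 2))  # reproduces the fitted trial: 8.6
```

Because the synthetic data are exactly linear and the design matrix has full column rank, the fit recovers the generating coefficients; on real, noisy trials the same call returns the least-squares estimate instead.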
The <span class="hlt">optimization</span> of multiple responses has been carried out to satisfy the priorities of multiple users, using the <span class="hlt">Taguchi</span>-desirability function <span class="hlt">method</span> and the particle swarm <span class="hlt">optimization</span> technique. The analysis of variance (ANOVA) is also applied to investigate the effect of influential parameters. Finally, the confirmation experiments were conducted for the <span class="hlt">optimal</span> set of machining parameters, and the improvement was confirmed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JOM....69j1737Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JOM....69j1737Z"><span>Preparation and <span class="hlt">Optimization</span> of Vanadium Titanomagnetite Carbon Composite Hot Briquette: A New Type of Blast Furnace Burden</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhao, W.; Wang, H. T.; Liu, Z. G.; Chu, M. S.; Ying, Z. W.; Tang, J.</p> <p>2017-10-01</p> <p>A new type of blast furnace burden, named VTM-CCB (vanadium titanomagnetite carbon composite hot briquette), is proposed and <span class="hlt">optimized</span> in this paper. The preparation process of VTM-CCB comprises two stages: hot briquetting and heat treatment. The hot-briquetting and heat-treatment parameters are systematically <span class="hlt">optimized</span> based on the <span class="hlt">Taguchi</span> <span class="hlt">method</span> and single-factor experiments. The <span class="hlt">optimized</span> preparation parameters of VTM-CCB include a hot-briquetting temperature of 300°C, a coal particle size of <0.075 mm, a vanadium titanomagnetite particle size of <0.075 mm, a coal-added ratio of 28.52%, a heat-treatment temperature of 500°C and a heat-treatment time of 3 h. 
The compressive strength of VTM-CCB, based on the <span class="hlt">optimized</span> parameters, reaches 2450 N, which meets the requirement of blast furnace ironmaking. These integrated parameters provide a theoretical basis for the production and application of VTM-CCB in blast furnace smelting.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24741344','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24741344"><span><span class="hlt">Optimization</span> of integrated impeller mixer via radiotracer experiments.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Othman, N; Kamarudin, S K; Takriff, M S; Rosli, M I; Engku Chik, E M F; Adnan, M A K</p> <p>2014-01-01</p> <p>Radiotracer experiments are carried out in order to determine the mean residence time (MRT) as well as the percentage of dead zone, Vdead (%), in an integrated mixer consisting of a Rushton turbine and a pitched blade turbine (PBT). Conventionally, <span class="hlt">optimization</span> was performed by varying one factor at a time (OFAT) while the others were held constant, which leads to an enormous number of experiments. Thus, in this study, a 4-factor 3-level <span class="hlt">Taguchi</span> L9 orthogonal array was introduced to obtain an accurate <span class="hlt">optimization</span> of mixing efficiency with a minimal number of experiments. This paper describes the <span class="hlt">optimal</span> conditions of four process parameters, namely, impeller speed, impeller clearance, type of impeller, and sampling time, in obtaining MRT and Vdead (%) using radiotracer experiments. 
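The economy of the L9 array over exhaustive testing here is simple arithmetic: a full factorial on four factors at three levels needs 3⁴ runs, while the L9 covers every factor level in nine. A quick check (factor and level counts taken from the abstract):

```python
# Runs needed: full factorial vs. Taguchi L9 for 4 factors at 3 levels.
factors, levels = 4, 3
full_factorial_runs = levels ** factors
l9_runs = 9
print(full_factorial_runs, l9_runs)  # → 81 9
```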
The optimum conditions for the experiments were 100 rpm impeller speed, 50 mm impeller clearance, Type A mixer, and 900 s sampling time.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012cosp...39.1068L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012cosp...39.1068L"><span>Process <span class="hlt">optimization</span> for the preparation of straw feedstuff for rearing yellow mealworms (Tenebrio molitor L.) in BLSS</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Leyuan; Liu, Hong</p> <p>2012-07-01</p> <p>It has been confirmed in our previous work that in bioregenerative life support systems, feeding yellow mealworms (Tenebrio molitor L.) using fermented straw has the potential to provide good animal protein for astronauts, while also treating plant wastes. However, since the nitrogen content in straw is very low, T. molitor larvae cannot obtain sufficient nitrogen, which results in a relatively low growth efficiency. In this study, wheat straw powder was mixed with simulated human urine before fermentation. Condition parameters, e.g. urine:straw ratio, moisture content, inoculation dose, fermentation time, fermentation temperature and pH, were <span class="hlt">optimized</span> using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. Larval growth rate and average individual mass of mature larvae increased significantly in the group of T. 
molitor larvae fed with feedstuff prepared with the <span class="hlt">optimized</span> process.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JMoSt1074...85R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JMoSt1074...85R"><span>Synthesis procedure <span class="hlt">optimization</span> and characterization of europium (III) tungstate nanoparticles</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rahimi-Nasrabadi, Mehdi; Pourmortazavi, Seied Mahdi; Ganjali, Mohammad Reza; Reza Banan, Ali; Ahmadi, Farhad</p> <p>2014-09-01</p> <p><span class="hlt">Taguchi</span> robust design as a statistical <span class="hlt">method</span> was applied for the <span class="hlt">optimization</span> of process parameters in order to achieve a tunable, facile and fast synthesis of europium (III) tungstate nanoparticles. Europium (III) tungstate nanoparticles were synthesized by a chemical precipitation reaction involving direct addition of an aqueous europium ion solution to the tungstate reagent dissolved in an aqueous medium. Effects of some synthesis procedure variables on the particle size of europium (III) tungstate nanoparticles were studied. Analysis of variance showed the importance of controlling tungstate concentration, cation feeding flow rate and temperature during preparation of europium (III) tungstate nanoparticles by the proposed chemical precipitation reaction. Finally, europium (III) tungstate nanoparticles were synthesized under the optimum conditions of the proposed <span class="hlt">method</span>. 
The morphology and chemical composition of the prepared nano-material were characterized by means of X-ray diffraction, scanning electron microscopy, transmission electron microscopy, FT-IR spectroscopy and fluorescence spectroscopy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19635663','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19635663"><span>Microcosm assays and <span class="hlt">Taguchi</span> experimental design for treatment of oil sludge containing high concentration of hydrocarbons.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Castorena-Cortés, G; Roldán-Carrillo, T; Zapata-Peñasco, I; Reyes-Avila, J; Quej-Aké, L; Marín-Cruz, J; Olguín-Lora, P</p> <p>2009-12-01</p> <p>Microcosm assays and <span class="hlt">Taguchi</span> experimental design were used to assess the biodegradation of an oil sludge produced by a gas processing unit. The study showed that the biodegradation of the sludge sample is feasible despite the high level of pollutants and complexity involved in the sludge. The physicochemical and microbiological characterization of the sludge revealed a high concentration of hydrocarbons (334,766 ± 7001 mg kg⁻¹ dry matter, d.m.) containing a variety of compounds with between 6 and 73 carbon atoms in their structure, whereas the concentrations of Fe and sulfide were 60,000 and 26,800 mg kg⁻¹ d.m., respectively. A <span class="hlt">Taguchi</span> L9 experimental design comprising 4 variables (moisture, nitrogen source, surfactant concentration and oxidant agent) at 3 levels was performed, showing that moisture and nitrogen source are the major variables that affect CO₂ production and total petroleum hydrocarbon (TPH) degradation. The best experimental treatment yielded a TPH removal of 56,092 mg kg⁻¹ d.m. 
The treatment was carried out under the following conditions: 70% moisture, no oxidant agent, 0.5% surfactant and NH₄Cl as the nitrogen source.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..263f2043D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..263f2043D"><span>Analysis and <span class="hlt">optimization</span> of machining parameters of laser cutting for polypropylene composite</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Deepa, A.; Padmanabhan, K.; Kuppan, P.</p> <p>2017-11-01</p> <p>The present work explains the machining of a self-reinforced polypropylene composite fabricated using the hot compaction <span class="hlt">method</span>. The objective of the experiment is to find optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with tensile and flexural test results as the responses. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> is used for experimentation. Grey Relational Analysis (GRA) is used for multiple process parameter <span class="hlt">optimization</span>. ANOVA (Analysis of Variance) is used to find the impact of each process parameter. 
Polypropylene has wide application in various fields: it is used as foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) used as a dielectric, in piping systems, and in hernia and pelvic organ repair or to protect against new hernias at the same location.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015ApPhA.121..555A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015ApPhA.121..555A"><span>Process modeling and parameter <span class="hlt">optimization</span> using radial basis function neural network and genetic algorithm for laser welding of dissimilar materials</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ai, Yuewei; Shao, Xinyu; Jiang, Ping; Li, Peigen; Liu, Yang; Yue, Chen</p> <p>2015-11-01</p> <p>The welded joints of dissimilar materials have been widely used in automotive, ship and space industries. The joint quality is often evaluated by weld seam geometry, microstructures and mechanical properties. To obtain the desired weld seam geometry and improve the quality of welded joints, this paper proposes a process modeling and parameter <span class="hlt">optimization</span> <span class="hlt">method</span> to obtain the weld seam with minimum width and desired depth of penetration for laser butt welding of dissimilar materials. During the process, <span class="hlt">Taguchi</span> experiments are conducted on the laser welding of the low carbon steel (Q235) and stainless steel (SUS301L-HT). The experimental results are used to develop the radial basis function neural network model, and the process parameters are <span class="hlt">optimized</span> by genetic algorithm. The proposed <span class="hlt">method</span> is validated by a confirmation experiment. 
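A radial basis function model of the kind used for process modeling here maps process parameters to a response surface by interpolating the training trials. A minimal Gaussian-RBF sketch; the (power, speed) points and seam-width values are synthetic stand-ins, not the welding data:

```python
import numpy as np

def rbf_fit(X, y, sigma=1.0):
    # Gaussian kernel matrix between all training points; interpolation
    # weights solve Phi @ w = y (Phi is invertible for distinct points).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma**2))
    return np.linalg.solve(Phi, y)

def rbf_predict(X_train, w, x, sigma=1.0):
    d2 = ((X_train - x) ** 2).sum(axis=-1)
    return float(np.exp(-d2 / (2.0 * sigma**2)) @ w)

# Hypothetical normalized (power, speed) -> seam-width training data.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.2, 1.8, 1.5, 2.4])
w = rbf_fit(X, y)
print(round(rbf_predict(X, w, np.array([1.0, 0.0])), 2))  # reproduces a training point: 1.8
```

Because the model interpolates exactly at the training points, a genetic algorithm (or any other optimizer) can then search this cheap surrogate instead of running new welding trials.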
Simultaneously, the microstructures and mechanical properties of the weld seam generated from <span class="hlt">optimal</span> process parameters are further studied by optical microscopy and tensile strength testing. Compared with the unoptimized weld seam, the welding defects are eliminated in the <span class="hlt">optimized</span> weld seam and the mechanical properties are improved. The results show that the proposed <span class="hlt">method</span> is effective and reliable for improving the quality of welded joints in practical production.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009LNCS.5740..211H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009LNCS.5740..211H"><span>Evolution of Query <span class="hlt">Optimization</span> <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hameurlain, Abdelkader; Morvan, Franck</p> <p></p> <p>Query <span class="hlt">optimization</span> is the most critical phase in query processing. In this paper, we concisely describe the evolution of query <span class="hlt">optimization</span> <span class="hlt">methods</span> from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. 
We point out a set of parameters to characterize and compare query <span class="hlt">optimization</span> <span class="hlt">methods</span>, mainly: (i) size of the search space, (ii) type of <span class="hlt">method</span> (static or dynamic), (iii) modification types of execution plans (re-<span class="hlt">optimization</span> or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_10 --> <div id="page_11" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="201"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19930023176','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19930023176"><span><span class="hlt">Optimization</span> of 
15 parameters influencing the long-term survival of bacteria in aquatic systems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Obenhuber, D. C.</p> <p>1993-01-01</p> <p>NASA is presently engaged in the design and development of a water reclamation system for the future space station. A major concern in processing water is the control of microbial contamination. As a means of developing an <span class="hlt">optimal</span> microbial control strategy, studies were undertaken to determine the type and amount of contamination which could be expected in these systems under a variety of changing environmental conditions. A laboratory-based <span class="hlt">Taguchi</span> <span class="hlt">optimization</span> experiment was conducted to determine the ideal settings for 15 parameters which influence the survival of six bacterial species in aquatic systems. The experiment demonstrated that the bacterial survival period could be decreased significantly by <span class="hlt">optimizing</span> environmental conditions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27448371','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27448371"><span><span class="hlt">Optimized</span> Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe</p> <p>2017-10-01</p> <p>Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. 
In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, a deep neural network architecture aimed at improving forecasting accuracy. The proposed model is designed using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> to develop an <span class="hlt">optimized</span> structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an <span class="hlt">optimized</span> structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an <span class="hlt">optimized</span> structure has superior performance in traffic flow forecasting.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25373790','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25373790"><span>Experimental design <span class="hlt">methods</span> for bioengineering applications.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri</p> <p>2016-01-01</p> <p>Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. 
Briefly analyzing the important factors and selecting an experimental design for <span class="hlt">optimization</span> are very effective tools for the design of any bioprocess under question. This review summarizes experimental design <span class="hlt">methods</span> that can be used to investigate various factors relating to bioengineering processes. The experimental <span class="hlt">methods</span> generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, <span class="hlt">Taguchi</span> design, Box-Behnken design and central composite design. These design <span class="hlt">methods</span> are briefly introduced, and then the application of these design <span class="hlt">methods</span> to study different bioengineering processes is analyzed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JMEP...25.1416A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JMEP...25.1416A"><span><span class="hlt">Optimization</span> of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.</p> <p>2016-04-01</p> <p>Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. The simultaneous studying of the impact of different factors is a time- and cost-consuming process. The use of experimental design <span class="hlt">methods</span> can be useful in minimizing the number of experiments and finding local <span class="hlt">optimized</span> conditions for factors under the investigation. 
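To make concrete how an orthogonal array cuts down the number of runs relative to a full factorial, here is a minimal sketch (not drawn from any of the works above) that builds the standard two-level L8 Taguchi array via a standard XOR (Hadamard-type) construction and checks its pairwise balance; a full two-level factorial in seven factors would need 2^7 = 128 runs, while the L8 needs only 8.

```python
from itertools import combinations, product

def l8_array():
    """Build the L8 orthogonal array: 8 runs x 7 two-level factors.

    Columns are the 7 nonzero XOR (GF(2)) combinations of the three
    bits indexing each run, a standard Hadamard-type construction.
    """
    runs = []
    for a, b, c in product((0, 1), repeat=3):
        runs.append((a, b, a ^ b, c, a ^ c, b ^ c, a ^ b ^ c))
    return runs

def is_orthogonal(array):
    """Every pair of columns must show each level pair equally often."""
    cols = list(zip(*array))
    for x, y in combinations(cols, 2):
        counts = {}
        for pair in zip(x, y):
            counts[pair] = counts.get(pair, 0) + 1
        if set(counts.values()) != {len(array) // 4}:
            return False
    return True

oa = l8_array()
print(len(oa), "runs instead of", 2 ** 7)  # 8 runs instead of 128
print(is_orthogonal(oa))                   # True
```

The balance check is what makes main-effect estimates from so few runs trustworthy: each pair of factor levels occurs equally often, so effects are not confounded pairwise.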
In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode (RDE) was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. To decrease the number of experiments, the L16 <span class="hlt">Taguchi</span> orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. Scanning electron microscopy and energy-dispersive x-ray spectroscopy verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the <span class="hlt">Taguchi</span> analysis, the optimum conditions were found to be a mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate at an electrode rotation rate of 1000 rpm.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920010016','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920010016"><span>Experimental validation of structural <span class="hlt">optimization</span> <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Adelman, Howard M.</p> <p>1992-01-01</p> <p>The topic of validating structural <span class="hlt">optimization</span> <span class="hlt">methods</span> by use of experimental results is addressed. The need for validating the <span class="hlt">methods</span>, as a way of fostering greater and faster acceptance of formal <span class="hlt">optimization</span> <span class="hlt">methods</span> by practicing engineering designers, is described.
The range of validation strategies is defined which includes comparison of <span class="hlt">optimization</span> results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the <span class="hlt">optimization</span> results. Examples of the use of experimental results to validate <span class="hlt">optimization</span> techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic <span class="hlt">optimization</span> of an aircraft vertical fin; airfoil shape <span class="hlt">optimization</span> for drag minimization; <span class="hlt">optimization</span> of the shape of a hole in a plate for stress minimization; <span class="hlt">optimization</span> to minimize beam dynamic response; and structural <span class="hlt">optimization</span> of a low vibration helicopter rotor.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920016203','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920016203"><span>Structural <span class="hlt">optimization</span> of large structural systems by <span class="hlt">optimality</span> criteria <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Berke, Laszlo</p> <p>1992-01-01</p> <p>The fundamental concepts of the <span class="hlt">optimality</span> criteria <span class="hlt">method</span> of structural <span class="hlt">optimization</span> are presented. 
The effect of the separability properties of the objective and constraint functions on the <span class="hlt">optimality</span> criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940019946','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940019946"><span>A minimum cost tolerance allocation <span class="hlt">method</span> for rocket engines and robust rocket engine design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gerth, Richard J.</p> <p>1993-01-01</p> <p>Rocket engine design follows three phases: systems design, parameter design, and tolerance design. Systems design and parameter design are most effectively conducted in a concurrent engineering (CE) environment that utilizes <span class="hlt">methods</span> such as Quality Function Deployment and <span class="hlt">Taguchi</span> <span class="hlt">methods</span>. However, tolerance allocation remains an art driven by experience, handbooks, and rules of thumb. It was desirable to develop an <span class="hlt">optimization</span> approach to tolerancing. The case study engine was the STME gas generator cycle. The design of the major components had been completed and the functional relationship between the component tolerances and system performance had been computed using the Generic Power Balance model. The system performance nominals (thrust, MR, and Isp) and tolerances were already specified, as were an initial set of component tolerances.
However, the question was whether there existed an <span class="hlt">optimal</span> combination of tolerances that would result in the minimum cost without any degradation in system performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009PhDT........76P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009PhDT........76P"><span>Evolutionary <span class="hlt">optimization</span> <span class="hlt">methods</span> for accelerator design</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Poklonskiy, Alexey A.</p> <p></p> <p>Many problems from the fields of accelerator physics and beam theory can be formulated as <span class="hlt">optimization</span> problems and, as such, solved using <span class="hlt">optimization</span> <span class="hlt">methods</span>. Despite the growing efficiency of <span class="hlt">optimization</span> <span class="hlt">methods</span>, the adoption of modern <span class="hlt">optimization</span> techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of <span class="hlt">optimization</span> <span class="hlt">methods</span>. They possess many attractive features, such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used <span class="hlt">methods</span> of unconstrained <span class="hlt">optimization</span> and describe in detail GATool, the evolutionary algorithm and software package used in this work. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them.
We justify the choice of GATool as a heuristic <span class="hlt">method</span> to generate cutoff values for the COSY-GO rigorous global <span class="hlt">optimization</span> package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained <span class="hlt">optimization</span> with EAs and <span class="hlt">methods</span> commonly used to overcome them. We describe REPA, a new constrained <span class="hlt">optimization</span> <span class="hlt">method</span> based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
This article proposes a new technique for hybridizing the particle swarm <span class="hlt">optimization</span> (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained <span class="hlt">optimization</span> problems. Unlike traditional hybrid <span class="hlt">methods</span>, the proposed <span class="hlt">method</span> hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to the initial random solution of the PSO algorithm and then iteratively at every step to improve the overall performance of the <span class="hlt">method</span>. The performance of the proposed <span class="hlt">method</span> was tested over 20 <span class="hlt">optimization</span> test functions with varying dimensions. Comprehensive comparisons with other <span class="hlt">methods</span> in the literature indicate that the proposed solution <span class="hlt">method</span> is promising and competitive.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16671630','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16671630"><span>Response to <span class="hlt">Taguchi</span> and Noma on "relationship between directionality and orientation in drawings by young children and adults."</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karev, George B</p> <p>2006-02-01</p> <p>When assessing the relationship between direction and orientation in drawings by young children and adults, <span class="hlt">Taguchi</span> and Noma used a fish-drawing task.
However, a fish is not a convenient object for such a task; to assess directionality quantitatively, it is preferable to use a set of several objects rather than a single one. These authors' conclusions do not acknowledge alternative explanations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1943b0074H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1943b0074H"><span>Experimental wear behavioral studies of as-cast and 5 hr homogenized Al25Mg2Si2Cu4Ni alloy at constant load based on <span class="hlt">taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Harlapur, M. D.; Mallapur, D. G.; Udupa, K. Rajendra</p> <p>2018-04-01</p> <p>In the present study, the volumetric wear behaviour of an aluminium (Al-25Mg2Si2Cu4Ni) alloy, in the as-cast condition and after 5 hr homogenization with T6 heat treatment, is examined experimentally at constant load. A pin-on-disc apparatus was used to carry out the sliding wear test. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> based on an L-16 orthogonal array was employed to evaluate the wear data. The signal-to-noise ratio with the smaller-the-better objective and the mean of means were used. A general regression model was obtained by correlation. Lastly, a confirmation test was completed to compare the experimental results with those predicted by the regression model. The mathematical model reveals that load has the maximum contribution to the wear rate, compared to speed. A scanning electron microscope was used to analyze the worn-out wear surfaces.
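The smaller-the-better signal-to-noise ratio used in such Taguchi wear analyses is conventionally SN = -10*log10((1/n)*sum(y_i^2)); a minimal sketch with hypothetical replicate wear measurements (the numbers are illustrative only, not data from the study):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean of y^2).

    A higher S/N means a lower (better) response, e.g. less wear.
    """
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

# Hypothetical replicate wear measurements for two factor settings
setting_a = [0.12, 0.15, 0.11]
setting_b = [0.25, 0.22, 0.27]

# The setting with the higher S/N ratio is preferred
print(sn_smaller_is_better(setting_a) > sn_smaller_is_better(setting_b))  # True
```

In a full Taguchi analysis, the S/N ratio is computed for every row of the orthogonal array and averaged per factor level to rank factor contributions.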
Wear results show that the 5 hr homogenized, T6-treated Al-25Mg2Si2Cu4Ni samples had better volumetric wear resistance than the as-cast samples.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..314a2025D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..314a2025D"><span><span class="hlt">Optimization</span> of friction and wear behaviour of Al7075-Al2O3-B4C metal matrix composites using <span class="hlt">Taguchi</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dhanalakshmi, S.; Mohanasundararaju, N.; Venkatakrishnan, P. G.; Karthik, V.</p> <p>2018-02-01</p> <p>The present study investigates the dry sliding wear behaviour of the Al 7075 alloy reinforced with Al2O3 and B4C. The hybrid composites are produced through the liquid metallurgy route (stir casting <span class="hlt">method</span>). The amount of Al2O3 particles is varied as 3, 6, 9, 12 and 15 wt%, while the amount of B4C is kept constant at 3 wt%. Experiments were conducted based on the plan of experiments generated through Taguchi's technique, with an L27 orthogonal array selected for analysis of the data. The investigation aims to find the effect of applied load, sliding speed and sliding distance on the wear rate and coefficient of friction (COF) of the hybrid Al7075-Al2O3-B4C composite and to determine the <span class="hlt">optimal</span> parameters for obtaining minimum wear rate.
After wear testing, the samples were examined and analyzed using scanning electron microscopy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EnOp...45.1167K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EnOp...45.1167K"><span>Performance index and meta-<span class="hlt">optimization</span> of a direct search <span class="hlt">optimization</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Krus, P.; Ölvander, J.</p> <p>2013-10-01</p> <p>Design <span class="hlt">optimization</span> is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an <span class="hlt">optimization</span> algorithm is of great importance when comparing <span class="hlt">methods</span>. The main contribution of this article is the introduction of a singular performance criterion, the entropy rate index based on Shannon's information theory, taking both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different <span class="hlt">optimization</span> problems. Such a performance criterion can also be used for <span class="hlt">optimization</span> of the <span class="hlt">optimization</span> algorithms themselves. In this article the Complex-RF <span class="hlt">optimization</span> <span class="hlt">method</span> is described and its performance evaluated and <span class="hlt">optimized</span> using the established performance criterion.
Finally, in order to be able to predict the resources needed for <span class="hlt">optimization</span> an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MsT..........4O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MsT..........4O"><span>An Integrated <span class="hlt">Method</span> for Airfoil <span class="hlt">Optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Okrent, Joshua B.</p> <p></p> <p>Design exploration and <span class="hlt">optimization</span> is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this <span class="hlt">method</span> can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation <span class="hlt">method</span> is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid <span class="hlt">method</span> is used. This thesis proposes an integrated <span class="hlt">method</span> for analyzing, evaluating, and <span class="hlt">optimizing</span> an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the <span class="hlt">optimal</span> candidate. The <span class="hlt">method</span> proposed is different from prior <span class="hlt">optimization</span> efforts in that it greatly broadens the design space, while allowing the <span class="hlt">optimization</span> to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single <span class="hlt">optimization</span> parameter. 
The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior <span class="hlt">optimization</span> attempts: by focusing on only one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and <span class="hlt">optimization</span> and that a global, not local, maximum is found. Additionally, the <span class="hlt">method</span> used is amenable to customization to suit any specific needs, as well as to including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MPLB...3240027L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MPLB...3240027L"><span>Design and operation of a bio-inspired micropump based on blood-sucking mechanism of mosquitoes</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Leu, Tzong-Shyng; Kao, Ruei-Hung</p> <p>2018-05-01</p> <p>The study develops a novel bionic micropump mimicking the blood-sucking mechanism of mosquitoes, which achieves a similar efficiency of 36%. The micropump is produced using micro-electro-mechanical system (MEMS) technology, with PDMS (polydimethylsiloxane) used to fabricate the microchannel and an actuator membrane made of Fe-PDMS.
It employs an Nd-FeB permanent magnet and a PZT actuator to drive the Fe-PDMS membrane and generate a flow rate. A lumped-model theory and the <span class="hlt">Taguchi</span> <span class="hlt">method</span> are used for numerical simulation of the pulsating flow in the micropump. The size of the mosquito-like mouth section is also varied to identify the best waveform for the transient flow processes. Based on the computational results for the channel size, combined with the <span class="hlt">Taguchi</span> <span class="hlt">method</span> and asymmetric actuation, an <span class="hlt">optimized</span> actuation waveform is identified. Experiments show a maximum pumping flow rate of 23.5 μL/min and an efficiency of 86%, and the power density of the micropump is about 8 times that produced by the mosquito's suction.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013Nanot..24a5104M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013Nanot..24a5104M"><span>A logical approach to <span class="hlt">optimize</span> the nanostructured lipid carrier system of irinotecan: efficient hybrid design methodology</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama</p> <p>2013-01-01</p> <p>Development of an effective formulation involves careful <span class="hlt">optimization</span> of a number of excipient and process variables.
Sometimes the number of variables is so large that even the most efficient <span class="hlt">optimization</span> designs require a very large number of trials which put stress on costs as well as time. A creative combination of a number of design <span class="hlt">methods</span> leads to a smaller number of trials. This study was aimed at the development of nanostructured lipid carriers (NLCs) by using a combination of different <span class="hlt">optimization</span> <span class="hlt">methods</span>. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics like size and entrapment efficiency. Four out of 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Out of the remaining seven variables, four (concentration of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles while the other three (phase ratio, drug to lipid ratio, and sonication time) had a higher influence on the entrapment efficiency. The first four variables were <span class="hlt">optimized</span> for their effect on size using the <span class="hlt">Taguchi</span> L9 orthogonal array. The <span class="hlt">optimized</span> values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug:lipid ratio were varied using the Box-Behnken design response surface <span class="hlt">method</span> to <span class="hlt">optimize</span> the entrapment efficiency. 
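The trial budget of such a staged design can be tallied as below; the 12-run Plackett-Burman screen and the 3-factor Box-Behnken design with five center replicates are assumptions for illustration (only the 9-run Taguchi L9 stage is standard), chosen as one plausible accounting that sums to a 38-trial budget.

```python
# Hypothetical accounting of the staged design (the Plackett-Burman and
# Box-Behnken run counts are assumed; only the L9 size is standard).
plackett_burman_runs = 12      # screens 11 two-level factors in 12 runs
taguchi_l9_runs = 9            # 4 factors at 3 levels each
box_behnken_runs = 12 + 5      # 3 factors: 12 edge runs + 5 center replicates

total = plackett_burman_runs + taguchi_l9_runs + box_behnken_runs
print(total)  # 38

# A naive full factorial over 11 two-level factors alone would need:
print(2 ** 11)  # 2048
```

The point of the staged approach is exactly this arithmetic: screening discards inert factors cheaply before the more expensive response-surface stage.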
Finally, by performing only 38 trials, we have <span class="hlt">optimized</span> 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20030053190','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20030053190"><span>Profile <span class="hlt">Optimization</span> <span class="hlt">Method</span> for Robust Airfoil Shape <span class="hlt">Optimization</span> in Viscous Flow</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Li, Wu</p> <p>2003-01-01</p> <p>Simulation results obtained by using FUN2D for robust airfoil shape <span class="hlt">optimization</span> in transonic viscous flow are included to show the potential of the profile <span class="hlt">optimization</span> <span class="hlt">method</span> for generating fairly smooth <span class="hlt">optimal</span> airfoils with no off-design performance degradation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28461707','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28461707"><span>On the Convergence Analysis of the <span class="hlt">Optimized</span> Gradient <span class="hlt">Method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kim, Donghwan; Fessler, Jeffrey A</p> <p>2017-01-01</p> <p>This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. 
We recently proposed the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient <span class="hlt">method</span>, yet has a similarly efficient practical implementation. Drori showed recently that the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> has <span class="hlt">optimal</span> complexity for the cost function decrease over the general class of first-order <span class="hlt">methods</span>. This <span class="hlt">optimality</span> makes it important to study fully the convergence properties of the <span class="hlt">optimized</span> gradient <span class="hlt">method</span>. The previous worst-case convergence bound for the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the <span class="hlt">optimized</span> gradient <span class="hlt">method</span>. We then discuss additional convergence properties of the <span class="hlt">optimized</span> gradient <span class="hlt">method</span>, including the interesting fact that the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function.
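To make the primary/secondary sequence terminology concrete, here is a minimal sketch of the optimized gradient method on a toy quadratic; the updates follow Kim and Fessler's OGM as commonly stated, but the special final-iteration momentum parameter is omitted, so this is an illustrative simplification rather than the exact method analyzed in the paper.

```python
def ogm(grad, L, x0, iters):
    """Optimized gradient method (OGM) sketch for an L-smooth convex f.

    y is the secondary sequence (plain gradient steps); x is the primary
    sequence combining two momentum terms. The modified theta update for
    the final iteration is omitted for simplicity.
    """
    x = y = list(x0)
    theta = 1.0
    for _ in range(iters):
        y_new = [xi - gi / L for xi, gi in zip(x, grad(x))]
        theta_new = (1.0 + (1.0 + 4.0 * theta * theta) ** 0.5) / 2.0
        x = [yn + (theta - 1.0) / theta_new * (yn - yo)
                + theta / theta_new * (yn - xi)
             for yn, yo, xi in zip(y_new, y, x)]
        y, theta = y_new, theta_new
    return y

# Toy problem: f(x) = 0.5*(4*x1^2 + x2^2), gradient (4*x1, x2), L = 4
grad = lambda x: [4.0 * x[0], 1.0 * x[1]]
sol = ogm(grad, L=4.0, x0=[5.0, -3.0], iters=300)
err = max(abs(v) for v in sol)
print("distance to the minimizer at 0:", err)
```

The second momentum term, theta/theta_new * (y_new - x), is what distinguishes OGM from Nesterov's fast gradient method and yields the factor-of-two better worst-case constant.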
These results help complete the theory of an <span class="hlt">optimal</span> first-order <span class="hlt">method</span> for smooth convex minimization.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5409132','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5409132"><span>On the Convergence Analysis of the <span class="hlt">Optimized</span> Gradient <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kim, Donghwan; Fessler, Jeffrey A.</p> <p>2016-01-01</p> <p>This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient <span class="hlt">method</span>, yet has a similarly efficient practical implementation. Drori showed recently that the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> has <span class="hlt">optimal</span> complexity for the cost function decrease over the general class of first-order <span class="hlt">methods</span>. This <span class="hlt">optimality</span> makes it important to study fully the convergence properties of the <span class="hlt">optimized</span> gradient <span class="hlt">method</span>. The previous worst-case convergence bound for the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> was derived for only the last iterate of a secondary sequence. 
This paper provides an analytic convergence bound for the primary sequence generated by the <span class="hlt">optimized</span> gradient <span class="hlt">method</span>. We then discuss additional convergence properties of the <span class="hlt">optimized</span> gradient <span class="hlt">method</span>, including the interesting fact that the <span class="hlt">optimized</span> gradient <span class="hlt">method</span> has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an <span class="hlt">optimal</span> first-order <span class="hlt">method</span> for smooth convex minimization. PMID:28461707</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3026998','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3026998"><span>Enhancement of 2,3-Butanediol Production by Klebsiella oxytoca PTCC 1402</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Anvari, Maesomeh; Safari Motlagh, Mohammad Reza</p> <p>2011-01-01</p> <p><span class="hlt">Optimal</span> operating parameters of 2,3-butanediol production using Klebsiella oxytoca under submerged culture conditions are determined using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. The effect of different factors, including medium composition, pH, temperature, mixing intensity, and inoculum size, on 2,3-butanediol production was analyzed using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> at three levels. Based on these analyses, the optimum concentrations of glucose, acetic acid, and succinic acid were found to be 6, 0.5, and 1.0 (% w/v), respectively.
Furthermore, optimum values for temperature, inoculum size, pH, and the shaking speed were determined as 37°C, 8 (g/L), 6.1, and 150 rpm, respectively. The <span class="hlt">optimal</span> combinations of factors obtained from the proposed DOE methodology were further validated by conducting fermentation experiments, and the obtained results revealed an enhanced 2,3-Butanediol yield of 44%. PMID:21318172</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PrAeS..93....1L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PrAeS..93....1L"><span>Review of design <span class="hlt">optimization</span> <span class="hlt">methods</span> for turbomachinery aerodynamics</span></a></p> <p><a target="_blank"
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Zhihui; Zheng, Xinqian</p> <p>2017-08-01</p> <p>In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and 'greener' but also need to be developed on much shorter time scales and at lower costs. A number of advanced <span class="hlt">optimization</span> strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design <span class="hlt">optimization</span> to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for <span class="hlt">optimizing</span> turbomachinery designs: (1) <span class="hlt">optimization</span> <span class="hlt">methods</span>, (2) stochastic <span class="hlt">optimization</span> combined with blade parameterization <span class="hlt">methods</span> and the design of experiment <span class="hlt">methods</span>, (3) gradient-based <span class="hlt">optimization</span> <span class="hlt">methods</span> for compressors and turbines and (4) data mining techniques for Pareto Fronts.
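The Pareto-front mining mentioned in topic (4) starts from a plain dominance filter over candidate designs. A sketch for two objectives, both minimized, with hypothetical design points invented for illustration:

```python
def pareto_front(points):
    """Return the non-dominated subset, minimizing every objective."""
    front = []
    for p in points:
        # p is dominated if some other point is no worse in all objectives
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidate designs: (aerodynamic loss, weight)
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.9)]
print(pareto_front(designs))   # dominated designs such as (3.0, 4.0) drop out
```

The quadratic cost of this filter is fine for the design-of-experiment sample sizes typical in turbomachinery screening; larger archives use sorting-based variants.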
We also present our own insights regarding the current research trends and the future <span class="hlt">optimization</span> of turbomachinery designs.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25967495','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25967495"><span>Simultaneous <span class="hlt">optimization</span> <span class="hlt">method</span> for absorption spectroscopy postprocessing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T</p> <p>2015-05-10</p> <p>A simultaneous <span class="hlt">optimization</span> <span class="hlt">method</span> is proposed for absorption spectroscopy postprocessing. This <span class="hlt">method</span> is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous <span class="hlt">optimization</span> <span class="hlt">method</span> had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing <span class="hlt">method</span> previously used by the authors. The simultaneous <span class="hlt">optimization</span> <span class="hlt">method</span> was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19970004929','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19970004929"><span>Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wu, K. 
Chauncey</p> <p>1995-01-01</p> <p>An efficient <span class="hlt">method</span> is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present <span class="hlt">method</span> requires minimal analysis effort and permits rapid estimation of <span class="hlt">optimized</span> truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. <span class="hlt">Taguchi</span> <span class="hlt">methods</span> are used to efficiently identify key points in the set of Pareto-<span class="hlt">optimal</span> truss designs. Key points identified using <span class="hlt">Taguchi</span> <span class="hlt">methods</span> are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-<span class="hlt">optimal</span> designs. The resulting Pareto-<span class="hlt">optimal</span> design curve is used to predict frequency and mass for <span class="hlt">optimized</span> trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. 
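The low-order polynomial fit through Taguchi-identified key points, as used in the truss study above, can be reproduced with a quadratic through three points. The frequency-mass pairs below are invented for illustration, not taken from the report:

```python
import numpy as np

# Hypothetical key points from a Taguchi screening: (mass in kg, frequency in Hz)
mass = np.array([150.0, 220.0, 310.0])
freq = np.array([4.2, 6.8, 8.1])

coeffs = np.polyfit(mass, freq, 2)   # quadratic: exact through three key points
approx = np.poly1d(coeffs)
print(approx(260.0))                 # approximate frequency of a 260 kg design
```

Because a degree-2 polynomial has three coefficients, it interpolates the three Taguchi key points exactly; intermediate Pareto-optimal designs are then read off the curve instead of being re-optimized.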
Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..149a2005D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..149a2005D"><span>Effects of machining parameters on tool life and its <span class="hlt">optimization</span> in turning mild steel with brazed carbide cutting tool</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dasgupta, S.; Mukherjee, S.</p> <p>2016-09-01</p> <p>One of the most significant factors in metal cutting is tool life. In this research work, the effects of machining parameters on tool life under a wet machining environment were studied. Tool life characteristics of a brazed carbide cutting tool machined against mild steel and <span class="hlt">optimization</span> of machining parameters based on <span class="hlt">Taguchi</span> design of experiments were examined. The experiments were conducted using three factors, spindle speed, feed rate and depth of cut, each having three levels. Nine experiments were performed on a high speed semi-automatic precision central lathe. ANOVA was used to determine the level of importance of the machining parameters on tool life. The optimum machining parameter combination was obtained by the analysis of S/N ratio. A mathematical model based on multiple regression analysis was developed to predict the tool life. <span class="hlt">Taguchi</span>'s orthogonal array analysis revealed the <span class="hlt">optimal</span> combination of parameters at lower levels of spindle speed, feed rate and depth of cut, which are 550 rpm, 0.2 mm/rev and 0.5 mm respectively. The Main Effects plot reiterated the same. The variation of tool life with different process parameters has been plotted.
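The S/N-ratio analysis used to pick the optimum combination follows Taguchi's larger-the-better form when the response, like tool life, should be maximized. A minimal sketch with hypothetical replicate values (not the measured data from this study):

```python
import math

def sn_larger_the_better(replicates):
    """Taguchi S/N ratio in dB for a larger-the-better response."""
    n = len(replicates)
    return -10 * math.log10(sum(1 / y**2 for y in replicates) / n)

# Hypothetical tool-life replicates (minutes) for two parameter settings
setting_a = [42.0, 45.0, 41.0]   # high mean, low scatter
setting_b = [30.0, 52.0, 28.0]   # similar mean, large scatter
print(sn_larger_the_better(setting_a))
print(sn_larger_the_better(setting_b))
```

The higher S/N value wins: the ratio rewards both a large mean response and low variability, which is why the setting with consistent replicates scores better here despite a comparable average.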
Feed rate has the most significant effect on tool life followed by spindle speed and depth of cut.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SoftX...6..231P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SoftX...6..231P"><span>Optimel: Software for selecting the <span class="hlt">optimal</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina</p> <p></p> <p>Optimel: software for selecting the <span class="hlt">optimal</span> <span class="hlt">method</span> automates the process of selecting a solution <span class="hlt">method</span> from the <span class="hlt">optimization</span> <span class="hlt">methods</span> domain. Optimel offers practical novelty: it saves time and money in exploratory studies whose objective is to select the most appropriate <span class="hlt">method</span> for solving an <span class="hlt">optimization</span> problem. Optimel also offers theoretical novelty, because a new <span class="hlt">method</span> of knowledge structuring was used to obtain the domain. The Optimel domain covers an extended set of <span class="hlt">methods</span> and their properties, which allows identifying the level of scientific studies, enhancing the user's expertise level, expanding the prospects the user faces and opening up new research objectives.
Optimel can be used both in scientific research institutes and in educational institutions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.930a2034Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.930a2034Y"><span><span class="hlt">Optimizing</span> Robinson Operator with Ant Colony <span class="hlt">Optimization</span> As a Digital Image Edge Detection <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin</p> <p>2017-12-01</p> <p>Edge detection serves to identify the boundaries of an object against an overlapping background. One of the classic <span class="hlt">methods</span> for edge detection is the Robinson operator, which produces thin, faint, grey edge lines. To overcome these deficiencies, an improved edge detection <span class="hlt">method</span> is proposed that uses a graph-based approach with the Ant Colony <span class="hlt">Optimization</span> algorithm. The repairs that can be performed are thickening the edges and reconnecting edges that have been cut off. This research aims to <span class="hlt">optimize</span> the Robinson operator with Ant Colony <span class="hlt">Optimization</span>, compare the outputs, and infer the extent to which Ant Colony <span class="hlt">Optimization</span> can improve the result of edge detection that has not been <span class="hlt">optimized</span> and improve the accuracy of Robinson edge detection. The parameters used in performance measurement of edge detection are the morphology of the resulting edge line, MSE and PSNR. The results showed that the combined Robinson and Ant Colony <span class="hlt">Optimization</span> <span class="hlt">method</span> produces images with a thicker, more assertive edge.
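The classic Robinson operator convolves the image with eight compass masks and keeps the maximum absolute response at each pixel. A minimal numpy sketch, using the Sobel-style compass mask set commonly given in the literature and a toy step-edge image (the test image is invented for illustration):

```python
import numpy as np

north = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])

# Eight compass masks: rotate the outer ring of the north mask in 45-degree steps
masks = [north]
for _ in range(7):
    m = masks[-1]
    masks.append(np.array([[m[0, 1], m[0, 2], m[1, 2]],
                           [m[0, 0], 0,       m[2, 2]],
                           [m[1, 0], m[2, 0], m[2, 1]]]))

def robinson_edges(img):
    """Edge magnitude: max absolute response over the 8 compass masks."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = max(abs((patch * m).sum()) for m in masks)
    return out

img = np.zeros((8, 8))
img[:, 4:] = 10.0                  # vertical step edge
edges = robinson_edges(img)
```

The maximum response lands on the columns straddling the step, while flat regions stay at zero; this is the baseline output that the ACO stage then thickens and reconnects.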
The Ant Colony <span class="hlt">Optimization</span> <span class="hlt">method</span> can serve as a <span class="hlt">method</span> for <span class="hlt">optimizing</span> the Robinson operator, improving the image result of Robinson detection by an average of 16.77% compared with the classic Robinson result.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22751850','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22751850"><span>Neutralization of red mud with pickling waste liquor using <span class="hlt">Taguchi</span>'s design of experimental methodology.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rai, Suchita; Wasewar, Kailas L; Lataye, Dilip H; Mishra, Rajshekhar S; Puttewar, Suresh P; Chaddha, Mukesh J; Mahindiran, P; Mukhopadhyay, Jyoti</p> <p>2012-09-01</p> <p>'Red mud' or 'bauxite residue', a waste generated from alumina refineries, is highly alkaline in nature with a pH of 10.5-12.5. Red mud poses serious environmental problems such as alkali seepage into ground water and alkaline dust generation. One of the options to make red mud less hazardous and environmentally benign is its neutralization with an acid or an acidic waste. Hence, in the present study, neutralization of alkaline red mud was carried out using a highly acidic waste (pickling waste liquor). Pickling waste liquor is a mixture of strong acids used for descaling or cleaning surfaces in the steel making industry. The aim of the study was to look into the feasibility of the neutralization process of the two wastes using <span class="hlt">Taguchi</span>'s design of experimental methodology. This would make both wastes less hazardous and safe for disposal. The effects of slurry solids, volume of pickling liquor, stirring time and temperature on the neutralization process were investigated.
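The percent contributions reported in Taguchi-ANOVA studies like this one come from partitioning the total sum of squares among factors. A sketch over an L9 array with hypothetical pH responses (the data are invented; three runs sit at each level of each factor in an L9):

```python
import numpy as np

L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
y = np.array([6.8, 7.1, 7.0, 7.6, 7.9, 7.7, 8.4, 8.6, 8.5])  # hypothetical pH values

total_ss = ((y - y.mean())**2).sum()
for f in range(4):
    # Factor sum of squares: between-level variation, 3 runs per level
    ss = sum(3 * (y[L9[:, f] == lvl].mean() - y.mean())**2 for lvl in range(3))
    print(f"factor {f}: contribution {100 * ss / total_ss:.1f}%")
```

In this synthetic data the first factor dominates the total variation, mirroring how a single parameter (here, the volume of pickling liquor in the actual study) can account for most of the response.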
The analysis of variance (ANOVA) shows that the volume of the pickling liquor is the most significant parameter, followed by the quantity of red mud, contributing 69.18% and 18.48%, respectively. Under the <span class="hlt">optimized</span> parameters, a pH value of 7 can be achieved by mixing the two wastes. About 25-30% of the total soda from the red mud is neutralized and the alkalinity is reduced by 80-85%. The mineralogy and morphology of the neutralized red mud have also been studied. The data presented will be useful in view of the environmental concern of red mud disposal.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhDT.......344M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhDT.......344M"><span>Estudio numerico y experimental del proceso de soldeo MIG sobre la aleacion 6063--T5 utilizando el metodo de <span class="hlt">Taguchi</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Meseguer Valdenebro, Jose Luis</p> <p></p> <p>Electric arc welding processes represent one of the most used techniques in the manufacturing of mechanical components in modern industry. The electric arc welding processes have been adapted to current needs, becoming a flexible and versatile way to manufacture. Numerical results in the welding process are validated experimentally. The three numerical <span class="hlt">methods</span> most commonly used today are the finite difference <span class="hlt">method</span>, the finite element <span class="hlt">method</span> and the finite volume <span class="hlt">method</span>.
The most widely used numerical <span class="hlt">method</span> for the modeling of welded joints is the finite element <span class="hlt">method</span>, because it is well adapted to the geometric and boundary conditions and because a variety of commercial programs use the finite element <span class="hlt">method</span> as their calculation basis. The content of this thesis shows an experimental study of a welded joint produced by means of the MIG welding process on aluminum alloy 6063-T5. The numerical process is validated experimentally by applying the finite element <span class="hlt">method</span> through the calculation program ANSYS. The experimental results in this work are the cooling curves, the critical cooling time t4/3, the weld bead geometry, the microhardness obtained in the welded joint, the heat affected zone of the base metal, the process dilution, and the critical areas of intersection between the cooling curves and the TTP curve. The numerical results obtained in this thesis are the thermal cycle curves, which represent both the heating to maximum temperature and the subsequent cooling. The critical cooling time t4/3 and the thermal efficiency of the process are calculated, and the bead geometry obtained experimentally is represented. The heat affected zone is obtained by differentiating the zones found at different temperatures, together with the critical areas of intersection between the cooling curves and the TTP curve.
In order to conclude this doctoral thesis, an <span class="hlt">optimization</span> has been conducted by means of the <span class="hlt">Taguchi</span> <span class="hlt">method</span> for welding parameters in order to obtain an</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1007a2031M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1007a2031M"><span>The robust design for improving crude palm oil quality in Indonesian Mill</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maretia Benu, Siti; Sinulingga, Sukaria; Matondang, Nazaruddin; Budiman, Irwan</p> <p>2018-04-01</p> <p>This research was conducted in a palm oil mill in Sumatra Utara Province, Indonesia. Currently, the main product of this mill is Crude Palm Oil (CPO), which has not yet met the expected quality standard. CPO is the raw material for many fat derivative products. The generally stipulated quality criteria are dirt count, free fatty acid, and moisture of CPO. The aim of this study is to obtain the <span class="hlt">optimal</span> settings for the factors affecting the quality of CPO. The <span class="hlt">optimal</span> settings will result in an improvement of product quality. In this research, Experimental Design with the <span class="hlt">Taguchi</span> <span class="hlt">Method</span> is used. The steps of this <span class="hlt">method</span> are to identify the influencing factors, select the orthogonal array, process the data using the ANOVA test and the signal-to-noise ratio, and confirm the research using the Quality Loss Function.
Using the <span class="hlt">Taguchi</span> <span class="hlt">Method</span>, this study suggests setting fruit maturity at 75.4-86.9%, digester temperature at 95°C and press current at 21 Ampere, reducing the quality deviation by up to 42.42%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017OptLT..89..214M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017OptLT..89..214M"><span>Determination of laser cutting process conditions using the preference selection index <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan</p> <p>2017-03-01</p> <p>Determination of adequate parameter settings for simultaneous improvement of multiple quality and productivity characteristics is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) <span class="hlt">method</span> for discrete <span class="hlt">optimization</span> of the CO2 laser cutting of stainless steel. The main motivation for applying the PSI <span class="hlt">method</span> is that it is an almost unexplored multi-criteria decision making (MCDM) <span class="hlt">method</span>, and moreover, it does not require assessment of the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determining laser cutting parameter settings, the application of the PSI <span class="hlt">method</span> is explained in detail. Experiment realization was conducted by using <span class="hlt">Taguchi</span>'s L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as <span class="hlt">optimization</span> criteria.
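The PSI computation follows a fixed recipe: normalize the decision matrix, measure the preference variation of each criterion, derive weights from it without any subjective input, and rank alternatives by the weighted sum. A sketch under the usual PSI formulation, with hypothetical laser-cutting alternatives (all numbers invented for illustration):

```python
import numpy as np

# Rows: alternatives (parameter settings); columns: criteria
# Hypothetical data: [surface roughness (min), HAZ (min), MRR (max)]
X = np.array([[2.1, 0.30, 12.0],
              [1.8, 0.42, 15.0],
              [2.5, 0.25, 10.0],
              [2.0, 0.35, 14.0]])
benefit = np.array([False, False, True])       # larger-is-better flags per criterion

# 1) Normalize: benefit criteria x / max, cost criteria min / x
R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
# 2) Preference variation of each criterion
PV = ((R - R.mean(axis=0))**2).sum(axis=0)
# 3) Deviation and objective criterion weights (no subjective weighting needed)
phi = 1 - PV
w = phi / phi.sum()
# 4) Preference selection index and ranking, best first
I = (R * w).sum(axis=1)
print(np.argsort(-I))
```

Step 3 is the distinguishing feature the abstract highlights: the weights fall out of the data's own variation, so no decision-maker has to assess criteria significance.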
The proposed methodology is found to be very useful in a real manufacturing environment, since it involves simple calculations which are easy to understand and implement. However, while applying the PSI <span class="hlt">method</span> it was observed that it may not be useful in situations where there are a large number of alternatives whose attribute values (performances) are very close to the preferred ones.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JIEIC..98..541S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JIEIC..98..541S"><span>An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and <span class="hlt">Taguchi</span> Orthogonal Array</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh</p> <p>2017-10-01</p> <p>P-91 is modified 9Cr-1Mo steel. Fabricated structures and components of P-91 have many applications in the power and chemical industries owing to excellent properties like high-temperature stress corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by Tungsten Inert Gas (TIG) welding. However, the process has its limitations regarding weld penetration. The success of a welding process lies in fabricating with such a combination of parameters that gives maximum weld penetration and minimum weld width. To carry out an investigation of the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on P91 plates of thickness 6 mm in accordance with a <span class="hlt">Taguchi</span> L9 design.
Welding current, welding speed and gas flow rate were the three control variables in the investigation. After autogenous (TIG) welding, the dimensions of the weld width, weld penetration and weld area were successfully measured by an image analysis technique developed for the study. The maximum error for the dimensions of the weld width, penetration and area measured with the developed image analysis technique was only 2% compared to the measurements of the Leica-Q-Win-V3 software installed in an optical microscope. The measurements with the developed software, unlike the measurements under a microscope, required minimal human intervention. An Analysis of Variance (ANOVA) confirms the significance of the selected parameters. Thereafter, <span class="hlt">Taguchi</span>'s <span class="hlt">method</span> was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..297a2013P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..297a2013P"><span>Application of grey-fuzzy approach in parametric <span class="hlt">optimization</span> of EDM process in machining of MDN 300 steel</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.</p> <p>2018-01-01</p> <p>Maraging steel (MDN 300) finds application in many industries, as it exhibits high hardness and is a very difficult material to machine. Electro discharge machining (EDM) is an extensively popular machining process which can be used in machining such materials. <span class="hlt">Optimization</span> of response parameters is essential for effective machining of these materials.
Past researchers have already used the <span class="hlt">Taguchi</span> method for obtaining the <span class="hlt">optimal</span> responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse on time, pulse off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) with fuzzy logic is applied to this multi-objective <span class="hlt">optimization</span> problem to check the responses from an implementation of the derived parametric setting. It was found that the parametric setting derived by the proposed <span class="hlt">method</span> results in a better response than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted result also shows a significant improvement in comparison to the results of past researchers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..269a2026J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..269a2026J"><span>Process <span class="hlt">Optimization</span> and Microstructure Characterization of Ti6Al4V Manufactured by Selective Laser Melting</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>junfeng, Li; zhengying, Wei</p> <p>2017-11-01</p> <p>Process <span class="hlt">optimization</span> and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters, including laser power, scan speed and hatch distance.
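The grey relational analysis step used in the EDM study above follows a standard recipe: normalize each response to [0, 1], convert deviations into grey relational coefficients, and average them into a grade per experimental run. A sketch with hypothetical EDM responses (data and equal criterion weights are invented for illustration; the cited work additionally layers fuzzy logic on top):

```python
import numpy as np

# Rows: experimental runs; columns: responses [MRR (max), TWR (min), SR (min)]
Y = np.array([[8.2, 0.41, 3.2],
              [9.5, 0.52, 2.8],
              [7.1, 0.33, 3.9],
              [8.9, 0.47, 2.5]])
benefit = np.array([True, False, False])

# Grey relational normalization to [0, 1] per response
span = Y.max(axis=0) - Y.min(axis=0)
Z = np.where(benefit, (Y - Y.min(axis=0)) / span, (Y.max(axis=0) - Y) / span)

# Grey relational coefficients with distinguishing coefficient zeta = 0.5
zeta = 0.5
delta = 1 - Z                                   # deviation from the ideal sequence
gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = gamma.mean(axis=1)                      # equal-weight grey relational grade
print(np.argmax(grade))                         # run with the best overall compromise
```

The single grade per run is what turns a multi-response problem into one Taguchi-style ranking; weighting the coefficients (or replacing the mean with a fuzzy inference step, as in the paper) changes only the aggregation.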
The volume energy density (VED) was defined to account for the combined effect of the main process parameters on the relative density. The results show that the relative density changes with VED, and the <span class="hlt">optimized</span> process interval is 55-60 J/mm3. Furthermore, comparing laser power, scan speed and hatch distance by the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, it was found that the scan speed had the greatest effect on the relative density. Comparing the cross-sectional microstructures of the specimens at different scanning speeds, it was found that the microstructures had similar characteristics: all of them were needle-like martensite distributed in the β matrix. However, the microstructure becomes finer as the scan speed increases, while lower scan speeds lead to coarsening of the microstructure.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/760530','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/760530"><span>Extremal <span class="hlt">Optimization</span>: <span class="hlt">Methods</span> Derived from Co-Evolution</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Boettcher, S.; Percus, A.G.</p> <p>1999-07-13</p> <p>We describe a general-purpose <span class="hlt">method</span> for finding high-quality solutions to hard <span class="hlt">optimization</span> problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The <span class="hlt">method</span>, called Extremal <span class="hlt">Optimization</span>, successively eliminates extremely undesirable components of sub-<span class="hlt">optimal</span> solutions, rather than ''breeding'' better components.
In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, Extremal <span class="hlt">Optimization</span> improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic <span class="hlt">optimization</span> procedures. We demonstrate it here on two classic hard <span class="hlt">optimization</span> problems: graph partitioning and the traveling salesman problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040161122','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040161122"><span>Multidisciplinary <span class="hlt">Optimization</span> <span class="hlt">Methods</span> for Aircraft Preliminary Design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian</p> <p>1994-01-01</p> <p>This paper describes a research program aimed at improved <span class="hlt">methods</span> for multidisciplinary design and <span class="hlt">optimization</span> of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and <span class="hlt">methods</span> of exploiting coarse-grained parallelism for analysis and <span class="hlt">optimization</span>. A new architecture, that involves a tight coupling between <span class="hlt">optimization</span> and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables.
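Extremal Optimization's core loop (rank components by their individual fitness, then replace one chosen with a power-law bias toward the worst) fits in a few lines. The tau-EO sketch below runs on a toy bit-matching problem invented for illustration, not the graph-partitioning or traveling-salesman benchmarks from the record:

```python
import random

def extremal_optimization(target, tau=1.8, steps=500, seed=0):
    """tau-EO sketch on a toy problem: match a hidden bit string.

    Each bit is a 'species' with fitness 1 (matches the target) or 0.
    Every step ranks bits worst-first and replaces one bit drawn from a
    power-law over ranks, so the worst components are replaced most often.
    """
    rng = random.Random(seed)
    n = len(target)
    state = [rng.randint(0, 1) for _ in range(n)]
    best = list(state)

    def fitness(s):
        return sum(1 for a, b in zip(s, target) if a == b)

    for _ in range(steps):
        ranked = sorted(range(n), key=lambda i: state[i] == target[i])  # worst first
        weights = [(k + 1) ** -tau for k in range(n)]   # P(rank k) ~ (k+1)^(-tau)
        i = rng.choices(ranked, weights=weights)[0]
        state[i] ^= 1                # unconditionally replace the chosen component
        if fitness(state) > fitness(best):
            best = list(state)
    return best, fitness(best)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best, score = extremal_optimization(target)
```

The single tunable parameter tau is the one the abstract refers to: tau near 1 explores broadly, large tau approaches a greedy always-fix-the-worst rule.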
Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative <span class="hlt">optimization</span>, a decomposition of the <span class="hlt">optimization</span> process to permit parallel design and to simplify interdisciplinary communication requirements.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930000238&hterms=creating&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dcreating','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930000238&hterms=creating&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dcreating"><span>Creating A Data Base For Design Of An Impeller</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Prueger, George H.; Chen, Wei-Chung</p> <p>1993-01-01</p> <p>Report describes use of <span class="hlt">Taguchi</span> <span class="hlt">method</span> of parametric design to create data base facilitating <span class="hlt">optimization</span> of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. 
Reduces time and cost of parametric <span class="hlt">optimization</span> of design: for the particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JAMDS...3...22D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JAMDS...3...22D"><span>Performance <span class="hlt">Optimization</span> Control of ECH using Fuzzy Inference Application</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dubey, Abhay Kumar</p> <p></p> <p>Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing, provides controlled functional-surface generation and fast material removal capabilities in a single operation. Process multi-performance <span class="hlt">optimization</span> has become vital for utilizing the full potential of manufacturing processes to meet the challenging requirements being placed on the surface quality, size, tolerances and production rate of engineering components in this globally competitive scenario. This paper presents a strategy that integrates the <span class="hlt">Taguchi</span> matrix experimental design, analysis of variance and fuzzy inference systems (FIS) to formulate a robust, practical multi-performance <span class="hlt">optimization</span> methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using genetic algorithm tuning of the FIS (GA-tuned FIS) and another using an adaptive network based fuzzy inference system (ANFIS), have been evaluated for a multi-performance <span class="hlt">optimization</span> case study of ECH.
The actual experimental results confirm their potential for a wide range of machining conditions employed in ECH.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25687584','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25687584"><span><span class="hlt">Optimization</span> of supercritical fluid extraction and HPLC identification of wedelolactone from Wedelia calendulacea by orthogonal array design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Patil, Ajit A; Sachin, Bhusari S; Wakte, Pravin S; Shinde, Devanand B</p> <p>2014-11-01</p> <p>The purpose of this work is to provide a complete study of the influence of operational parameters of the supercritical carbon dioxide assisted extraction (SC CO2E) on the yield of wedelolactone from Wedelia calendulacea Less., and to find an <span class="hlt">optimal</span> combination of factors that maximize the wedelolactone yield. In order to determine the <span class="hlt">optimal</span> combination of the four factors viz. operating pressure, temperature, modifier concentration and extraction time, a <span class="hlt">Taguchi</span> experimental design approach was used: four variables (three levels) in an L9 orthogonal array. Wedelolactone content was determined using validated HPLC methodology. Optimum extraction conditions were found to be as follows: extraction pressure, 25 MPa; temperature, 40 °C; modifier concentration, 10% and extraction time, 90 min. Optimum extraction conditions demonstrated a wedelolactone yield of 8.01 ± 0.34 mg/100 g W. calendulacea Less. Pressure, temperature and time showed a significant (p < 0.05) effect on the wedelolactone yield.
The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction <span class="hlt">method</span>. PMID:25687584</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10420E..1MX','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10420E..1MX"><span>A new <span class="hlt">optimal</span> seam <span class="hlt">method</span> for seamless image stitching</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng</p> <p>2017-07-01</p> <p>A novel <span class="hlt">optimal</span> seam <span class="hlt">method</span> which aims to stitch images with overlapping areas more seamlessly has been proposed. Because the traditional gradient-domain <span class="hlt">optimal</span> seam <span class="hlt">method</span> measures color differences poorly and fusion algorithms take a long time, the input images are converted to HSV space and a new energy function is designed to seek the <span class="hlt">optimal</span> stitching path. To smooth the <span class="hlt">optimal</span> stitching path, a simplified pixel correction and a weighted average <span class="hlt">method</span> are utilized individually.
The proposed <span class="hlt">method</span> eliminates the stitching seam more effectively than the traditional gradient <span class="hlt">optimal</span> seam <span class="hlt">method</span> and is more efficient than the multi-band blending algorithm.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018NIMPA.894....8G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018NIMPA.894....8G"><span><span class="hlt">Optimization</span> study on structural analyses for the J-PARC mercury target vessel</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guan, Wenhai; Wakai, Eiichi; Naoe, Takashi; Kogawa, Hiroyuki; Wakui, Takashi; Haga,
Katsuhiro; Takada, Hiroshi; Futakawa, Masatoshi</p> <p>2018-06-01</p> <p>The mercury target vessel of the spallation neutron source at the Japan Proton Accelerator Research Complex (J-PARC) is used for various materials science studies, and work is underway to achieve stable operation at 1 MW. This is very important for enhancing the structural integrity and durability of the target vessel, which is being developed for 1 MW operation. In the present study, to reduce thermal stress and relax stress concentrations more effectively in the existing target vessel in J-PARC, an <span class="hlt">optimization</span> approach called the <span class="hlt">Taguchi</span> <span class="hlt">method</span> (TM) is applied to thermo-mechanical analysis. The ribs and their relative parameters, as well as the thickness of the mercury vessel and shrouds, were selected as important design parameters for this investigation. According to the analytical results for the 18 model types designed using the TM, the <span class="hlt">optimal</span> design was determined. It is characterized by discrete ribs and a thicker vessel wall than the current design. The maximum thermal stresses in the mercury vessel and the outer shroud were reduced by 14% and 15%, respectively. Furthermore, it was indicated that variations in rib width, left/right rib intervals, and shroud thickness could influence the maximum thermal stress performance.
It is therefore concluded that the TM was useful for <span class="hlt">optimizing</span> the structure of the target vessel and for reducing the thermal stress with a small number of calculation cases.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28333639','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28333639"><span>Robust Dynamic Multi-objective Vehicle Routing <span class="hlt">Optimization</span> <span class="hlt">Method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei</p> <p>2017-03-21</p> <p>For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the <span class="hlt">optimization</span> objectives. In addition to these objectives, this paper focuses on fuel consumption, which leads to environmental pollution and energy consumption. Considering the vehicles' load and the driving distance, a corresponding carbon emission model was built and set as an <span class="hlt">optimization</span> objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were subsequently modeled. In existing planning <span class="hlt">methods</span>, when a new service demand comes up, a global vehicle routing <span class="hlt">optimization</span> <span class="hlt">method</span> is triggered to find the <span class="hlt">optimal</span> routes for non-served customers, which is time-consuming. Therefore, a robust dynamic multi-objective vehicle routing <span class="hlt">method</span> with two phases is proposed.
Three highlights of the novel <span class="hlt">method</span> are: (i) after finding <span class="hlt">optimal</span> robust virtual routes for all customers by adopting multi-objective particle swarm <span class="hlt">optimization</span> in the first phase, static vehicle routes for static customers are formed by removing all dynamic customers from the robust virtual routes in the next phase; (ii) dynamically appearing customers are appended to the routes according to their service time and the vehicles' status, and global vehicle routing <span class="hlt">optimization</span> is triggered only when no suitable locations can be found for dynamic customers; (iii) a metric measuring the algorithms' robustness is given. The statistical results indicated that the routes obtained by the proposed <span class="hlt">method</span> have better stability and robustness, but may be sub-optimal. Moreover, time-consuming global vehicle routing <span class="hlt">optimization</span> is avoided as dynamic customers appear.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26462528','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26462528"><span>Honey Bees Inspired <span class="hlt">Optimization</span> <span class="hlt">Method</span>: The Bees Algorithm.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo</p> <p>2013-11-06</p> <p><span class="hlt">Optimization</span> algorithms are search <span class="hlt">methods</span> whose goal is to find an <span class="hlt">optimal</span> solution to a problem, in order to satisfy one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence.
The collective behavior within these swarms is usually very complex; it emerges from the behaviors of the individuals of the swarm. Researchers have developed computational <span class="hlt">optimization</span> <span class="hlt">methods</span> based on biology, such as Genetic Algorithms, Particle Swarm <span class="hlt">Optimization</span>, and Ant Colony. The aim of this paper is to describe an <span class="hlt">optimization</span> algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the <span class="hlt">optimal</span> solution. The algorithm combines an exploitative neighborhood search with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and are implemented in order to <span class="hlt">optimize</span> several benchmark functions, and the results are compared with those obtained with different <span class="hlt">optimization</span> algorithms. The results show that the Bees Algorithm offers some advantage over other <span class="hlt">optimization</span> <span class="hlt">methods</span>, depending on the nature of the problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CMMPh..57.1592T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CMMPh..57.1592T"><span>Numerical <span class="hlt">optimization</span> <span class="hlt">methods</span> for controlled systems with parameters</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tyatyushkin, A. I.</p> <p>2017-10-01</p> <p>First- and second-order numerical <span class="hlt">methods</span> for <span class="hlt">optimizing</span> controlled dynamical systems with parameters are discussed.
In unconstrained-parameter problems, the control parameters are <span class="hlt">optimized</span> by applying the conjugate gradient <span class="hlt">method</span>. A more accurate numerical solution in these problems is produced by Newton's <span class="hlt">method</span> based on a second-order functional increment formula. Next, a general <span class="hlt">optimal</span> control problem with state constraints and parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for <span class="hlt">optimal</span> parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940020372','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940020372"><span>Merits and limitations of <span class="hlt">optimality</span> criteria <span class="hlt">method</span> for structural <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo</p> <p>1993-01-01</p> <p>The merits and limitations of the <span class="hlt">optimality</span> criteria (OC) <span class="hlt">method</span> for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the <span class="hlt">Optimality</span> Criteria Design Code that was developed for this purpose at NASA Lewis Research Center.
This OC code incorporates OC <span class="hlt">methods</span> available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid <span class="hlt">methods</span> that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update <span class="hlt">methods</span>, design strategies for several constraint types, variable linking, displacement and integrated force <span class="hlt">method</span> analyzers, and analytical and numerical sensitivities. The performance of the OC <span class="hlt">method</span>, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC <span class="hlt">method</span> appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC <span class="hlt">methods</span> appears to be similar to some mathematical programming techniques.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009cfdd.confE.196S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009cfdd.confE.196S"><span><span class="hlt">Optimization</span> <span class="hlt">Methods</span> in Sherpa</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.</p> <p>2009-09-01</p> <p>Forward fitting is a standard technique used to model X-ray data. A statistic, usually assumed weighted chi^2 or Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain a set of the best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). 
Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. However, Sherpa, designed as a general fitting and modeling application, requires very robust <span class="hlt">optimization</span> <span class="hlt">methods</span> that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several <span class="hlt">optimization</span> algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization <span class="hlt">methods</span> were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex <span class="hlt">method</span> was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte-Carlo <span class="hlt">method</span> has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the <span class="hlt">methods</span> in Sherpa and discuss their usage cases. We will focus on the application to Chandra data, showing both 1D and 2D examples.
This work is supported by NASA contract NAS8-03060 (CXC).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009JCoPh.228.6479I','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009JCoPh.228.6479I"><span><span class="hlt">Optimal</span> preconditioning of lattice Boltzmann <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Izquierdo, Salvador; Fueyo, Norberto</p> <p>2009-09-01</p> <p>A preconditioning technique to accelerate the simulation of steady-state problems using the single-relaxation-time (SRT) lattice Boltzmann (LB) <span class="hlt">method</span> was first proposed by Guo et al. [Z. Guo, T. Zhao, Y. Shi, Preconditioned lattice-Boltzmann <span class="hlt">method</span> for steady flows, Phys. Rev. E 70 (2004) 066706-1]. The key idea in this preconditioner is to modify the equilibrium distribution function in such a way that, by means of a Chapman-Enskog expansion, a time-derivative preconditioner of the Navier-Stokes (NS) equations is obtained. In the present contribution, the <span class="hlt">optimal</span> values for the free parameter γ of this preconditioner are searched both numerically and theoretically; the latter with the aid of linear-stability analysis and of the condition number of the system of NS equations. The influence of the collision operator, single- versus multiple-relaxation-times (MRT), is also studied. Three steady-state laminar test cases are used for validation, namely: the two-dimensional lid-driven cavity, a two-dimensional microchannel and the three-dimensional backward-facing step. Finally, guidelines are suggested for an a priori definition of <span class="hlt">optimal</span> preconditioning parameters as a function of the Reynolds and Mach numbers.
The new <span class="hlt">optimally</span> preconditioned MRT <span class="hlt">method</span> derived here is shown to improve, simultaneously, the rate of convergence, the stability and the accuracy of the lattice Boltzmann simulations, when compared to the non-preconditioned <span class="hlt">methods</span> and to the <span class="hlt">optimally</span> preconditioned SRT one. Additionally, direct time-derivative preconditioning of the LB equation is also studied.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT........40C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT........40C"><span>Local Approximation and Hierarchical <span class="hlt">Methods</span> for Stochastic <span class="hlt">Optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cheng, Bolong</p> <p></p> <p>In this thesis, we present local and hierarchical approximation <span class="hlt">methods</span> for two classes of stochastic <span class="hlt">optimization</span> problems: <span class="hlt">optimal</span> learning and Markov decision processes. For the <span class="hlt">optimal</span> learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The <span class="hlt">method</span> uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric <span class="hlt">methods</span>. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically <span class="hlt">optimal</span> in theory, and experimental work suggests that the <span class="hlt">method</span> can reliably find the <span class="hlt">optimal</span> solution on a range of test functions.
For the Markov decision process problem class, we are motivated by an application in which we want to co-<span class="hlt">optimize</span> a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact <span class="hlt">optimal</span> policy becomes intractable due to the large state space and the number of time steps. We propose two <span class="hlt">methods</span> to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-<span class="hlt">optimization</span> problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new <span class="hlt">method</span> only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion.
We test these <span class="hlt">methods</span> on historical price data from the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EPJWC.17507043O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EPJWC.17507043O"><span>Path <span class="hlt">optimization</span> <span class="hlt">method</span> for the sign problem</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji</p> <p>2018-03-01</p> <p>We propose a path <span class="hlt">optimization</span> <span class="hlt">method</span> (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral <span class="hlt">method</span> and the complex Langevin <span class="hlt">method</span> are promising and extensively discussed. In these <span class="hlt">methods</span>, real field variables are complexified and the integration manifold is determined by the flow equations or stochastically sampled. When we have singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to <span class="hlt">optimize</span> the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ ℝ) and by <span class="hlt">optimizing</span> f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin <span class="hlt">method</span> is found to fail. In these proceedings, we propose the POM and discuss how we can avoid the sign problem in a toy model.
We also discuss the possibility of utilizing a neural network to <span class="hlt">optimize</span> the path.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011OptLT..43..660S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011OptLT..43..660S"><span><span class="hlt">Optimization</span> of laser butt welding parameters with multiple performance characteristics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sathiya, P.; Abdul Jaleel, M. Y.; Katherasan, D.; Shanmugarajan, B.</p> <p>2011-04-01</p> <p>This paper presents a study carried out on 3.5 kW cooled slab laser welding of 904 L super austenitic stainless steel. The joints were butt-welded with different shielding gases, namely argon, helium and nitrogen, at a constant flow rate. Super austenitic stainless steel (SASS) normally contains high amounts of Mo, Cr, Ni, N and Mn. The mechanical properties are controlled to obtain good welded joints. The quality of the joint is evaluated by studying the features of the weld bead geometry, such as the bead width (BW) and depth of penetration (DOP). In this paper, the tensile strength and bead profiles (BW and DOP) of laser-welded butt joints made of AISI 904 L SASS are investigated. The <span class="hlt">Taguchi</span> approach is used as a statistical design of experiment (DOE) technique for <span class="hlt">optimizing</span> the selected welding parameters. Grey relational analysis and the desirability approach are applied to <span class="hlt">optimize</span> the input parameters by considering multiple output variables simultaneously.
Confirmation experiments have also been conducted for both analyses to validate the <span class="hlt">optimized</span> parameters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008JApSc...8..453L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008JApSc...8..453L"><span>The Study of an Integrated Rating System for Supplier Quality Performance in the Semiconductor Industry</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, Yu-Cheng; Yen, Tieh-Min; Tsai, Chih-Hung</p> <p></p> <p>This study provides an integrated model of Supplier Quality Performance Assessment (SQPA) activity for the semiconductor industry by introducing the ISO 9001 management framework, Importance-Performance Analysis (IPA) and <span class="hlt">Taguchi</span>'s Signal-to-Noise Ratio (S/N) techniques. This integrated model provides an SQPA methodology that creates value for all members under mutual cooperation and trust in the supply chain. The <span class="hlt">method</span> helps organizations build a complete SQPA framework, linking organizational objectives and SQPA activities and <span class="hlt">optimizing</span> rating techniques to promote supplier quality improvement. The techniques used in SQPA activities are easily understood.
A case involving a design house is illustrated to show our model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015HydJ...23.1051Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015HydJ...23.1051Y"><span>Review: <span class="hlt">Optimization</span> <span class="hlt">methods</span> for groundwater modeling and management</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yeh, William W.-G.</p> <p>2015-09-01</p> <p><span class="hlt">Optimization</span> <span class="hlt">methods</span> have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various <span class="hlt">optimization</span> <span class="hlt">methods</span> that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the <span class="hlt">optimal</span> determination of model parameters using water-level observations. In general, the <span class="hlt">optimal</span> experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of <span class="hlt">optimal</span> conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The <span class="hlt">optimization</span> <span class="hlt">methods</span> include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. 
Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated <span class="hlt">optimization</span> problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018InPhT..89..369S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018InPhT..89..369S"><span>Geometrical quality evaluation in laser cutting of Inconel-718 sheet by using <span class="hlt">Taguchi</span> based regression analysis and particle swarm <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shrivastava, Prashant Kumar; Pandey, Arun Kumar</p> <p>2018-03-01</p> <p>Inconel-718 is one of the most demanding advanced engineering materials because of its superior properties. Conventional machining techniques face many problems in cutting intricate profiles on such materials due to their low thermal conductivity, low elasticity and high chemical affinity at elevated temperatures. Laser beam cutting is one of the advanced cutting <span class="hlt">methods</span> that may be used to achieve geometrical accuracy with more precision through suitable control of the input process parameters. In this research work, an experimental investigation of the pulsed Nd:YAG laser cutting of Inconel-718 has been carried out. The experiments have been conducted using the well-planned L27 orthogonal array. The experimentally measured values of different quality characteristics have been used for developing second-order regression models of the bottom kerf deviation (KD), bottom kerf width (KW) and kerf taper (KT).
The developed models of the different quality characteristics have been utilized as quality functions for single-objective <span class="hlt">optimization</span> using the particle swarm <span class="hlt">optimization</span> (PSO) <span class="hlt">method</span>. The optimum results obtained by the proposed hybrid methodology have been compared with the experimental results; the comparison shows individual improvements of 75%, 12.67%, and 33.70% in bottom kerf deviation, bottom kerf width, and kerf taper, respectively. The parametric effects of the most significant input process parameters on the quality characteristics have also been discussed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28292475','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28292475"><span>Constrained <span class="hlt">Optimization</span> <span class="hlt">Methods</span> in Health Services Research-An Introduction: Report 1 of the ISPOR <span class="hlt">Optimization</span> <span class="hlt">Methods</span> Emerging Good Practices Task Force.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S</p> <p>2017-03-01</p> <p>Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution.
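A constrained problem of the kind the task force describes (maximize health benefit subject to time and budget limits for "regular" and "severe" patients) can be sketched by brute-force enumeration over integer treatment mixes. All coefficients below are invented for illustration; the report's own graphical example is only described qualitatively:

```python
# Hypothetical planning model: choose how many "regular" (r) and "severe" (s)
# patients to treat, maximizing total health benefit subject to clinic-time
# and budget constraints. All numbers are illustrative assumptions.
BENEFIT = lambda r, s: 5 * r + 9 * s          # benefit units per patient type

feasible = [
    (r, s)
    for r in range(61)                         # budget alone caps r at 60
    for s in range(21)                         # clinic time alone caps s at 20
    if 0.5 * r + 2 * s <= 40                   # clinic hours available
    and 100 * r + 150 * s <= 6000              # budget available
]
best = max(feasible, key=lambda p: BENEFIT(*p))
```

For a problem this small, exhaustive search finds the optimum directly; real applications of the kind surveyed use linear or integer programming solvers instead of enumeration.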
Constrained <span class="hlt">optimization</span> is a set of <span class="hlt">methods</span> designed to identify efficiently and systematically the best solution (the <span class="hlt">optimal</span> solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an <span class="hlt">optimization</span> model; 2) the types of problems for which <span class="hlt">optimal</span> solutions can be determined in real-world health applications; and 3) the appropriate <span class="hlt">optimization</span> <span class="hlt">methods</span> for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how <span class="hlt">optimization</span> is relevant in health services research for addressing present day challenges. We also explain how these mathematical <span class="hlt">optimization</span> <span class="hlt">methods</span> relate to simulation <span class="hlt">methods</span>, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012OptLT..44.1959K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012OptLT..44.1959K"><span>Multi-objective <span class="hlt">optimization</span> of laser-scribed micro grooves on AZO conductive thin film using Data Envelopment Analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kuo, Chung-Feng Jeffrey; Quang Vu, Huy; Gunawan, Dewantoro; Lan, Wei-Luen</p> <p>2012-09-01</p> <p>The laser scribing process has been considered an effective approach for surface texturization of thin-film solar cells. In this study, a systematic <span class="hlt">method</span> for <span class="hlt">optimizing</span> the multi-objective process parameters of a fiber laser system was proposed to achieve excellent quality characteristics, such as the minimum scribing line width, the flattest trough bottom, and the fewest processing edge surface bumps, for increasing the incident light absorption of thin-film solar cells. First, the <span class="hlt">Taguchi</span> <span class="hlt">method</span> (TM) obtains useful statistical information through an orthogonal array with relatively few experiments. However, the TM is only appropriate for <span class="hlt">optimizing</span> single-objective problems and has to rely on engineering judgment for multi-objective problems, which can introduce some uncertainty. The back-propagation neural network (BPNN) and data envelopment analysis (DEA) were utilized to estimate the incomplete data and derive the <span class="hlt">optimal</span> process parameters of the laser scribing system.
In addition, the analysis of variance (ANOVA) <span class="hlt">method</span> was also applied to identify the significant factors that have the greatest effects on the quality of the scribing process; in other words, by putting more emphasis on these controllable and influential factors, the quality characteristics of the scribed thin film could be effectively enhanced. The experiments were carried out on ZnO:Al (AZO) transparent conductive thin film with a thickness of 500 nm, and the results show that the proposed approach yields better improvements than the TM, which can only improve one quality characteristic while sacrificing the others. Confirmation experiments demonstrated the reliability of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25402593','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25402593"><span>Increased dipicolinic acid production with an enhanced spoVF operon in Bacillus subtilis and medium <span class="hlt">optimization</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Takahashi, Fumikazu; Sumitomo, Nobuyuki; Hagihara, Hiroshi; Ozaki, Katsuya</p> <p>2015-01-01</p> <p>Dipicolinic acid (DPA) is a multi-functional agent for cosmetics, antimicrobial products, detergents, and functional polymers. The aim of this study was to design a new <span class="hlt">method</span> for producing DPA from renewable material. The Bacillus subtilis spoVF operon encodes the enzymes for DPA synthase and part of the lysine biosynthetic pathway. However, DPA is only synthesized in the sporulation phase, so DPA productivity is low. Here, we report that DPA synthase was expressed in vegetative cells, and DPA was produced in the culture medium, by replacing the spoVFA promoter with another highly expressed promoter in B.
subtilis vegetative cells, such as the spoVG promoter. DPA levels were increased in the culture medium of the genetically modified strains. DPA productivity was significantly improved, up to 29.14 g/L in a 72 h culture, by improving the medium composition using a two-step <span class="hlt">optimization</span> technique with the <span class="hlt">Taguchi</span> methodology.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960029263','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960029263"><span>Application of <span class="hlt">Optimization</span> Techniques to Design of Unconventional Rocket Nozzle Configurations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Follett, W.; Ketchum, A.; Darian, A.; Hsu, Y.</p> <p>1996-01-01</p> <p>Several current rocket engine concepts, such as the bell-annular tri-propellant engine and the linear aerospike being proposed for the X-33, require unconventional three-dimensional rocket nozzles which must conform to rectangular or sector-shaped envelopes to meet integration constraints. These types of nozzles exist outside the current experience database; therefore, the application of efficient design <span class="hlt">methods</span> for these propulsion concepts is critical to the success of launch vehicle programs. The objective of this work is to <span class="hlt">optimize</span> several different nozzle configurations, including two- and three-dimensional geometries. The methodology includes coupling computational fluid dynamic (CFD) analysis to genetic algorithms and <span class="hlt">Taguchi</span> <span class="hlt">methods</span>, as well as implementation of a streamline tracing technique.
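Coupling an expensive analysis to a genetic algorithm, as the nozzle work above describes, can be sketched in miniature. Here a simple real-coded GA minimizes an analytic stand-in for the CFD objective; the population size, operators, and objective are illustrative assumptions, not the paper's actual setup:

```python
import random

def evolve(f, bounds, pop_size=30, gens=80, seed=7):
    """Minimize f over 2-D box `bounds` with a simple real-coded GA."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                    # blend crossover
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            if rng.random() < 0.3:              # gaussian mutation
                i = rng.randrange(2)
                child[i] += rng.gauss(0.0, 0.1 * (hi - lo))
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

# Analytic stand-in for a CFD performance objective: loss is lowest at the
# (hypothetical) design point (1.2, -0.7).
loss = lambda p: (p[0] - 1.2) ** 2 + (p[1] + 0.7) ** 2
best = evolve(loss, (-2.0, 2.0))
```

In the real workflow each call to the objective would launch a CFD run, which is why robust, sample-efficient search over the design space matters.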
Results of applications are shown for several geometries, including three-dimensional thruster nozzles with round or super-elliptic throats and rectangular exits, two- and three-dimensional thrusters installed within a bell nozzle, and three-dimensional thrusters with round throats and sector-shaped exits. Due to the novel designs considered for this study, there is little experience which can be used to guide the effort and limit the design space. With a nearly infinite parameter space to explore, simple parametric design studies cannot possibly search the entire design space within the time frame required to impact the design cycle. For this reason, robust and efficient <span class="hlt">optimization</span> <span class="hlt">methods</span> are required to explore and exploit the design space to achieve high-performance engine designs. Five case studies which examine the application of various techniques in the engineering environment are presented in this paper.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19730045611&hterms=Gradient+calculus&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DGradient%2Bcalculus','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19730045611&hterms=Gradient+calculus&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3DGradient%2Bcalculus"><span>An historical survey of computational <span class="hlt">methods</span> in <span class="hlt">optimal</span> control.</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Polak, E.</p> <p>1973-01-01</p> <p>A review of some of the salient theoretical developments in the specific area of <span class="hlt">optimal</span> control algorithms.
The first algorithms for <span class="hlt">optimal</span> control were aimed at unconstrained problems and were derived by using first- and second-variation <span class="hlt">methods</span> of the calculus of variations. These <span class="hlt">methods</span> have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton <span class="hlt">methods</span> in function space. Much more recent additions to the arsenal of unconstrained <span class="hlt">optimal</span> control algorithms are several variations of conjugate-gradient <span class="hlt">methods</span>. At first, constrained <span class="hlt">optimal</span> control problems could only be solved by exterior penalty function <span class="hlt">methods</span>. Later, algorithms specifically designed for constrained problems appeared. Among these are <span class="hlt">methods</span> for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient <span class="hlt">method</span>, the gradient-projection <span class="hlt">method</span>, and a couple of feasible directions <span class="hlt">methods</span> were obtained as extensions or adaptations of related algorithms for finite-dimensional problems.
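The gradient methods named above reduce, in finite dimensions, to the familiar steepest-descent iteration x ← x − α∇f(x). A minimal sketch on a quadratic test problem follows; the matrix, step size, and iteration count are illustrative choices, not drawn from the survey:

```python
def steepest_descent(grad, x0, step=0.25, iters=200):
    """Fixed-step steepest descent: repeatedly move against the gradient."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Quadratic test problem f(x) = 0.5 x^T A x - b^T x with A = [[3, 1], [1, 2]],
# b = [1, 1]; the minimizer solves A x = b, i.e. x = (0.2, 0.4).
grad = lambda x: [3 * x[0] + x[1] - 1, x[0] + 2 * x[1] - 1]
xmin = steepest_descent(grad, [0.0, 0.0])
```

Convergence requires the step to be smaller than 2 divided by the largest eigenvalue of A; the conjugate-gradient variants mentioned in the survey remove the resulting slow zig-zagging on ill-conditioned problems.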
Finally, the so-called epsilon-<span class="hlt">methods</span> combine the Ritz <span class="hlt">method</span> with penalty function techniques.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..149a2123G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..149a2123G"><span><span class="hlt">Optimization</span> of Machining Parameters of Milling Operation by Application of Semi-synthetic oil based Nano cutting Fluids</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Giri Prasad, M. J.; Abhishek Raaj, A. S.; Rishi Kumar, R.; Gladson, Frank; M, Gautham</p> <p>2016-09-01</p> <p>The present study is concerned with resolving the problems pertaining to the conventional cutting fluids. Two samples of nano cutting fluids were prepared by dispersing 0.01 vol% of MWCNTs and a mixture of 0.01 vol% of MWCNTs and 0.01 vol% of nano ZnO in the soluble oil. The thermophysical properties such as the kinematic viscosity, density, flash point and the tribological properties of the prepared nano cutting fluid samples were experimentally investigated and were compared with those of plain soluble oil. 
In addition, a milling process was carried out by varying the process parameters and applying the different samples of cutting fluids, and an attempt was made to determine the <span class="hlt">optimal</span> cutting condition using the <span class="hlt">Taguchi</span> <span class="hlt">optimization</span> technique.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1174373','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1174373"><span>Distributed <span class="hlt">optimization</span> system and <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.</p> <p>2003-06-10</p> <p>A search system and <span class="hlt">method</span> for controlling multiple agents to <span class="hlt">optimize</span> an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or can be software agents for searching cyberspace.
The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and <span class="hlt">optimal</span> control of a system such as a communication system, an economy, a crane, and a multi-processor computer.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014RJPCA..88.1241G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014RJPCA..88.1241G"><span>Preparation of photocatalytic ZnO nanoparticles and application in photochemical degradation of betamethasone sodium phosphate using <span class="hlt">taguchi
approach</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Giahi, M.; Farajpour, G.; Taghavi, H.; Shokri, S.</p> <p>2014-07-01</p> <p>In this study, ZnO nanoparticles were prepared by a sol-gel <span class="hlt">method</span> for the first time. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to identify the factors that may affect the degradation percentage of betamethasone sodium phosphate in wastewater in a UV/K2S2O8/nano-ZnO system. Our experimental design tested five factors, i.e., dosage of K2S2O8, concentration of betamethasone sodium phosphate, amount of ZnO, irradiation time, and initial pH, with four levels of each factor. The optimum parameters were found to be: irradiation time, 180 min; pH 9.0; betamethasone sodium phosphate, 30 mg/L; amount of ZnO, 13 mg; K2S2O8, 1 mM. The percentage contribution of each factor was determined by analysis of variance (ANOVA). The results showed that irradiation time, pH, amount of ZnO, drug concentration, and dosage of K2S2O8 contributed 46.73%, 28.56%, 11.56%, 6.70%, and 6.44%, respectively.
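The ANOVA percentage-contribution calculation used in Taguchi studies like the one above apportions the total sum of squares among factors. A minimal sketch follows; the tiny two-factor, four-run dataset is hypothetical, not the study's data:

```python
def percent_contribution(factors, y):
    """factors: {name: list of level labels, one per run}; y: responses.
    Returns each factor's share of the total sum of squares, in percent."""
    n = len(y)
    mean = sum(y) / n
    ss_total = sum((v - mean) ** 2 for v in y)
    out = {}
    for name, levels in factors.items():
        ss = 0.0
        for lv in set(levels):
            group = [v for v, l in zip(y, levels) if l == lv]
            gmean = sum(group) / len(group)
            ss += len(group) * (gmean - mean) ** 2   # between-level sum of squares
        out[name] = 100.0 * ss / ss_total
    return out

# Hypothetical L4 experiment: two 2-level factors, four runs.
contrib = percent_contribution(
    {"A": [1, 1, 2, 2], "B": [1, 2, 1, 2]}, [20.0, 30.0, 40.0, 50.0]
)
```

Here factor A accounts for 80% of the variation and B for 20%, the same kind of breakdown the abstract reports for irradiation time, pH, and the other factors.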
Finally, the kinetics process was studied, and the photodegradation rate of betamethasone sodium phosphate was found to obey a pseudo-first-order kinetics equation represented by the Langmuir-Hinshelwood model.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011NIMPA.645..332M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011NIMPA.645..332M"><span><span class="hlt">Optimal</span> correction and design parameter search by modern <span class="hlt">methods</span> of rigorous global <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Makino, K.; Berz, M.</p> <p>2011-07-01</p> <p>Frequently, the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibits multitudes of possibilities, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of <span class="hlt">optimization</span> runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global <span class="hlt">optimization</span> problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum.
For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners of adjusting nonlinear parameters to achieve correction of high-order aberrations. These tasks can easily be phrased in terms of such an <span class="hlt">optimization</span> problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting <span class="hlt">optimization</span> problem cannot usually be solved. However, recent significant advances in modern <span class="hlt">methods</span> of rigorous global <span class="hlt">optimization</span> make these <span class="hlt">methods</span> feasible for optics design for the first time. The key ideas of the <span class="hlt">method</span> lie in an interplay of rigorous local underestimators of the objective functions and the use of these underestimators to rigorously and iteratively eliminate regions that lie above already-known upper bounds of the minima, in what is commonly known as a branch-and-bound approach.
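A branch-and-bound global search of the kind just described can be sketched in one dimension with naive interval bounds: boxes whose rigorous lower bound exceeds the best known upper bound cannot contain the minimum and are discarded. The polynomial objective and tolerances below are illustrative assumptions, not the paper's Differential Algebraic machinery:

```python
def fval(x):
    """Objective: a quartic with two local minima on [-3, 3]."""
    return x ** 4 - 3 * x ** 2 + x

def flower(a, b):
    """Rigorous lower bound of fval on [a, b] via naive interval arithmetic."""
    if a <= 0 <= b:
        s_lo, s_hi = 0.0, max(a * a, b * b)          # range of x^2
    else:
        s_lo, s_hi = min(a * a, b * b), max(a * a, b * b)
    q_lo = s_lo * s_lo                               # lower bound of x^4
    return q_lo - 3 * s_hi + a                       # interval lower bound of f

def branch_and_bound(a, b, tol=1e-5):
    best_x, best_ub = a, fval(a)
    boxes = [(a, b)]
    while boxes:
        new = []
        for lo, hi in boxes:
            if flower(lo, hi) > best_ub:             # cannot hold the minimum: prune
                continue
            mid = 0.5 * (lo + hi)
            if fval(mid) < best_ub:                  # midpoint tightens the upper bound
                best_ub, best_x = fval(mid), mid
            if hi - lo > tol:                        # otherwise box is small enough
                new += [(lo, mid), (mid, hi)]
        boxes = new
    return best_x, best_ub

best_x, best_ub = branch_and_bound(-3.0, 3.0)
```

The interplay is exactly the one the abstract names: lower bounds (underestimators) prune, upper bounds from sampled points tighten, and the surviving boxes enclose every global minimizer.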
Recent enhancements of the Differential Algebraic <span class="hlt">methods</span> used in particle</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29447441','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29447441"><span>Design factors of femur fracture fixation plates made of shape memory alloy based on the <span class="hlt">Taguchi</span> <span class="hlt">method</span> by finite element analysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ko, Cheolwoong; Yang, Mikyung; Byun, Taemin; Lee, Sang-Wook</p> <p>2018-05-01</p> <p>This study proposed a way to design femur fracture fixation plates made of shape memory alloy based on computed tomography (CT) images of Korean cadaveric femurs. To this end, 3 major design factors of femur fracture fixation plates (circumference angle, thickness, and inner diameter) were selected based on the contact pressure when a femur fracture fixation plate was applied to a cylinder model using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. Then, the effects of the design factors were analyzed. It was shown that the inner diameter and the thickness were statistically significant at the p = 0.05 level. The factors affecting the contact pressure were inner diameter, thickness, and circumference angle, in that order. Particularly, in the condition of Case 9 (inner diameter 27 mm, thickness 2.4 mm, and circumference angle 270°), the max. average contact pressure was 21.721 MPa, while the min. average contact pressure was 3.118 MPa in Case 10 (inner diameter 29 mm, thickness 2.0 mm, and circumference angle 210°). When the femur fracture fixation plate was applied to the cylinder model, the displacement due to external sliding and pulling forces was analyzed. As a result, the displacement in the sliding condition was at max.
3.75 times greater than that in the pulling condition, which indicated that the cohesion strength between the femur fracture fixation plate and the cylinder model was likely to be greater in the pulling condition. When a human femur model was applied, the max. average contact pressure was 10.76 MPa, which was lower than the yield strength of a human femur (108 MPa). In addition, the analysis of the rib behaviors of the femur fracture fixation plate in relation to the recovery effect of the shape memory alloy showed that the rib behaviors varied depending on the arbitrarily curved shapes of the femur sections. Copyright © 2018 John Wiley & Sons, Ltd.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011TISCI..24..119S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011TISCI..24..119S"><span>Proposal of Evolutionary Simplex <span class="hlt">Method</span> for Global <span class="hlt">Optimization</span> Problem</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shimizu, Yoshiaki</p> <p></p> <p>To support agile and rational decision making, the role of <span class="hlt">optimization</span> engineering has drawn increasing attention under diversified customer demand. With this point of view, in this paper, we have proposed a new evolutionary <span class="hlt">method</span> serving as an <span class="hlt">optimization</span> technique in the paradigm of <span class="hlt">optimization</span> engineering. The developed <span class="hlt">method</span> is expected to solve globally the various complicated problems appearing in real-world applications. It evolves from the conventional Nelder and Mead’s Simplex <span class="hlt">method</span> by virtue of ideas borrowed from recent meta-heuristic <span class="hlt">methods</span> such as PSO.
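The Nelder-Mead simplex method that the proposal builds on can be sketched compactly. This is the classical reflect/expand/contract/shrink loop with the textbook coefficients (1, 2, 0.5), not the paper's evolutionary extension:

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimize f: list[float] -> float by the Nelder-Mead simplex method."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                        # initial simplex: x0 plus axis steps
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        c = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]  # centroid sans worst
        xr = [c[i] + (c[i] - worst[i]) for i in range(n)]            # reflection
        if f(xr) < f(best):
            xe = [c[i] + 2.0 * (c[i] - worst[i]) for i in range(n)]  # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = [c[i] + 0.5 * (worst[i] - c[i]) for i in range(n)]  # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                              # shrink everything toward the best vertex
                simplex = [best] + [
                    [(bi + pi) / 2.0 for bi, pi in zip(best, p)] for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

xm = nelder_mead(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2, [0.0, 0.0])
```

The evolutionary variant proposed above replaces this purely local vertex update with population-style moves borrowed from PSO to escape local minima.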
Presenting an algorithm to handle linear inequality constraints effectively, we have validated the effectiveness of the proposed <span class="hlt">method</span> through comparison with other <span class="hlt">methods</span> on several benchmark problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1389065','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1389065"><span>COMPARISON OF NONLINEAR DYNAMICS <span class="hlt">OPTIMIZATION</span> <span class="hlt">METHODS</span> FOR APS-U</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Sun, Y.; Borland, Michael</p> <p></p> <p>Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance <span class="hlt">optimization</span>. These <span class="hlt">optimization</span> objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different <span class="hlt">optimization</span> <span class="hlt">methods</span> and objectives is compared for the nonlinear beam dynamics <span class="hlt">optimization</span> of the Advanced Photon Source upgrade (APS-U) lattice.
The <span class="hlt">optimized</span> solutions from these different <span class="hlt">methods</span> are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhDT........95R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhDT........95R"><span>Deterministic <span class="hlt">methods</span> for multi-control fuel loading <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rahman, Fariz B. Abdul</p> <p></p> <p>We have developed a multi-control fuel loading <span class="hlt">optimization</span> code for pressurized water reactors based on deterministic <span class="hlt">methods</span>. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The <span class="hlt">optimal</span> control problem is formulated using the <span class="hlt">method</span> of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The <span class="hlt">optimality</span> conditions are derived for a multi-dimensional multi-group <span class="hlt">optimal</span> control problem via calculus of variations. Due to the Hamiltonian having a linear control, our <span class="hlt">optimal</span> control problem is solved using the gradient <span class="hlt">method</span> to minimize the Hamiltonian and a Newton step formulation to obtain the <span class="hlt">optimal</span> control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. 
Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/924533','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/924533"><span><span class="hlt">Optimal</span> boarding <span class="hlt">method</span> for airline passengers</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Steffen, Jason H.; /Fermilab</p> <p>2008-02-01</p> <p>Using a Markov Chain Monte Carlo <span class="hlt">optimization</span> algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The <span class="hlt">optimal</span> boarding strategy may reduce the time required to board an airplane by over a factor of four, and possibly more depending upon the dimensions of the aircraft. I explore some features of the <span class="hlt">optimal</span> boarding <span class="hlt">method</span> and discuss practical modifications to the <span class="hlt">optimal</span> method.
Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19880044963&hterms=engineering+design&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dengineering%2Bdesign','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19880044963&hterms=engineering+design&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dengineering%2Bdesign"><span>An efficient multilevel <span class="hlt">optimization</span> <span class="hlt">method</span> for engineering design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.</p> <p>1988-01-01</p> <p>An efficient multilevel design <span class="hlt">optimization</span> technique is presented. The proposed <span class="hlt">method</span> is based on the concept of providing linearized information between the system-level and subsystem-level <span class="hlt">optimization</span> tasks. The advantages of the <span class="hlt">method</span> are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the <span class="hlt">method</span> is relatively easy to use.
The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.8768E..3XL','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.8768E..3XL"><span>A constraint <span class="hlt">optimization</span> based virtual network mapping <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen</p> <p>2013-03-01</p> <p>The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint <span class="hlt">optimization</span> based mapping <span class="hlt">method</span> for solving the virtual network mapping problem. This <span class="hlt">method</span> divides the problem into two phases, node mapping and link mapping, both of which are NP-hard; a node mapping algorithm and a link mapping algorithm are proposed for solving them, respectively. The node mapping algorithm adopts a greedy strategy, mainly considering two factors: the available resources supplied by the nodes and the distances between the nodes. The link mapping algorithm is based on the result of the node mapping phase and adopts distributed constraint <span class="hlt">optimization</span>, which guarantees the <span class="hlt">optimal</span> mapping with the minimum network cost.
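A greedy node-mapping pass of the kind described above might look like the following sketch. The CPU-capacity and distance attributes are simplified stand-ins for the paper's actual resource and distance factors, and the tie-breaking rule is an assumption:

```python
def greedy_node_map(vnodes, snodes, dist):
    """Greedy virtual-to-substrate node mapping (illustrative sketch).

    vnodes: {virtual node: cpu demand}
    snodes: {substrate node: available cpu}
    dist:   {substrate node: a scalar distance metric (stand-in)}
    Returns {virtual node: substrate node}, or None if some node cannot fit."""
    placement, free = {}, dict(snodes)
    # Place the most demanding virtual nodes first.
    for vn, need in sorted(vnodes.items(), key=lambda kv: -kv[1]):
        cands = [sn for sn, cap in free.items() if cap >= need]
        if not cands:
            return None                       # infeasible under this greedy order
        # Prefer the substrate node with the most free capacity; break ties
        # by smaller distance.
        sn = max(cands, key=lambda s: (free[s], -dist[s]))
        placement[vn] = sn
        free[sn] -= need
    return placement

placement = greedy_node_map(
    {"a": 30, "b": 20},                       # virtual nodes and CPU demands
    {"s1": 50, "s2": 25},                     # substrate nodes and capacities
    {"s1": 1, "s2": 2},                       # distance metric per substrate node
)
```

The link-mapping phase would then route each virtual link over the substrate given this placement, which is where the paper applies distributed constraint optimization.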
Finally, simulation experiments are used to validate the <span class="hlt">method</span>, and results show that the <span class="hlt">method</span> performs very well.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29508578','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29508578"><span>[<span class="hlt">Optimized</span> application of nested PCR <span class="hlt">method</span> for detection of malaria].</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C</p> <p>2017-04-28</p> <p>Objective To <span class="hlt">optimize</span> the application of the nested PCR <span class="hlt">method</span> for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. <span class="hlt">Methods</span> A PCR premix solution, internal primers for further amplification, and newly designed primers targeting two Plasmodium ovale subspecies were employed to <span class="hlt">optimize</span> the reaction system, reaction conditions and specific primers of P. ovale on the basis of routine nested PCR. Then the specificity and the sensitivity of the <span class="hlt">optimized</span> <span class="hlt">method</span> were analyzed. The positive blood samples and examination samples of malaria were detected by the routine nested PCR and the <span class="hlt">optimized</span> <span class="hlt">method</span> simultaneously, and the detection results were compared and analyzed. Results The <span class="hlt">optimized</span> <span class="hlt">method</span> showed good specificity, and its sensitivity could reach the pg to fg level.
When the two <span class="hlt">methods</span> were used simultaneously to detect the same positive malaria blood samples, the results indicated that the PCR products of the two <span class="hlt">methods</span> had no significant difference, but with the <span class="hlt">optimized</span> <span class="hlt">method</span> the non-specific amplification was obviously reduced, the detection rates of P. ovale subspecies improved, and the overall specificity increased. The actual detection results of 111 malaria blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the <span class="hlt">optimized</span> <span class="hlt">method</span> were both 93.48%; there was no statistically significant difference between the two <span class="hlt">methods</span> in sensitivity ( P > 0.05), but there was a statistically significant difference in specificity ( P < 0.05). Conclusion The <span class="hlt">optimized</span> PCR improves specificity without reducing sensitivity relative to the routine nested PCR; it also saves cost and increases the efficiency of malaria detection by requiring fewer experimental steps.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19840011117','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19840011117"><span><span class="hlt">Optimization</span> <span class="hlt">methods</span> applied to hybrid vehicle design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Donoghue, J. F.; Burghart, J. H.</p> <p>1983-01-01</p> <p>The use of <span class="hlt">optimization</span> <span class="hlt">methods</span> as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated.
<span class="hlt">Optimization</span> techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were <span class="hlt">optimized</span>. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the <span class="hlt">optimization</span> program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall <span class="hlt">optimization</span> program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the <span class="hlt">optimization</span> so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an <span class="hlt">optimization</span> study.
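The fourth and fifth conclusions above can be sketched with a toy penalized objective. Every number, name and coefficient here is a hypothetical stand-in, not the study's cost model; the point is only that a smooth quadratic penalty keeps the objective well behaved while forcing battery weight and engine rating to cover the power demand.

```python
# Hypothetical sketch: size a battery and a heat engine to minimize a made-up
# cost while meeting a power demand.  The quadratic penalty keeps the objective
# a smooth function of the design variables (fourth conclusion) and handles the
# sizing constraints (fifth conclusion).

def life_cycle_cost(battery_kg, engine_kw):
    return 0.04 * battery_kg + 0.09 * engine_kw   # invented cost coefficients

def penalized(battery_kg, engine_kw, demand_kw=60.0, batt_kw_per_kg=0.25):
    total_kw = battery_kg * batt_kw_per_kg + engine_kw
    shortfall = max(0.0, demand_kw - total_kw)    # unmet power demand, if any
    return life_cycle_cost(battery_kg, engine_kw) + 10.0 * shortfall ** 2

# A coarse grid search stands in for the optimizer.
best = min(((b, e) for b in range(0, 401, 5) for e in range(0, 101)),
           key=lambda p: penalized(*p))
```

With these invented coefficients the engine supplies power more cheaply than the battery, so the search settles on engine-only sizing; changing the coefficients shifts the split, which is exactly the first conclusion's point.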
Finally, the principal conclusion is that <span class="hlt">optimization</span> <span class="hlt">methods</span> provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CPL...699..255A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CPL...699..255A"><span>An improved reaction path <span class="hlt">optimization</span> <span class="hlt">method</span> using a chain of conformations</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro</p> <p>2018-05-01</p> <p>The efficient fast path <span class="hlt">optimization</span> (FPO) <span class="hlt">method</span> is proposed to <span class="hlt">optimize</span> the reaction paths on energy surfaces by using chains of conformations. No artificial spring force is used in the FPO <span class="hlt">method</span> to ensure the equal spacing of adjacent conformations. The FPO <span class="hlt">method</span> is applied to <span class="hlt">optimize</span> the reaction path on two model potential surfaces. The use of this <span class="hlt">method</span> enabled the <span class="hlt">optimization</span> of the reaction paths with a drastically reduced number of <span class="hlt">optimization</span> cycles for both potentials. 
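A chain-of-conformations relaxation in this spirit can be sketched on a toy two-dimensional potential. This is an assumption-laden illustration (a simple double-well potential, perpendicular-gradient steps, and arc-length re-interpolation instead of spring forces), not the authors' FPO code.

```python
import math

# Illustrative stand-in for the chain-of-conformations idea: images on a toy
# double-well potential move against the gradient component perpendicular to
# the path tangent, and the chain is re-interpolated to equal arc length
# instead of using an artificial spring force.

def grad(p):
    # Model potential V(x, y) = (1 - x**2)**2 + y**2, minima at (+/-1, 0).
    x, y = p
    return (-4.0 * x * (1.0 - x * x), 2.0 * y)

def respace(chain):
    # Re-interpolate the images at equal arc length (no springs needed).
    seg = [math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(chain, chain[1:])]
    total = sum(seg)
    out, acc, j = [chain[0]], 0.0, 0
    for k in range(1, len(chain) - 1):
        target = total * k / (len(chain) - 1)
        while acc + seg[j] < target:
            acc += seg[j]
            j += 1
        t = (target - acc) / seg[j]
        a, b = chain[j], chain[j + 1]
        out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    out.append(chain[-1])
    return out

def relax_chain(chain, steps=300, lr=0.01):
    for _ in range(steps):
        new = [chain[0]]
        for i in range(1, len(chain) - 1):
            tx = chain[i + 1][0] - chain[i - 1][0]
            ty = chain[i + 1][1] - chain[i - 1][1]
            n = math.hypot(tx, ty) or 1.0
            tx, ty = tx / n, ty / n
            gx, gy = grad(chain[i])
            dot = gx * tx + gy * ty
            px, py = gx - dot * tx, gy - dot * ty   # perpendicular component only
            new.append((chain[i][0] - lr * px, chain[i][1] - lr * py))
        new.append(chain[-1])
        chain = respace(new)
    return chain

path = [(-1.0 + 2.0 * i / 8, 0.5) for i in range(9)]   # initial guess through y = 0.5
path[0], path[-1] = (-1.0, 0.0), (1.0, 0.0)            # end points fixed at the minima
path = relax_chain(path)                               # relaxes toward the y = 0 valley
```

On this potential the minimum-energy path runs along y = 0, so the relaxed interior images should collapse onto that valley while the endpoints stay pinned at the two minima.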
It was also successfully utilized to define the minimum energy path (MEP) of the isomerization of the glycine molecule in water.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920006444','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920006444"><span><span class="hlt">Optimal</span> least-squares finite element <span class="hlt">method</span> for elliptic problems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jiang, Bo-Nan; Povinelli, Louis A.</p> <p>1991-01-01</p> <p>An <span class="hlt">optimal</span> least squares finite element <span class="hlt">method</span> is proposed for two dimensional and three dimensional elliptic problems and its advantages are discussed over the mixed Galerkin <span class="hlt">method</span> and the usual least squares finite element <span class="hlt">method</span>. In the usual least squares finite element <span class="hlt">method</span>, the second order equation (-∇·(∇u) + u = f) is recast as a first order system (-∇·p + u = f, ∇u - p = 0). The error analysis and numerical experiment show that, in this usual least squares finite element <span class="hlt">method</span>, the rate of convergence for the flux p is one order lower than <span class="hlt">optimal</span>.
In order to get an <span class="hlt">optimal</span> least squares <span class="hlt">method</span>, the irrotationality condition ∇×p = 0 should be included in the first order system.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ManRv...5....1K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ManRv...5....1K"><span>Springback <span class="hlt">optimization</span> in automotive Shock Absorber Cup with Genetic Algorithm</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kakandikar, Ganesh; Nandedkar, Vilas</p> <p>2018-02-01</p> <p>Drawing or forming is a process normally used to achieve a required component form from a metal blank by applying a punch which radially draws the blank into the die by a mechanical or hydraulic action, or by combining both. When the component is drawn to a depth greater than its diameter, the process is usually termed deep drawing, which involves complicated states of material deformation. Due to the radial drawing of the material as it enters the die, radial drawing stress occurs in the flange together with tangential compressive stress. This compression generates wrinkles in the flange. Wrinkling is an unwanted phenomenon and can be controlled by the application of a blank-holding force. Tensile stresses cause thinning in the wall region of the cup. The three main types of errors occurring in such a process are wrinkling, fracturing and springback. This paper reports work focused on springback and its control. Due to the complexity of the process, tool try-outs and experimentation may be costly, cumbersome and time consuming. Numerical simulation proves to be a good option for studying the process and developing a control strategy for reducing the springback. Finite-element based simulations have been used popularly for such purposes.
In this study, the springback in deep drawing of an automotive Shock Absorber Cup is simulated with the finite element <span class="hlt">method</span>. <span class="hlt">Taguchi</span> design of experiments and analysis of variance are used to analyze the influence of the process parameters on the springback. Mathematical relations are developed to relate the process parameters and the resulting springback. The <span class="hlt">optimization</span> problem is formulated for the springback, referring to the displacement magnitude in the selected sections. A Genetic Algorithm is then applied for process <span class="hlt">optimization</span> with the objective of minimizing the springback. The results indicate that a better prediction of the springback and process <span class="hlt">optimization</span> could be achieved with a combined use of these <span class="hlt">methods</span> and tools.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EnOp...50.1114B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EnOp...50.1114B"><span>Prepositioning emergency supplies under uncertainty: a parametric <span class="hlt">optimization</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bai, Xuejie; Gao, Jinwu; Liu, Yankui</p> <p>2018-07-01</p> <p>Prepositioning of emergency supplies is an effective <span class="hlt">method</span> for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric <span class="hlt">optimization</span> <span class="hlt">method</span>. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, and are represented as fuzzy parameters with variable possibility distributions.
The variable possibility distributions are obtained through the credibility critical value reduction <span class="hlt">method</span> for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed <span class="hlt">optimization</span> model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent <span class="hlt">optimization</span> model, a parameter-based domain decomposition <span class="hlt">method</span> is developed to divide the original <span class="hlt">optimization</span> problem into six mixed-integer parametric submodels, which can be solved by standard <span class="hlt">optimization</span> solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric <span class="hlt">optimization</span> <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..114a2121K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..114a2121K"><span>Experimental Investigation and <span class="hlt">Optimization</span> of Response Variables in WEDM of Inconel - 718</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Karidkar, S. S.; Dabade, U. 
A.</p> <p>2016-02-01</p> <p>Effective utilisation of Wire Electrical Discharge Machining (WEDM) technology is a challenge for modern manufacturing industries. New materials with higher strengths and capabilities are continually being developed to fulfil customers' needs. Inconel - 718 is one such material, extensively used in aerospace applications such as gas turbines, rocket motors and spacecraft, as well as in nuclear reactors, pumps, etc. This paper deals with the experimental investigation of <span class="hlt">optimal</span> machining parameters in WEDM for Surface Roughness, Kerf Width and Dimensional Deviation using DoE, namely the <span class="hlt">Taguchi</span> methodology with an L9 orthogonal array. By keeping the peak current constant at 70 A, the effect of the other process parameters on the above response variables was analysed. The experimental results were statistically analysed using Minitab-16 software. Analysis of Variance (ANOVA) shows pulse-on time as the most influential parameter, followed by wire tension, whereas spark gap set voltage is observed to be a non-influencing parameter.
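The Taguchi analysis pattern used here can be illustrated with a short sketch: compute a smaller-the-better signal-to-noise (S/N) ratio for each run of an L9(3^3) array, then average the S/N per factor level to pick the preferred setting. The response values below are invented for the sketch, not taken from the study.

```python
import math

# Hypothetical Taguchi L9(3^3) analysis with a smaller-the-better S/N ratio.
# Columns are three factors at three levels; responses are made-up Ra values.

L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]
roughness = [2.1, 1.8, 1.6, 2.4, 2.0, 2.2, 2.8, 2.5, 2.3]   # e.g. Ra in um

def sn_smaller_better(y):
    # S/N = -10 * log10(mean(y^2)); a single observation per run here.
    return -10.0 * math.log10(y * y)

sn = [sn_smaller_better(y) for y in roughness]

def level_means(factor):
    # Average S/N of the runs at each level of one factor column.
    means = []
    for level in (1, 2, 3):
        vals = [sn[i] for i, row in enumerate(L9) if row[factor] == level]
        means.append(sum(vals) / len(vals))
    return means

# Preferred level per factor = the one with the highest mean S/N.
best = [max((1, 2, 3), key=lambda lv: level_means(f)[lv - 1]) for f in range(3)]
```

The spread of the three level means for each factor also ranks factor influence, which is what the ANOVA step formalizes.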
Multi-objective <span class="hlt">optimization</span> technique, Grey Relational Analysis (GRA), shows <span class="hlt">optimal</span> machining parameters such as pulse on time 108 Machine unit, spark gap set voltage 50 V and wire tension 12 gm for <span class="hlt">optimal</span> response variables considered for the experimental analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=59439&keyword=gravity&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=59439&keyword=gravity&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>INNOVATIVE <span class="hlt">METHODS</span> FOR THE <span class="hlt">OPTIMIZATION</span> OF GRAVITY STORM SEWER DESIGN</span></a></p> <p><a target="_blank" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p>The purpose of this paper is to describe a new <span class="hlt">method</span> for <span class="hlt">optimizing</span> the design of urban storm sewer systems. Previous efforts to <span class="hlt">optimize</span> gravity sewers have met with limited success because classical <span class="hlt">optimization</span> <span class="hlt">methods</span> require that the problem be well behaved, e.g. 
describ...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=methodological&pg=6&id=EJ1120138','ERIC'); return false;" href="https://eric.ed.gov/?q=methodological&pg=6&id=EJ1120138"><span><span class="hlt">Optimizing</span> How We Teach Research <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Cvancara, Kristen E.</p> <p>2017-01-01</p> <p>Courses: Research <span class="hlt">Methods</span> (undergraduate or graduate level). Objective: The aim of this exercise is to <span class="hlt">optimize</span> the ability for students to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each <span class="hlt">method</span>, a…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H41L..02G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H41L..02G"><span>Surrogate Based Uni/Multi-Objective <span class="hlt">Optimization</span> and Distribution Estimation <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gong, W.; Duan, Q.; Huo, X.</p> <p>2017-12-01</p> <p>Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, weather and climate models etc. Traditional <span class="hlt">optimization</span> algorithms usually cost a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. 
With the help of a series of recently developed adaptive surrogate-modelling based <span class="hlt">optimization</span> <span class="hlt">methods</span>: the uni-objective <span class="hlt">optimization</span> <span class="hlt">method</span> ASMO, the multi-objective <span class="hlt">optimization</span> <span class="hlt">method</span> MO-ASMO, and the probability distribution estimation <span class="hlt">method</span> ASMO-PODE, the number of model evaluations can be significantly reduced to several hundreds, making it possible to calibrate very expensive dynamic models, such as regional high resolution land surface models, weather forecast models such as WRF, and intermediate complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based <span class="hlt">optimization</span> algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on the potential applications of the surrogate-based <span class="hlt">optimization</span> <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870012868','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870012868"><span><span class="hlt">Optimization</span> <span class="hlt">methods</span> and silicon solar cell numerical models</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Girardini, K.; Jacobsen, S. E.</p> <p>1986-01-01</p> <p>An <span class="hlt">optimization</span> algorithm for use with numerical silicon solar cell models was developed.
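The adaptive surrogate loop behind the ASMO family of methods above can be reduced to a one-dimensional sketch: fit a cheap surrogate to the samples gathered so far, jump to the surrogate's minimizer, spend one expensive evaluation there, and refit. This is an assumption-level illustration (a parabola through the three best points stands in for the surrogate), not the ASMO code.

```python
# Schematic adaptive-surrogate loop: only a handful of "expensive" model
# evaluations are needed because the search is driven by a cheap surrogate.

def expensive_model(x):
    # Stand-in for a costly simulation; true minimum at x = 1.7.
    return (x - 1.7) ** 2 + 0.1 * (x - 1.7) ** 4

def asmo_like(f, init=(0.0, 1.0, 3.0), rounds=6):
    samples = [(x, f(x)) for x in init]           # the only "expensive" calls
    for _ in range(rounds):
        # Surrogate = parabola through the three best samples so far.
        (x1, y1), (x2, y2), (x3, y3) = sorted(sorted(samples, key=lambda p: p[1])[:3])
        den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
        if den == 0.0:
            break                                 # degenerate fit: stop
        num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
        x_new = x2 - 0.5 * num / den              # minimizer of the parabola
        if any(abs(x_new - x) < 1e-9 for x, _ in samples):
            break                                 # surrogate has converged
        samples.append((x_new, f(x_new)))         # one more expensive evaluation
    return min(samples, key=lambda p: p[1])

x_best, y_best = asmo_like(expensive_model)
```

Real surrogate optimizers replace the parabola with Gaussian-process or similar response surfaces over many dimensions, but the sample-fit-search-resample rhythm is the same.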
By coupling an <span class="hlt">optimization</span> algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An <span class="hlt">optimization</span> algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference <span class="hlt">methods</span> to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical <span class="hlt">methods</span> used in SCAP1D require a significant amount of computer time, and during an <span class="hlt">optimization</span> the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an <span class="hlt">optimization</span> code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the <span class="hlt">optimal</span> solution.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li class="active"><span>14</span></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return 
showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li class="active"><span>15</span></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007SPIE.6721E..0GZ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007SPIE.6721E..0GZ"><span><span class="hlt">Optimized</span> <span class="hlt">method</span> for manufacturing large aspheric surfaces</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui</p> <p>2007-12-01</p> <p>Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there are ever more pressing requirements for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet these demands for precision and machining efficiency. This problem has received considerable attention from researchers. Aiming at the shortcomings of the original polishing process, an <span class="hlt">optimized</span> <span class="hlt">method</span> for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this <span class="hlt">method</span>.
A smaller SSD depth can be obtained by using a low-hardness tool and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be limited by using smaller tools and an amended material removal function model. For control over the full band of frequency errors, low-frequency errors can be corrected with the <span class="hlt">optimized</span> material removal function, while medium- and high-frequency errors are suppressed using a uniform removal principle. With this <span class="hlt">optimized</span> <span class="hlt">method</span>, the accuracy of a K9 glass paraboloid mirror can reach rms 0.055 waves (where a wave is 0.6328μm) in a short time. The results show that the <span class="hlt">optimized</span> <span class="hlt">method</span> can guide large aspheric surface manufacturing effectively.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890003832','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890003832"><span>Engineering applications of heuristic multilevel <span class="hlt">optimization</span> <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Barthelemy, Jean-Francois M.</p> <p>1988-01-01</p> <p>Some engineering applications of heuristic multilevel <span class="hlt">optimization</span> <span class="hlt">methods</span> are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem <span class="hlt">optimizations</span> is shown to be typically achieved through the use of exact or approximate sensitivity analysis.
Areas for further development are identified.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890015831','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890015831"><span>Engineering applications of heuristic multilevel <span class="hlt">optimization</span> <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Barthelemy, Jean-Francois M.</p> <p>1989-01-01</p> <p>Some engineering applications of heuristic multilevel <span class="hlt">optimization</span> <span class="hlt">methods</span> are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem <span class="hlt">optimizations</span> is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29297357','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29297357"><span><span class="hlt">Optimal</span> projection <span class="hlt">method</span> determination by Logdet Divergence and perturbed von-Neumann Divergence.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing</p> <p>2017-12-14</p> <p>Positive semi-definiteness is a critical property in kernel <span class="hlt">methods</span> for Support Vector Machine (SVM) by which efficient solutions can be guaranteed through convex quadratic programming. However, a lot of similarity functions in applications do not produce positive semi-definite kernels. We propose projection <span class="hlt">method</span> by constructing projection matrix on indefinite kernels. 
As a generalization of the spectrum <span class="hlt">method</span> (denoising <span class="hlt">method</span> and flipping <span class="hlt">method</span>), the projection <span class="hlt">method</span> shows better or comparable performance compared to the corresponding indefinite kernel <span class="hlt">methods</span> on a number of real world data sets. Under Bregman matrix divergence theory, we can find a suggested <span class="hlt">optimal</span> λ for the projection <span class="hlt">method</span> using unconstrained <span class="hlt">optimization</span> in kernel learning. In this paper we focus on <span class="hlt">optimal</span> λ determination, in pursuit of a precise <span class="hlt">optimal</span> λ determination <span class="hlt">method</span> within the unconstrained <span class="hlt">optimization</span> framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared <span class="hlt">optimal</span> λ determination using the Logdet Divergence and the perturbed von-Neumann Divergence, aiming at finding a better λ for the projection <span class="hlt">method</span>. Results on a number of real world data sets show that the projection <span class="hlt">method</span> with the <span class="hlt">optimal</span> λ from the Logdet divergence demonstrates near <span class="hlt">optimal</span> performance, and the perturbed von-Neumann Divergence can help determine a relatively better projection <span class="hlt">method</span>. The projection <span class="hlt">method</span> is easy to use for dealing with indefinite kernels, and the parameter embedded in the <span class="hlt">method</span> can be determined through unconstrained <span class="hlt">optimization</span> under Bregman matrix divergence theory.
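The spectrum fixes that the projection method generalizes can be shown concretely: "clip" zeroes the negative eigenvalues of an indefinite similarity matrix, while "flip" takes their absolute value, and either way the rebuilt matrix is positive semi-definite. A 2x2 closed form keeps the sketch dependency-free; in practice one would use `numpy.linalg.eigh` on the full kernel matrix.

```python
import math

# Repair an indefinite 2x2 similarity matrix by operating on its spectrum.

def eig2(a, b, d):
    # Eigenvalues and rotation angle of the symmetric matrix [[a, b], [b, d]].
    tr, det = a + d, a * d - b * b
    gap = math.sqrt(tr * tr / 4.0 - det)
    theta = 0.5 * math.atan2(2.0 * b, a - d)
    return tr / 2.0 + gap, tr / 2.0 - gap, theta

def rebuild(l1, l2, theta):
    # K = R diag(l1, l2) R^T with rotation R = [[c, -s], [s, c]].
    c, s = math.cos(theta), math.sin(theta)
    return [[l1 * c * c + l2 * s * s, (l1 - l2) * s * c],
            [(l1 - l2) * s * c, l1 * s * s + l2 * c * c]]

K = [[1.0, 2.0], [2.0, 1.0]]                       # eigenvalues 3 and -1: indefinite
l1, l2, th = eig2(K[0][0], K[0][1], K[1][1])
K_clip = rebuild(max(l1, 0.0), max(l2, 0.0), th)   # "clip": drop the negative part
K_flip = rebuild(abs(l1), abs(l2), th)             # "flip": absolute eigenvalues
```

For this K, clipping yields [[1.5, 1.5], [1.5, 1.5]] (eigenvalues 3 and 0) and flipping yields [[2, 1], [1, 2]] (eigenvalues 3 and 1); both are valid PSD kernels derived from the same indefinite similarity.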
This may provide a new way in kernel SVMs for varied objectives.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22980863','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22980863"><span>Optimisation of flavour ester biosynthesis in an aqueous system of coconut cream and fusel oil catalysed by lipase.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sun, Jingcan; Yu, Bin; Curran, Philip; Liu, Shao-Quan</p> <p>2012-12-15</p> <p>Coconut cream and fusel oil, two low-cost natural substances, were used as starting materials for the biosynthesis of flavour-active octanoic acid esters (ethyl-, butyl-, isobutyl- and (iso)amyl octanoate) using lipase Palatase as the biocatalyst. The <span class="hlt">Taguchi</span> design <span class="hlt">method</span> was used for the first time to <span class="hlt">optimize</span> the biosynthesis of esters by a lipase in an aqueous system of coconut cream and fusel oil. Temperature, time and enzyme amount were found to be statistically significant factors and the <span class="hlt">optimal</span> conditions were determined to be as follows: temperature 30°C, fusel oil concentration 9% (v/w), reaction time 24h, pH 6.2 and enzyme amount 0.26 g. Under the optimised conditions, a yield of 14.25mg/g (based on cream weight) and signal-to-noise (S/N) ratio of 23.07 dB were obtained. The results indicate that the <span class="hlt">Taguchi</span> design <span class="hlt">method</span> was an efficient and systematic approach to the optimisation of lipase-catalysed biological processes. Copyright © 2012 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017Ap%26SS.362..216C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017Ap%26SS.362..216C"><span>Homotopy <span class="hlt">method</span> for <span class="hlt">optimization</span> of variable-specific-impulse low-thrust trajectories</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng</p> <p>2017-11-01</p> <p>The homotopy <span class="hlt">method</span> has been used as a useful tool in solving fuel-<span class="hlt">optimal</span> trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy <span class="hlt">method</span> for <span class="hlt">optimization</span> of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory <span class="hlt">optimization</span>. The <span class="hlt">optimal</span> power throttle level and the <span class="hlt">optimal</span> specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory <span class="hlt">optimization</span>, leading to decoupled expressions of both the <span class="hlt">optimal</span> power throttle level and the <span class="hlt">optimal</span> specific impulse. The homotopy <span class="hlt">method</span> based on this homotopy function is proposed. 
Numerical simulations validate the feasibility and high efficiency of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1906n0007D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1906n0007D"><span><span class="hlt">Optimization</span> of the gypsum-based materials by the sequential simplex <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Doleželová, Magdalena; Vimmrová, Alena</p> <p>2017-11-01</p> <p>The application of the sequential simplex <span class="hlt">optimization</span> <span class="hlt">method</span> for the design of gypsum based materials is described. The principles of simplex <span class="hlt">method</span> are explained and several examples of the <span class="hlt">method</span> usage for the <span class="hlt">optimization</span> of lightweight gypsum and ternary gypsum based materials are given. By this <span class="hlt">method</span> lightweight gypsum based materials with desired properties and ternary gypsum based material with higher strength (16 MPa) were successfully developed. 
The simplex <span class="hlt">method</span> is a useful tool for <span class="hlt">optimizing</span> gypsum-based materials, but the objective of the <span class="hlt">optimization</span> has to be formulated appropriately.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..183a2003V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..183a2003V"><span><span class="hlt">Optimization</span> of process parameters in drilling of fibre hybrid composite using <span class="hlt">Taguchi</span> and grey relational analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.</p> <p>2017-03-01</p> <p>Nowadays quality plays a vital role in all products. Hence, development in manufacturing focuses on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Here, two important response characteristics, surface roughness and material removal rate, are <span class="hlt">optimized</span> by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi’s L16 orthogonal array is used for <span class="hlt">optimizing</span> the individual tool parameters. Analysis of Variance (ANOVA) is used to find the significance of the individual parameters. The simultaneous <span class="hlt">optimization</span> of the process parameters is done by grey relational analysis.
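The grey relational analysis step mentioned above can be sketched in a few lines: each response is normalized to [0, 1], a grey relational coefficient is computed from the deviation from the ideal, and the coefficients are averaged into one grade per run. The drilling data below are invented for illustration.

```python
# Grey relational grade for multi-response experiments (values are invented).

def grey_relational_grade(runs, larger_better, zeta=0.5):
    m = len(runs[0])
    cols = list(zip(*runs))
    norm = []
    for j, col in enumerate(cols):           # normalize each response to [0, 1]
        lo, hi = min(col), max(col)
        if larger_better[j]:
            norm.append([(v - lo) / (hi - lo) for v in col])
        else:
            norm.append([(hi - v) / (hi - lo) for v in col])
    grades = []
    for i in range(len(runs)):
        # deviation from the ideal (=1); with delta_min = 0 and delta_max = 1,
        # the grey relational coefficient reduces to zeta / (delta + zeta)
        coeffs = [zeta / ((1.0 - norm[j][i]) + zeta) for j in range(m)]
        grades.append(sum(coeffs) / m)       # equal weights across responses
    return grades

# hypothetical drilling runs: [surface roughness (um, smaller better),
#                              material removal rate (mm^3/min, larger better)]
runs = [[3.2, 120.0], [2.1, 95.0], [2.8, 150.0], [4.0, 80.0]]
g = grey_relational_grade(runs, larger_better=[False, True])
print(g.index(max(g)))  # run 2 (0-based) balances both responses best
```

Ranking runs by this single grade is what lets a Taguchi array optimize several responses simultaneously.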
The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011ITEIS.131..461M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011ITEIS.131..461M"><span><span class="hlt">Optimal</span> Price Decision Problem for Simultaneous Multi-article Auction and Its <span class="hlt">Optimal</span> Price Searching <span class="hlt">Method</span> by Particle Swarm <span class="hlt">Optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Masuda, Kazuaki; Aiyoshi, Eitaro</p> <p></p> <p>We propose a <span class="hlt">method</span> for solving <span class="hlt">optimal</span> price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines both whether each seller sells his/her article and which article(s) each buyer buys, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it into an equivalent dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous <span class="hlt">optimization</span> problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical <span class="hlt">method</span> to solve it by applying heuristic global search <span class="hlt">methods</span>.
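As a toy stand-in for such a heuristic search over transaction prices, a bare-bones particle swarm can minimize a continuous objective; the quadratic "dual" objective and the market-clearing price of 4.2 below are invented.

```python
import random

# Bare-bones particle swarm searching a 1-D "transaction price" (toy objective).

def pso(f, dim, n=30, iters=150, w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    gbest = list(pbest[min(range(n), key=pval.__getitem__)])
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:                  # update personal best
                pval[i], pbest[i] = v, list(xs[i])
                if v < f(gbest):             # update global best
                    gbest = list(xs[i])
    return gbest

# pretend the dual objective is minimized at a market-clearing price of 4.2
price = pso(lambda p: (p[0] - 4.2) ** 2, dim=1)
print(price)  # close to [4.2]
```

In the actual auction setting the objective would be the (nonsmooth) dual function evaluated by solving each participant's subproblem at the candidate prices.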
In this paper, Particle Swarm <span class="hlt">Optimization</span> (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22612544-development-optimization-uncertainty-analysis-methods-oil-gas-reservoirs','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22612544-development-optimization-uncertainty-analysis-methods-oil-gas-reservoirs"><span>Development <span class="hlt">Optimization</span> and Uncertainty Analysis <span class="hlt">Methods</span> for Oil and Gas Reservoirs</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry</p> <p></p> <p>Uncertainty complicates the development <span class="hlt">optimization</span> of oil and gas exploration and production projects, but <span class="hlt">methods</span> have been devised to analyze uncertainty and its impact on <span class="hlt">optimal</span> decision-making. This paper compares two <span class="hlt">methods</span> for development <span class="hlt">optimization</span> and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate on the advantages and disadvantages of each <span class="hlt">method</span>. Development <span class="hlt">optimization</span> involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design.
The gas field development problem is designed to highlight the differences in the implementation of the two <span class="hlt">methods</span> and to show that both <span class="hlt">methods</span> yield the same optimum design. The results show that both MC <span class="hlt">optimization</span> and stochastic programming provide unique benefits, and that the choice of <span class="hlt">method</span> depends on the goal of the analysis. While the MC <span class="hlt">method</span> generates more useful information, along with the optimum design configuration, the stochastic programming <span class="hlt">method</span> is more computationally efficient in determining the <span class="hlt">optimal</span> solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development <span class="hlt">optimization</span> under uncertainty for these reservoirs, and solve an example on the design <span class="hlt">optimization</span> of a multicompartment, multilayer oilfield development.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhDT........84K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhDT........84K"><span>Distributed <span class="hlt">Method</span> to <span class="hlt">Optimal</span> Profile Descent</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kim, Geun I.</p> <p></p> <p>Current ground automation tools for <span class="hlt">Optimal</span> Profile Descent (OPD) procedures utilize path stretching and speed profile change to maintain proper merging and spacing requirements in high-traffic terminal areas.
However, low predictability of an aircraft's vertical profile and path deviation during descent adds uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile, and defines an OPD <span class="hlt">optimization</span> problem that uses both path stretching and speed profile change while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD <span class="hlt">optimization</span> problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This <span class="hlt">method</span> divides the <span class="hlt">optimization</span> problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solutions to the other aircraft in an iterative process until an <span class="hlt">optimal</span> solution is achieved, thus decentralizing the computation of the <span class="hlt">optimization</span> problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020059585','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020059585"><span>Use of High Fidelity <span class="hlt">Methods</span> in Multidisciplinary <span class="hlt">Optimization</span>-A Preliminary Survey</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)</p> <p>2002-01-01</p> <p>Multidisciplinary <span class="hlt">optimization</span> is a key element of the design process.
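The dual-decomposition idea in the OPD entry above can be sketched on a made-up two-agent problem with one coupling constraint: each "aircraft" minimizes its own cost plus a price term, and a shared dual variable is updated by subgradient ascent on the constraint violation. The quadratic costs and constraint are hypothetical.

```python
# Dual decomposition: two agents coupled by x1 + x2 = C. Each agent solves its
# local problem for the current price lam; lam is updated from the violation.

def dual_decomposition(local_argmin, rhs, steps=200, alpha=0.1):
    lam = 0.0
    xs = []
    for _ in range(steps):
        xs = [argmin(lam) for argmin in local_argmin]  # solvable in parallel
        lam += alpha * (sum(xs) - rhs)                 # dual (price) update
    return lam, xs

# hypothetical quadratic costs: argmin_x (x-3)^2 + lam*x  and  argmin_y (y-1)^2 + lam*y
agents = [lambda lam: 3.0 - lam / 2.0,
          lambda lam: 1.0 - lam / 2.0]
lam, xs = dual_decomposition(agents, rhs=2.0)
print(round(lam, 6), [round(x, 6) for x in xs])  # 2.0 [2.0, 0.0]
```

Only the scalar price and the local solutions cross between agents each iteration, which is what an ADS-B-style message exchange would carry.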
To date, multidisciplinary <span class="hlt">optimization</span> <span class="hlt">methods</span> that use low-fidelity <span class="hlt">methods</span> are well advanced. <span class="hlt">Optimization</span> <span class="hlt">methods</span> based on simple linear aerodynamic equations and plate structural equations have been applied to complex aerospace configurations. However, use of high-fidelity <span class="hlt">methods</span> such as the Euler/Navier-Stokes equations for fluids and 3-D (three-dimensional) finite elements for structures has begun only recently. As an activity of the Multidiscipline Design <span class="hlt">Optimization</span> Technical Committee (MDO TC) of the AIAA (American Institute of Aeronautics and Astronautics), an effort was initiated to assess the status of the use of high-fidelity <span class="hlt">methods</span> in multidisciplinary <span class="hlt">optimization</span>. Contributions were solicited through the members of the MDO TC. This paper provides a summary of that survey.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1184468','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1184468"><span>An <span class="hlt">Optimization</span>-based Atomistic-to-Continuum Coupling <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell</p> <p>2014-08-21</p> <p>In this paper, we present a new <span class="hlt">optimization</span>-based <span class="hlt">method</span> for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained <span class="hlt">optimization</span> problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains.
The <span class="hlt">optimization</span> objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the <span class="hlt">optimization</span> problem, distinguishes our approach from existing AtC formulations. Finally, we present and analyze the <span class="hlt">method</span> in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20040115816&hterms=soft+computing&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsoft%2Bcomputing','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20040115816&hterms=soft+computing&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsoft%2Bcomputing"><span>Determining flexor-tendon repair techniques via soft computing</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.</p> <p>2001-01-01</p> <p>An SC-based multi-objective decision-making <span class="hlt">method</span> for determining the <span class="hlt">optimal</span> flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the <span class="hlt">Taguchi</span> <span class="hlt">method</span>.
Using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> requires ad hoc decisions when the outcomes for individual objectives contradict a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous, straightforward computational process in which changing preferences and the importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/11838250','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/11838250"><span>Determining flexor-tendon repair techniques via soft computing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Johnson, M; Firoozbakhsh, K; Moniem, M; Jamshidi, M</p> <p>2001-01-01</p> <p>An SC-based multi-objective decision-making <span class="hlt">method</span> for determining the <span class="hlt">optimal</span> flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the <span class="hlt">Taguchi</span> <span class="hlt">method</span>.
Using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> requires ad hoc decisions when the outcomes for individual objectives contradict a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous, straightforward computational process in which changing preferences and the importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000121156','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000121156"><span>Evaluation of <span class="hlt">Methods</span> for Multidisciplinary Design <span class="hlt">Optimization</span> (MDO). Part 2</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)</p> <p>2000-01-01</p> <p>A new MDO <span class="hlt">method</span>, BLISS, and two different variants of the <span class="hlt">method</span>, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these <span class="hlt">methods</span> are based on decomposing a modular system <span class="hlt">optimization</span> problem into several subtask <span class="hlt">optimizations</span> that may be executed concurrently, and a system-level <span class="hlt">optimization</span> that coordinates the subtask <span class="hlt">optimizations</span>.
The BLISS <span class="hlt">method</span> and its variants are well suited to exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including the local sensitivity analysis, local <span class="hlt">optimization</span>, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of the compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010CNSNS..15..787K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010CNSNS..15..787K"><span>On a biologically inspired topology <span class="hlt">optimization</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kobayashi, Marcelo H.</p> <p>2010-03-01</p> <p>This work concerns the development of a biologically inspired methodology for the study of topology <span class="hlt">optimization</span> in engineering and natural systems. The methodology is based on L systems and their turtle interpretation for the genotype-phenotype modeling of the topology development. The topology is analyzed using the finite element <span class="hlt">method</span>, and <span class="hlt">optimized</span> using an evolutionary algorithm with the genetic encoding of the L system and its turtle interpretation, as well as body shape and physical characteristics.
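A toy genotype-to-phenotype sketch in the spirit of the L-system entry above: a string-rewriting step produces the "genotype", and a turtle interpretation turns it into a geometric "phenotype". The rule and turn angle below are invented, not the paper's encoding.

```python
import math

# L-system rewriting plus a turtle interpretation (rule and angle invented).

def lsystem(axiom, rules, n):
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    return s

def turtle_path(s, step=1.0, angle=90.0):
    # F: move forward one step, '+'/'-': turn left/right by `angle` degrees
    x = y = heading = 0.0
    pts = [(x, y)]
    for c in s:
        if c == "F":
            x += step * math.cos(math.radians(heading))
            y += step * math.sin(math.radians(heading))
            pts.append((x, y))
        elif c == "+":
            heading += angle
        elif c == "-":
            heading -= angle
    return pts

g = lsystem("F", {"F": "F+F-F-F+F"}, 2)  # quadratic Koch-like genotype, 2 rewrites
print(len(g), turtle_path(g)[-1])        # 49 symbols; endpoint near (9, 0)
```

In the evolutionary setting, the rewrite rules themselves would be the genes mutated and selected on, while the turtle output feeds the finite element analysis.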
The test cases considered in this work clearly show the suitability of the proposed <span class="hlt">method</span> for the study of complex engineering and natural systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3571924','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3571924"><span><span class="hlt">Optimization</span> of cultural conditions for conversion of glycerol to ethanol by Enterobacter aerogenes S012</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>The aim of this research is to <span class="hlt">optimize</span> the cultural conditions for the conversion of glycerol to ethanol by Enterobacter aerogenes S012. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was used to screen the cultural conditions based on their signal-to-noise (S/N) ratios. Temperature (°C), agitation speed (rpm) and time (h) were found to have the highest influence on both glycerol utilization and ethanol production by the organism, while pH had the lowest. A full factorial design, statistical analysis, and a regression model equation were used to <span class="hlt">optimize</span> the selected cultural parameters for maximum ethanol production. The results showed that fermentation at 38°C and 200 rpm for 48 h would be ideal for the bacteria to produce the maximum amount of ethanol from glycerol. At these optimum conditions, ethanol production, yield and productivity were 25.4 g/l, 1.12 mol/mol-glycerol, and 0.53 g/l/h, respectively. Ethanol production increased to 26.5 g/l while yield and productivity decreased to 1.04 mol/mol-glycerol and 0.37 g/l/h, respectively, after 72 h. Analysis of the fermentation products was performed using HPLC, while anaerobic conditions were created by purging the fermentation vessel with nitrogen gas.
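The S/N-ratio screening used in the entry above can be sketched as follows: compute a larger-the-better S/N ratio per run, then average it over the runs at each level of each factor and pick the level with the higher mean. The L4 orthogonal array is a standard design; the ethanol yields are invented.

```python
import math

# Larger-the-better S/N ratio screening over an L4(2^3) orthogonal array.
# The array is standard; the ethanol yields for the four runs are invented.

def sn_larger_better(ys):
    # S/N = -10 * log10( mean(1/y_i^2) ), in dB
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # factor levels per run
yields = [18.2, 21.5, 25.4, 23.1]                    # g/l, one replicate per run
sn = [sn_larger_better([y]) for y in yields]

best_levels = []
for fac in range(3):
    means = [sum(s for row, s in zip(L4, sn) if row[fac] == lvl) / 2
             for lvl in (0, 1)]
    best_levels.append(max((0, 1), key=lambda l: means[l]))
print(best_levels)  # level picked for each factor: [1, 1, 1]
```

With replicates per run, the same S/N formula also rewards low variability, which is the robustness aspect of the Taguchi approach.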
PMID:23388539</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19880045124&hterms=optimization+loading&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Doptimization%2Bloading','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19880045124&hterms=optimization+loading&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Doptimization%2Bloading"><span>An approximation <span class="hlt">method</span> for configuration <span class="hlt">optimization</span> of trusses</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hansen, Scott R.; Vanderplaats, Garret N.</p> <p>1988-01-01</p> <p>Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The <span class="hlt">method</span> presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical <span class="hlt">optimizer</span> minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other <span class="hlt">methods</span> are made. 
It is shown that the <span class="hlt">method</span> of forming an approximate structural analysis based on linearized member forces leads to a highly efficient <span class="hlt">method</span> of truss configuration <span class="hlt">optimization</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19730022827','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19730022827"><span>Analysis and <span class="hlt">optimization</span> of cyclic <span class="hlt">methods</span> in orbit computation</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pierce, S.</p> <p>1973-01-01</p> <p>The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic <span class="hlt">methods</span> and the K=5, order 6 Cowell <span class="hlt">method</span> and some results of <span class="hlt">optimizing</span> the 3 backpoint cyclic multistep <span class="hlt">methods</span> for solving ordinary differential equations are presented. Cyclic <span class="hlt">methods</span> have the advantage over traditional <span class="hlt">methods</span> of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic <span class="hlt">methods</span> has been isolated. The free parameters for three backpoint <span class="hlt">methods</span> were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's <span class="hlt">method</span> on selected problems. This work is being extended to the five backpoint <span class="hlt">methods</span>. The analysis and <span class="hlt">optimization</span> are more difficult here since the matrices are larger and the dimension of the <span class="hlt">optimizing</span> space is larger. 
Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870006980','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870006980"><span><span class="hlt">Optimization</span> <span class="hlt">methods</span> and silicon solar cell numerical models</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Girardini, K.</p> <p>1986-01-01</p> <p>The goal of this project is the development of an <span class="hlt">optimization</span> algorithm for use with a solar cell model.
It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An <span class="hlt">optimization</span> algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference <span class="hlt">methods</span> to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical <span class="hlt">methods</span> used in SCAPID require a significant amount of computer time, and during an <span class="hlt">optimization</span> the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an <span class="hlt">optimization</span> code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the <span class="hlt">optimal</span> solution. Adapting SCAPID so that it could be called iteratively by the <span class="hlt">optimization</span> code provided another means of reducing the CPU time required to complete an <span class="hlt">optimization</span>.
Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current), and the solution from previous calculations is used to initiate the next solution.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10609E..0NQ','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10609E..0NQ"><span>An improved multi-paths <span class="hlt">optimization</span> <span class="hlt">method</span> for video stabilization</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Qin, Tao; Zhong, Sheng</p> <p>2018-03-01</p> <p>For video stabilization, the difference between the original camera motion path and the <span class="hlt">optimized</span> one is proportional to the cropping ratio and warping ratio. A good <span class="hlt">optimized</span> path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path <span class="hlt">optimization</span> <span class="hlt">method</span> to obtain a smooth path and a stabilized video. The proposed video stabilization <span class="hlt">method</span> consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D <span class="hlt">methods</span> or 3D <span class="hlt">methods</span> fail for lack of long feature trajectories.
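The Gaussian-weighted path smoothing mentioned above can be sketched in one dimension: each smoothed sample is a Gaussian-kernel average of its neighbors. The per-frame translation values are invented; a real stabilizer would smooth full warp parameters per grid cell, not a single scalar.

```python
import math

# Gaussian-weighted smoothing of a 1-D camera path (per-frame values invented).

def gaussian_smooth_path(path, sigma=2.0, radius=5):
    out = []
    for t in range(len(path)):
        num = den = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(t + k, 0), len(path) - 1)   # clamp at the borders
            w = math.exp(-k * k / (2.0 * sigma * sigma))
            num += w * path[j]
            den += w
        out.append(num / den)
    return out

path = [0, 3, 1, 4, 2, 5, 3, 6, 4, 7]   # jittery horizontal translations (px)
smooth = gaussian_smooth_path(path)
jitter = lambda p: sum((p[i + 1] - p[i]) ** 2 for i in range(len(p) - 1))
print(jitter(path), round(jitter(smooth), 3))  # smoothed path has far less jitter
```

The gap between `path` and `smooth` at each frame is what drives the cropping and warping ratios the entry discusses.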
The multi-path <span class="hlt">optimization</span> <span class="hlt">method</span> deals well with parallax: we calculate the space-time correlation of adjacent grids, and a Gaussian kernel is then used to weight the motion of adjacent grids. The multiple paths are then smoothed while minimizing the crop ratio and the distortion. We test our <span class="hlt">method</span> on a large variety of consumer videos, which have casual jitter and parallax, and achieve good results.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24574929','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24574929"><span>Reentry trajectory <span class="hlt">optimization</span> based on a multistage pseudospectral <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhao, Jiang; Zhou, Rui; Jin, Xuelian</p> <p>2014-01-01</p> <p>Of the many direct numerical <span class="hlt">methods</span>, the pseudospectral <span class="hlt">method</span> serves as an effective tool to solve the reentry trajectory <span class="hlt">optimization</span> for hypersonic vehicles. However, the traditional pseudospectral <span class="hlt">method</span> is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral <span class="hlt">method</span>, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed <span class="hlt">method</span> generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several <span class="hlt">optimal</span> trajectory sequences.
The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral <span class="hlt">method</span> in reentry trajectory <span class="hlt">optimization</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3915492','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3915492"><span>Reentry Trajectory <span class="hlt">Optimization</span> Based on a Multistage Pseudospectral <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhou, Rui; Jin, Xuelian</p> <p>2014-01-01</p> <p>Of the many direct numerical <span class="hlt">methods</span>, the pseudospectral <span class="hlt">method</span> serves as an effective tool to solve the reentry trajectory <span class="hlt">optimization</span> for hypersonic vehicles. However, the traditional pseudospectral <span class="hlt">method</span> is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral <span class="hlt">method</span>, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed <span class="hlt">method</span> generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several <span class="hlt">optimal</span> trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter.
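For flavor, the core object behind pseudospectral collocation is a differentiation matrix on non-uniform nodes. Below is a generic Chebyshev-Gauss-Lobatto differentiation matrix (Trefethen's standard formula), not the paper's multistage scheme; applying it to a cubic recovers the derivative to round-off.

```python
import math

# Chebyshev differentiation matrix on Gauss-Lobatto nodes: the building block
# of pseudospectral collocation (generic sketch, standard formula).

def cheb(N):
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [(2.0 if j in (0, N) else 1.0) * (-1.0) ** j for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) / (x[i] - x[j])
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)  # negative-sum trick
    return D, x

D, x = cheb(8)
# differentiate f(x) = x^3 spectrally and compare with the exact 3x^2
deriv = [sum(dij * xj**3 for dij, xj in zip(row, x)) for row in D]
err = max(abs(d - 3.0 * xi**2) for d, xi in zip(deriv, x))
print(err)  # round-off level: differentiation is exact for polynomials of degree <= N
```

In a trajectory problem, replacing state derivatives by `D @ state` at the nodes is what converts the dynamics into the algebraic constraints of a nonlinear program; the multistage idea keeps each such matrix small.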
Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of multistage pseudospectral <span class="hlt">method</span> in reentry trajectory <span class="hlt">optimization</span>. PMID:24574929</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29496467','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29496467"><span>Manufacturing of a novel double-function ssDNA aptamer for sensitive diagnosis and efficient neutralization of SEA.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sedighian, Hamid; Halabian, Raheleh; Amani, Jafar; Heiat, Mohammad; Taheri, Ramezan Ali; Imani Fooladi, Abbas Ali</p> <p>2018-05-01</p> <p>Staphylococcal enterotoxin A (SEA) is an enterotoxin produced mainly by Staphylococcus aureus. In recent years, it has become the most prevalent compound for staphylococcal food poisoning (SFP) around the world. In this study, we isolate new dual-function single-stranded DNA (ssDNA) aptamers by using some new <span class="hlt">methods</span>, such as the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, by focusing on the detection and neutralization of SEA enterotoxin in food and clinical samples. For the asymmetric polymerase chain reaction (PCR) <span class="hlt">optimization</span> of each round of systematic evolution of ligands by exponential enrichment (SELEX), we use <span class="hlt">Taguchi</span> L9 orthogonal arrays, and the aptamer mobility shift assay (AMSA) is used for initial evaluation of the protein-DNA interactions on the last SELEX round. In our investigation the dissociation constant (K D ) value and the limit of detection (LOD) of the candidate aptamer were found to be 8.5 ± 0.91 of nM and 5 ng/ml using surface plasmon resonance (SPR). 
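For context, the Taguchi L9(3^4) orthogonal array mentioned above is a standard design in which every pair of columns covers all nine level combinations exactly once. The sketch below (our own illustration, not the authors' code; the response values are hypothetical stand-ins for, e.g., PCR yield) shows how such an array screens four three-level factors in only nine runs:

```python
from itertools import combinations

# Standard Taguchi L9 orthogonal array: 9 runs, 4 factors at 3 levels each.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

def check_orthogonality(array):
    """Every pair of columns must contain each of the 9 level pairs exactly once."""
    n_cols = len(array[0])
    for i, j in combinations(range(n_cols), 2):
        pairs = {(row[i], row[j]) for row in array}
        if len(pairs) != 9:
            return False
    return True

def main_effects(array, responses):
    """Mean response at each level of each factor (the Taguchi 'level averages')."""
    n_cols = len(array[0])
    effects = []
    for col in range(n_cols):
        effects.append({
            level: sum(y for row, y in zip(array, responses) if row[col] == level)
                   / sum(1 for row in array if row[col] == level)
            for level in (1, 2, 3)
        })
    return effects

# Hypothetical responses for the 9 runs; larger is assumed better here.
y = [12.0, 15.5, 14.0, 16.5, 13.0, 15.0, 14.5, 16.0, 13.5]
effects = main_effects(L9, y)
best = [max(e, key=e.get) for e in effects]  # best level per factor
```

Picking the best level of each factor from the level averages is how the orthogonal array locates a promising combination with 9 runs instead of the 81 of a full factorial.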
In the current study, the <span class="hlt">Taguchi</span> and mobility shift assay <span class="hlt">methods</span> were innovatively harnessed to improve the selection process and evaluate the protein-aptamer interactions. To the best of our knowledge, this is the first report on employing these two <span class="hlt">methods</span> in aptamer technology, especially against bacterial toxins. Copyright © 2018 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960002740','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960002740"><span>An analytic model for footprint dispersions and its application to mission design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rao, J. R. Jagannatha; Chen, Yi-Chao</p> <p>1992-01-01</p> <p>This is the final report on our recent research activities that are complementary to those conducted by our colleagues, Professor Farrokh Mistree and students, in the context of the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. We have studied the mathematical model that forms the basis of the Simulation and <span class="hlt">Optimization</span> of Rocket Trajectories (SORT) program and developed an analytic <span class="hlt">method</span> for determining mission reliability with a reduced number of flight simulations. 
This <span class="hlt">method</span> can be incorporated in a design algorithm to mathematically <span class="hlt">optimize</span> different performance measures of a mission, thus leading to a robust and easy-to-use methodology for mission planning and design.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19970006726','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19970006726"><span>A PDE Sensitivity Equation <span class="hlt">Method</span> for <span class="hlt">Optimal</span> Aerodynamic Design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Borggaard, Jeff; Burns, John</p> <p>1996-01-01</p> <p>The use of gradient based <span class="hlt">optimization</span> algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an <span class="hlt">optimization</span> algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular <span class="hlt">method</span> is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape <span class="hlt">optimization</span> problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. 
Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the <span class="hlt">optimal</span> design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region <span class="hlt">optimization</span> algorithm, the resulting <span class="hlt">optimal</span> design <span class="hlt">method</span> converges. We denote this approach as the sensitivity equation <span class="hlt">method</span>. The sensitivity equation <span class="hlt">method</span> is presented, convergence results are given and the approach is illustrated on two <span class="hlt">optimal</span> design problems involving shocks.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3157982','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3157982"><span>Comparison of <span class="hlt">Optimal</span> Design <span class="hlt">Methods</span> in Inverse Problems</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Banks, H. T.; Holm, Kathleen; Kappel, Franz</p> <p>2011-01-01</p> <p>Typical <span class="hlt">optimal</span> design <span class="hlt">methods</span> for inverse or parameter estimation problems are designed to choose <span class="hlt">optimal</span> sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. 
It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the <span class="hlt">optimal</span> sampling distribution. Here we formulate the classical <span class="hlt">optimal</span> design problem in the context of general <span class="hlt">optimization</span> problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any <span class="hlt">optimal</span> design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new <span class="hlt">optimal</span> design, SE-<span class="hlt">optimal</span> design (standard error <span class="hlt">optimal</span> design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-<span class="hlt">optimal</span> and E-<span class="hlt">optimal</span> designs. The <span class="hlt">optimal</span> sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the <span class="hlt">optimal</span> mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. 
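As a concrete illustration of an FIM-based criterion such as D-optimality, the sketch below (our own, with hypothetical parameter values, not the paper's computation) builds the Fisher Information Matrix for the Verhulst-Pearl logistic model from finite-difference sensitivities and scores a sampling grid by det(FIM); a D-optimal design is one that maximizes this determinant:

```python
import numpy as np

def logistic(t, K, r, x0=1.0):
    # Closed-form Verhulst-Pearl logistic solution x(t).
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

def fisher_information(times, K, r, sigma=1.0, h=1e-6):
    # FIM = sum_i s(t_i) s(t_i)^T / sigma^2, with sensitivities
    # s = (dx/dK, dx/dr) obtained by central finite differences.
    F = np.zeros((2, 2))
    for t in times:
        dK = (logistic(t, K + h, r) - logistic(t, K - h, r)) / (2 * h)
        dr = (logistic(t, K, r + h) - logistic(t, K, r - h)) / (2 * h)
        s = np.array([dK, dr])
        F += np.outer(s, s) / sigma**2
    return F

# Hypothetical parameter values for illustration only.
K, r = 17.5, 0.7
coarse = np.linspace(0.0, 10.0, 5)   # 5 sampling times
fine = np.linspace(0.0, 10.0, 9)     # superset of the coarse grid
d_coarse = np.linalg.det(fisher_information(coarse, K, r))
d_fine = np.linalg.det(fisher_information(fine, K, r))
```

Because the fine grid is a superset of the coarse one, it adds positive semidefinite information to the FIM, so its D-criterion can only be larger; comparing candidate grids this way is the core of the design comparison described in the abstract.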
PMID:21857762</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JTST...25.1138C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JTST...25.1138C"><span>Nozzle Mounting <span class="hlt">Method</span> <span class="hlt">Optimization</span> Based on Robot Kinematic Analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao</p> <p>2016-08-01</p> <p>Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic <span class="hlt">optimization</span> of all these aspects plays a key role in order to obtain an <span class="hlt">optimal</span> coating quality. In this study, the robot performance was <span class="hlt">optimized</span> from the aspect of nozzle mounting on the robot. An <span class="hlt">optimized</span> nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting <span class="hlt">method</span> from the point of view of robot kinematics validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical <span class="hlt">methods</span>. The energy consumptions of different nozzle mounting <span class="hlt">methods</span> were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, so achieving a constant nozzle speed. 
Thus, it is possible to <span class="hlt">optimize</span> robot performance and to economize robot energy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AdSpR..58....1L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AdSpR..58....1L"><span>Fuel-<span class="hlt">optimal</span> low-thrust formation reconfiguration via Radau pseudospectral <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Li, Jing</p> <p>2016-07-01</p> <p>This paper investigates fuel-<span class="hlt">optimal</span> low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary <span class="hlt">optimality</span> conditions are derived from the Pontryagin's maximum principle. The fuel-<span class="hlt">optimal</span> impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as <span class="hlt">optimization</span> variables, the fuel-<span class="hlt">optimal</span> low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using multiple-phase Radau pseudospectral <span class="hlt">method</span> (RPM), which is then solved by a sparse nonlinear <span class="hlt">optimization</span> software SNOPT. To facilitate <span class="hlt">optimality</span> verification and, if necessary, further refinement of the <span class="hlt">optimized</span> solution of the NLP, formulas for mass costate estimation and initial costates scaling are presented. Numerical examples are given to show the application of the proposed <span class="hlt">optimization</span> <span class="hlt">method</span>. 
In practice, the generic fuel-<span class="hlt">optimal</span> low-thrust formation reconfiguration can be simplified to a reconfiguration without any initial and terminal coast arcs, whose <span class="hlt">optimal</span> solutions can be efficiently obtained from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and maximum thrust magnitude on the fuel-<span class="hlt">optimal</span> low-thrust formation reconfiguration is analyzed. Numerical results show the links and differences between the fuel-<span class="hlt">optimal</span> impulsive and low-thrust solutions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940017001','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940017001"><span>Layout <span class="hlt">optimization</span> with algebraic multigrid <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Regler, Hans; Ruede, Ulrich</p> <p>1993-01-01</p> <p>Finding the <span class="hlt">optimal</span> position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is, the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic <span class="hlt">optimization</span> problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative <span class="hlt">methods</span>, based on conjugate gradients (CG), we show that algebraic multigrid <span class="hlt">methods</span> (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. 
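The quadratic relative-placement functional described above leads to exactly the kind of sparse, symmetric positive definite system that CG (and AMG) solve. A minimal sketch of the CG baseline (a generic textbook conjugate gradient on a toy Laplacian-like placement matrix, not the paper's AMG code) follows:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Classical CG for a symmetric positive definite system A x = b.
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs = float(r @ r)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / float(p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        if rs_new ** 0.5 < tol:   # residual norm small enough
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy placement-style system: a 1-D chain of cells connected by "nets"
# (a graph Laplacian), with the end cells anchored to fixed pads, which
# makes the matrix positive definite.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n)
b[0], b[-1] = 0.0, 100.0   # pad positions pulling the first and last cell
x = conjugate_gradient(A, b)
```

The minimizer spreads the cells evenly between the two pads; for realistic chip sizes the matrix is stored in sparse format and, as the abstract argues, AMG becomes attractive as a solver or as a preconditioner for this same CG iteration.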
Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005SPIE.6040E..0ZH','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005SPIE.6040E..0ZH"><span>Robust design of microchannel cooler</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>He, Ye; Yang, Tao; Hu, Li; Li, Leimin</p> <p>2005-12-01</p> <p>The microchannel cooler offers a new <span class="hlt">method</span> for the cooling of high power diode lasers, with the advantages of small volume, high efficiency of thermal dissipation and low cost when mass-produced. In order to reduce the sensitivity of the design to manufacturing errors or other disturbances, the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, a robust design <span class="hlt">method</span>, was chosen to <span class="hlt">optimize</span> three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal mathematical model of the varying-section microchannel was calculated using the finite volume <span class="hlt">method</span> in FLUENT. A special program was written to realize the automation of the design process for improving efficiency. An <span class="hlt">optimal</span> design is presented that compromises between <span class="hlt">optimal</span> cooling performance and its robustness. 
This design <span class="hlt">method</span> proves to be feasible.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CNSNS..42..623P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CNSNS..42..623P"><span>An hp symplectic pseudospectral <span class="hlt">method</span> for nonlinear <span class="hlt">optimal</span> control</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong</p> <p>2017-01-01</p> <p>An adaptive symplectic pseudospectral <span class="hlt">method</span> based on the dual variational principle is proposed and is successfully applied to solving nonlinear <span class="hlt">optimal</span> control problems in this paper. The proposed <span class="hlt">method</span> satisfies the first order necessary conditions of continuous <span class="hlt">optimal</span> control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original <span class="hlt">optimal</span> control problem is transferred into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed <span class="hlt">method</span>, on one hand, exhibits exponential convergence rates when the number of collocation points increases with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combining with the hp <span class="hlt">method</span> based on the residual error of dynamic constraints, the proposed <span class="hlt">method</span> can achieve a given precision in a few iterations. 
Five examples highlight the high precision and high computational efficiency of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/5222278','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/5222278"><span>Review of dynamic <span class="hlt">optimization</span> <span class="hlt">methods</span> in renewable natural resource management</span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Williams, B.K.</p> <p>1989-01-01</p> <p>In recent years, the applications of dynamic <span class="hlt">optimization</span> procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of <span class="hlt">optimization</span> methodologies and natural resource systems. The applicability of the <span class="hlt">methods</span> to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the <span class="hlt">optimal</span> solutions. 
Recommendations are made concerning the appropriate <span class="hlt">methods</span> for certain kinds of biological resource problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..149a2028S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..149a2028S"><span>Multi-response parametric <span class="hlt">optimization</span> in drilling of bamboo/Kevlar fiber reinforced sandwich composite</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Singh, Thingujam Jackson; Samanta, Sutanu</p> <p>2016-09-01</p> <p>In the present work an attempt was made towards parametric <span class="hlt">optimization</span> of drilling bamboo/Kevlar K29 fiber reinforced sandwich composite to minimize the delamination occurring during the drilling process and also to maximize the tensile strength of the drilled composite. The spindle speed and the feed rate of the drilling operation are taken as the input parameters. The influence of these parameters on delamination and tensile strength of the drilled composite was studied and analysed using the <span class="hlt">Taguchi</span> GRA and ANOVA techniques. The results show that both response parameters, i.e. delamination and tensile strength, are influenced more by feed rate than by spindle speed. 
The percentage contributions of feed rate and spindle speed to the response parameters are 13.88% and 81.74%, respectively.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27721510','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27721510"><span>β-galactosidase Production by Aspergillus niger ATCC 9142 Using Inexpensive Substrates in Solid-State Fermentation: <span class="hlt">Optimization</span> by Orthogonal Arrays Design.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kazemi, Samaneh; Khayati, Gholam; Faezi-Ghasemi, Mohammad</p> <p>2016-01-01</p> <p>Enzymatic hydrolysis of lactose is one of the most important biotechnological processes in the food industry, which is accomplished by the enzyme β-galactosidase (β-gal, β-D-galactoside galactohydrolase, EC 3.2.1.23), trivially called lactase. Orthogonal array design is an appropriate option for the <span class="hlt">optimization</span> of biotechnological processes for the production of microbial enzymes. A design of experiments (DOE) methodology using a <span class="hlt">Taguchi</span> orthogonal array (OA) was employed to screen the most significant levels of parameters, including the solid substrates (wheat straw, rice straw, and peanut pod), the carbon/nitrogen (C/N) ratios, the incubation time, and the inducer. The level of β-gal production was measured by a photometric enzyme activity assay using the artificial substrate ortho-Nitrophenyl-β-D-galactopyranoside. The results showed that a C/N ratio of 0.2% [w/v], an incubation time of 144 hours, and wheat straw as the solid substrate were the best conditions determined by the design of experiments using the <span class="hlt">Taguchi</span> approach. Our findings showed that the use of rice straw and peanut pod, as solid-state substrates, led to a 2.041-fold increase in the production of the enzyme, as compared to rice straw. 
In addition, the presence of an inducer did not have any significant impact on the enzyme production levels.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25013845','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25013845"><span>A solution quality assessment <span class="hlt">method</span> for swarm intelligence <span class="hlt">optimization</span> algorithms.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua</p> <p>2014-01-01</p> <p>Nowadays, swarm intelligence <span class="hlt">optimization</span> has become an important <span class="hlt">optimization</span> tool and is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by the algorithm for practical problems. This greatly limits its application to practical problems. A solution quality assessment <span class="hlt">method</span> for intelligent <span class="hlt">optimization</span> is proposed in this paper. It is an experimental analysis <span class="hlt">method</span> based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," the "ordinal performance" is used as the evaluation criterion in this <span class="hlt">method</span>. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Finally, using statistical knowledge, the evaluation result can be obtained. 
To validate the proposed <span class="hlt">method</span>, some intelligent algorithms such as ant colony <span class="hlt">optimization</span> (ACO), particle swarm <span class="hlt">optimization</span> (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JChPh.138i4109N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JChPh.138i4109N"><span>A second-order unconstrained <span class="hlt">optimization</span> <span class="hlt">method</span> for canonical-ensemble density-functional <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nygaard, Cecilie R.; Olsen, Jeppe</p> <p>2013-03-01</p> <p>A second order converging <span class="hlt">method</span> of ensemble <span class="hlt">optimization</span> (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather is <span class="hlt">optimized</span> by the algorithm. SOEO is a second order Newton-Raphson <span class="hlt">method</span> of <span class="hlt">optimization</span>, where both the form of the orbitals and the occupation numbers are <span class="hlt">optimized</span> simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. 
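The occupation-angle idea can be illustrated as follows. The specific form n_i = 2·sin²(θ_i) is our assumption of a common such parametrization that keeps occupations in [0, 2]; the abstract does not state the exact trigonometric expression Nygaard and Olsen use:

```python
import math

def occupations(angles):
    # Map unconstrained occupation angles to occupation numbers in [0, 2].
    # n_i = 2 sin^2(theta_i) is smooth in theta_i, so the derivatives needed
    # for a Newton-Raphson step exist everywhere, and the bounds
    # 0 <= n_i <= 2 hold automatically for any real angle.
    return [2.0 * math.sin(theta) ** 2 for theta in angles]

def electron_count(angles):
    # Total number of electrons; a canonical-ensemble optimizer would
    # constrain this sum while varying the angles freely.
    return sum(occupations(angles))

# Hypothetical angles: fully occupied at pi/2, fractional elsewhere.
angles = [0.3, 1.2, math.pi / 2, 2.5]
n = occupations(angles)
```

The point of such a change of variables is exactly what the abstract describes: the bound constraints on occupation numbers disappear, so the optimizer can treat angles (and orbital rotations) as unconstrained variables.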
The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the <span class="hlt">optimization</span> <span class="hlt">method</span>, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry-broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as the local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003ITEIS.123.1166K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003ITEIS.123.1166K"><span>Application’s <span class="hlt">Method</span> of Quadratic Programming for <span class="hlt">Optimization</span> of Portfolio Selection</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro</p> <p></p> <p>Investors and fund-managers face the <span class="hlt">optimization</span> of portfolio selection, that is, determining the kind and the quantity of investment among several brands. 
We have developed a <span class="hlt">method</span> that obtains the <span class="hlt">optimal</span> stock portfolio two to three times faster than the conventional <span class="hlt">method</span>, using efficient universal <span class="hlt">optimization</span>. The <span class="hlt">method</span> is characterized by dividing the quadratic matrix of the utility function and the constraint matrices into several sub-matrices, exploiting the structure of these matrices.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013JEI....22d1123C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013JEI....22d1123C"><span>Panorama parking assistant system with improved particle swarm <span class="hlt">optimization</span> <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong</p> <p>2013-10-01</p> <p>A panorama parking assistant system (PPAS) for the automotive aftermarket, together with a practical improved particle swarm <span class="hlt">optimization</span> <span class="hlt">method</span> (IPSO), are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and the four channels of video frames captured by the cameras are processed into a 360-deg top-view image around the vehicle. Besides the embedded design of the PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter <span class="hlt">optimization</span> in the process of camera calibration. In order to address this problem, an IPSO <span class="hlt">method</span> is proposed. 
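To make the role of PSO in such a calibration pipeline concrete, here is a minimal generic global-best particle swarm sketch. It is a textbook PSO on a toy quadratic objective standing in for the calibration cost, not the proposed IPSO, and the "true" parameter values are hypothetical:

```python
import random

def pso(objective, dim, bounds, n_particles=30, iters=300,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    # Classical global-best PSO: each particle tracks its personal best
    # position, and all particles are attracted toward the swarm's best.
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Toy stand-in for a calibration cost: squared distance to hypothetical
# "true" camera parameters (a real cost would be reprojection error).
true_params = [1.2, -0.5]
cost = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, true_params))
best, best_f = pso(cost, dim=2, bounds=(-5.0, 5.0))
```

An "improved" PSO such as the IPSO of the abstract would modify pieces of this skeleton, e.g. how velocities are limited or how the parameter search ranges are constrained around the nominal intrinsic and extrinsic values.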
Compared with other parameter <span class="hlt">optimization</span> <span class="hlt">methods</span>, the proposed <span class="hlt">method</span> allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the <span class="hlt">optimization</span>; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO <span class="hlt">method</span> is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MPLB...3240053C','NASAADS'); return false;" 
href="http://adsabs.harvard.edu/abs/2018MPLB...3240053C"><span>Aerodynamic <span class="hlt">optimization</span> of wind turbine rotor using CFD/AD <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cao, Jiufa; Zhu, Weijun; Wang, Tongguang; Ke, Shitang</p> <p>2018-05-01</p> <p>The current work describes a novel technique for wind turbine rotor <span class="hlt">optimization</span>. The aerodynamic design and <span class="hlt">optimization</span> of a wind turbine rotor can be achieved with different <span class="hlt">methods</span>, such as semi-empirical engineering <span class="hlt">methods</span> and the more accurate computational fluid dynamics (CFD) <span class="hlt">method</span>. The CFD <span class="hlt">method</span> often provides more detailed aerodynamic features during the design process. However, its high computational cost limits the application, especially for rotor <span class="hlt">optimization</span> purposes. In this paper, a CFD-based actuator disc (AD) model is used to represent turbulent flow over a wind turbine rotor. The rotor is modeled as a permeable disc of equivalent area where the forces from the blades are distributed on the circular disc. The AD model is coupled with a Reynolds Averaged Navier-Stokes (RANS) solver such that the thrust and power are simulated. The design variables are the shape parameters comprising the chord, the twist and the relative thickness of the wind turbine rotor blade. The comparative aerodynamic performance of the original and <span class="hlt">optimized</span> reference wind turbine rotors is analyzed. 
The results showed that the <span class="hlt">optimization</span> framework can be effectively and accurately utilized in enhancing the aerodynamic performance of the wind turbine rotor.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090007683','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090007683"><span>Local-in-Time Adjoint-Based <span class="hlt">Method</span> for <span class="hlt">Optimal</span> Control/Design <span class="hlt">Optimization</span> of Unsteady Compressible Flows</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.</p> <p>2009-01-01</p> <p>We study local-in-time adjoint-based <span class="hlt">methods</span> for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time <span class="hlt">method</span> is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based <span class="hlt">optimization</span> <span class="hlt">methods</span> which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time <span class="hlt">method</span> solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time <span class="hlt">method</span> is much lower than that of the global adjoint formulation, thus making the time-dependent <span class="hlt">optimization</span> feasible for practical applications.
The paper presents a detailed comparison of the local- and global-in-time adjoint-based <span class="hlt">methods</span> for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time <span class="hlt">method</span> converges to the same <span class="hlt">optimal</span> solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19950010154&hterms=One+shot&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DOne%2Bshot','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19950010154&hterms=One+shot&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DOne%2Bshot"><span>Airfoil <span class="hlt">optimization</span> by the one-shot <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kuruvila, G.; Taasan, Shlomo; Salas, M. D.</p> <p>1994-01-01</p> <p>An efficient numerical approach for the design of <span class="hlt">optimal</span> aerodynamic shapes is presented in this paper. The objective of any <span class="hlt">optimization</span> problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical <span class="hlt">optimal</span> control <span class="hlt">methods</span>, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes.
Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the <span class="hlt">optimization</span> problem is approximately two to three times the cost of the equivalent analysis problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25942836','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25942836"><span>Influence of process parameters on the content of biomimetic calcium phosphate coating on titanium: a <span class="hlt">Taguchi</span> analysis.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Thammarakcharoen, Faungchat; Suvannapruk, Waraporn; Suwanprateeb, Jintamai</p> <p>2014-10-01</p> <p>In this study, a statistical design of experimental methodology based on <span class="hlt">Taguchi</span> orthogonal design has been used to study the effect of various processing parameters on the amount of calcium phosphate coating produced by such technique. Seven control factors with three levels each including sodium hydroxide concentration, pretreatment temperature, pretreatment time, cleaning <span class="hlt">method</span>, coating time, coating temperature and surface area to solution volume ratio were studied. X-ray diffraction revealed that all the coatings consisted of the mixture of octacalcium phosphate (OCP) and hydroxyapatite (HA) and the presence of each phase depended on the process conditions used. Various content and size (~1-100 μm) of isolated spheroid particles with nanosized plate-like morphology deposited on the titanium surface or a continuous layer of plate-like nanocrystals having the plate thickness in the range of ~100-300 nm and the plate width in the range of 3-8 μm were formed depending on the process conditions employed.
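The Taguchi orthogonal-array analysis described in the record above can be sketched numerically. The following is a minimal illustration, assuming a hypothetical L4(2³) design with invented factor levels and coating-amount responses (not the paper's seven-factor L18-style data): the larger-is-better signal-to-noise ratio is averaged per factor level to rank factor effects.

```python
import math

# Hypothetical L4(2^3) orthogonal array: 3 two-level factors, 4 runs.
# Columns: (NaOH concentration level, coating temperature level, coating time level)
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

# Hypothetical measured responses (coating amount, mg) for each run.
y = [12.0, 18.5, 14.2, 21.3]

def sn_larger_is_better(values):
    """Taguchi larger-is-better S/N ratio: -10*log10(mean(1/y^2))."""
    return -10.0 * math.log10(sum(1.0 / v**2 for v in values) / len(values))

# Mean S/N ratio for each level of each factor; the level with the higher
# S/N is preferred, and the factor with the largest spread dominates.
for factor in range(3):
    for level in (1, 2):
        runs = [y[i] for i, row in enumerate(L4) if row[factor] == level]
        print(f"factor {factor}, level {level}: S/N = {sn_larger_is_better(runs):.2f} dB")
```

A smaller-is-better response (e.g. surface roughness) would instead use -10·log10(mean(y²)); the level-averaging step is identical.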
The optimum condition of using sodium hydroxide concentration of 1 M, pretreatment temperature of 70 degrees C, pretreatment time of 24 h, cleaning by ultrasonic, coating time of 6 h, coating temperature of 50 degrees C and surface area to solution volume ratio of 32.74 for producing the greatest amount of the coating formed on the titanium surface was predicted and validated. In addition, coating temperature was found to be the dominant factor with the greatest contribution to the coating formation while coating time and cleaning <span class="hlt">method</span> were significant factors. Other factors had negligible effects on the coating performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5795912','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5795912"><span>A Coarse-Alignment <span class="hlt">Method</span> Based on the <span class="hlt">Optimal</span>-REQUEST Algorithm</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zhu, Yongyun</p> <p>2018-01-01</p> <p>In this paper, we proposed a coarse-alignment <span class="hlt">method</span> for strapdown inertial navigation systems based on attitude determination. The observation vectors, which can be obtained by inertial sensors, usually contain various types of noise, which affects the convergence rate and the accuracy of the coarse alignment. Given this drawback, we studied an attitude-determination <span class="hlt">method</span> named <span class="hlt">optimal</span>-REQUEST, which is an <span class="hlt">optimal</span> <span class="hlt">method</span> for attitude determination that is based on observation vectors. 
Compared to the traditional attitude-determination <span class="hlt">method</span>, the filtering gain of the proposed <span class="hlt">method</span> is tuned autonomously; thus, the convergence rate of the attitude determination is faster than in the traditional <span class="hlt">method</span>. Within the proposed <span class="hlt">method</span>, we developed an iterative <span class="hlt">method</span> for determining the attitude quaternion. We carried out simulation and turntable tests, which we used to validate the proposed method’s performance. The experiment’s results showed that the convergence rate of the proposed <span class="hlt">optimal</span>-REQUEST algorithm is faster and that the coarse alignment’s stability is higher. In summary, the proposed <span class="hlt">method</span> has a high applicability to practical systems. PMID:29337895</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/459191-comparison-genetic-algorithm-methods-fuel-management-optimization','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/459191-comparison-genetic-algorithm-methods-fuel-management-optimization"><span>Comparison of genetic algorithm <span class="hlt">methods</span> for fuel management <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>DeChaine, M.D.; Feltus, M.A.</p> <p>1995-12-31</p> <p>The CIGARO system was developed for genetic algorithm fuel management <span class="hlt">optimization</span>. Tests are performed to find the best fuel location swap mutation operator probability and to compare the genetic algorithm to a truly random search <span class="hlt">method</span>. Tests showed the fuel swap probability should be between 0% and 10%, and a 50% probability definitely hampered the <span class="hlt">optimization</span>.
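The fuel-location swap mutation studied in the record above can be sketched generically. This is a minimal illustration, not the CIGARO system's actual encoding (which the abstract does not describe): a candidate core layout is represented as a hypothetical list of assembly IDs, and two positions are exchanged with a tunable probability.

```python
import random

def swap_mutation(layout, p_swap, rng=random):
    """With probability p_swap, swap two randomly chosen positions.

    `layout` is a hypothetical list of fuel-assembly IDs; swapping
    preserves the multiset of assemblies, only their locations change.
    """
    child = list(layout)
    if rng.random() < p_swap:
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

rng = random.Random(42)
core = ["A1", "A2", "B1", "B2", "C1"]
mutated = swap_mutation(core, p_swap=0.05, rng=rng)  # low rate, as the study suggests
print(mutated)
```

The abstract's finding maps directly onto the `p_swap` parameter: values in the 0-10% range preserved good schemata, while 50% disrupted convergence.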
The genetic algorithm performed significantly better than the random search <span class="hlt">method</span>, which did not even satisfy the peak normalized power constraint.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4030569','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4030569"><span>Global <span class="hlt">Optimization</span> Ensemble Model for Classification <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab</p> <p>2014-01-01</p> <p>Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of a classifier and are the reason that there is no global <span class="hlt">optimal</span> <span class="hlt">method</span> for classification. There is no generalized improvement <span class="hlt">method</span> that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global <span class="hlt">optimization</span> ensemble model for classification <span class="hlt">methods</span> (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity.
PMID:24883382</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19720023372','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19720023372"><span>An engineering <span class="hlt">optimization</span> <span class="hlt">method</span> with application to STOL-aircraft approach and landing trajectories</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jacob, H. G.</p> <p>1972-01-01</p> <p>An <span class="hlt">optimization</span> <span class="hlt">method</span> has been developed that computes the <span class="hlt">optimal</span> open loop inputs for a dynamical system by observing only its output. The <span class="hlt">method</span> reduces to static <span class="hlt">optimization</span> by expressing the inputs as series of functions with parameters to be <span class="hlt">optimized</span>. Since the <span class="hlt">method</span> is not concerned with the details of the dynamical system to be <span class="hlt">optimized</span>, it works for both linear and nonlinear systems. The <span class="hlt">method</span> and the application to <span class="hlt">optimizing</span> longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..310a2103B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..310a2103B"><span>Study of Effects on Mechanical Properties of PLA Filament which is blended with Recycled PLA Materials</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Babagowda; Kadadevara Math, R. S.; Goutham, R.; Srinivas Prasad, K. 
R.</p> <p>2018-02-01</p> <p>Fused deposition modeling is a rapidly growing additive manufacturing technology due to its ability to build functional parts having complex geometry. The mechanical properties of the built part depend on several process parameters and on the build material of the printed specimen. The aim of this study is to characterize and <span class="hlt">optimize</span> parameters such as the layer thickness and the PLA build material, which is mixed with recycled PLA material. Tensile and flexural (bending) tests are carried out to determine the mechanical response characteristics of the printed specimen. The <span class="hlt">Taguchi</span> <span class="hlt">method</span> is used to plan the experiments, and the <span class="hlt">Taguchi</span> S/N ratio is used to identify the set of parameters which give good results for the respective response characteristics; the effect of each parameter is investigated by using analysis of variance (ANOVA).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012E%26ES...15b2023F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012E%26ES...15b2023F"><span>Design of large Francis turbine using <span class="hlt">optimal</span> <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.</p> <p>2012-11-01</p> <p>Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has especially been involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 m to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 m to 113.6 m - in erection).
Many new projects are under study to equip new power plants with Francis turbines in order to meet an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation <span class="hlt">methods</span> and the latest technologies in model testing, as well as the maximum feedback from jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design <span class="hlt">methods</span>, including global and local <span class="hlt">optimization</span> <span class="hlt">methods</span>. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with <span class="hlt">optimization</span> loops involving a blade design tool, an automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm.
These automated <span class="hlt">optimization</span> <span class="hlt">methods</span>, presented in different papers over the last decade, are nowadays widely used, thanks to the growing computation capacity of HPC clusters: the intensive use of such <span class="hlt">optimization</span> <span class="hlt">methods</span> at the turbine design stage makes it possible to reach a very high level of performance, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhyA..505..825E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhyA..505..825E"><span>A novel <span class="hlt">method</span> for overlapping community detection using Multi-objective <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa</p> <p>2018-09-01</p> <p>The problem of community detection, as one of the most important applications of network science, can be addressed effectively by multi-objective <span class="hlt">optimization</span>. In this paper, we aim to present a novel efficient <span class="hlt">method</span> based on this approach. In this study, the idea of using all Pareto fronts to detect overlapping communities is also introduced. The proposed <span class="hlt">method</span> has two main advantages compared to other multi-objective <span class="hlt">optimization</span> based approaches. The first advantage is scalability, and the second is the ability to find overlapping communities. Unlike most previous work, the proposed <span class="hlt">method</span> is able to find overlapping communities effectively.
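The Pareto-front machinery underlying multi-objective methods like the one above can be sketched briefly. This is a generic minimal illustration (assuming minimization of two invented objectives, not the paper's community-quality measures): a point is non-dominated if no other point is at least as good in every objective and strictly better in at least one.

```python
def dominates(a, b):
    """True if a dominates b (minimization): no worse in all objectives,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.
    (Duplicates are kept; an O(n^2) scan is fine for a sketch.)"""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(pts))  # (3.0, 4.0) and (5.0, 5.0) are dominated
```

Methods that use "all Pareto fronts", as in the record above, repeatedly strip the current front from the population and re-run this extraction on the remainder.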
The new algorithm works by extracting appropriate communities from all the Pareto <span class="hlt">optimal</span> solutions, instead of choosing a single <span class="hlt">optimal</span> solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed <span class="hlt">method</span> performs better in comparison with other <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007JSAST..49..220Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007JSAST..49..220Y"><span>Design Tool Using a New <span class="hlt">Optimization</span> <span class="hlt">Method</span> Based on a Stochastic Process</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio</p> <p></p> <p>Conventional <span class="hlt">optimization</span> <span class="hlt">methods</span> are based on a deterministic approach, since their purpose is to find an exact solution. However, such <span class="hlt">methods</span> have initial-condition dependence and the risk of falling into a local solution. In this paper, we propose a new <span class="hlt">optimization</span> <span class="hlt">method</span> based on the concept of path integrals used in quantum mechanics. The <span class="hlt">method</span> obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this <span class="hlt">method</span> are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new <span class="hlt">optimization</span> <span class="hlt">method</span> to a hang glider design. In this problem, both the hang glider design and its flight trajectory were <span class="hlt">optimized</span>.
The numerical calculation results prove that the performance of the <span class="hlt">method</span> is sufficient for practical use.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJEEP..17..327C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJEEP..17..327C"><span>Application of Multi-Objective Human Learning <span class="hlt">Optimization</span> <span class="hlt">Method</span> to Solve AC/DC Multi-Objective <span class="hlt">Optimal</span> Power Flow Problem</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cao, Jia; Yan, Zheng; He, Guangyu</p> <p>2016-06-01</p> <p>This paper introduces an efficient algorithm, the multi-objective human learning <span class="hlt">optimization</span> <span class="hlt">method</span> (MOHLO), to solve the AC/DC multi-objective <span class="hlt">optimal</span> power flow problem (MOPF). Firstly, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO <span class="hlt">method</span> can be derived, which involves an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO <span class="hlt">method</span> and the non-dominated sorting genetic algorithm II (NSGAII) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO <span class="hlt">method</span> has excellent search efficiency and a powerful ability to search for the <span class="hlt">optimal</span> solutions. Above all, the MOHLO <span class="hlt">method</span> can obtain a more complete Pareto front than the NSGAII <span class="hlt">method</span>.
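The crowding-distance index mentioned in the record above can be computed as in NSGA-II. The following is a minimal sketch for a single front of two-objective points, with illustrative values rather than power-flow results: boundary points get infinite distance so they are always retained, and interior points are scored by the normalized spread of their neighbors.

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance for one front of objective vectors."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):                                  # per objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        if hi == lo:
            continue
        for rank in range(1, n - 1):                     # interior points
            prev_v = front[order[rank - 1]][k]
            next_v = front[order[rank + 1]][k]
            dist[order[rank]] += (next_v - prev_v) / (hi - lo)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
print(crowding_distance(front))  # middle point gets a finite distance
```

In selection, ties in non-domination rank are broken in favor of the larger crowding distance, which spreads solutions along the Pareto front.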
However, how to choose the <span class="hlt">optimal</span> solution from the Pareto front depends mainly on the decision makers, who may stand from the economic point of view or from the energy saving and emission reduction point of view.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140008919','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140008919"><span>Trajectory <span class="hlt">Optimization</span> Using Adjoint <span class="hlt">Method</span> and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe</p> <p>2013-01-01</p> <p>This paper describes two <span class="hlt">methods</span> of trajectory <span class="hlt">optimization</span> to obtain an <span class="hlt">optimal</span> minimum-fuel-to-climb trajectory for an aircraft. The first <span class="hlt">method</span> is based on the adjoint <span class="hlt">method</span>, and the second is a direct trajectory <span class="hlt">optimization</span> <span class="hlt">method</span> using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate <span class="hlt">optimal</span> trajectory is compared with the adjoint-based <span class="hlt">optimal</span> trajectory, which is considered the true <span class="hlt">optimal</span> solution of the trajectory <span class="hlt">optimization</span> problem.
The adjoint-based <span class="hlt">optimization</span> problem leads to a singular <span class="hlt">optimal</span> control solution which results in a bang-singular-bang <span class="hlt">optimal</span> control.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20030005805','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20030005805"><span><span class="hlt">Optimized</span> Vertex <span class="hlt">Method</span> and Hybrid Reliability</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.</p> <p>2002-01-01</p> <p>A <span class="hlt">method</span> of calculating the fuzzy response of a system is presented. This <span class="hlt">method</span>, called the <span class="hlt">Optimized</span> Vertex <span class="hlt">Method</span> (OVM), is based upon the vertex <span class="hlt">method</span> but requires considerably fewer function evaluations. The <span class="hlt">method</span> is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940009145','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940009145"><span>Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. 
Thesis Final Technical Report</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Olds, John Robert; Walberg, Gerald D.</p> <p>1993-01-01</p> <p>Multidisciplinary design <span class="hlt">optimization</span> (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional <span class="hlt">optimization</span> <span class="hlt">methods</span> cannot always be applied. Several multidisciplinary techniques and <span class="hlt">methods</span> were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) <span class="hlt">optimization</span> schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design <span class="hlt">optimization</span> <span class="hlt">methods</span> is included. <span class="hlt">Methods</span> from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on <span class="hlt">methods</span> from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium sized payloads into low earth orbit. 
The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time <span class="hlt">optimization</span> <span class="hlt">methods</span> because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of <span class="hlt">Taguchi</span> <span class="hlt">methods</span>, central composite designs, and response surface <span class="hlt">methods</span> to the design <span class="hlt">optimization</span> of the RBCC SSTO are presented. Attention is given to the aspect of <span class="hlt">Taguchi</span> <span class="hlt">methods</span> that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21748796','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21748796"><span>Iterative <span class="hlt">optimization</span> <span class="hlt">method</span> for design of quantitative magnetization transfer imaging experiments.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Levesque, Ives R; Sled, John G; Pike, G Bruce</p> <p>2011-09-01</p> <p>Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A <span class="hlt">method</span> is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum.
The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and <span class="hlt">optimal</span> designs are produced to target specific model parameters. The <span class="hlt">optimal</span> number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this <span class="hlt">optimal</span> design approach substantially improves parameter map quality. The iterative <span class="hlt">method</span> presented here provides an advantage over free form <span class="hlt">optimal</span> design <span class="hlt">methods</span>, in that pragmatic design constraints are readily incorporated. In particular, the presented <span class="hlt">method</span> avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative <span class="hlt">optimal</span> design technique is general and can be applied to any <span class="hlt">method</span> of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28330258','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28330258"><span><span class="hlt">Optimization</span> of D-lactic acid production using unutilized biomass as substrates by multiple parallel fermentation.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mufidah, Elya; Wakayama, Mamoru</p> <p>2016-12-01</p> <p>This study investigated the <span class="hlt">optimization</span> of D-lactic acid production from unutilized biomass, specifically banana peel and corncob by multiple parallel fermentation (MPF) with Leuconostoc mesenteroides and Aspergillus awamori. 
The factors involved in MPF that were assessed in this study comprised banana peel and corncob, KH2PO4, Tween 80, MgSO4·7H2O, NaCl, yeast extract, and diammonium hydrogen citrate, to identify the <span class="hlt">optimal</span> concentration for D-lactic acid production. <span class="hlt">Optimization</span> of these component factors was performed using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> with an L8 orthogonal array. The <span class="hlt">optimal</span> concentrations for the effectiveness of MPF using biomass substrates were as follows: (1) banana peel, D-lactic acid production was 31.8 g/L in medium containing 15% carbon source, 0.5% KH2PO4, 0.1% Tween 80, 0.05% MgSO4·7H2O, 0.05% NaCl, 1.5% yeast extract, and 0.2% diammonium hydrogen citrate; (2) corncob, D-lactic acid production was 38.3 g/L in medium containing 15% carbon source, 0.5% KH2PO4, 0.1% Tween 80, 0.05% MgSO4·7H2O, 0.1% NaCl, 1.0% yeast extract, and 0.4% diammonium hydrogen citrate. Thus, both banana peel and corncob are unutilized potential resources for D-lactic acid production. These results indicate that MPF using L. mesenteroides and A.
awamori could constitute part of a potential industrial application of the currently unutilized banana peel and corncob biomass for D-lactic acid production.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPA....8d7504K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPA....8d7504K"><span>Study on <span class="hlt">optimal</span> design of 210kW traction IPMSM considering thermal demagnetization characteristics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kim, Young Hyun; Lee, Seong Soo; Cheon, Byung Chul; Lee, Jung Ho</p> <p>2018-04-01</p> <p>This study analyses the permanent magnet (PM) used in the rotor of an interior permanent magnet synchronous motor (IPMSM) used for driving an electric railway vehicle (ERV) in the context of controllable shape, temperature, and external magnetic field. The positioning of the inserted magnets is a degree of freedom in the design of such machines. This paper describes a preliminary analysis using a parametric finite-element <span class="hlt">method</span>, performed with the aim of achieving an effective design. Next, features of the experimental design, based on <span class="hlt">methods</span> such as the central composite design, the Box-Behnken design, and the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, are explored to optimise the shape for high power density. The results are used to produce an <span class="hlt">optimal</span> design for IPMSMs, with design errors minimized using Maxwell 2D, a commercial program. Furthermore, the demagnetization process is analysed based on the magnetization and demagnetization theory for PM materials in computer simulation. The result of the analysis can be used to calculate the magnetization and demagnetization phenomena according to the input B-H curve. 
This paper presents the conditions for demagnetization by the external magnetic field in the driving and stopped states, and proposes a simulation <span class="hlt">method</span> that can analyse demagnetization phenomena according to each condition and design the IPMSM that maximizes efficiency and torque characteristics. Finally, operational characteristics are analysed in terms of the operation patterns of railway vehicles, and control conditions are deduced to achieve maximum efficiency in all sections. This was experimentally verified.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930061019&hterms=conjugate+gradient&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dconjugate%2Bgradient','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930061019&hterms=conjugate+gradient&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dconjugate%2Bgradient"><span>Aerodynamic shape <span class="hlt">optimization</span> using preconditioned conjugate gradient <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Burgreen, Greg W.; Baysal, Oktay</p> <p>1993-01-01</p> <p>In an effort to further improve upon the latest advancements made in aerodynamic shape <span class="hlt">optimization</span> procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the <span class="hlt">optimization</span> procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape <span class="hlt">optimization</span> of the upper and lower surfaces of an initially symmetric (NACA-012) airfoil in inviscid transonic flow and at zero degree angle-of-attack. 
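The preconditioned conjugate gradient methodology highlighted in the aerodynamic shape study above can be illustrated on a linear model problem. The sketch below is illustrative only (not the paper's flow solver); the Jacobi preconditioner and the random symmetric positive-definite test system are assumptions chosen to show how the preconditioner enters the iteration:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD matrix A.
    M_inv is an approximation to A^-1 applied to the residual each step."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction
        rz = rz_new
    return x

# Jacobi (diagonal) preconditioner on a random diagonally dominant SPD system
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
M_inv = np.diag(1.0 / np.diag(A))
x = pcg(A, b, M_inv)
print(np.allclose(A @ x, b, atol=1e-6))  # True
```

The same skeleton applies when the "matrix" is the Hessian of a design objective; only the operator and preconditioner change.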
The complete surface shape is represented using a Bezier-Bernstein polynomial. The present <span class="hlt">optimization</span> <span class="hlt">method</span> then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best <span class="hlt">optimization</span> strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/873853','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/873853"><span>Falcon: automated <span class="hlt">optimization</span> <span class="hlt">method</span> 
for arbitrary assessment criteria</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Yang, Tser-Yuan; Moses, Edward I.; Hartmann-Siantar, Christine</p> <p>2001-01-01</p> <p>FALCON is a <span class="hlt">method</span> for automatic multivariable <span class="hlt">optimization</span> for arbitrary assessment criteria that can be applied to numerous fields where outcome simulation is combined with <span class="hlt">optimization</span> and assessment criteria. A specific implementation of FALCON is for automatic radiation therapy treatment planning. In this application, FALCON implements dose calculations into the planning process and <span class="hlt">optimizes</span> available beam delivery modifier parameters to determine the treatment plan that best meets clinical decision-making criteria. FALCON is described in the context of the <span class="hlt">optimization</span> of external-beam radiation therapy and intensity modulated radiation therapy (IMRT), but the concepts could also be applied to internal (brachytherapy) radiotherapy. The radiation beams could consist of photons or any charged or uncharged particles. The concept of <span class="hlt">optimizing</span> source distributions can be applied to complex radiography (e.g. 
flash x-ray or proton) to improve the imaging capabilities of facilities proposed for science-based stockpile stewardship.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JIEIC..96...57A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JIEIC..96...57A"><span>Parameter Design in Fusion Welding of AA 6061 Aluminium Alloy using Desirability Grey Relational Analysis (DGRA) <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Adalarasan, R.; Santhanakumar, M.</p> <p>2015-01-01</p> <p>In the present work, yield strength, ultimate strength and micro-hardness of the lap joints formed with Al 6061 alloy sheets by using the processes of Tungsten Inert Gas (TIG) welding and Metal Inert Gas (MIG) welding were studied for various combinations of the welding parameters. The parameters taken for study include welding current, voltage, welding speed and inert gas flow rate. <span class="hlt">Taguchi</span>'s L9 orthogonal array was used to conduct the experiments and an integrated technique of desirability grey relational analysis was employed for <span class="hlt">optimizing</span> the welding parameters. 
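The grey relational analysis used in the welding study just described converts several responses into a single grade per experimental run. A minimal sketch follows; the nine response rows below are hypothetical stand-ins for L9 results, not the paper's data:

```python
import numpy as np

def grey_relational_grade(data, larger_better, zeta=0.5):
    """data: (runs x responses). Returns the grey relational grade per run.
    zeta is the conventional distinguishing coefficient (0.5 by default)."""
    data = np.asarray(data, float)
    norm = np.empty_like(data)
    for j, lb in enumerate(larger_better):
        col = data[:, j]
        if lb:   # larger-the-better normalization
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:    # smaller-the-better normalization
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm  # deviation from the ideal (reference) sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)  # grade = mean grey relational coefficient

# Hypothetical L9-style results: [yield strength, ultimate strength, hardness]
runs = [[210, 250, 70], [230, 270, 75], [220, 260, 72],
        [240, 280, 78], [215, 255, 71], [235, 275, 76],
        [225, 265, 74], [245, 285, 80], [218, 258, 73]]
grade = grey_relational_grade(runs, larger_better=[True, True, True])
print(int(np.argmax(grade)))  # 7: that run dominates all three responses
```

The run with the highest grade indicates the best parameter combination; factor-level effects are then read off by averaging grades per level, as in a standard Taguchi analysis.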
The robustness ignored by the desirability approach is compensated by the grey relational approach to predict the <span class="hlt">optimal</span> setting of input parameters for the TIG and MIG welding processes, which was validated through confirmation experiments.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014APS..MAR.B3002P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014APS..MAR.B3002P"><span><span class="hlt">Optimization</span> of Thick, Large Area YBCO Film Growth Through Response Surface <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Porzio, J.; Mahoney, C. H.; Sullivan, M. C.</p> <p>2014-03-01</p> <p>We present our work on the <span class="hlt">optimization</span> of thick, large area YBa2Cu3O7-δ (YBCO) film growth through response surface <span class="hlt">methods</span>. Thick, large area films have commercial uses and have recently been used in dramatic demonstrations of levitation and suspension. Our films are grown via pulsed laser deposition and we have <span class="hlt">optimized</span> growth parameters via response surface <span class="hlt">methods</span>. Response surface <span class="hlt">methods</span> are a statistical tool to <span class="hlt">optimize</span> selected quantities with respect to a set of variables. We <span class="hlt">optimized</span> our YBCO films' critical temperatures, thicknesses, and structures with respect to three PLD growth parameters: deposition temperature, laser energy, and deposition pressure. We will present an overview of YBCO growth via pulsed laser deposition, the statistical theory behind response surface <span class="hlt">methods</span>, and the application of response surface <span class="hlt">methods</span> to pulsed laser deposition growth of YBCO. 
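The core of a response-surface optimization is fitting a second-order polynomial to measured responses and solving for its stationary point. The sketch below uses coded factor units and a noise-free synthetic response standing in for a measured film property; the quadratic model and its built-in optimum are assumptions for illustration:

```python
import numpy as np

# Synthetic response with a known optimum in coded factor units (-1..1)
rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 40)   # e.g. coded deposition temperature
x2 = rng.uniform(-1, 1, 40)   # e.g. coded deposition pressure
y = 90.0 - 5.0 * (x1 - 0.2) ** 2 - 8.0 * (x2 + 0.4) ** 2

# Full second-order model: y = b0 + b1 x1 + b2 x2 + b3 x1^2 + b4 x2^2 + b5 x1 x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: grad y = 0  ->  [[2 b3, b5], [b5, 2 b4]] x = -[b1, b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])
print(np.round(x_opt, 6))  # recovers the built-in optimum at (0.2, -0.4)
```

With real, noisy measurements the same fit yields an estimated optimum plus confidence information from the regression residuals.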
Results from the experiment will be presented in a discussion of the <span class="hlt">optimized</span> film quality. Supported by NSF grant DMR-1305637</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018OptLT.102...32P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018OptLT.102...32P"><span>TOPSIS based parametric <span class="hlt">optimization</span> of laser micro-drilling of TBC coated nickel based superalloy</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Parthiban, K.; Duraiselvam, Muthukannan; Manivannan, R.</p> <p>2018-06-01</p> <p>The technique for order of preference by similarity to ideal solution (TOPSIS) approach was used for <span class="hlt">optimizing</span> the process parameters of laser micro-drilling of nickel superalloy C263 with Thermal Barrier Coating (TBC). Plasma spraying was used to deposit the TBC and a picosecond Nd:YAG pulsed laser was used to drill the specimens. Drilling angle, laser scan speed and number of passes were considered as input parameters. Based on the machining conditions, a <span class="hlt">Taguchi</span> L8 orthogonal array was used for conducting the experimental runs. The surface roughness and surface crack density (SCD) were considered as the output measures. The surface roughness was measured using a 3D White Light Interferometer (WLI) and the crack density was measured using a Scanning Electron Microscope (SEM). The <span class="hlt">optimized</span> result achieved from this approach suggests reduced surface roughness and surface crack density. Holes drilled at an inclination angle of 45°, a laser scan speed of 3 mm/s, and 400 passes were found to be optimal. From the Analysis of variance (ANOVA), inclination angle and number of passes were identified as the major influencing parameters. 
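The TOPSIS ranking used in the micro-drilling study can be sketched in a few lines: normalize the decision matrix, weight it, and score each run by its closeness to the ideal solution. The run data below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix: (alternatives x criteria); benefit[j] True if larger is better."""
    M = np.asarray(matrix, float)
    R = M / np.sqrt((M ** 2).sum(axis=0))        # vector normalization
    V = R * np.asarray(weights, float)           # weighted normalized matrix
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)          # closeness coefficient in [0, 1]

# Hypothetical L8 runs: columns = [surface roughness, crack density], both smaller-better
runs = np.array([[1.8, 12.0], [1.2, 9.0], [2.1, 15.0], [0.9, 7.0],
                 [1.5, 11.0], [1.1, 8.0], [2.0, 14.0], [1.4, 10.0]])
score = topsis(runs, weights=[0.5, 0.5], benefit=[False, False])
print(int(np.argmax(score)))  # 3: that run is best on both criteria
```

Equal weights are an assumption here; in practice the criteria weights encode the relative importance of roughness versus crack density.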
The <span class="hlt">optimized</span> parameter combination exhibited a 19% improvement in surface finish and 12% reduction in SCD.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3217276','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3217276"><span>Implicit <span class="hlt">methods</span> for efficient musculoskeletal simulation and <span class="hlt">optimal</span> control</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter</p> <p>2011-01-01</p> <p>The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and <span class="hlt">optimal</span> control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical <span class="hlt">methods</span> for simulation and <span class="hlt">optimal</span> control, with the expectation that we can mitigate some of these problems. A first order Rosenbrock <span class="hlt">method</span> was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For <span class="hlt">optimal</span> control of musculoskeletal systems, a direct collocation <span class="hlt">method</span> was developed for implicitly formulated models. The <span class="hlt">method</span> was applied to predict gait with a prosthetic foot and ankle. 
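A one-stage Rosenbrock scheme of the kind mentioned above (a linearly implicit Euler step) can be sketched on a scalar stiff test equation. The test problem and step size below are illustrative assumptions, not the musculoskeletal model from the abstract:

```python
import numpy as np

def rosenbrock_euler(f, jac, y0, t0, t1, h):
    """One-stage Rosenbrock method (linearly implicit Euler): each step solves
    (I - h*J) k = f(t, y) and sets y <- y + h*k. One linear solve per step,
    no Newton iteration, and stable for stiff problems."""
    y, t = np.atleast_1d(np.asarray(y0, float)), t0
    I = np.eye(y.size)
    while t < t1 - 1e-12:
        J = jac(t, y)
        k = np.linalg.solve(I - h * J, f(t, y))
        y = y + h * k
        t += h
    return y

# Stiff test problem: y' = -1000 (y - cos t), y(0) = 1, solution tracks cos t
lam = -1000.0
f = lambda t, y: lam * (y - np.cos(t))
jac = lambda t, y: np.array([[lam]])
y = rosenbrock_euler(f, jac, [1.0], 0.0, 1.0, h=0.005)  # h >> 1/|lambda|
print(abs(y[0] - np.cos(1.0)))  # small error despite the stiffness
```

An explicit Euler step with the same h would be wildly unstable here (|h*lambda| = 5), which is why implicit or linearly implicit formulations are needed for stiff dynamics.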
Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The <span class="hlt">optimal</span> control <span class="hlt">method</span> was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these <span class="hlt">methods</span> are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1863p0007G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1863p0007G"><span><span class="hlt">Optimizing</span> some 3-stage W-<span class="hlt">methods</span> for the time integration of PDEs</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.</p> <p>2017-07-01</p> <p>The <span class="hlt">optimization</span> of some W-<span class="hlt">methods</span> for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parametric families of three-stage W-<span class="hlt">methods</span> for the integration of IVPs in ODEs were studied. 
Besides, the <span class="hlt">optimization</span> of several specific <span class="hlt">methods</span> for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ f_y(y_n)) was carried out. Also, some convergence and stability properties were presented [2]. The derived <span class="hlt">methods</span> were <span class="hlt">optimized</span> on the basis that the underlying explicit Runge-Kutta <span class="hlt">method</span> is the one having the largest monotonicity interval among the three-stage order-three Runge-Kutta <span class="hlt">methods</span> [1]. Here, we propose an <span class="hlt">optimization</span> of the <span class="hlt">methods</span> by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of reducing substantially the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta <span class="hlt">method</span>.
Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based <span class="hlt">optimization</span> procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous <span class="hlt">optimization</span> analyses. <span class="hlt">Optimization</span> analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN <span class="hlt">optimization</span> analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20060013266','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20060013266"><span>Towards Robust Designs Via Multiple-Objective <span class="hlt">Optimization</span> <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Man Mohan, Rai</p> <p>2006-01-01</p> <p>A differential evolution <span class="hlt">method</span> (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein <span class="hlt">optimal</span> fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. 
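The differential evolution algorithm referenced above can be sketched in its basic single-objective DE/rand/1/bin form (the study itself uses a multi-objective variant; the toy objective and control parameters below are assumptions):

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, F=0.7, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (single-objective sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    d = lo.size
    X = rng.uniform(lo, hi, (pop, d))
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True               # at least one gene crosses
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fx[i]:                             # greedy one-to-one selection
                X[i], fx[i] = trial, ft
    return X[fx.argmin()], fx.min()

# Toy objective standing in for a fin-design cost (hypothetical)
sphere = lambda x: float(np.sum((x - 0.5) ** 2))
x_best, f_best = differential_evolution(sphere, bounds=[(-2, 2)] * 3)
print(round(f_best, 8))  # converges toward the known minimum at x = 0.5
```

Multi-objective versions replace the greedy selection with Pareto-dominance bookkeeping, but the mutation/crossover core is the same.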
The evolutionary <span class="hlt">method</span> is then used to design a turbine airfoil; the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT........17J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT........17J"><span>Immersed Boundary <span class="hlt">Methods</span> for <span class="hlt">Optimization</span> of Strongly Coupled Fluid-Structure Systems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jenkins, Nicholas J.</p> <p></p> <p>Conventional <span class="hlt">methods</span> for design of tightly coupled multidisciplinary systems, such as fluid-structure interaction (FSI) problems, traditionally rely on manual revisions informed by a loosely coupled linearized analysis. These approaches are both inaccurate for a multitude of applications, and they require an intimate understanding of the assumptions and limitations of the procedure in order to soundly <span class="hlt">optimize</span> the design. Computational <span class="hlt">optimization</span>, in particular topology <span class="hlt">optimization</span>, has been shown to yield remarkable results for problems in solid mechanics using density interpolations schemes. In the context of FSI, however, well defined boundaries play a key role in both the design problem and the mechanical model. Density <span class="hlt">methods</span> neither accurately represent the material boundary, nor provide a suitable platform to apply appropriate interface conditions. 
This thesis presents a new framework for shape and topology <span class="hlt">optimization</span> of FSI problems that uses the Level Set <span class="hlt">method</span> (LSM) to describe the geometry evolution during the <span class="hlt">optimization</span> process. The Extended Finite Element <span class="hlt">method</span> (XFEM) is combined with a fictitiously deforming fluid domain (stationary arbitrary Lagrangian-Eulerian <span class="hlt">method</span>) to predict the FSI response. The novelty of the proposed approach lies in the fact that the XFEM explicitly captures the material boundary defined by the level set iso-surface. Moreover, the XFEM provides a means to discretize the governing equations, and weak immersed boundary conditions are applied with Nitsche's <span class="hlt">Method</span> to couple the fields. The flow is predicted by the incompressible Navier-Stokes equations, and a finite-deformation solid model is developed and tested for both hyperelastic and linear elastic problems. Transient and stationary numerical examples are presented to validate the FSI model and numerical solver approach. 
Pertaining to the <span class="hlt">optimization</span> of FSI problems, the parameters of the discretized level set function are defined as explicit</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4045957','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4045957"><span><span class="hlt">Optimizing</span> photo-Fenton like process for the removal of diesel fuel from the aqueous phase</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2014-01-01</p> <p>Background In recent years, pollution of soil and groundwater caused by fuel leakage from old underground storage tanks, oil extraction processes, refineries, fuel distribution terminals, improper disposal, and spills during transfer has been reported. Diesel fuel has created many problems for water resources. The main objectives of this research were to assess the feasibility of using a photo-Fenton-like <span class="hlt">method</span> based on nano zero-valent iron (nZVI/UV/H2O2) for removing total petroleum hydrocarbons (TPH) and to determine the <span class="hlt">optimal</span> conditions using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. Results The influence of different parameters including the initial concentration of TPH (0.1-1 mg/L), H2O2 concentration (5-20 mmole/L), nZVI concentration (10-100 mg/L), pH (3-9), and reaction time (15-120 min) on the TPH reduction rate in diesel fuel was investigated. 
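Orthogonal-array screening of the kind used in this Taguchi study (and in the D-lactic acid study earlier) can be sketched directly. The factor assignment and the noise-free synthetic response below are assumptions for illustration; real studies use measured responses and replicate runs:

```python
import numpy as np

# Standard L8 (2^7) orthogonal array, levels coded 1/2
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
])

# Hypothetical screening of 3 factors assigned to the first 3 columns;
# the response (e.g. a removal rate) is larger-the-better and noise-free here
A, B, C = L8[:, 0], L8[:, 1], L8[:, 2]
y = 30 + 5 * (A == 2) + 3 * (B == 1) + 1 * (C == 2)

sn = 20 * np.log10(y)  # larger-the-better S/N ratio (single replicate)
# Best level per factor = level with the higher mean S/N (main-effect analysis)
best = [1 + int(sn[col == 2].mean() > sn[col == 1].mean()) for col in (A, B, C)]
print(best)  # [2, 1, 2]: recovers the levels built into the synthetic response
```

Because the array is orthogonal, each level of each factor is tested against a balanced mix of the other factors, so the eight runs suffice to separate the main effects.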
The variance analysis suggests that the <span class="hlt">optimal</span> conditions for the TPH reduction rate from diesel fuel in the aqueous phase are as follows: an initial TPH concentration of 0.7 mg/L, an nZVI concentration of 20 mg/L, an H2O2 concentration of 5 mmol/L, pH 3, and a reaction time of 60 min; the degrees of significance for the studied parameters are 7.643%, 9.33%, 13.318%, 15.185%, and 6.588%, respectively. The predicted removal rate under the <span class="hlt">optimal</span> conditions was 95.8%, confirmed by the data obtained in this study, which ranged between 95 and 100%. Conclusion In conclusion, a photo-Fenton-like process using nZVI may enhance the rate of diesel degradation in polluted water and could be used as a pretreatment step for the biological removal of TPH from diesel fuel in the aqueous phase. PMID:24955242</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/1203463','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/1203463"><span>Application of modified Rosenbrock's <span class="hlt">method</span> for <span class="hlt">optimization</span> of nutrient media used in microorganism culturing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Votruba, J; Pilát, P; Prokop, A</p> <p>1975-12-01</p> <p>Rosenbrock's procedure has been modified for <span class="hlt">optimization</span> of nutrient medium composition and has been found to be less tedious than the Box-Wilson <span class="hlt">method</span>, especially for larger numbers of <span class="hlt">optimized</span> parameters. 
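Rosenbrock's direct search, the method adapted in the medium-optimization study above, needs no gradients: it expands steps that succeed, reverses and shrinks steps that fail, and periodically rotates its search axes toward the net progress. A minimal sketch follows; the quadratic objective is a hypothetical stand-in for a medium-composition response, and the control constants are conventional defaults:

```python
import numpy as np

def rosenbrock_search(f, x0, step=0.5, alpha=3.0, beta=0.5, stages=60, sweeps=10):
    """Derivative-free Rosenbrock search with rotating coordinates."""
    x = np.asarray(x0, float)
    n = x.size
    fx = f(x)
    D = np.eye(n)                          # orthogonal search directions
    for _ in range(stages):
        h = np.full(n, step)
        x_start = x.copy()
        for _ in range(sweeps):
            for i in range(n):
                trial = x + h[i] * D[i]
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    h[i] *= alpha          # success: expand the step
                else:
                    h[i] *= -beta          # failure: reverse and shrink
        d = x - x_start
        if np.linalg.norm(d) > 1e-12:      # re-orient axes along the stage progress
            new_D, ok = [d / np.linalg.norm(d)], True
            for i in range(1, n):          # Gram-Schmidt the remaining directions
                v = D[i] - sum((D[i] @ u) * u for u in new_D)
                nv = np.linalg.norm(v)
                if nv < 1e-10:
                    ok = False
                    break
                new_D.append(v / nv)
            D = np.array(new_D) if ok else np.eye(n)
        step *= 0.7                        # refine the stage step length
    return x, fx

# Hypothetical smooth response surface with a known minimizer at (1, -0.5)
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
x_min, f_min = rosenbrock_search(f, [3.0, 2.0])
print(np.round(x_min, 3))  # close to the minimizer (1, -0.5)
```

Each "experiment" here is just a function evaluation; in a culturing study it would be a fermentation run, which is why direct searches that need few evaluations were attractive.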
Its merits are particularly obvious with multiparameter <span class="hlt">optimization</span> where the gradient <span class="hlt">method</span>, so far the only one employed in microbiology from a variety of <span class="hlt">optimization</span> <span class="hlt">methods</span> (e.g., refs. 9 and 10), becomes impractical because of the excessive number of experiments required. The <span class="hlt">method</span> suggested is also more stable during <span class="hlt">optimization</span> than the gradient <span class="hlt">methods</span>, which are very sensitive to the selection of steps in the direction of the gradient and may thus easily shoot out of the <span class="hlt">optimized</span> region. It is also anticipated that other direct search <span class="hlt">methods</span>, particularly simplex design, may be easily adapted for <span class="hlt">optimization</span> of medium composition. It is obvious that direct search <span class="hlt">methods</span> may find an application in process improvement in antibiotic and related industries.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120012444','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120012444"><span>Apparatus and <span class="hlt">methods</span> for manipulation and <span class="hlt">optimization</span> of biological systems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)</p> <p>2012-01-01</p> <p>The invention provides systems and <span class="hlt">methods</span> for manipulating, e.g., <span class="hlt">optimizing</span> and controlling, biological systems, e.g., for eliciting a more desired biological response of a biological sample, such as a tissue, organ, and/or a cell. 
In one aspect, systems and <span class="hlt">methods</span> of the invention operate by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and <span class="hlt">optimize</span> the response of biological samples sustained in the system, e.g., a bioreactor. In alternative aspects, systems include a device for sustaining cells or tissue samples, one or more actuators for stimulating the samples via biochemical, electromagnetic, thermal, mechanical, and/or optical stimulation, and one or more sensors for measuring a biological response signal of the samples resulting from the stimulation of the sample. In one aspect, the systems and <span class="hlt">methods</span> of the invention use at least one <span class="hlt">optimization</span> algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The compositions and <span class="hlt">methods</span> of the invention can be used, e.g., for systems <span class="hlt">optimization</span> of any biological manufacturing or experimental system, e.g., bioreactors for proteins, e.g., therapeutic proteins, polypeptides or peptides for vaccines, and the like, small molecules (e.g., antibiotics), polysaccharides, lipids, and the like. Another use of the apparatus and <span class="hlt">methods</span> includes combination drug therapy, e.g., <span class="hlt">optimal</span> drug cocktails, and directed cell proliferation and differentiation in tissue engineering, e.g. 
neural progenitor cell differentiation, and discovery of key parameters in complex biological systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011InvPr..27g5002B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011InvPr..27g5002B"><span>Comparison of <span class="hlt">optimal</span> design <span class="hlt">methods</span> in inverse problems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Banks, H. T.; Holm, K.; Kappel, F.</p> <p>2011-07-01</p> <p>Typical <span class="hlt">optimal</span> design <span class="hlt">methods</span> for inverse or parameter estimation problems are designed to choose <span class="hlt">optimal</span> sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the <span class="hlt">optimal</span> sampling distribution. Here we formulate the classical <span class="hlt">optimal</span> design problem in the context of general <span class="hlt">optimization</span> problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any <span class="hlt">optimal</span> design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new <span class="hlt">optimal</span> design, SE-<span class="hlt">optimal</span> design (standard error <span class="hlt">optimal</span> design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-<span class="hlt">optimal</span> and E-<span class="hlt">optimal</span> designs. 
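The D- and E-optimal criteria mentioned above both act on the Fisher information matrix: D-optimality maximizes its determinant, E-optimality its smallest eigenvalue. The sketch below applies them to a simple two-parameter exponential decay model; the model, noise level, and candidate time grid are assumptions for illustration, not the paper's examples:

```python
import numpy as np
from itertools import combinations

# Model: y(t) = a * exp(-b t); a design is a choice of sampling times
a, b, sigma = 2.0, 1.0, 0.1

def fim(times):
    """Fisher information matrix for i.i.d. Gaussian noise of std sigma."""
    t = np.asarray(times, float)
    S = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])  # dy/da, dy/db
    return S.T @ S / sigma**2

grid = np.linspace(0.1, 4.0, 40)  # candidate sampling times, spacing 0.1
# D-optimal: maximize det(FIM); E-optimal: maximize the smallest eigenvalue
d_best = max(combinations(grid, 2), key=lambda ts: np.linalg.det(fim(ts)))
e_best = max(combinations(grid, 2), key=lambda ts: np.linalg.eigvalsh(fim(ts))[0])
print("D-optimal times:", np.round(d_best, 2))  # (0.1, 1.1) for this model/grid
print("E-optimal times:", np.round(e_best, 2))
```

For this model the D-optimal pair can be checked analytically: det(FIM) is proportional to [exp(-(t1+t2)) (t2-t1)]^2, which over this grid peaks at t1 = 0.1, t2 = 1.1.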
The <span class="hlt">optimal</span> sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the <span class="hlt">optimal</span> mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/20941235','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/20941235"><span>Kinoform design with an <span class="hlt">optimal</span>-rotation-angle <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bengtsson, J</p> <p>1994-10-10</p> <p>Kinoforms (i.e., computer-generated phase holograms) are designed with a new algorithm, the optimal-rotation-angle <span class="hlt">method</span>, in the paraxial domain. This is a direct Fourier <span class="hlt">method</span> (i.e., no inverse transform is performed) in which the height of the kinoform relief at each discrete point is chosen so that the diffraction efficiency is increased. The <span class="hlt">optimal</span>-rotation-angle algorithm has a straightforward geometrical interpretation. It yields excellent results close to, or better than, those obtained with other state-of-the-art <span class="hlt">methods</span>. 
The <span class="hlt">optimal</span>-rotation-angle algorithm can easily be modified to take different restraints into account; as an example, phase-swing-restricted kinoforms, which distribute the light into a number of equally bright spots (so called fan-outs), were designed. The phase-swing restriction lowers the efficiency, but the uniformity can still be made almost perfect.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JCoPh.351..437C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JCoPh.351..437C"><span>Topology <span class="hlt">optimization</span> of hyperelastic structures using a level set <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.</p> <p>2017-12-01</p> <p>Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology <span class="hlt">optimization</span> <span class="hlt">method</span> for the design of hyperelastic structures that undergo large deformations. The <span class="hlt">method</span> incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. 
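The Mooney-Rivlin law adopted in the abstract above has a simple closed form worth seeing concretely. The sketch below evaluates the standard incompressible Mooney-Rivlin strain energy and the resulting nominal stress under uniaxial stretch; the coefficients are illustrative, and this is only the constitutive law, not the level-set topology optimization itself.

```python
def mooney_rivlin_uniaxial(lam, c10=0.3, c01=0.1):
    """Incompressible Mooney-Rivlin response under uniaxial stretch lam.

    Returns (strain energy density W, nominal stress P)."""
    I1 = lam**2 + 2.0 / lam          # first invariant of C = F^T F
    I2 = 2.0 * lam + 1.0 / lam**2    # second invariant
    W = c10 * (I1 - 3.0) + c01 * (I2 - 3.0)
    # Nominal (first Piola-Kirchhoff) stress, obtained from dW/dlam with the
    # incompressibility pressure eliminated:
    P = 2.0 * (lam - lam**-2) * (c10 + c01 / lam)
    return W, P

W, P = mooney_rivlin_uniaxial(1.0)
print(W, P)  # undeformed state: zero energy, zero stress
```

At lam = 1 both invariants equal 3, so the energy and stress vanish, which is a quick sanity check on any hyperelastic implementation.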
A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole <span class="hlt">optimization</span> implementation undergoes a two-step process, where the linear <span class="hlt">optimization</span> is first performed and its <span class="hlt">optimized</span> solution serves as the initial design for the subsequent nonlinear <span class="hlt">optimization</span>. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the <span class="hlt">optimization</span> process. To demonstrate the validity and effectiveness of the proposed <span class="hlt">method</span>, three compliance minimization problems are studied and their <span class="hlt">optimized</span> solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvE..97a0201F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvE..97a0201F"><span><span class="hlt">Optimal</span> nonlinear filtering using the finite-volume <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. 
A.</p> <p>2018-01-01</p> <p><span class="hlt">Optimal</span> sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical <span class="hlt">method</span> provides a solution that conserves probability and gives estimates that converge to the <span class="hlt">optimal</span> continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This <span class="hlt">method</span> is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25190883','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25190883"><span><span class="hlt">Optimizing</span> oil and xanthorrhizol extraction from Curcuma xanthorrhiza Roxb. rhizome by supercritical carbon dioxide.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Salea, Rinaldi; Widjojokusumo, Edward; Veriansyah, Bambang; Tjandrawinata, Raymond R</p> <p>2014-09-01</p> <p>Oil and xanthorrhizol extraction from Curcuma xanthorrhiza Roxb. rhizome by supercritical carbon dioxide was <span class="hlt">optimized</span> using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>. The factors considered were pressure, temperature, carbon dioxide flow rate and time at levels ranging between 10-25 MPa, 35-60 °C, 10-25 g/min and 60-240 min, respectively.
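The Taguchi analysis used in studies like this one reduces to a main-effects table over an orthogonal array. The sketch below shows the generic computation on an invented two-level L4 array with made-up yields (the factor names echo the pressure, temperature and time factors above, but none of the numbers are from the paper): compute a larger-the-better signal-to-noise ratio per run, average it per factor level, and pick the best level of each factor.

```python
import math
from collections import defaultdict

def sn_larger_is_better(ys):
    # Taguchi larger-the-better signal-to-noise ratio over replicates ys
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

def best_levels(runs, responses):
    """runs: list of dicts {factor: level}; responses: replicate measurements
    per run. Returns, for each factor, the level with the highest mean S/N,
    i.e. the usual Taguchi main-effects analysis."""
    sn = [sn_larger_is_better(ys) for ys in responses]
    table = defaultdict(lambda: defaultdict(list))
    for run, s in zip(runs, sn):
        for factor, level in run.items():
            table[factor][level].append(s)
    return {f: max(levels, key=lambda l: sum(levels[l]) / len(levels[l]))
            for f, levels in table.items()}

# Hypothetical L4(2^3) array: three two-level factors covered in four runs
runs = [
    {"P": 10, "T": 35, "t": 60},
    {"P": 10, "T": 60, "t": 240},
    {"P": 25, "T": 35, "t": 240},
    {"P": 25, "T": 60, "t": 60},
]
responses = [[4.1, 4.3], [5.0, 5.2], [6.8, 7.0], [6.1, 5.9]]  # invented yields
print(best_levels(runs, responses))  # {'P': 25, 'T': 60, 't': 240}
```

The orthogonality of the array is what lets each factor's effect be averaged out independently, which is why far fewer runs are needed than a full factorial.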
The highest oil yield (8.0 %) was achieved at a factor combination of 15 MPa, 50 °C, 20 g/min and 180 min, whereas the highest xanthorrhizol content (128.3 mg/g oil) in Curcuma xanthorrhiza oil was achieved at a factor combination of 25 MPa, 50 °C, 15 g/min and 60 min. Soxhlet extraction with n-hexane and percolation with ethanol gave oil yields of 5.88 % and 11.73 % and xanthorrhizol contents of 42.6 mg/g oil and 75.5 mg/g oil, respectively. The experimental oil yield and xanthorrhizol content at optimum conditions agreed favourably with the values predicted computationally. The xanthorrhizol content extracted using supercritical carbon dioxide was higher than that obtained by Soxhlet extraction or percolation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..338a2004M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..338a2004M"><span>Multi-Response <span class="hlt">Optimization</span> of WEDM Process Parameters Using <span class="hlt">Taguchi</span> Based Desirability Function Analysis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Majumder, Himadri; Maity, Kalipada</p> <p>2018-03-01</p> <p>Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat or thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find out the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT) were taken as machining inputs for the experiments to <span class="hlt">optimize</span> three interconnected responses: cutting speed, kerf width, and surface roughness.
The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi’s signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148e1101S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148e1101S"><span>Communication: Time-dependent <span class="hlt">optimized</span> coupled-cluster <span class="hlt">method</span> for multielectron dynamics</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.</p> <p>2018-02-01</p> <p>Time-dependent coupled-cluster <span class="hlt">method</span> with time-varying orbital functions, called time-dependent <span class="hlt">optimized</span> coupled-cluster (TD-OCC) <span class="hlt">method</span>, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the <span class="hlt">method</span> including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the <span class="hlt">optimized</span> active orbitals. The present <span class="hlt">method</span> is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field <span class="hlt">method</span>.
The first application of the TD-OCC <span class="hlt">method</span> to intense-laser-driven correlated electron dynamics in the Ar atom is reported.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29421889','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29421889"><span>Communication: Time-dependent <span class="hlt">optimized</span> coupled-cluster <span class="hlt">method</span> for multielectron dynamics.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L</p> <p>2018-02-07</p> <p>Time-dependent coupled-cluster <span class="hlt">method</span> with time-varying orbital functions, called time-dependent <span class="hlt">optimized</span> coupled-cluster (TD-OCC) <span class="hlt">method</span>, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the <span class="hlt">method</span> including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the <span class="hlt">optimized</span> active orbitals. The present <span class="hlt">method</span> is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field <span class="hlt">method</span>.
The first application of the TD-OCC <span class="hlt">method</span> to intense-laser-driven correlated electron dynamics in the Ar atom is reported.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950024699','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950024699"><span>Pseudo-time <span class="hlt">methods</span> for constrained <span class="hlt">optimization</span> problems governed by PDE</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Taasan, Shlomo</p> <p>1995-01-01</p> <p>In this paper we present a novel <span class="hlt">method</span> for solving <span class="hlt">optimization</span> problems governed by partial differential
equations. Existing <span class="hlt">methods</span> use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per <span class="hlt">optimization</span> step. Such <span class="hlt">methods</span> can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the <span class="hlt">method</span> presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new <span class="hlt">method</span> is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new <span class="hlt">method</span> allows the solution of the <span class="hlt">optimization</span> problem at the cost of solving the analysis problems just a few times, independent of the number of design parameters.
The <span class="hlt">method</span> can be applied using single grid iterations as well as with multigrid solvers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007TJSAI..22..574H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007TJSAI..22..574H"><span>A Discriminative Sentence Compression <span class="hlt">Method</span> as Combinatorial <span class="hlt">Optimization</span> Problem</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki</p> <p></p> <p>In the study of automatic summarization, the main research topic was `important sentence extraction', but nowadays `sentence compression' is a hot research topic. Conventional sentence compression <span class="hlt">methods</span> usually transform a given sentence into a parse tree or a dependency tree, and modify it to get a shorter sentence. However, this <span class="hlt">method</span> is sometimes too rigid. In this paper, we regard sentence compression as a combinatorial <span class="hlt">optimization</span> problem that extracts an <span class="hlt">optimal</span> subsequence of words. Hori et al. also proposed a similar <span class="hlt">method</span>, but they used only a small number of features and their weights were tuned by hand. We introduce a large number of features such as part-of-speech bigrams and word position in the sentence. Furthermore, we train the system by discriminative learning.
According to our experiments, our <span class="hlt">method</span> obtained better scores than other <span class="hlt">methods</span>, with statistical significance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10256E..57L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10256E..57L"><span>Global <span class="hlt">optimization</span> <span class="hlt">method</span> based on ray tracing to achieve optimum figure error compensation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin</p> <p>2017-02-01</p> <p>Figure error would degrade the performance of an optical system. When predicting the performance and performing system assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent <span class="hlt">method</span>. Commercial optical software cannot <span class="hlt">optimize</span> this clocking. Meanwhile, existing automatic figure-error balancing <span class="hlt">methods</span> can introduce approximation error, and building the <span class="hlt">optimization</span> model is complex and time-consuming. To overcome these limitations, an accurate and automatic global <span class="hlt">optimization</span> <span class="hlt">method</span> of figure error balancing is proposed. This <span class="hlt">method</span> is based on precise ray tracing to calculate the wavefront error, not approximate calculation, under a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the <span class="hlt">optimal</span> combination of rotation angles of each optical element. This <span class="hlt">method</span> can be applied to all rotationally symmetric optics.
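The annealing loop at the heart of such a clocking search is compact. The sketch below shows only that loop, with a stand-in analytic cost in place of real ray tracing; the step size, cooling schedule, and the cost function itself are all illustrative, not taken from the paper.

```python
import math
import random

def composite_cost(angles):
    # Stand-in for the ray-traced composite wavefront error: in the real
    # method this value comes from exact ray tracing, not a formula.
    # Each term is minimized when element i is clocked to 180 + 30*i degrees.
    return sum(1.0 + math.cos(math.radians(a - 30.0 * i))
               for i, a in enumerate(angles))

def anneal_clocking(n_elements=4, steps=20000, t0=2.0, seed=1):
    rng = random.Random(seed)
    angles = [rng.uniform(0.0, 360.0) for _ in range(n_elements)]
    cost = composite_cost(angles)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9  # linear cooling schedule
        i = rng.randrange(n_elements)
        trial = angles[:]
        trial[i] = (trial[i] + rng.gauss(0.0, 20.0)) % 360.0
        c = composite_cost(trial)
        # Metropolis acceptance: always take improvements, sometimes worse
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            angles, cost = trial, c
    return angles, cost

angles, cost = anneal_clocking()
print(round(cost, 3))  # should approach the global minimum of 0.0
```

Accepting occasional uphill moves early on is what lets the search escape the local minima that make manual clocking user-dependent.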
<span class="hlt">Optimization</span> results show that this <span class="hlt">method</span> is 49% better than the previous approximate analytical <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JCoPh.307..291N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JCoPh.307..291N"><span>Topology <span class="hlt">optimization</span> of unsteady flow problems using the lattice Boltzmann <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nørgaard, Sebastian; Sigmund, Ole; Lazarov, Boyan</p> <p>2016-02-01</p> <p>This article demonstrates and discusses topology <span class="hlt">optimization</span> for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann <span class="hlt">method</span>, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the <span class="hlt">optimization</span> problems. The <span class="hlt">optimization</span> problem is solved with a gradient-based <span class="hlt">method</span>, and the design sensitivities are computed by solving the discrete adjoint problem. For moderate Reynolds number flows, it is demonstrated that topology <span class="hlt">optimization</span> can successfully account for unsteady effects such as vortex shedding and time-varying boundary conditions. Such effects are relevant in several engineering applications, e.g.
fluid pumps and control valves.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20110005474','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20110005474"><span>An <span class="hlt">Optimizing</span> Space Data-Communications Scheduling <span class="hlt">Method</span> and Algorithm with Interference Mitigation, Generalized for a Broad Class of <span class="hlt">Optimization</span> Problems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rash, James L.</p> <p>2010-01-01</p> <p>NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the <span class="hlt">methods</span> and algorithms disclosed herein will be a system that produces globally <span class="hlt">optimized</span> schedules with not only <span class="hlt">optimized</span> service delivery by the space data-communications infrastructure but also <span class="hlt">optimized</span> satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are <span class="hlt">methods</span> and algorithms for <span class="hlt">optimizing</span> the execution efficiency of the schedule-generation algorithm itself. The scheduling <span class="hlt">methods</span> and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. 
Finally, the problem itself, and the <span class="hlt">methods</span> and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial <span class="hlt">optimization</span> problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27549154','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27549154"><span>Tailored parameter <span class="hlt">optimization</span> <span class="hlt">methods</span> for ordinary differential equation models with steady-state constraints.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan</p> <p>2016-08-22</p> <p>Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The information that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the <span class="hlt">optimization</span> often results in convergence problems. In this manuscript, we propose two new <span class="hlt">methods</span> for solving <span class="hlt">optimization</span> problems with steady-state constraints. The first <span class="hlt">method</span> exploits ideas from <span class="hlt">optimization</span> algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the <span class="hlt">optimization</span> problem. 
The second <span class="hlt">method</span> is based on the continuous analogue of the <span class="hlt">optimization</span> problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained <span class="hlt">optimization</span> problem. This equivalence enables the use of adaptive numerical <span class="hlt">methods</span> for solving <span class="hlt">optimization</span> problems with steady-state constraints. Both <span class="hlt">methods</span> are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed <span class="hlt">methods</span> are evaluated using one toy example and two applications. The first application example uses published data, while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed <span class="hlt">methods</span> demonstrated better convergence properties than state-of-the-art <span class="hlt">methods</span> employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower.
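The continuous-analogue idea can be illustrated on an unconstrained toy problem: integrate the gradient-flow ODE dx/dt = -grad f(x) until it reaches an equilibrium, which is a critical point of f. The sketch below uses explicit Euler on an invented quadratic objective; the paper's methods additionally handle steady-state constraints and use adaptive integrators, neither of which is shown here.

```python
def grad(x, y):
    # Gradient of f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2, a stand-in objective
    return 2.0 * (x - 1.0), 4.0 * (y + 0.5)

def gradient_flow(x=0.0, y=0.0, h=0.05, tol=1e-8, max_steps=10000):
    """Integrate dx/dt = -grad f(x) with explicit Euler until the flow
    stalls; the equilibrium of the ODE is the minimizer of f."""
    for _ in range(max_steps):
        gx, gy = grad(x, y)
        if gx * gx + gy * gy < tol * tol:  # gradient norm below tolerance
            break
        x, y = x - h * gx, y - h * gy
    return x, y

print(gradient_flow())  # converges to the minimizer (1.0, -0.5)
```

Because minimizers are asymptotically stable equilibria of the flow, any convergent ODE integrator, including adaptive ones, can stand in for the optimizer.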
In addition to the theoretical results, the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1055083-optimal-merging-technique-high-resolution-precipitation-products-optimal-merging-precipitation-method','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1055083-optimal-merging-technique-high-resolution-precipitation-products-optimal-merging-precipitation-method"><span>An <span class="hlt">optimal</span> merging technique for high-resolution precipitation products: <span class="hlt">OPTIMAL</span> MERGING OF PRECIPITATION <span class="hlt">METHOD</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.</p> <p>2011-04-01</p> <p>Precipitation products are currently available from various sources at higher spatial and temporal resolution than any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale-mismatch, and accurate error and bias assessment. In this paper we present <span class="hlt">Optimal</span> Merging of Precipitation (OMP), a new <span class="hlt">method</span> to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This <span class="hlt">method</span> is a combination of scale conversion and merging weight <span class="hlt">optimization</span>, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source.
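The weight-optimization step can be illustrated with the textbook special case: if each source's error variance is known or tracked, the minimum-variance merge weights each estimate by its inverse error variance. This is a simplified stand-in for the Bayesian performance tracing of OMP, and the three estimates and variances below are invented, merely labelled after the kinds of products the paper merges.

```python
def merge(estimates, error_vars):
    """Combine estimates of the same field value with weights inversely
    proportional to each source's (tracked) error variance, the
    minimum-variance linear combination for independent errors."""
    weights = [1.0 / v for v in error_vars]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize so weights sum to 1
    merged = sum(w * e for w, e in zip(weights, estimates))
    return merged, weights

# Invented example: NLDAS-, CMORPH- and Stage-IV-like estimates (mm/h)
merged, w = merge([2.0, 3.0, 2.5], [0.4, 0.9, 0.1])
print(round(merged, 3), [round(x, 3) for x in w])
```

The lowest-variance source dominates the merge, which mirrors how OMP dynamically allocates higher priority to whichever product is currently performing best.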
The weights are <span class="hlt">optimized</span> at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP <span class="hlt">method</span> is capable of identifying the better data sources and allocating them a higher priority in the merging procedure, dynamically over the region and time period. This <span class="hlt">method</span> is also effective in filtering out poor-quality data introduced into the merging process.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150003323','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150003323"><span>Apparatus and <span class="hlt">Methods</span> for Manipulation and <span class="hlt">Optimization</span> of Biological Systems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sun, Ren (Inventor); Ho, Chih-Ming (Inventor); Wong, Pak Kin (Inventor); Yu, Fuqu (Inventor)</p> <p>2014-01-01</p> <p>The invention provides systems and <span class="hlt">methods</span> for manipulating biological systems, for example to elicit a more desired biological response from a biological sample, such as a tissue, organ, and/or a cell. In one aspect, the invention operates by efficiently searching through a large parametric space of stimuli and system parameters to manipulate, control, and <span class="hlt">optimize</span> the response of biological samples sustained in the system.
In one aspect, the systems and <span class="hlt">methods</span> of the invention use at least one <span class="hlt">optimization</span> algorithm to modify the actuator's control inputs for stimulation, responsive to the sensor's output of response signals. The invention can be used, e.g., to <span class="hlt">optimize</span> any biological system, e.g., bioreactors for proteins, small molecules, polysaccharides, lipids, and the like. Another use of the apparatus and <span class="hlt">methods</span> is the discovery of key parameters in complex biological systems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19910000997','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19910000997"><span>Global <span class="hlt">optimization</span> <span class="hlt">methods</span> for engineering design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Arora, Jasbir S.</p> <p>1990-01-01</p> <p>The problem is to find a global minimum for the Problem P. Necessary and sufficient conditions are available for local <span class="hlt">optimality</span>. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global <span class="hlt">optimality</span> conditions are available, a global solution can be found only by an exhaustive search to satisfy Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution. This way the computational burden is reduced somewhat. It is concluded that the zooming algorithm for global <span class="hlt">optimization</span> appears to be a good alternative to stochastic <span class="hlt">methods</span>.
More testing is needed; a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is required. Such algorithms need to be developed and evaluated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.H31F1448T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.H31F1448T"><span>A Novel Weighted Kernel PCA-Based <span class="hlt">Method</span> for <span class="hlt">Optimization</span> and Uncertainty Quantification</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.</p> <p>2016-12-01</p> <p>It has been demonstrated that machine learning <span class="hlt">methods</span> can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint <span class="hlt">method</span> coupled with kernel PCA-based <span class="hlt">optimization</span>. In addition, it has been shown through weighted linear PCA how <span class="hlt">optimization</span> with respect to both observation weights and feature space control variables can accelerate convergence of such <span class="hlt">methods</span>. Linear machine learning <span class="hlt">methods</span>, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA.
With the aim of coupling the kernel-based and weighted <span class="hlt">methods</span> discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian <span class="hlt">methods</span>. In particular, we present a novel WKPCA-based <span class="hlt">optimization</span> <span class="hlt">method</span> that minimizes a given objective function with respect to both feature space random variables and observation weights through which <span class="hlt">optimal</span> snapshot significance levels and <span class="hlt">optimal</span> features are learned. We showcase how WKPCA can be applied to nonlinear <span class="hlt">optimal</span> control problems involving channelized media, and in particular demonstrate an application of the <span class="hlt">method</span> to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the <span class="hlt">method</span> to stochastic inversion.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1005a2043F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1005a2043F"><span>Analytical Approach to the Fuel <span class="hlt">Optimal</span> Impulsive Transfer Problem Using Primer Vector <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fitrianingsih, E.; Armellin, R.</p> <p>2018-04-01</p> <p>One of the objectives of mission design is selecting an optimum orbital transfer, which is often translated as a transfer that
requires minimum propellant consumption. To ensure that the selected trajectory meets this requirement, the <span class="hlt">optimality</span> of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives a minimum value, or by evaluating the trajectory according to certain criteria of <span class="hlt">optimality</span>. The second <span class="hlt">method</span> is performed by analyzing the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both <span class="hlt">methods</span> come with their own advantages and disadvantages. However, it is possible to use the primer vector <span class="hlt">method</span> to verify if the result from the direct <span class="hlt">method</span> is truly <span class="hlt">optimal</span> or if the ΔV can be reduced further by applying a correction maneuver to the reference trajectory. In addition to its capability to evaluate the transfer <span class="hlt">optimality</span> without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to <span class="hlt">optimize</span> a non-optimum transfer. This paper will present the analytical approach to the fuel <span class="hlt">optimal</span> impulsive transfer using the primer vector <span class="hlt">method</span>. The validity of the <span class="hlt">method</span> is confirmed by comparing the results to those from the numerical <span class="hlt">method</span>. The investigation of the <span class="hlt">optimality</span> of direct transfers is used to give an example of the application of the <span class="hlt">method</span>. The cases under study are prograde elliptic transfers from Earth to Mars.
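The "direct method" described here, computing the ΔV of candidate trajectories and selecting the minimum, can be sketched for a coplanar circular-to-circular transfer. The sweep below parameterizes candidate two-impulse transfers by the aphelion of the transfer ellipse; the constants and the one-parameter family are illustrative simplifications, not the elliptic Earth-Mars transfers of the paper.

```python
import math

MU = 1.32712440018e11  # Sun's GM, km^3/s^2 (heliocentric approximation)
R_EARTH = 1.496e8      # km, ~1 AU circular orbit radius
R_MARS = 2.279e8       # km, ~1.52 AU circular orbit radius

def transfer_dv(r1, r2, ra, mu=MU):
    """Total dv of a two-impulse transfer: a tangential burn at r1 onto an
    ellipse with perihelion r1 and aphelion ra (ra >= r2), then a burn
    where the ellipse crosses r2 to circularize."""
    a = 0.5 * (r1 + ra)                      # semi-major axis of the ellipse
    v1_circ = math.sqrt(mu / r1)
    v1_tr = math.sqrt(mu * (2.0 / r1 - 1.0 / a))   # vis-viva at perihelion
    dv1 = abs(v1_tr - v1_circ)
    # Velocity components on the ellipse where it crosses r = r2.
    h = r1 * v1_tr                           # angular momentum (perihelion burn)
    v2 = math.sqrt(mu * (2.0 / r2 - 1.0 / a))
    vt = h / r2                              # transverse component
    vr = math.sqrt(max(v2**2 - vt**2, 0.0))  # radial component
    v2_circ = math.sqrt(mu / r2)
    dv2 = math.hypot(vt - v2_circ, vr)
    return dv1 + dv2

# Direct method: sweep candidate aphelia and keep the cheapest transfer.
candidates = [R_MARS * (1.0 + 0.05 * k) for k in range(21)]
best_ra = min(candidates, key=lambda ra: transfer_dv(R_EARTH, R_MARS, ra))
print(best_ra == R_MARS, round(transfer_dv(R_EARTH, R_MARS, best_ra), 2))
```

For this family the cheapest candidate is the Hohmann-type transfer (aphelion equal to the target radius); a primer-vector analysis evaluates such optimality from the thrust-direction profile without sweeping ΔV values.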
The study enables us to identify the <span class="hlt">optimality</span> of all the possible transfers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010PhDT........57P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010PhDT........57P"><span>Hybrid intelligent <span class="hlt">optimization</span> <span class="hlt">methods</span> for engineering problems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pehlivanoglu, Yasin Volkan</p> <p></p> <p>The purpose of <span class="hlt">optimization</span> is to obtain the best solution under certain conditions. There are numerous <span class="hlt">optimization</span> <span class="hlt">methods</span> because different problems need different solution methodologies; therefore, it is difficult to construct patterns. Moreover, the mathematical modeling of natural phenomena is almost always based on differential equations. Differential equations are constructed with relative increments among the factors related to yield. Therefore, the gradients of these increments are essential to search the yield space. However, the landscape of yield is not a simple one and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm <span class="hlt">optimization</span> (PSO) algorithms are popular, non-gradient based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based <span class="hlt">method</span> is the nature of the search methodologies. For example, randomness is essential for the search in GA or PSO.
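A minimal PSO loop makes the role of randomness (the r1, r2 draws) explicit. The inertia weight w and the velocity limit v_max are also the two parameters tuned by the Grey-Taguchi DOE in the PSO-PID record at the top of this page; all values below are illustrative defaults, not tuned settings.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        v_max=0.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))   # stochastic weights
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -v_max, v_max)                # velocity limit
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

sphere = lambda z: float(np.sum(z**2))
best, val = pso(sphere, dim=3)
print(val)
```

Shrinking v_max restrains particles and slows diversity loss; the random coefficients are exactly the stochastic element the dissertation refers to.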
Hence, they are also called stochastic <span class="hlt">optimization</span> <span class="hlt">methods</span>. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, reduced accuracy, or large computational time. Premature convergence is sometimes inevitable due to the lack of diversity. As the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators to provide the necessary variety within a population. After a close scrutiny of the diversity concept based on qualification and</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhDT........65Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhDT........65Z"><span>Exploratory High-Fidelity Aerostructural <span class="hlt">Optimization</span> Using an Efficient Monolithic Solution <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Jenmy Zimi</p> <p></p> <p>This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An <span class="hlt">optimization</span> methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural <span class="hlt">optimization</span> methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape <span class="hlt">optimization</span>.
This integrated approach provides the <span class="hlt">optimizer</span> with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural <span class="hlt">optimization</span>, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to consistently translate the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for <span class="hlt">optimization</span> are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural <span class="hlt">optimization</span> involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms are crucial to the success of an exploratory <span class="hlt">optimization</span>. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov <span class="hlt">method</span> for the aerostructural analysis and a preconditioned Krylov subspace <span class="hlt">method</span> for the coupled adjoint solution. Several aspects of the monolithic solution <span class="hlt">method</span> have been investigated.
These include appropriate strategies for scaling</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AcAau.110..266V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AcAau.110..266V"><span>Performance evaluation of the inverse dynamics <span class="hlt">method</span> for <span class="hlt">optimal</span> spacecraft reorientation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ventura, Jacopo; Romano, Marcello; Walter, Ulrich</p> <p>2015-05-01</p> <p>This paper investigates the application of the inverse dynamics in the virtual domain <span class="hlt">method</span> to Euler angles, quaternions, and modified Rodrigues parameters for rapid <span class="hlt">optimal</span> attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics <span class="hlt">method</span>, it yields sub-<span class="hlt">optimal</span> solutions for minimum time problems. Furthermore, the virtual domain improves the <span class="hlt">optimality</span> of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. 
For minimum energy problems, the <span class="hlt">optimal</span> solution can be obtained without the virtual domain with any considered attitude representation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5648134','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5648134"><span>DOMe: A deduplication <span class="hlt">optimization</span> <span class="hlt">method</span> for the NewSQL database backups</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng</p> <p>2017-01-01</p> <p>Reducing the duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging database system and is now being used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, resulting in a lot of duplicated data. The traditional deduplication <span class="hlt">method</span> is not <span class="hlt">optimized</span> for the NewSQL server system and cannot take full advantage of hardware resources to <span class="hlt">optimize</span> deduplication performance. Recent research has pointed out that future NewSQL servers will have thousands of CPU cores, large DRAM, and huge NVRAM. Therefore, how to utilize these hardware resources to <span class="hlt">optimize</span> the performance of data deduplication is an important issue. To solve this problem, we propose a deduplication <span class="hlt">optimization</span> <span class="hlt">method</span> (DOMe) for NewSQL system backup.
To take advantage of the large number of CPU cores in the NewSQL server to <span class="hlt">optimize</span> deduplication performance, DOMe parallelizes the deduplication <span class="hlt">method</span> based on the fork-join framework. The fingerprint index, which is the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the fingerprint-index performance bottleneck of the traditional deduplication <span class="hlt">method</span>. H-Store is used as a typical NewSQL database system to implement the DOMe <span class="hlt">method</span>. DOMe is experimentally analyzed with two representative backup datasets. The experimental results show that: 1) DOMe can reduce the duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing CDC algorithms: where the theoretical speedup ratio of the server is 20.8, DOMe achieves a speedup ratio of up to 18; 3) DOMe improves the deduplication throughput by 1.5 times through the pure in-memory index <span class="hlt">optimization</span> <span class="hlt">method</span>.
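The core of the approach, a pure in-memory fingerprint index plus parallel fingerprinting, can be sketched in a few lines. This toy uses fixed-size chunking and a thread pool where DOMe uses content-defined chunking and a fork-join framework; the chunk size and data are illustrative.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4096  # fixed-size chunking for brevity; DOMe uses CDC

def chunks(data, size=CHUNK):
    return [data[i:i + size] for i in range(0, len(data), size)]

def dedup(backup, index):
    """Store only chunks whose fingerprint is not already in the
    in-memory index; returns the number of bytes actually written."""
    # Fingerprinting, the expensive step, is parallelized across chunks
    # (a fork-join flavour via a thread pool).
    with ThreadPoolExecutor() as ex:
        digests = list(ex.map(lambda c: hashlib.sha1(c).hexdigest(),
                              chunks(backup)))
    written = 0
    for chunk, fp in zip(chunks(backup), digests):
        if fp not in index:
            index[fp] = True   # pure in-memory hash table as the index
            written += len(chunk)
    return written

index = {}
first = bytes(range(256)) * 64       # 16 KiB "backup" of a repeating pattern
second = first + b"delta" * 100      # second backup: mostly duplicate data
w1 = dedup(first, index)
w2 = dedup(second, index)
print(w1, w2)
```

Only the index update needs sequential treatment; hashing parallelizes freely, which is where the many-core speedup in the abstract comes from.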
PMID:29049307</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/36067-direct-sqp-methods-solving-optimal-control-problems-delays','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/36067-direct-sqp-methods-solving-optimal-control-problems-delays"><span>Direct SQP-<span class="hlt">methods</span> for solving <span class="hlt">optimal</span> control problems with delays</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Goellmann, L.; Bueskens, C.; Maurer, H.</p> <p></p> <p>The maximum principle for <span class="hlt">optimal</span> control problems with delays leads to a boundary value problem (BVP) which is retarded in the state and advanced in the costate function. Based on shooting techniques, solution <span class="hlt">methods</span> for this type of BVP have been proposed. In recent years, direct <span class="hlt">optimization</span> <span class="hlt">methods</span> have been favored for solving control problems without delays. Direct <span class="hlt">methods</span> approximate the control and the state over a fixed mesh and solve the resulting NLP-problem with SQP-<span class="hlt">methods</span>. These <span class="hlt">methods</span> dispense with the costate function and have been shown to be robust and efficient. In this paper, we propose a direct SQP-<span class="hlt">method</span> for retarded control problems. In contrast to conventional direct <span class="hlt">methods</span>, only the control variable is approximated by, e.g., spline functions. The state is computed via a high order Runge-Kutta type algorithm and does not explicitly enter the NLP-problem through an equation. This approach reduces the number of <span class="hlt">optimization</span> variables considerably and is implementable even on a PC.
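The direct approach described, discretize only the control, integrate the state, and hand the resulting NLP to an SQP solver, can be sketched for a toy retarded problem. The dynamics, horizon, and weights below are invented for illustration, and SciPy's SLSQP stands in for the authors' SQP code.

```python
import numpy as np
from scipy.optimize import minimize

T, N, tau = 2.0, 20, 0.2           # horizon, control intervals, delay
dt = T / N
d = int(round(tau / dt))            # delay expressed in whole intervals

def simulate(u):
    """Euler integration of x' = -x + u(t - tau), x(0) = 1,
    with u(t) = 0 for t < 0 and piecewise-constant control."""
    x = 1.0
    xs = [x]
    for k in range(N):
        uk = u[k - d] if k >= d else 0.0   # delayed control value
        x = x + dt * (-x + uk)
        xs.append(x)
    return np.array(xs)

def cost(u):
    # Terminal-state penalty plus a small control-effort term.
    xs = simulate(u)
    return xs[-1] ** 2 + 0.1 * dt * float(np.sum(np.asarray(u) ** 2))

u0 = np.zeros(N)                    # only the control enters the NLP
res = minimize(cost, u0, method="SLSQP")
print(res.success, cost(res.x) < cost(u0))
```

Note that the state never appears as an NLP variable: each cost evaluation integrates it afresh, which is exactly the variable-count reduction the abstract highlights.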
Our <span class="hlt">method</span> is illustrated by the numerical solution of retarded control problems with constraints. In particular, we consider the control of a continuous stirred tank reactor which has previously been solved by dynamic programming. This example illustrates the robustness and efficiency of the proposed <span class="hlt">method</span>. Open questions concerning sufficient conditions and convergence of discretized NLP-problems are discussed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22413543-robust-optimization-methods-cardiac-sparing-tangential-breast-imrt','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22413543-robust-optimization-methods-cardiac-sparing-tangential-breast-imrt"><span>Robust <span class="hlt">optimization</span> <span class="hlt">methods</span> for cardiac sparing in tangential breast IMRT</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca; Lee, Jenny; Chan, Timothy C. Y.</p> <p></p> <p>Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust <span class="hlt">optimization</span> approach for breast IMRT. We compare robust <span class="hlt">optimized</span> plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). <span class="hlt">Methods</span>: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed
Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust <span class="hlt">optimized</span> and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust <span class="hlt">optimization</span> <span class="hlt">method</span> was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the <span class="hlt">optimization</span> models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust <span class="hlt">optimized</span> plans improved upon or were equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust <span class="hlt">method</span> reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical <span class="hlt">method</span> while also improving the coverage of the accumulated whole breast target volume. On average, the robust <span class="hlt">method</span> reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy.
In addition, the robust <span class="hlt">method</span> had smaller deviations from the planned dose to the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018OptCo.413..230Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018OptCo.413..230Z"><span><span class="hlt">Method</span> to <span class="hlt">optimize</span> optical switch topology for photonic network-on-chip</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhou, Ting; Jia, Hao</p> <p>2018-04-01</p> <p>In this paper, we propose a <span class="hlt">method</span> to <span class="hlt">optimize</span> the optical switch by substituting optical waveguide crossings for optical switching units and an <span class="hlt">optimizing</span> algorithm to complete the <span class="hlt">optimization</span> automatically. The functionality of the optical switch remains constant under <span class="hlt">optimization</span>. With this <span class="hlt">method</span>, we simplify the topology of the optical switch, which means the insertion loss and power consumption of the whole optical switch can be effectively minimized. Simulation results show that the number of switching units of the optical switch based on Spanke-Benes can be reduced by 16.7%, 20%, 20%, 19% and 17.9% for scales from 4 × 4 to 8 × 8, respectively.
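The quoted percentages are mutually consistent: an N × N Spanke-Benes switch contains N(N-1)/2 switching units, and each reduction matches replacing N - 3 units with waveguide crossings for N = 4 to 8. This closed form is inferred from the numbers, not stated in the abstract.

```python
# Spanke-Benes NxN topology: N(N-1)/2 switching units. The quoted
# reductions match removing (N - 3) units for N = 4..8 (an inference
# from the percentages, rounded to one decimal place).
quoted = {4: 16.7, 5: 20.0, 6: 20.0, 7: 19.0, 8: 17.9}
for n, pct in quoted.items():
    units = n * (n - 1) // 2          # 6, 10, 15, 21, 28
    removed = n - 3                   # 1, 2, 3, 4, 5
    reduction = round(100.0 * removed / units, 1)
    print(n, units, removed, reduction == pct)
```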
As a proof of concept, the experimental demonstration of an <span class="hlt">optimized</span> six-port optical switch based on the Spanke-Benes structure, implemented on a silicon photonics chip, is reported.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1033043','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1033043"><span>Advanced Computational <span class="hlt">Methods</span> for <span class="hlt">Optimization</span> of Non-Periodic Inspection Intervals for Aging Infrastructure</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2017-01-05</p> <p>AFRL-AFOSR-JP-TR-2017-0002. Advanced Computational <span class="hlt">Methods</span> for <span class="hlt">Optimization</span> of Non-Periodic Inspection Intervals for Aging Infrastructure. Manabu... Grant number: FA2386... Distribution: UNLIMITED, Public Release. This report for the project titled ’Advanced Computational <span class="hlt">Methods</span> for <span class="hlt">Optimization</span> of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148n4102B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148n4102B"><span>Clustering <span class="hlt">methods</span> for the <span class="hlt">optimization</span> of atomic cluster structure</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bagattini, Francesco; Schoen, Fabio; Tigli, Luca</p> <p>2018-04-01</p> <p>In this paper, we propose a revised global <span class="hlt">optimization</span> <span class="hlt">method</span> and apply it to large-scale cluster conformation problems.
In the 1990s, the so-called clustering <span class="hlt">methods</span> were considered among the most efficient general purpose global <span class="hlt">optimization</span> techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired by the machine learning literature, we redesigned clustering <span class="hlt">methods</span> in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent <span class="hlt">method</span>, an effective <span class="hlt">optimization</span> tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium-size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed <span class="hlt">method</span>, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge number of local searches with respect to an analogous algorithm which does not employ a clustering phase.
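The search-saving idea, starting an expensive local search only when the start point does not already belong to a known basin, can be caricatured with a multistart loop and a crude distance-based clustering filter. The test function, radius, and budget are all illustrative, and the paper's method works in a learned reduced feature space rather than raw coordinates.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_with_clustering(f, bounds, n_starts=200, radius=0.5, seed=0):
    """Multistart local optimization that skips the expensive local
    search whenever the start point lies within `radius` of an
    already-found minimizer -- the search-saving idea of clustering
    methods, in a deliberately simplified form."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    minima, searches = [], 0
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi, size=2)
        if any(np.linalg.norm(x0 - m) < radius for m in minima):
            continue                  # start assigned to a known basin
        searches += 1
        res = minimize(f, x0, method="BFGS")
        if not any(np.linalg.norm(res.x - m) < 1e-3 for m in minima):
            minima.append(res.x)      # record a newly found minimizer
    return minima, searches

# Four-minimum test surface: a double well in each coordinate,
# with minima at (+-1, +-1).
f = lambda z: float((z[0]**2 - 1.0)**2 + (z[1]**2 - 1.0)**2)
minima, searches = multistart_with_clustering(f, (-2.0, 2.0))
print(len(minima), searches)
```

The filter trades a tiny risk of missing a basin for a large reduction in local-search count, which is the trade-off the abstract quantifies for Lennard-Jones and Morse clusters.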
In this paper, we are not claiming the superiority of the proposed <span class="hlt">method</span> compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern <span class="hlt">methods</span>.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001PhDT.......122B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001PhDT.......122B"><span>Meshless <span class="hlt">methods</span> in shape <span class="hlt">optimization</span> of linear elastic and thermoelastic solids</span></a></p> <p><a target="_blank"
href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bobaru, Florin</p> <p></p> <p>This dissertation proposes a meshless approach to problems in shape <span class="hlt">optimization</span> of elastic and thermoelastic solids. The Element-free Galerkin (EFG) <span class="hlt">method</span> is used for this purpose. The ability of the EFG to avoid remeshing, which is normally done in a Finite Element approach to correct highly distorted meshes, is clearly demonstrated by several examples. The shape <span class="hlt">optimization</span> example of a thermal cooling fin shows a dramatic improvement in the objective compared to a previous FEM analysis. More importantly, the new solution, displaying large shape changes contrasted to the initial design, was completely missed by the FEM analysis. The EFG formulation given here for shape <span class="hlt">optimization</span> "uncovers" new solutions that are, apparently, unobtainable via a FEM approach. This is one of the main achievements of our work. The variational formulations for the analysis problem and for the sensitivity problems are obtained with a penalty <span class="hlt">method</span> for imposing the displacement boundary conditions. The continuum formulation is general, and it accommodates both 2D and 3D problems with minor differences from one another. Also, transient thermoelastic problems can use the present development at each time step to solve shape <span class="hlt">optimization</span> problems with time-dependent thermal fields. For the elasticity framework, displacement sensitivity is obtained in the EFG context. Excellent agreement with analytical solutions is obtained for several test problems. The shape <span class="hlt">optimization</span> of a fillet is carried out in great detail, and results show significant improvement of the EFG solution over the FEM or the Boundary Element <span class="hlt">Method</span> solutions.
In our approach we avoid differentiating the complicated EFG shape functions with respect to the shape design parameters by using a particular discretization for sensitivity calculations. Displacement and temperature sensitivities are formulated for the shape <span class="hlt">optimization</span> of a linear thermoelastic solid. Two important examples considered in this work, the <span class="hlt">optimization</span> of</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JSAST..49..137T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JSAST..49..137T"><span>Near-<span class="hlt">Optimal</span> Guidance <span class="hlt">Method</span> for Maximizing the Reachable Domain of Gliding Aircraft</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tsuchiya, Takeshi</p> <p></p> <p>This paper proposes a guidance <span class="hlt">method</span> for gliding aircraft by using onboard computers to calculate a near-<span class="hlt">optimal</span> trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the <span class="hlt">optimal</span> control problem that is used to maximize the reachable domain is too large for current computers to solve in real time. Thus the <span class="hlt">optimal</span> control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an <span class="hlt">optimal</span> turning flight problem in a horizontal direction. First, the former problem is solved using a shooting <span class="hlt">method</span>.
It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the <span class="hlt">optimal</span> solution are obtained in the first part of this paper. Next, in the latter problem, the <span class="hlt">optimal</span> bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-<span class="hlt">optimal</span> guidance <span class="hlt">method</span> is compared with that obtained from the original <span class="hlt">optimal</span> control problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EnOp...50..253Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EnOp...50..253Z"><span>A kriging metamodel-assisted robust <span class="hlt">optimization</span> <span class="hlt">method</span> based on a reverse model</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao</p> <p>2018-02-01</p> <p>The goal of robust <span class="hlt">optimization</span> <span class="hlt">methods</span> is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust <span class="hlt">optimization</span> approaches use outer-inner nested <span class="hlt">optimization</span> structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. 
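The outer-inner nesting described here is easy to see in miniature: the inner problem evaluates the worst case over the uncertainty set for each candidate design the outer minimizer proposes. The quadratic toy function and the uncertainty set below are illustrative only, not from the article.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x, u):
    # Design objective perturbed by an uncertain parameter u.
    return (x - 1.0) ** 2 + u * x

U = np.linspace(-0.5, 0.5, 101)       # discretized uncertainty set

def worst_case(x):
    # Inner level: worst realization of the uncertainty for design x.
    return max(f(x, u) for u in U)

# Outer level: minimize the worst case over the design variable.
res = minimize_scalar(worst_case, bounds=(-2.0, 3.0), method="bounded")
x_rob = res.x
x_nom = 1.0                            # nominal optimum, ignoring u
print(round(x_rob, 3), worst_case(x_rob) <= worst_case(x_nom))
```

Every outer iterate triggers a full inner sweep, which is exactly the cost that kriging-assisted single-loop reformulations such as K-RMRO try to avoid.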
In this article, a kriging metamodel-assisted robust <span class="hlt">optimization</span> <span class="hlt">method</span> based on a reverse model (K-RMRO) is first proposed, in which the nested <span class="hlt">optimization</span> structure is reduced to a single-loop <span class="hlt">optimization</span> structure to ease the computational burden. Because it ignores the interpolation uncertainties of kriging, however, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust <span class="hlt">optimization</span> <span class="hlt">method</span> based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of the kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust <span class="hlt">optimization</span> or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel.
Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018EnOp...50..733M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018EnOp...50..733M"><span>Reliability-based design <span class="hlt">optimization</span> using a generalized subset simulation <span class="hlt">method</span> and posterior approximation</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing</p> <p>2018-05-01</p> <p>The evaluation of the probabilistic constraints in reliability-based design <span class="hlt">optimization</span> (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO <span class="hlt">methods</span>. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) <span class="hlt">method</span> and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic <span class="hlt">optimization</span>. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design <span class="hlt">optimization</span> problem can be solved by <span class="hlt">optimization</span> algorithms, for example, the sequential quadratic programming <span class="hlt">method</span>. 
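The quantity evaluated for each probabilistic constraint in RBDO is a failure probability. A brute-force Monte Carlo estimate for a toy limit state shows what generalized subset simulation computes far more efficiently for rare events; the limit-state function and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def g(x):
    # Toy limit state: failure when the sum of two standard normal
    # variables exceeds 4, i.e. g(x) < 0.
    return 4.0 - (x[:, 0] + x[:, 1])

# Crude Monte Carlo estimate of the failure probability P[g(X) < 0].
X = rng.standard_normal((1_000_000, 2))
pf = np.mean(g(X) < 0.0)
# Exact value for comparison: P[N(0, 2) > 4] = 1 - Phi(4/sqrt(2)) ~ 2.34e-3.
print(pf)
```

For failure probabilities of 1e-6 and below, crude Monte Carlo needs prohibitively many samples, which is the motivation for subset-simulation-style estimators that reach rare events through a sequence of intermediate thresholds.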
Three <span class="hlt">optimization</span> problems are used to demonstrate the efficiency and accuracy of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830002606','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830002606"><span>Nonlinear <span class="hlt">optimization</span> with linear constraints using a projection <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Fox, T.</p> <p>1982-01-01</p> <p>Nonlinear <span class="hlt">optimization</span> problems that are encountered in science and industry are examined. A <span class="hlt">method</span> of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this <span class="hlt">method</span> is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt <span class="hlt">method</span> and overcomes some of the objections to the Rosen projection <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AIPC.1739b0076G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AIPC.1739b0076G"><span>A modified form of conjugate gradient <span class="hlt">method</span> for unconstrained <span class="hlt">optimization</span> problems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa</p> <p>2016-06-01</p> <p>Conjugate gradient (CG) <span class="hlt">methods</span> have been recognized as an interesting technique to solve <span class="hlt">optimization</span> problems, due to the numerical efficiency, simplicity and low memory requirements. 
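The gradient-projection idea in the Fox (1982) abstract above can be sketched as follows: Gram-Schmidt-orthonormalize the constraint normals, then subtract from the gradient its components along each normal, leaving a descent direction that stays feasible for the linear equality constraints. The constraint and gradient values are illustrative:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(rows):
    # Orthonormalize the constraint normals (rows of A).
    basis = []
    for r in rows:
        w = list(r)
        for q in basis:
            c = dot(w, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        n = dot(w, w) ** 0.5
        if n > 1e-12:                  # drop linearly dependent rows
            basis.append([wi / n for wi in w])
    return basis

def project_gradient(g, rows):
    # Remove from g its components along each constraint normal,
    # leaving a direction that keeps A x = b satisfied.
    p = list(g)
    for q in gram_schmidt(rows):
        c = dot(p, q)
        p = [pi - c * qi for pi, qi in zip(p, q)]
    return p

# Example: one constraint x1 + x2 + x3 = const in R^3.
g = [3.0, 1.0, -2.0]
p = project_gradient(g, [[1.0, 1.0, 1.0]])
```

The projected direction `p` is orthogonal to every constraint normal, so a line search along it preserves feasibility.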
In this paper, we propose a new CG <span class="hlt">method</span> based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained <span class="hlt">Optimization</span>, Aus. J. Bas. Appl. Sci. 5(2011) 947-951). Then, we show that our <span class="hlt">method</span> satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed <span class="hlt">method</span> is efficient on the given standard test problems compared to other existing CG <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA273945','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA273945"><span>An Exploratory Survey of <span class="hlt">Methods</span> Used to Develop Measures of Performance</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1993-09-01</p> <p>Genichi <span class="hlt">Taguchi</span>, Robert C. Camp, Kaoru Ishikawa, Dorsey J. Talley, Philip B. Crosby, J.M. Juran, Arthur R. Tenner, W. Edwards Deming ...authored books or papers on the subject of quality? (Mark all that apply) Nancy Brady, H. James Harrington, Genichi <span class="hlt">Taguchi</span>, Robert C. Camp, Kaoru Ishikawa, Dorsey J. Talley, Philip B. Crosby, J.M. Juran, Arthur R. Tenner, W. Edwards Deming, Dennis Kinlaw, Hans J. Thamhain, Irving J</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170007696','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170007696"><span>A Requirements-Driven <span class="hlt">Optimization</span> <span class="hlt">Method</span> for Acoustic Liners Using Analytic Derivatives</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Berton, Jeffrey J.; Lopes, Leonard V.</p> <p>2017-01-01</p> <p>More than ever, there is flexibility and freedom in acoustic liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the <span class="hlt">optimal</span> attenuation spectrum can be derived using multidisciplinary systems analysis <span class="hlt">methods</span>. In a previous paper on this subject, a <span class="hlt">method</span> deriving the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground was described. A simple code-wrapping approach was used to evaluate a community noise objective function for an external <span class="hlt">optimizer</span>. Gradients were evaluated using a finite difference formula. The subject of this paper is an application of analytic derivatives that supply precise gradients to an <span class="hlt">optimization</span> process. Analytic derivatives improve the efficiency and accuracy of gradient-based <span class="hlt">optimization</span> <span class="hlt">methods</span> and allow consideration of more design variables. 
In addition, the benefit of variable impedance liners is explored using a multi-objective <span class="hlt">optimization</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=chromium&pg=2&id=EJ292159','ERIC'); return false;" href="https://eric.ed.gov/?q=chromium&pg=2&id=EJ292159"><span><span class="hlt">Optimal</span> Multicomponent Analysis Using the Generalized Standard Addition <span class="hlt">Method</span>.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Raymond, Margaret; And Others</p> <p>1983-01-01</p> <p>Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry modified to include the Generalized Standard Addition <span class="hlt">Method</span> computer program, a multivariate calibration <span class="hlt">method</span> that provides <span class="hlt">optimal</span> multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IAUGA..2257588R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IAUGA..2257588R"><span>Autonomous Modelling of X-ray Spectra Using Robust Global <span class="hlt">Optimization</span> <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rogers, Adam; Safi-Harb, Samar; Fiege, Jason</p> <p>2015-08-01</p> <p>The standard approach to model fitting in X-ray astronomy is by means of local <span class="hlt">optimization</span> <span class="hlt">methods</span>. 
However, these local <span class="hlt">optimizers</span> suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the <span class="hlt">optimization</span> process. In this work we introduce a general GUI-driven global <span class="hlt">optimization</span> <span class="hlt">method</span> for fitting models to X-ray data, written in MATLAB, which searches for <span class="hlt">optimal</span> models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm <span class="hlt">optimizer</span> from the Qubist Global <span class="hlt">Optimization</span> Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. 
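A particle swarm optimizer of the kind mentioned above can be sketched in a few lines. This is a generic textbook PSO on a toy objective, not the Ferret or Locust code from the Qubist toolbox; all coefficients are conventional defaults:

```python
import random

random.seed(1)

def sphere(x):
    # Toy objective; stands in for a spectral-fit misfit function.
    return sum(xi * xi for xi in x)

def pso(f, dim=3, swarm=20, iters=100, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]            # personal bests
    pval = [f(p) for p in pos]
    g = pbest[pval.index(min(pval))][:]    # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < f(g):
                    g = pos[i][:]
    return g, f(g)

best_x, best_val = pso(sphere)
```

Because the swarm shares its global best, the search explores broadly early on and then concentrates, which is what makes this family of optimizers less prone to the local-minimum trapping the abstract describes.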
In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/264237-temperature-match-based-optimization-method-daily-load-prediction-considering-dlc-effect','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/264237-temperature-match-based-optimization-method-daily-load-prediction-considering-dlc-effect"><span>A temperature match based <span class="hlt">optimization</span> <span class="hlt">method</span> for daily load prediction considering DLC effect</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Yu, Z.</p> <p></p> <p>This paper presents a unique <span class="hlt">optimization</span> <span class="hlt">method</span> for short term load forecasting. The new <span class="hlt">method</span> is based on the <span class="hlt">optimal</span> template temperature match between the future and past temperatures. The <span class="hlt">optimal</span> error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this <span class="hlt">method</span> can yield results as good as the rather complicated Box-Jenkins Transfer Function <span class="hlt">method</span>, and better than the Box-Jenkins <span class="hlt">method</span>; for peak load prediction, this <span class="hlt">method</span> is comparable in accuracy to the neural network <span class="hlt">method</span> with back propagation, and can produce more accurate results than the multi-linear regression <span class="hlt">method</span>. 
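The template-match idea behind this forecasting method can be sketched in a few lines: predict the coming day's load with the load of the past day whose temperature profile best matches the temperature forecast. The profiles and loads below are made up for illustration:

```python
# (temperature profile, load profile) pairs for past days;
# four sample hours per day for brevity, values invented.
history = [
    ([20, 24, 28, 23], [510, 640, 720, 585]),
    ([12, 15, 18, 14], [430, 500, 560, 470]),
    ([30, 34, 37, 32], [620, 800, 900, 700]),
]

def distance(a, b):
    # Squared-error mismatch between two temperature profiles.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def forecast_load(future_temps):
    # Optimal template match: pick the past day whose temperature
    # profile is closest to the forecast, and reuse its load.
    temps, load = min(history, key=lambda day: distance(day[0], future_temps))
    return load

pred = forecast_load([21, 25, 27, 22])
```

A fuller implementation would correct the matched load for residual temperature differences (the paper's "optimal error reduction") and for direct load control events, but the nearest-template lookup is the core of the method.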
The DLC effect on system load is also considered in this <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA087555','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA087555"><span><span class="hlt">Methods</span> for Large-Scale Nonlinear <span class="hlt">Optimization</span>.</span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1980-05-01</p> <p>STANFORD, CALIFORNIA 94305 <span class="hlt">METHODS</span> FOR LARGE-SCALE NONLINEAR <span class="hlt">OPTIMIZATION</span> by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m x m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050185329','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050185329"><span>Aerodynamic <span class="hlt">Optimization</span> of Rocket Control Surface Geometry Using Cartesian <span class="hlt">Methods</span> and CAD Geometry</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.</p> <p>2004-01-01</p> <p>Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively <span class="hlt">optimizing</span> the design to maximize its performance. 
<span class="hlt">Optimization</span> techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error <span class="hlt">methods</span>, to sophisticated local and global search <span class="hlt">methods</span>. Recent attempts at automating the search through a large design space with formal <span class="hlt">optimization</span> <span class="hlt">methods</span> include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run <span class="hlt">optimization</span> algorithms. <span class="hlt">Optimization</span> algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate <span class="hlt">methods</span> use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. <span class="hlt">Optimal</span> designs are obtained by coupling an <span class="hlt">optimization</span> algorithm to the database model. Evaluation of the current best design then gives either a new local optima and/or increases the fidelity of the approximation model for the next iteration. Surrogate <span class="hlt">methods</span> have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an <span class="hlt">optimal</span> design. The database approximation models for each of these cases, however, become computationally expensive with increase in dimensionality. 
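The database-driven surrogate loop described above can be sketched as follows. An inverse-distance-weighted model stands in for the surrogate or neural-network approximation, and the "high-fidelity" function is a cheap stand-in for a CFD evaluation; everything here is invented for illustration:

```python
import math

def high_fidelity(x):
    # Stand-in for an expensive high-fidelity simulation.
    return (x - 2.0) ** 2 + math.sin(3.0 * x)

# Seed database of (x, f(x)) pairs from an initial design of experiments.
db = [(x, high_fidelity(x)) for x in (-1.0, 0.5, 2.5, 4.0)]

def surrogate(x):
    # Inverse-distance-weighted approximation built from the database.
    num = den = 0.0
    for xi, yi in db:
        w = 1.0 / ((x - xi) ** 2 + 1e-6)
        num += w * yi
        den += w
    return num / den

grid = [i * 0.05 for i in range(-20, 81)]
for _ in range(10):                              # optimization loop
    sampled = {x for x, _ in db}
    cands = [x for x in grid if x not in sampled]
    x_star = min(cands, key=surrogate)           # search the cheap model
    db.append((x_star, high_fidelity(x_star)))   # refine the database

best_x, best_f = min(db, key=lambda p: p[1])
```

Each iteration both returns a candidate optimum and improves the approximation near it, which is exactly the coupling the abstract describes; it also hints at the dimensionality problem, since the database must cover the design space.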
Thus the <span class="hlt">method</span> of using <span class="hlt">optimization</span> algorithms to search a database model becomes problematic as the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050185556','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050185556"><span>Adjoint Algorithm for CAD-Based Shape <span class="hlt">Optimization</span> Using a Cartesian <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nemec, Marian; Aftosmis, Michael J.</p> <p>2004-01-01</p> <p>Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and <span class="hlt">optimization</span> algorithms. A well-known use of the adjoint <span class="hlt">method</span> is gradient-based shape <span class="hlt">optimization</span>. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based <span class="hlt">optimization</span> techniques for a Cartesian <span class="hlt">method</span> with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian <span class="hlt">methods</span> decouple the surface discretization from the volume mesh. 
This feature makes Cartesian <span class="hlt">methods</span> well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic <span class="hlt">optimization</span>. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape that results in accurate gradient computation. Central to automated shape <span class="hlt">optimization</span> algorithms is the issue of geometry modeling and control. The need to <span class="hlt">optimize</span> complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the <span class="hlt">optimization</span> procedure. In previous work, we presented</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1343079-new-adaptive-method-optimize-secondary-reflector-linear-fresnel-collectors','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1343079-new-adaptive-method-optimize-secondary-reflector-linear-fresnel-collectors"><span>New adaptive <span class="hlt">method</span> to <span class="hlt">optimize</span> the secondary reflector of linear Fresnel collectors</span></a></p> <p><a target="_blank" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Zhu, Guangdong</p> <p>2017-01-16</p> <p>Performance of linear Fresnel collectors may largely depend on the secondary-reflector profile design when small-aperture absorbers are used. <span class="hlt">Optimization</span> of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. 
In this work, an innovative <span class="hlt">optimization</span> <span class="hlt">method</span> is proposed to <span class="hlt">optimize</span> the secondary-reflector profile of a generic linear Fresnel configuration. The <span class="hlt">method</span> correctly and accurately captures impacts of both geometric and optical aspects of a linear Fresnel collector to secondary-reflector design. The proposed <span class="hlt">method</span> is an adaptive approach that does not assume a secondary shape of any particular form, but rather starts at a single edge point and adaptively constructs the next surface point to maximize the power reflected to the absorber(s). As a test case, the proposed <span class="hlt">optimization</span> <span class="hlt">method</span> is applied to an industrial linear Fresnel configuration, and the results show that the derived <span class="hlt">optimal</span> secondary reflector is able to redirect more than 90% of the power to the absorber in a wide range of incidence angles. Here, the proposed <span class="hlt">method</span> can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1343079','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1343079"><span>New adaptive <span class="hlt">method</span> to <span class="hlt">optimize</span> the secondary reflector of linear Fresnel collectors</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zhu, Guangdong</p> <p></p> <p>Performance of linear Fresnel collectors may largely depend on the secondary-reflector profile design when small-aperture absorbers are used. 
<span class="hlt">Optimization</span> of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. In this work, an innovative <span class="hlt">optimization</span> <span class="hlt">method</span> is proposed to <span class="hlt">optimize</span> the secondary-reflector profile of a generic linear Fresnel configuration. The <span class="hlt">method</span> correctly and accurately captures impacts of both geometric and optical aspects of a linear Fresnel collector to secondary-reflector design. The proposed <span class="hlt">method</span> is an adaptive approach that does not assume a secondary shape of any particular form, but rather starts at a single edge point and adaptively constructs the next surface point to maximize the power reflected to the absorber(s). As a test case, the proposed <span class="hlt">optimization</span> <span class="hlt">method</span> is applied to an industrial linear Fresnel configuration, and the results show that the derived <span class="hlt">optimal</span> secondary reflector is able to redirect more than 90% of the power to the absorber in a wide range of incidence angles. 
Here, the proposed <span class="hlt">method</span> can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA632453','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA632453"><span>Discovery and <span class="hlt">Optimization</span> of Low-Storage Runge-Kutta <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2015-06-01</p> <p>NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS DISCOVERY AND <span class="hlt">OPTIMIZATION</span> OF LOW-STORAGE RUNGE-KUTTA <span class="hlt">METHODS</span> by Matthew T. Fletcher June 2015... <span class="hlt">methods</span> are an important family of iterative <span class="hlt">methods</span> for approximating the solutions of ordinary differential equations (ODEs) and differential-algebraic equations (DAEs). It is common to use an RK <span class="hlt">method</span> to discretize in time when solving time dependent partial differential equations (PDEs) with a</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3881631','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3881631"><span><span class="hlt">Optimization</span> of Polygalacturonase Production from a Newly Isolated Thalassospira frigidphilosprofundus to Use in Pectin Hydrolysis: Statistical Approach</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rekha, V. P. B.; Ghosh, Mrinmoy; Adapa, Vijayanand; Oh, Sung-Jong; Pulicherla, K. K.; Sambasiva Rao, K. R. 
S.</p> <p>2013-01-01</p> <p>The present study deals with the production of cold active polygalacturonase (PGase) by submerged fermentation using Thalassospira frigidphilosprofundus, a novel species isolated from deep waters of Bay of Bengal. Nonlinear models were applied to <span class="hlt">optimize</span> the medium components for enhanced production of PGase. <span class="hlt">Taguchi</span> orthogonal array design was adopted to evaluate the factors influencing the yield of PGase, followed by the central composite design (CCD) of response surface methodology (RSM) to identify the optimum concentrations of the key factors responsible for PGase production. Data obtained from the above mentioned statistical experimental design was used for final <span class="hlt">optimization</span> study by linking the artificial neural network and genetic algorithm (ANN-GA). Using ANN-GA hybrid model, the maximum PGase activity (32.54 U/mL) was achieved at the <span class="hlt">optimized</span> concentrations of medium components. In a comparison between the <span class="hlt">optimal</span> output of RSM and ANN-GA hybrid, the latter favored the production of PGase. In addition, the study also focused on the determination of factors responsible for pectin hydrolysis by crude pectinase extracted from T. frigidphilosprofundus through the central composite design. Results indicated 80% degradation of pectin in banana fiber at 20°C in 120 min, suggesting the scope of cold active PGase usage in the treatment of raw banana fibers. 
PMID:24455722</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24455722','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24455722"><span><span class="hlt">Optimization</span> of polygalacturonase production from a newly isolated Thalassospira frigidphilosprofundus to use in pectin hydrolysis: statistical approach.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rekha, V P B; Ghosh, Mrinmoy; Adapa, Vijayanand; Oh, Sung-Jong; Pulicherla, K K; Sambasiva Rao, K R S</p> <p>2013-01-01</p> <p>The present study deals with the production of cold active polygalacturonase (PGase) by submerged fermentation using Thalassospira frigidphilosprofundus, a novel species isolated from deep waters of Bay of Bengal. Nonlinear models were applied to <span class="hlt">optimize</span> the medium components for enhanced production of PGase. <span class="hlt">Taguchi</span> orthogonal array design was adopted to evaluate the factors influencing the yield of PGase, followed by the central composite design (CCD) of response surface methodology (RSM) to identify the optimum concentrations of the key factors responsible for PGase production. Data obtained from the above mentioned statistical experimental design was used for final <span class="hlt">optimization</span> study by linking the artificial neural network and genetic algorithm (ANN-GA). Using ANN-GA hybrid model, the maximum PGase activity (32.54 U/mL) was achieved at the <span class="hlt">optimized</span> concentrations of medium components. In a comparison between the <span class="hlt">optimal</span> output of RSM and ANN-GA hybrid, the latter favored the production of PGase. In addition, the study also focused on the determination of factors responsible for pectin hydrolysis by crude pectinase extracted from T. frigidphilosprofundus through the central composite design. 
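The Taguchi orthogonal-array screening step used in this study can be sketched generically. The L4(2^3) array below is standard, but the factor assignments and yield values are invented for illustration and are not the paper's data:

```python
import math

# L4 orthogonal array: 4 runs, 3 two-level factors (levels 0 and 1).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Hypothetical PGase yields (U/mL) measured for each run.
yields = [18.2, 24.9, 21.4, 30.1]

def sn_larger_is_better(y):
    # Taguchi signal-to-noise ratio for a larger-is-better response
    # (single replicate, so the inner average collapses to 1/y^2).
    return -10.0 * math.log10(1.0 / y ** 2)

# Average the S/N ratio at each level of each factor and keep the
# level with the higher mean; this is the orthogonal-array screening.
best_levels = []
for factor in range(3):
    means = []
    for level in (0, 1):
        vals = [sn_larger_is_better(y)
                for run, y in zip(L4, yields) if run[factor] == level]
        means.append(sum(vals) / len(vals))
    best_levels.append(0 if means[0] >= means[1] else 1)
```

The orthogonality of the array is what lets four runs estimate all three main effects independently; the winning level combination then seeds a finer search such as the central composite design used in the paper.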
Results indicated 80% degradation of pectin in banana fiber at 20 °C in 120 min, suggesting the scope of cold active PGase usage in the treatment of raw banana fibers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SoSyR..50..587K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SoSyR..50..587K"><span>Linearization <span class="hlt">methods</span> for <span class="hlt">optimizing</span> the low thrust spacecraft trajectory: Theoretical aspects</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kazmerchuk, P. V.</p> <p>2016-12-01</p> <p>The theoretical aspects of the modified linearization <span class="hlt">method</span>, which makes it possible to solve a wide class of nonlinear problems on <span class="hlt">optimizing</span> low-thrust spacecraft trajectories (V. V. Efanov et al., 2009; V. V. Khartov et al., 2010) are examined. The main modifications of the linearization <span class="hlt">method</span> are connected with its refinement for <span class="hlt">optimizing</span> the main dynamic systems and design parameters of the spacecraft.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27505357','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27505357"><span>Inversion <span class="hlt">method</span> based on stochastic <span class="hlt">optimization</span> for particle sizing.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix</p> <p>2016-08-01</p> <p>A stochastic inverse <span class="hlt">method</span> is presented based on a hybrid evolutionary <span class="hlt">optimization</span> algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an <span class="hlt">optimization</span> problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-<span class="hlt">optimal</span> solution during the <span class="hlt">optimization</span> of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. 
The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27437484','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27437484"><span><span class="hlt">Optimal</span> Variational Asymptotic <span class="hlt">Method</span> for Nonlinear Fractional Partial Differential Equations.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Baranwal, Vipul K; Pandey, Ram K; Singh, Om P</p> <p>2014-01-01</p> <p>We propose an <span class="hlt">optimal</span> variational asymptotic <span class="hlt">method</span> to solve time fractional nonlinear partial differential equations. In the proposed <span class="hlt">method</span>, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration <span class="hlt">method</span>. The <span class="hlt">optimal</span> values of these parameters are obtained by minimizing the square residual error. To test the <span class="hlt">method</span>, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. 
Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910038332&hterms=right+Bless+you&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dright%2BBless%2Byou','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910038332&hterms=right+Bless+you&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dright%2BBless%2Byou"><span>Weak Hamiltonian finite element <span class="hlt">method</span> for <span class="hlt">optimal</span> control problems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hodges, Dewey H.; Bless, Robert R.</p> <p>1991-01-01</p> <p>A temporal finite element <span class="hlt">method</span> based on a mixed form of the Hamiltonian weak principle is developed for dynamics and <span class="hlt">optimal</span> control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical <span class="hlt">optimal</span> control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and <span class="hlt">optimal</span> control are illustrated. The example dynamics problem involves a time-marching problem. 
As <span class="hlt">optimal</span> control examples, elementary trajectory <span class="hlt">optimization</span> problems are treated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26778864','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26778864"><span>QUADRO: A SUPERVISED DIMENSION REDUCTION <span class="hlt">METHOD</span> VIA RAYLEIGH QUOTIENT <span class="hlt">OPTIMIZATION</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy</p> <p></p> <p>We propose a novel Rayleigh quotient based sparse quadratic dimension reduction <span class="hlt">method</span>-named QUADRO (Quadratic Dimension Reduction via Rayleigh <span class="hlt">Optimization</span>)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient <span class="hlt">optimization</span> coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient <span class="hlt">optimization</span> may be of independent scientific interests. One major challenge of Rayleigh quotient <span class="hlt">optimization</span> is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. 
Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex <span class="hlt">optimization</span> problem. Computationally, we propose an efficient linearized augmented Lagrangian <span class="hlt">method</span> to solve the constrained <span class="hlt">optimization</span> problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4124212','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4124212"><span>A Sequential <span class="hlt">Optimization</span> Sampling <span class="hlt">Method</span> for Metamodels with Radial Basis Functions</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pan, Guang; Ye, Pengcheng; Yang, Zhidong</p> <p>2014-01-01</p> <p>Metamodels have been widely used in engineering design to facilitate analysis and <span class="hlt">optimization</span> of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling <span class="hlt">methods</span>. In this paper, a new sequential <span class="hlt">optimization</span> sampling <span class="hlt">method</span> is proposed. Based on the new sampling <span class="hlt">method</span>, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of metamodels and minimum points of density function. Afterwards, the more accurate metamodels would be constructed by the procedure above. 
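Stepping back to the QUADRO record above: in the linear setting, Rayleigh quotient optimization reduces to a generalized symmetric eigenproblem, which a whitening transform makes explicit. A minimal sketch on toy matrices (not the paper's robust estimators):

```python
import numpy as np

# Linear-setting sketch of Rayleigh quotient maximization: maximize
# x^T A x / x^T B x via the whitened eigenproblem B^(-1/2) A B^(-1/2).
# A and B here are toy matrices, not QUADRO's moment estimates.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T                     # symmetric PSD numerator matrix
B = np.eye(5) * 2.0             # SPD denominator matrix

w, V = np.linalg.eigh(B)
B_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
evals, evecs = np.linalg.eigh(B_inv_half @ A @ B_inv_half)
x = B_inv_half @ evecs[:, -1]   # maximizer, mapped back from whitened space
rq = (x @ A @ x) / (x @ B @ x)
print(rq, evals[-1])            # the quotient equals the top eigenvalue
```

QUADRO's actual contribution is handling the nonlinear (quadratic-statistic) case, where this eigenproblem equivalence no longer holds and a convex reformulation is needed.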
The validity and effectiveness of the proposed sampling <span class="hlt">method</span> are examined by studying typical numerical examples. PMID:25133206</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..165a2015M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..165a2015M"><span>Parameter <span class="hlt">Optimization</span> Of Natural Hydroxyapatite/SS316l Via Metal Injection Molding (MIM)</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mustafa, N.; Ibrahim, M. H. I.; Amin, A. M.; Asmawi, R.</p> <p>2017-01-01</p> <p>Metal injection molding (MIM) is a well-known application of powder injection molding (PIM) that applies the shaping concept and benefits of plastic injection molding while extending the process to various high-performance metals and alloys, as well as metal matrix composites and ceramics. This study investigates the green strength of parts using a stainless steel 316L/natural hydroxyapatite composite as a feedstock. Stainless steel 316L (SS316L) was mixed with natural hydroxyapatite (NHAP) by adding 40 wt.% low-density polyethylene and 60 wt.% palm stearin as a binder system at 63 wt.% powder loading, consisting of 90 wt.% SS316L and 10 wt.% NHAP, prepared through the critical powder volume percentage (CPVC). The <span class="hlt">Taguchi</span> <span class="hlt">method</span> was applied as a tool to determine the optimum green strength for metal injection molding (MIM) parameters. The green strength was <span class="hlt">optimized</span> with four significant injection parameters, namely injection temperature (A), mold temperature (B), pressure (C) and speed (D), selected through a screening process. An L9(3^4) orthogonal array was employed. 
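A hedged sketch of the kind of larger-the-better signal-to-noise analysis the MIM record above describes for an L9(3^4) array; the strength values below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Larger-the-better Taguchi S/N ratio: S/N = -10*log10(mean(1/y^2)) per
# run; for each factor, the level with the highest mean S/N is selected.
# The replicated strength responses are made up for illustration.
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])   # standard L9(3^4) layout
y = np.array([[12.1,12.4],[13.0,12.8],[11.5,11.9],
              [14.2,13.9],[12.7,12.5],[13.3,13.6],
              [12.0,12.2],[13.8,14.0],[12.9,13.1]])  # 2 replicates per run

sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))   # one S/N value per run
best_levels = [int(np.argmax([sn[L9[:, f] == lv].mean() for lv in range(3)]))
               for f in range(4)]
print(best_levels)  # optimum level index per factor A..D: [1, 1, 1, 2]
```

The same mean-S/N-per-level table is what response tables and main-effect plots in Taguchi analyses summarize.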
The optimum injection parameters for the highest green strength were established as A1, B2, C0 and D1, as calculated from the signal-to-noise ratio.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EnOp...48.1759T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EnOp...48.1759T"><span>Dual-mode nested search <span class="hlt">method</span> for categorical uncertain multi-objective <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tang, Long; Wang, Hu</p> <p>2016-10-01</p> <p>Categorical multi-objective <span class="hlt">optimization</span> is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such <span class="hlt">optimizations</span>. Therefore, this article proposes a dual-mode nested search (DMNS) <span class="hlt">method</span>. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number <span class="hlt">method</span> to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent <span class="hlt">optimization</span> via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. 
Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24892046','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24892046"><span>An <span class="hlt">optimization</span> <span class="hlt">method</span> for condition based maintenance of aircraft fleet considering prognostics uncertainty.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie</p> <p>2014-01-01</p> <p>An <span class="hlt">optimization</span> <span class="hlt">method</span> for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the <span class="hlt">optimization</span> problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial <span class="hlt">optimization</span> problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation <span class="hlt">method</span> for mission costs and risks, based on the health status matrix and maintenance matrix, is given. Further, an <span class="hlt">optimization</span> <span class="hlt">method</span> for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed <span class="hlt">method</span>. 
The results show that the method realizes mission-oriented <span class="hlt">optimization</span> and control of the aircraft fleet.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4032759','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4032759"><span>An <span class="hlt">Optimization</span> <span class="hlt">Method</span> for Condition Based Maintenance of Aircraft Fleet Considering Prognostics Uncertainty</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chen, Yiran; Sun, Bo; Li, Songjie</p> <p>2014-01-01</p> <p>An <span class="hlt">optimization</span> <span class="hlt">method</span> for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the <span class="hlt">optimization</span> problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial <span class="hlt">optimization</span> problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation <span class="hlt">method</span> for mission costs and risks, based on the health status matrix and maintenance matrix, is given. Further, an <span class="hlt">optimization</span> <span class="hlt">method</span> for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed <span class="hlt">method</span>. 
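The fleet-level strategy selection in the two CBM records above is a combinatorial problem tackled with an improved genetic algorithm; a minimal GA sketch with made-up per-aircraft costs, risks and a penalty for exceeding a risk budget (none of these numbers or operators are the paper's):

```python
import numpy as np

# Toy combinatorial search: choose one maintenance strategy (0..2) per
# aircraft to minimize an invented cost while keeping an invented total
# dispatch risk under a budget, enforced via a penalty term.
rng = np.random.default_rng(1)
N_AIRCRAFT, N_STRATEGY, POP, GENS = 10, 3, 40, 60
cost = rng.uniform(1, 5, size=(N_AIRCRAFT, N_STRATEGY))
risk = rng.uniform(0, 1, size=(N_AIRCRAFT, N_STRATEGY))

def fitness(plan):
    c = cost[np.arange(N_AIRCRAFT), plan].sum()
    r = risk[np.arange(N_AIRCRAFT), plan].sum()
    return c + 100.0 * max(0.0, r - 4.0)     # penalize risk above budget

pop = rng.integers(N_STRATEGY, size=(POP, N_AIRCRAFT))
for _ in range(GENS):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[:POP // 2]]            # truncation selection
    cut = rng.integers(1, N_AIRCRAFT, size=POP // 2)
    kids = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                     for i, c in enumerate(cut)])      # one-point crossover
    mut = rng.random(kids.shape) < 0.05                # light mutation
    kids[mut] = rng.integers(N_STRATEGY, size=mut.sum())
    pop = np.vstack([parents, kids])
best = min(pop, key=fitness)
print(best, fitness(best))  # best strategy assignment found and its fitness
```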
The results show that the method realizes mission-oriented <span class="hlt">optimization</span> and control of the aircraft fleet. PMID:24892046</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018IJASS.tmp...16O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018IJASS.tmp...16O"><span>Provisional-Ideal-Point-Based Multi-objective <span class="hlt">Optimization</span> <span class="hlt">Method</span> for Drone Delivery Problem</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Omagari, Hiroki; Higashino, Shin-Ichiro</p> <p>2018-04-01</p> <p>In this paper, we propose a new evolutionary multi-objective <span class="hlt">optimization</span> <span class="hlt">method</span> for solving the drone delivery problem (DDP), which can be formulated as a constrained multi-objective <span class="hlt">optimization</span> problem. In our previous research, we proposed the "aspiration-point-based <span class="hlt">method</span>" to solve multi-objective <span class="hlt">optimization</span> problems. However, that <span class="hlt">method</span> needs to calculate the <span class="hlt">optimal</span> value of each objective function in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to the DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based <span class="hlt">method</span>." The proposed <span class="hlt">method</span> defines a "penalty value" to search for feasible solutions. It also defines a new reference solution, named the "provisional-ideal point," to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited application scope. 
The results of the benchmark test problems show that the proposed <span class="hlt">method</span> can generate the preferred solution efficiently. The usefulness of the proposed <span class="hlt">method</span> is also demonstrated by applying it to the DDP. As a result, a delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.890a2033O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.890a2033O"><span>Hybrid DFP-CG <span class="hlt">method</span> for solving unconstrained <span class="hlt">optimization</span> problems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa</p> <p>2017-09-01</p> <p>The conjugate gradient (CG) <span class="hlt">method</span> and the quasi-Newton <span class="hlt">method</span> are both well-known <span class="hlt">methods</span> for solving unconstrained <span class="hlt">optimization</span> problems. In this paper, we propose a new <span class="hlt">method</span> by combining the search directions of the conjugate gradient <span class="hlt">method</span> and the quasi-Newton <span class="hlt">method</span>, based on the BFGS-CG <span class="hlt">method</span> developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. 
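The Davidon-Fletcher-Powell update named in the DFP-CG record above can be shown on a toy quadratic; with exact line searches, DFP reaches the minimizer of an n-dimensional quadratic in n steps (the quadratic and starting point are illustrative assumptions):

```python
import numpy as np

# DFP inverse-Hessian update on a toy quadratic f(x) = 0.5 x^T Q x,
# using exact line searches (possible in closed form for a quadratic).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: Q @ x

x = np.array([4.0, -3.0])
H = np.eye(2)                              # initial inverse-Hessian guess
for _ in range(2):                         # quadratic termination in n = 2 steps
    g = grad(x)
    d = -H @ g                             # quasi-Newton search direction
    alpha = -(g @ d) / (d @ Q @ d)         # exact line search on the quadratic
    s = alpha * d
    y = grad(x + s) - g                    # gradient change (= Q @ s here)
    H += np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
    x = x + s
print(np.linalg.norm(x))  # ~0: the minimizer is at the origin
```

The hybrid method in the abstract mixes this update with CG search directions; the sketch shows only the DFP building block.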
Numerical results showed that the new algorithm performs better than the ordinary DFP <span class="hlt">method</span> and is proven to possess both sufficient descent and global convergence properties.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJTJE..33....1H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJTJE..33....1H"><span>Design and <span class="hlt">Optimization</span> <span class="hlt">Method</span> of a Two-Disk Rotor System</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huang, Jingjing; Zheng, Longxi; Mei, Qing</p> <p>2016-04-01</p> <p>An integrated analytical <span class="hlt">method</span> based on the multidisciplinary <span class="hlt">optimization</span> software Isight and the general finite element software ANSYS is proposed in this paper. Firstly, a two-disk rotor system was established, and the modes, harmonic response and transient response under acceleration conditions were analyzed with ANSYS. The dynamic characteristics of the two-disk rotor system were obtained. On this basis, the two-disk rotor model was integrated into the multidisciplinary design <span class="hlt">optimization</span> software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the <span class="hlt">optimization</span> variables, <span class="hlt">optimization</span> objectives and constraints were confirmed. After that, the multi-objective design <span class="hlt">optimization</span> of the transient process was carried out with three different global <span class="hlt">optimization</span> algorithms, including the Evolutionary <span class="hlt">Optimization</span> Algorithm, the Multi-Island Genetic Algorithm and the Pointer Automatic <span class="hlt">Optimizer</span>. The optimum position of the two-disk rotor system was obtained under the specified constraints. 
Meanwhile, the accuracy and computational cost of the different <span class="hlt">optimization</span> algorithms were compared. The <span class="hlt">optimization</span> results indicated that the rotor vibration reached its minimum value and that design efficiency and quality were improved by the multidisciplinary design <span class="hlt">optimization</span> while meeting the design requirements, providing a reference for improving the design efficiency and reliability of aero-engine rotors.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950015740','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950015740"><span>On the wavelet <span class="hlt">optimized</span> finite difference <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jameson, Leland</p> <p>1994-01-01</p> <p>When one considers the effect in the physical space, Daubechies-based wavelet <span class="hlt">methods</span> are equivalent to finite difference <span class="hlt">methods</span> with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet <span class="hlt">optimized</span> finite difference <span class="hlt">method</span> which is equivalent to a wavelet <span class="hlt">method</span> in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. 
With this <span class="hlt">method</span> one can obtain an arbitrarily good approximation to a conservative difference <span class="hlt">method</span> for solving nonlinear conservation laws.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910047276&hterms=topology&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dtopology','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910047276&hterms=topology&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3Dtopology"><span>New displacement-based <span class="hlt">methods</span> for <span class="hlt">optimal</span> truss topology design</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.</p> <p>1991-01-01</p> <p>Two alternate <span class="hlt">methods</span> for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex <span class="hlt">optimization</span> problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both <span class="hlt">methods</span>, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. 
A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of <span class="hlt">optimal</span> topologies.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018CMMPh..58..215K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018CMMPh..58..215K"><span>Computational Efficiency of the Simplex Embedding <span class="hlt">Method</span> in Convex Nondifferentiable <span class="hlt">Optimization</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kolosnitsyn, A. V.</p> <p>2018-02-01</p> <p>The simplex embedding <span class="hlt">method</span> for solving convex nondifferentiable <span class="hlt">optimization</span> problems is considered. Modifications of this <span class="hlt">method</span>, based on shifting the cutting plane so as to cut off the maximum number of simplex vertices, are described. These modifications speed up the solution of the problem. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable <span class="hlt">optimization</span> problems, is presented.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008EEEV....7...13S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008EEEV....7...13S"><span><span class="hlt">Optimal</span> design of structures for earthquake loads by a hybrid RBF-BPSO <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen</p> <p>2008-03-01</p> <p>The <span class="hlt">optimal</span> seismic design of structures requires that time history analyses (THA) be carried out repeatedly. 
This makes the <span class="hlt">optimal</span> design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural <span class="hlt">optimization</span>, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the <span class="hlt">optimization</span> flow. In the second strategy, a binary particle swarm <span class="hlt">optimization</span> (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO <span class="hlt">optimization</span> <span class="hlt">method</span> is proposed in this paper, which achieves fast <span class="hlt">optimization</span> with high computational performance. Two examples are presented and compared to determine the <span class="hlt">optimal</span> weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO <span class="hlt">optimization</span> <span class="hlt">method</span> for the seismic design of structures.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26543781','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26543781"><span>The q-G <span class="hlt">method</span> : A q-version of the Steepest Descent <span class="hlt">method</span> for global <span class="hlt">optimization</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M</p> <p>2015-01-01</p> <p>In this work, the q-Gradient (q-G) <span class="hlt">method</span>, a q-version of the Steepest Descent <span class="hlt">method</span>, is presented. 
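The first strategy in the RBF-BPSO record above, replacing expensive analyses with a radial-basis-function surrogate, can be sketched with a Gaussian-kernel interpolant on toy data (the kernel width, ridge term and test function are assumptions, not the paper's network):

```python
import numpy as np

# Fit a Gaussian RBF surrogate to samples of an "expensive" response,
# then query the cheap surrogate instead. All data here are toy.
rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(30, 1))          # training inputs
y = np.sin(X[:, 0])                           # expensive response, sampled once

def rbf_design(A, B, eps=1.0):
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2  # pairwise squared distances
    return np.exp(-eps * d2)                   # Gaussian basis matrix

# Solve for weights; a tiny ridge term stabilizes the ill-conditioned system.
w = np.linalg.solve(rbf_design(X, X) + 1e-8 * np.eye(len(X)), y)
x_test = np.array([[0.5]])
pred = rbf_design(x_test, X) @ w
print(pred[0], np.sin(0.5))  # surrogate prediction vs. true response
```

In the paper's setting, an optimizer such as BPSO then evaluates this surrogate thousands of times in place of repeated time history analyses.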
The main idea behind the q-G <span class="hlt">method</span> is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G <span class="hlt">method</span> reduces to the Steepest Descent <span class="hlt">method</span> when the parameter q tends to 1. The algorithm has three free parameters and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G <span class="hlt">method</span> on 34 test functions, and compared its performance with 34 <span class="hlt">optimization</span> algorithms, including derivative-free algorithms and the Steepest Descent <span class="hlt">method</span>. Our results show that the q-G <span class="hlt">method</span> is competitive and has a great potential for solving multimodal <span class="hlt">optimization</span> problems.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25273415','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25273415"><span>Fast <span class="hlt">optimization</span> of binary clusters using a novel dynamic lattice searching <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wu, Xia; Cheng, Wen</p> <p>2014-09-28</p> <p>Global <span class="hlt">optimization</span> of binary clusters has been a difficult task despite much effort and many efficient <span class="hlt">methods</span>. 
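Jackson's q-derivative, the building block of the q-G record above, is short enough to state directly; as q tends to 1 it recovers the classical derivative:

```python
# Jackson's q-derivative from the q-calculus:
#   D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x),  x != 0, q != 1,
# which tends to the classical f'(x) as q -> 1.
def q_derivative(f, x, q):
    return (f(q * x) - f(x)) / ((q - 1) * x)

f = lambda x: x ** 3
print(q_derivative(f, 2.0, 1.5))    # 19.0, the q-derivative at x = 2 for q = 1.5
print(q_derivative(f, 2.0, 1.001))  # approaches f'(2) = 12 as q -> 1
```

The q-G method uses the negative of a vector of such q-derivatives (with randomized q) as its search direction, which is what lets it step "through" shallow local minima.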
To address the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) <span class="hlt">method</span>, i.e., the binary DLS (BDLS) <span class="hlt">method</span>, is developed. However, it was found that the BDLS can only be utilized for the <span class="hlt">optimization</span> of binary clusters of small sizes, because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) <span class="hlt">method</span> is adopted to solve the homotop problem, and an efficient <span class="hlt">method</span> based on the BDLS <span class="hlt">method</span> and ILS, named BDLS-ILS, is presented for global <span class="hlt">optimization</span> of binary clusters. In order to assess the efficiency of the proposed <span class="hlt">method</span>, binary Lennard-Jones clusters with up to 100 atoms are investigated. The results show that the <span class="hlt">method</span> is efficient. Furthermore, the BDLS-ILS <span class="hlt">method</span> is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fit parameters of the Gupta potential.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..206a2059A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..206a2059A"><span><span class="hlt">Optimization</span> of an auto-thermal ammonia synthesis reactor using cyclic coordinate <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>A-N Nguyen, T.; Nguyen, T.-A.; Vu, T.-D.; Nguyen, K.-T.; K-T Dao, T.; P-H Huynh, K.</p> <p>2017-06-01</p> <p>The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics and refrigeration. 
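The cyclic coordinate search with a golden-section line search named in the record title above can be sketched on a toy unconstrained maximization (the objective, bounds and sweep count are illustrative; the abstract additionally handles constraints via penalties):

```python
import math

# Golden-section search for the maximum of a unimodal function on [lo, hi],
# applied along one coordinate at a time (cyclic coordinate descent/ascent).
def golden_max(f, lo, hi, tol=1e-6):
    g = (math.sqrt(5) - 1) / 2            # golden ratio conjugate
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) > f(d):                   # maximum lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                             # maximum lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

obj = lambda x, y: -(x - 1.0) ** 2 - (y + 2.0) ** 2   # toy objective to maximize
x, y = 0.0, 0.0
for _ in range(5):                                    # cyclic coordinate sweeps
    x = golden_max(lambda t: obj(t, y), -5.0, 5.0)
    y = golden_max(lambda t: obj(x, t), -5.0, 5.0)
print(x, y)  # converges to the maximizer (1, -2)
```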
In the literature, many works on the modeling, simulation and <span class="hlt">optimization</span> of auto-thermal ammonia synthesis reactors can be found. However, they focus only on <span class="hlt">optimizing</span> the reactor length while keeping the other parameters constant. In this study, the other parameters are also considered in the <span class="hlt">optimization</span> problem, such as the temperature of the feed gas entering the catalyst zone and the initial nitrogen proportion. The <span class="hlt">optimization</span> problem requires the maximization of an objective function which is a multivariable function, subject to a number of equality constraints involving the solution of coupled differential equations as well as an inequality constraint. The cyclic coordinate search was applied to solve the multivariable <span class="hlt">optimization</span> problem. In each coordinate, the golden section <span class="hlt">method</span> was applied to find the maximum value. The inequality constraints were treated using a penalty <span class="hlt">method</span>. The coupled differential equation system was solved using the 4th-order Runge-Kutta <span class="hlt">method</span>. The results obtained from this study are also compared to results from the literature.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012SuMi...52.1131Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012SuMi...52.1131Y"><span>The structure and photocatalytic activity of TiO2 thin films deposited by dc magnetron sputtering</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, W. J.; Hsu, C. Y.; Liu, Y. W.; Hsu, R. Q.; Lu, T. W.; Hu, C. 
C.</p> <p>2012-12-01</p> <p>This paper seeks to determine the <span class="hlt">optimal</span> settings for the deposition parameters, for TiO2 thin film, prepared on non-alkali glass substrates, by direct current (dc) sputtering, using a ceramic TiO2 target in an argon gas environment. An orthogonal array, the signal-to-noise ratio and analysis of variance are used to analyze the effect of the deposition parameters. Using the <span class="hlt">Taguchi</span> <span class="hlt">method</span> for design of a robust experiment, the interactions between factors are also investigated. The main deposition parameters, such as dc power (W), sputtering pressure (Pa), substrate temperature (°C) and deposition time (min), were <span class="hlt">optimized</span>, with reference to the structure and photocatalytic characteristics of TiO2. The results of this study show that substrate temperature and deposition time have the most significant effect on photocatalytic performance. For the <span class="hlt">optimal</span> combination of deposition parameters, the (1 1 0) and (2 0 0) peaks of the rutile structure and the (2 0 0) peak of the anatase structure were observed, at 2θ ˜ 27.4°, 39.2° and 48°, respectively. The experimental results illustrate that the <span class="hlt">Taguchi</span> <span class="hlt">method</span> allowed a suitable solution to the problem, with the minimum number of trials, compared to a full factorial design. The adhesion of the coatings was also measured and evaluated, via a scratch test. 
Superior wear behavior was observed for the TiO2 film because of the increased strength of the interface of micro-blasted tools.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AIPC.1967b0035M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AIPC.1967b0035M"><span><span class="hlt">Optimized</span> iterative decoding <span class="hlt">method</span> for TPC coded CPM</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei</p> <p>2018-05-01</p> <p>Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) system (TPC-CPM) has been widely used 
in aeronautical telemetry and satellite communication. This paper mainly investigates the improvement and <span class="hlt">optimization</span> on the TPC-CPM system. We first add the interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative system to iteratively decode. However, the improved system has a poor convergence ability. To overcome this issue, we use the Extrinsic Information Transfer (EXIT) analysis to find the <span class="hlt">optimal</span> factors for the system. The experiments show our <span class="hlt">method</span> is efficient to improve the convergence performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5364801','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5364801"><span>Comprehensive <span class="hlt">Optimization</span> of LC-MS Metabolomics <span class="hlt">Methods</span> Using Design of Experiments (COLMeD)</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Rhoades, Seth D.</p> <p>2017-01-01</p> <p>Introduction Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC <span class="hlt">methods</span> lag behind reverse-phase <span class="hlt">methods</span> in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and array of tunable analytical parameters. Objective Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics <span class="hlt">methods</span> on multiple instruments using Design of Experiments (DoE). 
<span class="hlt">Methods</span> We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive <span class="hlt">optimization</span> of LC-MS metabolomics <span class="hlt">methods</span> using design of experiments). Multivariate statistical analysis guided our decision process in the <span class="hlt">method</span> <span class="hlt">optimizations</span>. Results LC-MS/MS tuning for the QqQ <span class="hlt">method</span> on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics <span class="hlt">methods</span>, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our <span class="hlt">optimized</span> qTOF <span class="hlt">method</span>, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD <span class="hlt">optimization</span>, yielding a median 29.8% response increase (p<0.0001) over initial conditions. Conclusions The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific <span class="hlt">optimization</span> as demonstrated through acylcarnitine <span class="hlt">optimization</span> within the QqQ <span class="hlt">method</span>. 
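A single DoE round of the kind COLMeD iterates can be illustrated with a two-level full-factorial screen. The parameter names and response values below are invented for illustration, and real COLMeD rounds use multivariate analysis rather than this bare main-effects ranking:

```python
import numpy as np
from itertools import product

# Two-level full-factorial design (2^3 = 8 runs) for three tunable
# LC-MS parameters (names illustrative), coded as -1 / +1.
design = np.array(list(product([-1, 1], repeat=3)))

# Measured responses for the 8 runs (illustrative total-ion-current values).
response = np.array([52.0, 61.0, 49.0, 66.0, 55.0, 70.0, 50.0, 75.0])

# Main effect of each factor: mean response at +1 minus mean at -1.
effects = {}
for i, name in enumerate(["col_temp", "gradient_time", "gas_flow"]):
    effects[name] = (response[design[:, i] == 1].mean()
                     - response[design[:, i] == -1].mean())

# Rank factors by absolute effect size to decide what to tune next round.
ranking = sorted(effects, key=lambda k: abs(effects[k]), reverse=True)
print(effects, ranking)
```

The ranked effects would inform which factors to carry into the next DoE round, mirroring the iterative tuning the abstract describes.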
PMID:28348510

423. A seismic fault recognition method based on ant colony optimization

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important step in seismic interpretation, and although many methods exist for this task, none recognizes faults with sufficient accuracy. To address this problem, we propose a new fault recognition method based on ant colony optimization that can locate faults precisely and extract them from the seismic section. First, seismic horizons are extracted by a connected-component labeling algorithm; second, the fault locations are decided according to the horizontal endpoints of each horizon; third, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each block are treated as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the seismic section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. The optimal route from nest to food calculated by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data.
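The block-wise nest-to-food routing can be sketched with a minimal ant-colony search over a 2-D cost grid. The pheromone rules and parameters below are generic ACO choices, not the paper's settings, and low cell cost stands in for the amplitude "terrain":

```python
import random

def ant_colony_path(cost, n_ants=30, n_iters=40, rho=0.3, seed=1):
    """Find a low-cost top-to-bottom path through a 2-D cost grid.

    The nest is the top row and the food the bottom row, as in the
    block scheme described above. Returns (best_cost, best_path),
    where best_path holds one column index per row.
    """
    rng = random.Random(seed)
    rows, cols = len(cost), len(cost[0])
    tau = [[1.0] * cols for _ in range(rows)]  # pheromone per cell
    best_cost, best_path = float("inf"), None
    for _ in range(n_iters):
        paths = []
        for _ in range(n_ants):
            col = rng.randrange(cols)
            path, total = [col], cost[0][col]
            for r in range(1, rows):
                # Candidate moves: straight down or diagonally down.
                cand = [c for c in (col - 1, col, col + 1) if 0 <= c < cols]
                # Roulette selection weighted by pheromone / cost.
                w = [tau[r][c] / (1e-9 + cost[r][c]) for c in cand]
                col = rng.choices(cand, weights=w)[0]
                path.append(col)
                total += cost[r][col]
            paths.append((total, path))
            if total < best_cost:
                best_cost, best_path = total, path
        # Evaporate, then deposit pheromone inversely proportional to cost.
        for r in range(rows):
            for c in range(cols):
                tau[r][c] *= (1.0 - rho)
        for total, path in paths:
            for r, c in enumerate(path):
                tau[r][c] += 1.0 / total
    return best_cost, best_path

# Toy "seismic block": the low-cost (low-amplitude) trail is the fault.
grid = [
    [9, 1, 9, 9],
    [9, 1, 9, 9],
    [9, 9, 1, 9],
    [9, 9, 1, 9],
]
bc, bp = ant_colony_path(grid)
print(bc, bp)
```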
The availability and performance of the proposed method were validated by the experimental results.

424. Layout optimization using the homogenization method

    NASA Technical Reports Server (NTRS)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.

425. Optimization of cutting parameters in CNC turning of stainless steel 304 with TiAlN nano coated carbide cutting tool

    NASA Astrophysics Data System (ADS)

    Durga Prasada Rao, V.; Harsha, N.; Raghu Ram, N. S.; Navya Geethika, V.

    2018-02-01

    In this work, turning was performed to optimize the surface finish or roughness (Ra) of stainless steel 304 with uncoated and coated carbide tools under dry conditions. The carbide tools were coated with Titanium Aluminium Nitride (TiAlN) nano coating using the Physical Vapour Deposition (PVD) method. The machining parameters, viz., cutting speed, depth of cut and feed rate, which have a major impact on Ra, are considered during turning. The experiments are designed as per a Taguchi orthogonal array, and the machining process is carried out accordingly. Second-order regression equations are then developed for Ra in terms of the machining parameters on the basis of the experimental results. Regarding the effect of the machining parameters, an upward trend is observed in Ra with respect to feed rate, and as cutting speed increases the Ra value increases slightly due to chatter and vibrations. The adequacy of the response variable (Ra) is tested by conducting additional experiments. The predicted Ra values are found to closely match the corresponding experimental values for the uncoated and coated tools, with the average percentage errors within acceptable limits. The surface roughness equations of the uncoated and coated tools are then set as the objectives of an optimization problem, which is solved by using the Differential Evolution (DE) algorithm.
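A minimal DE/rand/1/bin implementation of the kind used to minimize a fitted roughness equation might look as follows. The Ra model coefficients and parameter bounds are invented for illustration, not the paper's fitted regression:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.7, CR=0.9, iters=200, seed=3):
    """Minimize f over box bounds with the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            trial, jrand = [], rng.randrange(dim)
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(hi, max(lo, v)))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]

# Illustrative second-order Ra model in (cutting speed, feed, depth of cut);
# these coefficients are made up, not the paper's fitted regression.
def ra(x):
    v, f, d = x
    return 1.2 - 0.004 * v + 8.0 * f + 0.3 * d + 0.00001 * v**2 + 20.0 * f**2

xbest, rbest = differential_evolution(ra, [(50, 200), (0.05, 0.3), (0.5, 2.0)])
```

For this surrogate the optimum sits at the low-feed, low-depth corner of the box, which DE locates without any gradient information.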
The tool lives of the uncoated and coated tools are also predicted by using Taylor's tool life equation.

426. Pivot methods for global optimization

    NASA Astrophysics Data System (ADS)

    Stanton, Aaron Fletcher

    A new algorithm is presented for locating the global minimum of a multiple-minima problem. It begins with a series of randomly placed probes in phase space, and then iteratively redistributes the worst probes into better regions of phase space until a chosen convergence criterion is fulfilled. The method converges quickly, does not require derivatives, and is resistant to becoming trapped in local minima. Comparison of this algorithm with others using a standard test suite demonstrates that the number of function calls is decreased, conservatively, by a factor of about three for the same degree of accuracy. Two major variations of the method are presented, differing primarily in how the probes that act as the basis for the new probes are chosen. The first variation, termed the lowest-energy pivot method, ranks all probes by their energy and keeps the best probes; the probes being discarded select from those being kept as the basis for the new cycle. In the second variation, the nearest-neighbor pivot method, all probes are paired with their nearest neighbor, and the member of each pair with the higher energy is relocated in the vicinity of its neighbor.
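The nearest-neighbor pivot variation can be sketched in one dimension as follows. The probe count, contraction schedule, and test function are illustrative choices, not the dissertation's settings:

```python
import random

def nearest_neighbor_pivot(f, lo, hi, n_probes=40, n_iters=80, seed=7):
    """Nearest-neighbor pivot search in one dimension.

    Each probe is paired with its nearest neighbor, and the worse
    (higher-energy) member of each pair is relocated near the better
    one with a shrinking step, mirroring the second variation above.
    """
    rng = random.Random(seed)
    probes = [rng.uniform(lo, hi) for _ in range(n_probes)]
    step = (hi - lo) / 4.0
    for _ in range(n_iters):
        for i in range(n_probes):
            # Nearest neighbor of probe i.
            j = min((k for k in range(n_probes) if k != i),
                    key=lambda k: abs(probes[k] - probes[i]))
            # Relocate the higher-energy member near its neighbor.
            if f(probes[i]) > f(probes[j]):
                probes[i] = probes[j] + rng.uniform(-step, step)
        step *= 0.9  # contract the relocation radius
    best = min(probes, key=f)
    return best, f(best)

# Two-well test function: the global minimum lies near x = -0.76.
f = lambda x: x**4 - x**2 + 0.2 * x
x, fx = nearest_neighbor_pivot(f, -2.0, 2.0)
```

Note that the current best probe is never relocated (it is never the worse member of a pair), so the best energy found is monotonically non-increasing, which is what gives the method its resistance to losing good regions.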
Both methods are tested against a standard test suite of functions to determine their relative efficiency, and the nearest-neighbor pivot method is found to be the more efficient. A series of Lennard-Jones clusters is optimized with the nearest-neighbor method, and a scaling law is found for CPU time versus the number of particles in the system. The two methods are then compared more explicitly, and finally a study of the use of the pivot method for solving the Schroedinger equation is presented. The nearest-neighbor method is found to be able to solve for the ground state of the quantum harmonic oscillator from a purely random initialization of the wavefunction.

427. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design, and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, and some variation was noticed in the designs; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used; such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph, whose center corresponded to the mean-valued design. A heavy design, with weight approaching infinity, could be produced for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.

428. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by the Shifted Hammersley Method (SHM) and calculated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated by changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using a multi-objective genetic algorithm (MOGA).
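The dynamic response-surface loop (fit a second-order surrogate, add design points until an error tolerance is met, then optimize on the surrogate) can be sketched as follows, with a cheap 1-D stand-in for the FEM evaluation and uniform densification in place of SHM point selection:

```python
import numpy as np

# Stand-in for an expensive FEM evaluation (illustrative 1-D objective;
# the real method evaluates transformer designs with 2-D/3-D FEM).
def fem_objective(x):
    return (x - 0.3) ** 2 + 0.01 * np.sin(5 * x)

def quad_fit(xs, ys):
    """Second-order polynomial regression: y ~ c0 + c1*x + c2*x^2."""
    A = np.vander(xs, 3, increasing=True)  # columns: 1, x, x^2
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Dynamically refine the response surface: keep adding design points
# until the surrogate error tolerance is met (or a budget is exhausted).
xs = np.linspace(0.0, 1.0, 5)
tol, max_pts = 0.02, 40
while True:
    ys = fem_objective(xs)
    c = quad_fit(xs, ys)
    dense = np.linspace(0.0, 1.0, 101)
    err = np.max(np.abs((c[0] + c[1]*dense + c[2]*dense**2)
                        - fem_objective(dense)))
    if err < tol or len(xs) >= max_pts:
        break
    xs = np.linspace(0.0, 1.0, len(xs) * 2 - 1)  # add more design points

# Optimize on the cheap fitted surrogate instead of the expensive model.
surrogate = c[0] + c[1]*dense + c[2]*dense**2
x_opt = dense[np.argmin(surrogate)]
```

Tightening `tol` corresponds to raising the "optimal design level" in the abstract: a tighter tolerance demands more FEM-evaluated design points before the surrogate is trusted.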
The effectiveness of the proposed optimal design method is validated on a classic three-phase power TDO problem.

429. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method.
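Tikhonov regularization with an automated parameter sweep can be sketched as follows. For brevity the selection rule shown is a discrepancy-style residual criterion, used here only as a stand-in for the paper's MRM criterion, and the forward model is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned forward model A and noisy measurements b.
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                  # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]           # lives in well-resolved directions
noise_level = 1e-3
b = A @ x_true + noise_level * rng.standard_normal(n)

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Sweep the regularization parameter and pick the one whose residual
# norm is closest to the expected noise norm (discrepancy-style rule).
lams = np.logspace(-10, 0, 50)
resid = [np.linalg.norm(A @ tikhonov(A, b, lam) - b) for lam in lams]
target = noise_level * np.sqrt(n)
lam_opt = lams[int(np.argmin([abs(r - target) for r in resid]))]
x_rec = tikhonov(A, b, lam_opt)
```

Too small a parameter fits the noise and too large a one over-smooths; any automated rule, MRM included, is a principled way of walking this residual trade-off instead of tuning by eye.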
The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

430. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

431. Optimizing Clinical Trial Enrollment Methods Through "Goal Programming"

    PubMed Central

    Davis, J.M.; Sandgren, A.J.; Manley, A.R.; Daleo, M.A.; Smith, S.S.

    2014-01-01

    Introduction: Clinical trials often fail to reach desired goals due to poor recruitment outcomes, including low participant turnout, high recruitment cost, or poor representation of minorities. At present, there is limited literature available to guide recruitment methodology. This study, conducted by researchers at the University of Wisconsin Center for Tobacco Research and Intervention (UW-CTRI), provides an example of how iterative analysis of recruitment data may be used to optimize recruitment outcomes during ongoing recruitment. Study methodology: UW-CTRI's research team provided a description of the methods used to recruit smokers in two randomized trials (n = 196 and n = 175). The trials targeted low socioeconomic status (SES) smokers and involved time-intensive smoking cessation interventions. The primary recruitment goals were to meet the required sample size and provide representative diversity while working with limited funds and limited time. Recruitment data were analyzed repeatedly throughout each study to optimize recruitment outcomes.
Results: Estimates of recruitment outcomes based on prior studies on smoking cessation suggested that researchers would be able to recruit 240 low-SES smokers within 30 months at a cost of $72,000. With the methods described herein, researchers were able to recruit 374 low-SES smokers over 30 months at a cost of $36,260. Discussion: Each human-subjects study presents unique recruitment challenges, with the time and cost of recruitment dependent on the sample population and study methodology. Nonetheless, researchers may be able to improve recruitment outcomes through iterative analysis of recruitment data and optimization of recruitment methods throughout the recruitment period. PMID:25642125

432. A comparison of automated dispensing cabinet optimization methods.

    PubMed

    O'Neil, Daniel P; Miller, Adam; Cronin, Daniel; Hatfield, Chad J

    2016-07-01

    Results of a study comparing two methods of optimizing automated dispensing cabinets (ADCs) are reported. Eight nonprofiled ADCs were optimized over six months. Optimization of each cabinet involved three steps: (1) removal of medications that had not been dispensed for at least 180 days, (2) movement of ADC stock to better suit end-user needs and available space, and (3) adjustment of par levels (desired on-hand inventory levels). The par levels of four ADCs (the Day Supply group) were adjusted according to average daily usage; the par levels of the other four ADCs (the Formula group) were adjusted using a standard inventory formula. The primary outcome was the vend:fill ratio; secondary outcomes included total inventory, inventory cost, quantity of expired medications, and ADC stockout percentage. The total number of medications stocked in the eight machines was reduced from 1,273 in a designated two-month preoptimization period to 1,182 in a designated two-month postoptimization period, yielding a carrying-cost savings of $44,981. The mean vend:fill ratios before and after optimization were 4.43 and 4.46, respectively. The vend:fill ratio for ADCs in the Formula group increased from 4.33 before optimization to 5.2 after optimization; in the Day Supply group, the ratio declined from 4.52 to 3.90. The postoptimization interaction difference between the Formula and Day Supply groups was found to be significant (p = 0.0477). ADC optimization via a standard inventory formula had a positive impact on inventory costs, refills, vend:fill ratios, and stockout percentages. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

433. Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD).

    PubMed

    Rhoades, Seth D; Weljie, Aalim M

    2016-12-01

    Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses; however, HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and the array of tunable analytical parameters. Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments). Multivariate statistical analysis guided our decision process in the method optimizations. LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions, with a 13.3% increase in metabolite coverage.
The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05, respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization, as demonstrated through acylcarnitine optimization within the QqQ method.

434. Optimization of rotor shaft shrink fit method for motor using "Robust design"

    NASA Astrophysics Data System (ADS)

    Toma, Eiji

    2018-01-01

    This research is a collaborative investigation with a general-purpose motor manufacturer. To review the construction method used in the production process, we applied the parameter design method of quality engineering and approached the optimization of the construction method. Conventionally, a press-fitting method has been adopted in the process of fitting the rotor core and shaft, which are the main components of a motor, but quality defects such as core-shaft deflection occurred at the time of press fitting. In this research, as a result of the optimization design of a "shrink fitting method by high-frequency induction heating" devised as a new construction method, the construction method proved feasible, and it was possible to extract the optimum processing conditions.

435. Optimization and evaluation of a method to detect adenoviruses in river water

    EPA Pesticide Factsheets

    This dataset includes the recoveries of spiked adenovirus through various stages of experimental optimization procedures. This dataset is associated with the following publication: McMinn, B., A. Korajkic, and A. Grimm. Optimization and evaluation of a method to detect adenoviruses in river water. JOURNAL OF VIROLOGICAL METHODS.
Elsevier Science Ltd, New York, NY, USA, 231(1): 8-13, (2016).

436. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. The advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented.
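The Capon (MVDR) spatial spectrum for a linear equidistant array, one of the classical adaptive algorithms reviewed, can be sketched as follows. Array size, source angles, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Uniform linear array at half-wavelength spacing, with two plane-wave
# sources at illustrative bearings.
n_sensors, n_snapshots = 12, 400
angles_true = np.array([-20.0, 25.0])

def steering(theta_deg):
    k = np.pi * np.sin(np.radians(theta_deg))  # half-wavelength spacing
    return np.exp(1j * k * np.arange(n_sensors))

# Simulated snapshots: two uncorrelated sources plus white noise.
X = np.zeros((n_sensors, n_snapshots), complex)
for th in angles_true:
    s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
    X += np.outer(steering(th), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# Sample covariance with diagonal loading for numerical stability.
R = X @ X.conj().T / n_snapshots + 1e-3 * np.eye(n_sensors)
Rinv = np.linalg.inv(R)

# Capon (MVDR) spatial spectrum: P(theta) = 1 / (a^H R^-1 a).
grid = np.arange(-90.0, 90.5, 0.5)
P = np.array([1.0 / np.real(steering(th).conj() @ Rinv @ steering(th))
              for th in grid])

# The largest peak should fall near one of the true bearings.
est = grid[np.argmax(P)]
```

The adaptive weights suppress energy from all directions except the steered one, which is what gives Capon its sharper peaks than conventional delay-and-sum beamforming on strong, closely spaced signals.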
An automatic detector is analyzed, based on classical or fast projection algorithms, which estimates the background from median filtering or the method of bilateral spatial contrast.

437. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

438. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (the governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner, such that smooth (low-frequency) changes are made separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

439. A structural topological optimization method for multi-displacement constraints and any initial topology configuration

    NASA Astrophysics Data System (ADS)

    Rong, J. H.; Yi, J. H.

    2010-10-01

    In density-based topological design, one expects the final result to consist of elements that are either black (solid material) or white (void), without any grey areas.
Moreover, one also expects that the <span class="hlt">optimal</span> topology can be obtained by starting from any initial topology configuration. An improved structural topological <span class="hlt">optimization</span> <span class="hlt">method</span> for multi-displacement constraints is proposed in this paper. In the proposed <span class="hlt">method</span>, the whole <span class="hlt">optimization</span> process is divided into two <span class="hlt">optimization</span> adjustment phases and a phase transferring step. Firstly, an <span class="hlt">optimization</span> model is built to deal with the varied displacement limits, design space adjustments, and reasonable relations between the element stiffness matrix and mass and its element topology variable. Secondly, a procedure is proposed to solve the <span class="hlt">optimization</span> problem formulated in the first <span class="hlt">optimization</span> adjustment phase, by starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, without affecting the convergence of the proposed <span class="hlt">method</span>. The final topology obtained by the proposed procedure in the first <span class="hlt">optimization</span> phase can approach the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve the efficiency and make the designed structural topology black/white in both the phase transferring step and the second <span class="hlt">optimization</span> adjustment phase. The optimum topology can finally be obtained by the second-phase <span class="hlt">optimization</span> adjustments.
Two examples are presented to show that the topologies obtained by the proposed <span class="hlt">method</span> exhibit a very good 0/1 design distribution, and that the computational efficiency is enhanced by reducing the number of elements in the structural finite element model during the two <span class="hlt">optimization</span> adjustment phases. The examples also show that this <span class="hlt">method</span> is robust and practicable.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1329308','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1329308"><span><span class="hlt">Method</span> of generating features <span class="hlt">optimal</span> to a dataset and classifier</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.</p> <p></p> <p>A <span class="hlt">method</span> of generating features <span class="hlt">optimal</span> to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected.
Irredundant features that are <span class="hlt">optimal</span> for the classifier and the dataset are selected.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890007408','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890007408"><span>An indirect <span class="hlt">method</span> for numerical <span class="hlt">optimization</span> using the Kreisselmeir-Steinhauser function</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Wrenn, Gregory A.</p> <p>1989-01-01</p> <p>A technique is described for converting a constrained <span class="hlt">optimization</span> problem into an unconstrained problem.
The technique transforms one or more objective functions into reduced objective functions, which are analogous to goal constraints used in the goal programming <span class="hlt">method</span>. These reduced objective functions are appended to the set of constraints and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are the use of unconstrained <span class="hlt">optimization</span> <span class="hlt">methods</span> to find a constrained minimum without the draw down factor typical of penalty function <span class="hlt">methods</span>, and that the technique may be started from either the feasible or the infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to <span class="hlt">optimize</span> for each individual objective function separately.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSV...418...55Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSV...418...55Z"><span>An <span class="hlt">optimized</span> time varying filtering based empirical mode decomposition <span class="hlt">method</span> with grey wolf <span class="hlt">optimizer</span> for machinery fault diagnosis</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei</p> <p>2018-03-01</p> <p>A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) <span class="hlt">method</span> was proposed recently to solve the mode mixing problem of EMD <span class="hlt">method</span>. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference.
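The Kreisselmeir-Steinhauser envelope used in the Wrenn (1989) entry above can be sketched in a few lines; the function name and example values here are mine, illustrative only, not the paper's code:

```python
import numpy as np

def ks_envelope(g, rho=50.0):
    """Kreisselmeir-Steinhauser envelope of a set of function values g.

    KS(g) = max(g) + ln(sum_i exp(rho*(g_i - max(g)))) / rho, a smooth,
    conservative approximation of max_i g_i that tightens as rho grows.
    Shifting by max(g) avoids overflow without changing the value.
    """
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

# Constraints and reduced objectives are stacked into one vector; the
# single smooth envelope is then searched for an unconstrained minimum.
g = [0.2, -1.0, 0.45]
ks = ks_envelope(g)
```

The envelope always bounds the true maximum from above, within ln(n)/rho of it, which is why a larger rho gives a tighter (but stiffer) approximation.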
However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this <span class="hlt">method</span>. In the original TVF-EMD <span class="hlt">method</span>, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an <span class="hlt">optimized</span> TVF-EMD <span class="hlt">method</span> based on the grey wolf <span class="hlt">optimizer</span> (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the <span class="hlt">optimal</span> TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) having the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD <span class="hlt">method</span> for signal decomposition, and meanwhile verify that the bandwidth threshold and B-spline order are critical to the decomposition results.
Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..19.7206C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..19.7206C"><span>Modelling Schumann resonances from ELF measurements using non-linear <span class="hlt">optimization</span> <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo</p> <p>2017-04-01</p> <p>Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station, located in the Sierra Nevada national park. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained <span class="hlt">optimization</span> <span class="hlt">methods</span> applied to the estimation of the Schumann Resonances will be presented. Non-linear fitting, i.e. an <span class="hlt">optimization</span> process, is the procedure used to obtain Schumann Resonances from the natural electromagnetic noise.
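The weighted kurtosis index that drives the GWO search in the Zhang et al. entry above combines the kurtosis of a candidate IMF with its correlation to the raw signal; a hedged sketch (the product form below is assumed, the paper's exact weighting may differ, and the signals are synthetic):

```python
import numpy as np
from scipy.stats import kurtosis, pearsonr

def weighted_kurtosis_index(imf, signal):
    """Kurtosis of a candidate IMF weighted by |correlation| with the
    raw signal; larger values flag impulsive, fault-related components."""
    k = kurtosis(imf, fisher=False)       # plain fourth-moment kurtosis
    r, _ = pearsonr(imf, signal)
    return k * abs(r)

# A spiky (fault-like) component scores far higher than a smooth one.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048)
carrier = np.sin(2 * np.pi * 30 * t)
spiky = carrier * (rng.random(t.size) < 0.02)      # sparse impulses
sig = spiky + 0.1 * rng.standard_normal(t.size)    # "raw" signal
wki_spiky = weighted_kurtosis_index(spiky, sig)
wki_smooth = weighted_kurtosis_index(carrier, sig)
```

The GWO step then simply maximizes this index over the bandwidth-threshold and B-spline-order parameters of the decomposition.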
The <span class="hlt">optimization</span> <span class="hlt">methods</span> that have been analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The functions that the different <span class="hlt">methods</span> fit to the data are three Lorentzian curves plus a straight line. Gaussian curves have also been considered. The conclusions of this study are outlined in the following paragraphs: i) Natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the <span class="hlt">optimization</span> <span class="hlt">method</span>; iii) the Gradient <span class="hlt">method</span> converges more slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton <span class="hlt">methods</span> give similar results (the Newton <span class="hlt">method</span> presents a higher MSE); iv) There are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/5221982','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/5221982"><span><span class="hlt">Optimal</span> management strategies in variable environments: Stochastic <span class="hlt">optimal</span> control <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Williams, B.K.</p> <p>1985-01-01</p> <p>Dynamic <span class="hlt">optimization</span> was used to investigate the <span class="hlt">optimal</span> defoliation of salt desert shrubs in north-western Utah.
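The fit described in the Castro et al. entry above (three Lorentzians plus a straight line, solved by Levenberg-Marquardt) can be reproduced with a standard least-squares routine; the synthetic amplitudes, widths, and noise level below are illustrative, with peak centers placed near the nominal SR modes:

```python
import numpy as np
from scipy.optimize import curve_fit

def sr_model(f, *p):
    """Three Lorentzian peaks plus a straight line.

    p = (A1, f1, w1, A2, f2, w2, A3, f3, w3, slope, intercept)
    """
    y = p[9] * f + p[10]
    for i in range(3):
        A, f0, w = p[3 * i], p[3 * i + 1], p[3 * i + 2]
        y = y + A * w**2 / ((f - f0)**2 + w**2)
    return y

# Synthetic spectrum over the 6-25 Hz band used in the entry
f = np.linspace(6.0, 25.0, 400)
true_p = (1.0, 7.8, 1.0, 0.6, 14.1, 1.5, 0.4, 20.3, 1.8, -0.005, 0.2)
rng = np.random.default_rng(1)
spec = sr_model(f, *true_p) + 0.01 * rng.standard_normal(f.size)

# Levenberg-Marquardt fit from a rough initial guess
p0 = (0.8, 8.0, 1.2, 0.5, 14.0, 1.2, 0.3, 20.0, 1.2, 0.0, 0.1)
popt, _ = curve_fit(sr_model, f, spec, p0=p0, method="lm")
```

The fitted `popt[1]`, `popt[4]`, `popt[7]` recover the three mode centers; swapping `method="lm"` for `"trf"` would correspond to a bounded trust-region variant instead.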
Management was formulated in the context of <span class="hlt">optimal</span> stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic <span class="hlt">optimal</span> control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an <span class="hlt">optimization</span> model to determine <span class="hlt">optimal</span> defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on <span class="hlt">optimal</span> management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the <span class="hlt">optimal</span> strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in <span class="hlt">optimal</span> control strategies, which are associated with differences in physiological and morphological characteristics. <span class="hlt">Optimal</span> policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. 
<span class="hlt">Optimal</span> defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AIPC.1870f0010M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AIPC.1870f0010M"><span><span class="hlt">Optimization</span> <span class="hlt">methods</span> for activities selection problems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia</p> <p>2017-08-01</p> <p>Co-curriculum activities must be joined by every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, the students can learn time management and develop many useful skills. This project focuses on the selection of co-curriculum activities in a secondary school using the <span class="hlt">optimization</span> <span class="hlt">methods</span> which are the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the three chosen criteria, which are soft skills, interesting activities and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. Then, the ZOGP model was analyzed using LINGO software version 15.0. There are two priorities to be considered.
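The Williams entry above casts defoliation management as a finite-state, finite-action, infinite-horizon discounted Markov decision process; the standard value-iteration solver for such a model looks like this (the transition and reward data below are a toy example, not the paper's):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a finite MDP: P[a, s, s'] transition probs, R[a, s] rewards.

    Returns the optimal value function and a greedy policy, the same
    machinery as the finite-state, infinite-horizon model in the entry.
    """
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)        # Q[a, s] = R[a, s] + gamma * E[V]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy 2-state, 2-action problem: action 1 pays a small cost now but
# moves the system into the high-reward state.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],    # action 0: stay put
              [[0.0, 1.0], [0.0, 1.0]]])   # action 1: go to state 1
R = np.array([[0.0, 1.0],                   # action 0 rewards by state
              [-0.1, 1.0]])                 # action 1 rewards by state
V, policy = value_iteration(P, R)
```

Here the optimal policy accepts the immediate cost in state 0 because the discounted future reward of state 1 dominates, the same trade-off the entry describes between current yield and reserve replenishment.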
The first priority, which is to minimize the budget for the activities, is achieved, since the total budget can be reduced by RM233.00. Therefore, the total budget to implement the selected activities is RM11,195.00. The second priority, which is to select the co-curriculum activities, is also achieved. The results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as <span class="hlt">optimization</span> <span class="hlt">methods</span> for the activities selection problem.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120002587','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120002587"><span>Direct <span class="hlt">Method</span> Transcription for a Human-Class Translunar Injection Trajectory <span class="hlt">Optimization</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Witzberger, Kevin E.; Zeiler, Tom</p> <p>2012-01-01</p> <p>This paper presents a new trajectory <span class="hlt">optimization</span> software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory <span class="hlt">optimization</span> problem. The functionality of the developed <span class="hlt">optimization</span> package is implemented as a new "mode" in generalized settings to make it applicable for a general trajectory <span class="hlt">optimization</span> problem. In doing so, a direct <span class="hlt">optimization</span> <span class="hlt">method</span> using collocation is employed for solving the problem. Trajectory <span class="hlt">optimization</span> problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver.
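The AHP weighting step in the Mahad et al. entry above derives criterion weights from a pairwise-comparison matrix via its principal eigenvector; a minimal sketch with an illustrative 3x3 judgment matrix (the numeric judgments are made up, not the survey's):

```python
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector weights of a pairwise comparison matrix A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)            # Perron (dominant) eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()                  # normalize to sum to 1

# Illustrative judgments over (soft skills, interesting activities,
# performance): soft skills 3x interest, 5x performance; interest 2x
# performance.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w = ahp_weights(A)   # soft skills should dominate, as in the entry
```

The resulting weights then feed the ZOGP selection model as coefficients; a full AHP treatment would also check the consistency ratio of `A`.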
A detailed description of the <span class="hlt">optimization</span> software developed is provided as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF trajectory TLI <span class="hlt">optimization</span> and a 3-DOF vehicle TLI simulation using closed-loop guidance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..158a2058K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..158a2058K"><span>Study of motion of <span class="hlt">optimal</span> bodies in the soil of grid <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kotov, V. L.; Linnik, E. Yu</p> <p>2016-11-01</p> <p>The paper presents a <span class="hlt">method</span> of calculating optimum body shapes within an axisymmetric numerical <span class="hlt">method</span> based on the Godunov scheme and Grigoryan's elastoplastic soil medium model. Two problems are solved: determining the generatrix of a body of revolution of given length and base radius that has minimum resistance, and determining the one that has maximum penetration depth. Numerical calculations are carried out by a modified <span class="hlt">method</span> of local variations, which significantly reduces the number of operations for different representations of the generatrix. The use of a quadratic model of local interaction for preliminary assessments significantly simplifies the search for the <span class="hlt">optimal</span> body. A qualitative similarity is noted between the convergence of the numerical calculations that solve the <span class="hlt">optimization</span> problem based on the local interaction model and those within continuum mechanics.
The <span class="hlt">optimal</span> bodies are compared with absolutely <span class="hlt">optimal</span> bodies possessing the minimum penetration resistance, below which it is impossible to go under the given constraints on the geometry. It is shown that a conical striker whose vertex angle varies so as to equal, at each penetration velocity, that of the absolutely <span class="hlt">optimal</span> minimum-resistance body attains a final penetration depth that differs by only 12% from that of the body that is absolutely <span class="hlt">optimal</span> for maximum penetration depth.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016MS%26E..149a2013M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016MS%26E..149a2013M"><span><span class="hlt">Optimization</span> and Analysis of Laser Beam Machining Parameters for Al7075-TiB2 In-situ Composite</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Manjoth, S.; Keshavamurthy, R.; Pradeep Kumar, G. S.</p> <p>2016-09-01</p> <p>The paper focuses on laser beam machining (LBM) of In-situ synthesized Al7075-TiB2 metal matrix composite. <span class="hlt">Optimization</span> and influence of laser machining process parameters on surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy of composites were studied. Al7075-TiB2 metal matrix composite was synthesized by in-situ reaction technique using stir casting process. <span class="hlt">Taguchi</span>'s L9 orthogonal array was used to design experimental trials. Standoff distance (SOD) (0.3 - 0.5mm), Cutting Speed (1000 - 1200 m/hr) and Gas pressure (0.5 - 0.7 bar) were considered as variable input parameters at three different levels, while power and nozzle diameter were maintained constant with air as assisting gas.
<span class="hlt">Optimized</span> process parameters for surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy were calculated by generating the main effects plot for the signal-to-noise ratio (S/N ratio) for surface roughness, VMRR and dimensional error using Minitab software (version 16). The significance of standoff distance (SOD), cutting speed and gas pressure for surface roughness, volumetric material removal rate (VMRR) and dimensional error was assessed using the analysis of variance (ANOVA) <span class="hlt">method</span>. Results indicate that, for surface roughness, cutting speed (56.38%) is the most significant parameter, followed by standoff distance (41.03%) and gas pressure (2.6%). For volumetric material removal (VMRR), gas pressure (42.32%) is the most significant parameter, followed by cutting speed (33.60%) and standoff distance (24.06%). For dimensional error, standoff distance (53.34%) is the most significant parameter, followed by cutting speed (34.12%) and gas pressure (12.53%). Further, verification experiments were carried out to confirm the performance of the <span class="hlt">optimized</span> process parameters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JCoPh.365..376D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JCoPh.365..376D"><span>Topology <span class="hlt">optimization</span> of thermal fluid flows with an adjoint Lattice Boltzmann <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dugast, Florian; Favennec, Yann; Josset, Christophe; Fan, Yilin; Luo, Lingai</p> <p>2018-07-01</p> <p>This paper presents an adjoint Lattice Boltzmann <span class="hlt">Method</span> (LBM) coupled with the Level-Set <span class="hlt">Method</span> (LSM) for topology <span class="hlt">optimization</span> of thermal fluid flows.
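The S/N ratios behind the main-effects plots in the Manjoth et al. entry above follow the standard Taguchi definitions; a sketch of the smaller-the-better form (surface roughness, dimensional error) and the larger-the-better form (VMRR), with made-up replicate values:

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi smaller-the-better: S/N = -10*log10(mean(y_i^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_larger_better(y):
    """Taguchi larger-the-better: S/N = -10*log10(mean(1/y_i^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical surface-roughness replicates (um) from one L9 trial:
ra = [1.8, 2.1, 1.9]
sn_ra = sn_smaller_better(ra)
```

In a full analysis one S/N value is computed per L9 row, and the main-effects plot averages these values per factor level; the level with the highest mean S/N is taken as optimal.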
The adjoint-state formulation implies discrete velocity directions in order to take into account the LBM boundary conditions. These boundary conditions are introduced at the beginning of the adjoint-state <span class="hlt">method</span> as the LBM residuals, so that the adjoint-state boundary conditions can appear directly during the adjoint-state equation formulation. The proposed <span class="hlt">method</span> is tested with 3 numerical examples concerning thermal fluid flows, but with different objectives: minimization of the mean temperature in the domain, maximization of the heat evacuated by the fluid, and maximization of the heat exchange with heated solid parts. This latter example, treated in several articles, is used to validate our <span class="hlt">method</span>. In these <span class="hlt">optimization</span> problems, a limitation of the maximal pressure drop and of the porosity (number of fluid elements) is also applied. The obtained results demonstrate that the <span class="hlt">method</span> is robust and effective for solving topology <span class="hlt">optimization</span> of thermal fluid flows.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20150002724','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20150002724"><span>An <span class="hlt">Optimizing</span> Space Data-Communications Scheduling <span class="hlt">Method</span> and Algorithm with Interference Mitigation, Generalized for a Broad Class of <span class="hlt">Optimization</span> Problems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Rash, James</p> <p>2014-01-01</p> <p>NASA's space data-communications infrastructure-the Space Network and the Ground Network-provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. 
The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure-the relay satellites and the ground stations-can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling <span class="hlt">methods</span> and algorithms disclosed and formally specified herein will produce globally <span class="hlt">optimized</span> schedules with not only <span class="hlt">optimized</span> service delivery by the space data-communications infrastructure but also <span class="hlt">optimized</span> satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary <span class="hlt">methods</span> and algorithms for <span class="hlt">optimizing</span> the execution efficiency of the schedule-generation algorithms themselves. The scheduling <span class="hlt">methods</span> and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the <span class="hlt">methods</span> and algorithms, are generalized and specified formally.
The generalized <span class="hlt">methods</span> and algorithms are applicable to a very broad class of combinatorial-<span class="hlt">optimization</span></p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950004411','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950004411"><span>Multigrid one shot <span class="hlt">methods</span> for <span class="hlt">optimal</span> control problems: Infinite dimensional control</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Arian, Eyal; Taasan, Shlomo</p> <p>1994-01-01</p> <p>The multigrid one shot <span class="hlt">method</span> for <span class="hlt">optimal</span> control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It <span class="hlt">optimizes</span> for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the <span class="hlt">method</span> is demonstrated on a series of test problems.
The new <span class="hlt">method</span> enables the solution of <span class="hlt">optimal</span> control problems at the cost of solving the corresponding analysis problem only a few times.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140016748','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140016748"><span>OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and <span class="hlt">Optimization</span> <span class="hlt">Methods</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Heath, Christopher M.; Gray, Justin S.</p> <p>2012-01-01</p> <p>The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and <span class="hlt">methods</span> for multidisciplinary design, analysis and <span class="hlt">optimization</span>. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced <span class="hlt">optimization</span> <span class="hlt">methods</span> which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global <span class="hlt">optimization</span> <span class="hlt">methods</span> were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise.
This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008IJTPE.128..388H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008IJTPE.128..388H"><span>Online <span class="hlt">Optimization</span> <span class="hlt">Method</span> for Operation of Generators in a Micro Grid</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi</p> <p></p> <p>Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems and fuel cells have been performed against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and a storage battery, is expected to be one of the new operation systems for distributed generators. However, since precipitous load fluctuations occur in a micro grid because of its smaller capacity compared with a conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. Namely, it is necessary to improve the precision of operation in a micro grid by observing load fluctuations and correcting the start-stop schedule and output of generators online. Yet it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down and output of each generator in a micro grid is a mixed integer programming problem. 
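The unit commitment problem has this mixed-integer character because each generator is either on or off in each period. A minimal brute-force sketch (all generator capacities, costs and demands below are invented for illustration; cheapest-first dispatch, no start-up or minimum up/down-time constraints) shows why full enumeration is tractable at micro-grid scale:

```python
from itertools import product

# Hypothetical 3-generator micro grid (all numbers invented for illustration).
# Each generator: (capacity_kW, fixed_cost_per_h, fuel_cost_per_kWh)
GENS = [(60.0, 4.0, 0.10), (40.0, 2.5, 0.14), (30.0, 1.5, 0.20)]
DEMAND = [70.0, 95.0, 55.0]  # kW demand in each hour

def dispatch_cost(on, load):
    """Cheapest-first dispatch of the committed units for one hour.
    Returns the cost, or None if committed capacity cannot cover the load."""
    if sum(GENS[i][0] for i in on) < load:
        return None
    cost = sum(GENS[i][1] for i in on)               # fixed running cost
    for i in sorted(on, key=lambda i: GENS[i][2]):   # merit order by fuel cost
        out = min(GENS[i][0], load)
        cost += out * GENS[i][2]
        load -= out
    return cost

def best_schedule():
    """Enumerate every hourly on/off pattern: 2^(units * hours) candidates."""
    best, best_cost = None, float("inf")
    hours = len(DEMAND)
    for flat in product([0, 1], repeat=len(GENS) * hours):
        total, feasible = 0.0, True
        for h in range(hours):
            on = [i for i in range(len(GENS)) if flat[h * len(GENS) + i]]
            c = dispatch_cost(on, DEMAND[h])
            if c is None:
                feasible = False
                break
            total += c
        if feasible and total < best_cost:
            best, best_cost = flat, total
    return best, best_cost

schedule, cost = best_schedule()
```

With 3 units over 3 hours there are only 2^9 = 512 candidate schedules; real constraints (minimum up/down times, ramp limits) prune or complicate this set, which is where a metaheuristic such as PSO takes over for the continuous dispatch part.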
In this paper, the authors propose an online <span class="hlt">optimization</span> <span class="hlt">method</span> for the <span class="hlt">optimal</span> operation schedule of generators in a micro grid. The proposed <span class="hlt">method</span> is based on an enumeration <span class="hlt">method</span> and particle swarm <span class="hlt">optimization</span> (PSO). In the proposed <span class="hlt">method</span>, after enumerating all unit commitment patterns of each generator that satisfy the minimum up-time and minimum down-time constraints, the <span class="hlt">optimal</span> schedule and output of the generators are determined under the other operational constraints by using PSO. Numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27387139','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27387139"><span>Applying Mathematical <span class="hlt">Optimization</span> <span class="hlt">Methods</span> to an ACT-R Instance-Based Learning Model.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V</p> <p>2016-01-01</p> <p>Computational models of cognition provide an interface to connect advanced mathematical tools and <span class="hlt">methods</span> to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. 
For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical <span class="hlt">optimization</span> techniques can be applied to efficiently identify <span class="hlt">optimal</span> parameter values with respect to different <span class="hlt">optimization</span> goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential <span class="hlt">optimal</span> performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based <span class="hlt">optimization</span> <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JSV...420...73D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JSV...420...73D"><span>Topology <span class="hlt">optimization</span> in acoustics and elasto-acoustics via a level-set <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.</p> <p>2018-04-01</p> <p><span class="hlt">Optimizing</span> the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric <span class="hlt">methods</span> for topology <span class="hlt">optimization</span> instead of standard density approaches. 
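The geometric description referred to above is typically a level-set field whose zero contour is the structural boundary. A minimal sketch (NumPy; the grid size, circle radius and boundary speed are illustrative choices, not values from any of the papers listed here) of moving such a boundary by advecting the level-set field:

```python
import numpy as np

# Level-set sketch: the design boundary is the zero contour of phi, here the
# signed distance to a circle of radius 0.6 (all parameters are illustrative).
n = 101
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
phi = np.sqrt(X**2 + Y**2) - 0.6      # phi < 0 inside the shape

def advect(phi, speed, dt, steps, h):
    """Evolve phi_t + speed * |grad phi| = 0 with explicit Euler and central
    differences (adequate for this smooth signed-distance field). A positive
    speed moves the zero contour outward under the inside-negative convention."""
    for _ in range(steps):
        gy, gx = np.gradient(phi, h)
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi

h = xs[1] - xs[0]
phi_new = advect(phi, speed=0.5, dt=0.01, steps=20, h=h)
# Since |grad phi| = 1 for a signed distance field, the boundary should now
# sit near r = 0.6 + 0.5 * 0.01 * 20 = 0.7.
```

In an actual shape-optimization loop the constant speed would be replaced by a shape-derivative field computed from the governing equations, which is exactly where the surface-dependent terms discussed in the abstract enter.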
The goal of the present work is to investigate different possibilities for handling topology <span class="hlt">optimization</span> problems in acoustics and elasto-acoustics via a level-set <span class="hlt">method</span>. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating boundary-condition <span class="hlt">optimization</span> problems. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the <span class="hlt">optimal</span> designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..199a2053C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..199a2053C"><span><span class="hlt">Optimal</span> Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Xiaomin; Wang, Gang</p> <p>2017-05-01</p> <p>The seamless switching process of micro grid operation mode directly affects the safety and stability of its operation. According to the switching process from island mode to grid-connected mode of micro grid, we establish a dynamic <span class="hlt">optimization</span> model based on two grid-connected inverters. We use the Radau allocation <span class="hlt">method</span> to discretize the model, and use the Newton iteration <span class="hlt">method</span> to obtain the <span class="hlt">optimal</span> solution. 
Finally, we implement the <span class="hlt">optimization</span> model in MATLAB and get the <span class="hlt">optimal</span> control trajectory of the inverters.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4881196','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4881196"><span>Knowledge-Based <span class="hlt">Methods</span> To Train and <span class="hlt">Optimize</span> Virtual Screening Ensembles</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2016-01-01</p> <p>Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a <span class="hlt">method</span> to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based <span class="hlt">methods</span> that construct structural ensembles for virtual screening are presented. Each <span class="hlt">method</span> selects ensembles by <span class="hlt">optimizing</span> an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the <span class="hlt">methods</span> differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive <span class="hlt">method</span> is guaranteed to find the <span class="hlt">optimal</span> ensemble but scales as O(2^N). 
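The exhaustive strategy is easy to state: with N receptor conformations there are 2^N − 1 nonempty ensembles, each scored by the AUC obtained when every ligand is represented by its best docking score over the ensemble. A toy sketch (all scores and activity labels invented; greedy forward selection shown only as a cheap stand-in for the approximate methods, not the paper's exact recursion) contrasts the two scalings:

```python
from itertools import combinations

# Toy virtual-screening data (all scores and labels invented): docking score
# of each ligand in each of N = 3 receptor conformations; lower is better.
scores = {
    "L1": [-9.1, -6.0, -5.5], "L2": [-5.0, -8.8, -5.2],
    "L3": [-5.1, -5.3, -8.5], "L4": [-5.0, -5.1, -5.2],
    "L5": [-4.8, -5.0, -4.9], "L6": [-5.2, -4.7, -5.1],
}
active = {"L1", "L2", "L3"}
N = 3

def auc(ensemble):
    """ROC AUC when each ligand keeps its best score over the ensemble."""
    best = {lig: min(s[i] for i in ensemble) for lig, s in scores.items()}
    acts = [v for lig, v in best.items() if lig in active]
    inas = [v for lig, v in best.items() if lig not in active]
    wins = sum((a < b) + 0.5 * (a == b) for a in acts for b in inas)
    return wins / (len(acts) * len(inas))

def exhaustive():
    """Try all 2^N - 1 nonempty ensembles -- guaranteed optimum, O(2^N)."""
    best_val, best_set = 0.0, None
    for k in range(1, N + 1):
        for c in combinations(range(N), k):
            v = auc(set(c))
            if v > best_val:
                best_val, best_set = v, set(c)
    return best_val, best_set

def greedy():
    """Grow the ensemble one conformation at a time -- O(N^2) AUC evaluations."""
    ens, cur = set(), 0.0
    while len(ens) < N:
        v, i = max((auc(ens | {i}), i) for i in range(N) if i not in ens)
        if v <= cur:
            break
        ens.add(i)
        cur = v
    return cur, ens

best_auc, best_ens = exhaustive()
greedy_auc, greedy_ens = greedy()
```

On this tiny example both searches reach the same AUC; in general the greedy pass can stop at a local optimum, which is the price paid for polynomial scaling.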
A recursive approximation to the <span class="hlt">optimal</span> solution scales as O(N^2), and a more severe approximation leads to a faster <span class="hlt">method</span> that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three <span class="hlt">methods</span> perform similarly to one another on both the training and test sets. PMID:27097522</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19890015502','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19890015502"><span>A weak Hamiltonian finite element <span class="hlt">method</span> for <span class="hlt">optimal</span> control problems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hodges, Dewey H.; Bless, Robert R.</p> <p>1989-01-01</p> <p>A temporal finite element <span class="hlt">method</span> based on a mixed form of the Hamiltonian weak principle is developed for dynamics and <span class="hlt">optimal</span> control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. 
Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical <span class="hlt">optimal</span> control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and <span class="hlt">optimal</span> control are illustrated. The example dynamics problem involves a time-marching problem. As <span class="hlt">optimal</span> control examples, elementary trajectory <span class="hlt">optimization</span> problems are treated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910043965&hterms=right+Bless+you&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dright%2BBless%2Byou','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910043965&hterms=right+Bless+you&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3Dright%2BBless%2Byou"><span>A weak Hamiltonian finite element <span class="hlt">method</span> for <span class="hlt">optimal</span> control problems</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hodges, Dewey H.; Bless, Robert R.</p> <p>1990-01-01</p> <p>A temporal finite element <span class="hlt">method</span> based on a mixed form of the Hamiltonian weak principle is developed for dynamics and <span class="hlt">optimal</span> control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. 
Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical <span class="hlt">optimal</span> control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and <span class="hlt">optimal</span> control are illustrated. The example dynamics problem involves a time-marching problem. As <span class="hlt">optimal</span> control examples, elementary trajectory <span class="hlt">optimization</span> problems are treated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017CEAS....9..243M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017CEAS....9..243M"><span>A new <span class="hlt">method</span> for <span class="hlt">optimization</span> of low-thrust gravity-assist sequences</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maiwald, V.</p> <p>2017-09-01</p> <p>Recently missions like Hayabusa and Dawn have shown the relevance and benefits of low-thrust spacecraft concerning the exploration of our solar system. In general, the efficiency of low-thrust propulsion is one means of improving mission payload mass. At the same time, gravity-assist maneuvers can serve as mission enablers, as they have the capability to provide "free energy." A combination of both, gravity-assist and low-thrust propulsion, has the potential to generally improve mission performance, i.e. planning and <span class="hlt">optimization</span> of gravity-assist sequences for low-thrust missions is a desirable asset. Currently no established <span class="hlt">methods</span> exist to include the gravity-assist partners as <span class="hlt">optimization</span> variable for low-thrust missions. 
The present paper explains how gravity-assists are planned and <span class="hlt">optimized</span>, including the gravity-assist partners, for high-thrust missions and discusses the possibility to transfer the established <span class="hlt">method</span>, based on the Tisserand Criterion, to low-thrust missions. It is shown how the Tisserand Criterion needs to be adapted using a correction term for the low-thrust situation. It is explained why this necessary correction term excludes an a priori evaluation of sequences and therefore their planning and an alternate approach is proposed. Preliminary results of this <span class="hlt">method</span>, by application of a Differential Evolution <span class="hlt">optimization</span> algorithm, are presented and discussed, showing that the <span class="hlt">method</span> is valid but can be improved. Two constraints on the search space are briefly presented for that aim.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..245c2016P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..245c2016P"><span>Assessment of Masonry Buildings Subjected to Landslide-Induced Settlements: From Load Path <span class="hlt">Method</span> to Evolutionary <span class="hlt">Optimization</span> <span class="hlt">Method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Palmisano, Fabrizio; Elia, Angelo</p> <p>2017-10-01</p> <p>One of the main difficulties, when dealing with landslide structural vulnerability, is the diagnosis of the causes of crack patterns. This is also due to the excessive complexity of models based on classical structural mechanics that makes them inappropriate especially when there is the necessity to perform a rapid vulnerability assessment at the territorial scale. This is why, a new approach, based on a ‘simple model’ (i.e. the Load Path <span class="hlt">Method</span>, LPM), has been proposed by Palmisano and Elia for the interpretation of the behaviour of masonry buildings subjected to landslide-induced settlements. However, the LPM is very useful for rapidly finding the 'most plausible solution' instead of the exact solution. To find the solution, <span class="hlt">optimization</span> algorithms are necessary. 
In this scenario, this article aims to show how the Bidirectional Evolutionary Structural <span class="hlt">Optimization</span> <span class="hlt">method</span> by Huang and Xie can be very useful to <span class="hlt">optimize</span> the strut-and-tie models obtained by using the Load Path <span class="hlt">Method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/1426370-three-stage-enhanced-reactive-power-voltage-optimization-method-high-penetration-solar','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1426370-three-stage-enhanced-reactive-power-voltage-optimization-method-high-penetration-solar"><span>A Three-Stage Enhanced Reactive Power and Voltage <span class="hlt">Optimization</span> <span class="hlt">Method</span> for High Penetration of Solar</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ke, Xinda; Huang, Renke; Vallem, Mallikarjuna R.</p> <p></p> <p>This paper presents a three-stage enhanced volt/var <span class="hlt">optimization</span> <span class="hlt">method</span> to stabilize voltage fluctuations in transmission networks by <span class="hlt">optimizing</span> the usage of reactive power control devices. In contrast with existing volt/var <span class="hlt">optimization</span> algorithms, the proposed <span class="hlt">method</span> <span class="hlt">optimizes</span> the voltage profiles of the system, while keeping the voltage and real power output of the generators as close to the original scheduling values as possible. This allows the <span class="hlt">method</span> to accommodate realistic power system operation and market scenarios, in which the original generation dispatch schedule will not be affected. 
The proposed <span class="hlt">method</span> was tested and validated on a modified IEEE 118-bus system with photovoltaic data.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22290406-electrochemical-synthesis-characterization-zinc-oxalate-nanoparticles','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22290406-electrochemical-synthesis-characterization-zinc-oxalate-nanoparticles"><span>Electrochemical synthesis and characterization of zinc oxalate nanoparticles</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Shamsipur, Mojtaba, E-mail: mshamsipur@yahoo.com; Roushani, Mahmoud; Department of Chemistry, Ilam University, Ilam</p> <p>2013-03-15</p> <p>Highlights: ► Synthesis of zinc oxalate nanoparticles via electrolysis of a zinc plate anode in sodium oxalate solutions. ► Design of a <span class="hlt">Taguchi</span> orthogonal array to identify the <span class="hlt">optimal</span> experimental conditions. ► Controlling the size and shape of particles via applied voltage and oxalate concentration. ► Characterization of zinc oxalate nanoparticles by SEM, UV–vis, FT-IR and TG–DTA. - Abstract: A rapid, clean and simple electrodeposition <span class="hlt">method</span> was designed for the synthesis of zinc oxalate nanoparticles. Zinc oxalate nanoparticles in different size and shapes were electrodeposited by electrolysis of a zinc plate anode in sodium oxalate aqueous solutions. It was found that the size and shape of the product could be tuned by electrolysis voltage, oxalate ion concentration, and stirring rate of electrolyte solution. A <span class="hlt">Taguchi</span> orthogonal array design was designed to identify the <span class="hlt">optimal</span> experimental conditions. The morphological characterization of the product was carried out by scanning electron microscopy. 
UV–vis and FT-IR spectroscopies were also used to characterize the electrodeposited nanoparticles. The TG–DTA studies of the nanoparticles indicated that the main thermal degradation occurs in two steps over a temperature range of 350–430 °C. In contrast to the existing <span class="hlt">methods</span>, the present study describes a process which can be easily scaled up for the production of nano-sized zinc oxalate powder.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhDT........35V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhDT........35V"><span>Topology <span class="hlt">Optimization</span> using the Level Set and eXtended Finite Element <span class="hlt">Methods</span>: Theory and Applications</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Villanueva Perez, Carlos Hernan</p> <p></p> <p>Computational design <span class="hlt">optimization</span> provides designers with automated techniques to develop novel and non-intuitive <span class="hlt">optimal</span> designs. Topology <span class="hlt">optimization</span> is a design <span class="hlt">optimization</span> technique that allows for the evolution of a broad variety of geometries in the <span class="hlt">optimization</span> process. Traditional density-based topology <span class="hlt">optimization</span> <span class="hlt">methods</span> often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the <span class="hlt">optimized</span> design in manufacturing and the accurate modeling of the physical response of boundary conditions. 
The goal of this thesis is to introduce a unified topology <span class="hlt">optimization</span> framework that uses the Level Set <span class="hlt">Method</span> (LSM) to describe the design geometry and the eXtended Finite Element <span class="hlt">Method</span> (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based <span class="hlt">optimization</span> approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art <span class="hlt">methods</span> for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the <span class="hlt">method</span>. A comparison of the framework against density-based topology <span class="hlt">optimization</span> approaches is studied with regards to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology <span class="hlt">optimization</span> problems. 
The design <span class="hlt">optimization</span> problems converge to intuitive designs and agree well with the results from previous 2D or density-based studies.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MRE.....5c5005B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MRE.....5c5005B"><span>Wear rate <span class="hlt">optimization</span> of Al/SiCnp/e-glass fibre hybrid metal matrix composites using <span class="hlt">Taguchi</span> <span class="hlt">method</span> and genetic algorithm and development of wear model using artificial neural networks</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bongale, Arunkumar M.; Kumar, Satish; Sachit, T. S.; Jadhav, Priya</p> <p>2018-03-01</p> <p>Studies on wear properties of Aluminium based hybrid nano composite materials, processed through powder metallurgy technique, are reported in the present study. Silicon Carbide nano particles and E-glass fibre are reinforced in pure aluminium matrix to fabricate hybrid nano composite material samples. Pin-on-Disc wear testing equipment is used to evaluate dry sliding wear properties of the composite samples. The tests were conducted following Taguchi’s Design of Experiments <span class="hlt">method</span>. Signal-to-Noise ratio analysis and Analysis of Variance are carried out on the test data to find out the influence of test parameters on the wear rate. Scanning Electron Microscopic analysis and Energy Dispersive x-ray analysis are conducted on the worn surfaces to find out the wear mechanisms responsible for wear of the composites. Multiple linear regression analysis and Genetic Algorithm techniques are employed for <span class="hlt">optimization</span> of wear test parameters to yield minimum wear of the composite samples. 
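The signal-to-noise analysis used in Taguchi wear studies like the one above reduces, for a smaller-the-better response, to S/N = −10·log10(mean of y²) over the replicates of each run. A quick sketch (the wear-rate replicates and factor labels are invented, standing in for rows of a hypothetical orthogonal array):

```python
import math

# Taguchi smaller-the-better signal-to-noise ratio:
#   S/N = -10 * log10( (1/n) * sum(y_i^2) )
# All wear-rate replicates below are invented for illustration (mm^3/m).
def sn_smaller_is_better(ys):
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# Three rows of a hypothetical orthogonal-array experiment: three replicate
# wear rates per factor setting; the highest S/N marks the most robust setting.
runs = {
    "A1B1C1": [0.0042, 0.0040, 0.0045],
    "A2B2C2": [0.0031, 0.0029, 0.0030],
    "A3B3C3": [0.0058, 0.0061, 0.0057],
}
sn = {setting: sn_smaller_is_better(ys) for setting, ys in runs.items()}
best = max(sn, key=sn.get)   # highest S/N ratio = most robust factor setting
```

Averaging these S/N values per factor level, rather than per run, is what produces the response tables and main-effect rankings reported in such studies.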
Finally, a wear model is built by the application of Artificial Neural Networks to predict the wear rate of the composite material, under different testing conditions. The predicted values of wear rate are found to be very close to the experimental values with a deviation in the range of 0.15% to 8.09%.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016IJTJE..33..275R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016IJTJE..33..275R"><span><span class="hlt">Taguchi</span> Based Regression Analysis of End-Wall Film Cooling in a Gas Turbine Cascade with Single Row of Holes</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ravi, D.; Parammasivam, K. M.</p> <p>2016-09-01</p> <p>Numerical investigations were conducted on a turbine cascade, with end-wall cooling by a single row of cylindrical holes, inclined at 30°. The mainstream fluid was hot air and the coolant was CO2 gas. Based on the Reynolds number, the flow was turbulent at the inlet. The film hole row position, its pitch and the blowing ratio were each varied over five different values. A <span class="hlt">Taguchi</span> approach was used in designing an L25 orthogonal array (OA) for these parameters. The end-wall averaged film cooling effectiveness (η̄) was chosen as the quality characteristic. CFD analyses were carried out using Ansys Fluent on computational domains designed with inputs from OA. Experiments were conducted for one chosen OA configuration and the computational results were found to correlate well with experimental measurements. 
The responses from the CFD analyses were fed to the statistical tool to develop a correlation for η̄ using regression analysis.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JIEIC..97..185G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JIEIC..97..185G"><span>Process Parameters <span class="hlt">Optimization</span> in Single Point Incremental Forming</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh</p> <p>2016-04-01</p> <p>This work aims to <span class="hlt">optimize</span> the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on <span class="hlt">Taguchi</span>'s L18 orthogonal array selected on the basis of DOF. The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius, three levels of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The influential process parameters for the formability and surface roughness have been identified with the help of statistical tool (response table, main effect plot and ANOVA). The parameter that has the utmost influence on formability and surface roughness is lubrication. In the case of formability, lubrication followed by the tool rotational speed, feed rate, sheet thickness, step size and tool radius have the influence in descending order. Whereas in surface roughness, lubrication followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed have the influence in descending order. 
The predicted <span class="hlt">optimal</span> values for the wall angle and surface roughness are found to be 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice and the values of wall angle and surface roughness were found to be 85.76° and 1.15 µm respectively.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1082125','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1082125"><span>Biological <span class="hlt">optimization</span> systems for enhancing photosynthetic efficiency and <span class="hlt">methods</span> of use</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Hunt, Ryan W.; Chinnasamy, Senthil; Das, Keshav C.; de Mattos, Erico Rolim</p> <p>2012-11-06</p> <p>Biological <span class="hlt">optimization</span> systems for enhancing photosynthetic efficiency and <span class="hlt">methods</span> of use. Specifically, <span class="hlt">methods</span> for enhancing photosynthetic efficiency including applying pulsed light to a photosynthetic organism, using a chlorophyll fluorescence feedback control system to determine one or more photosynthetic efficiency parameters, and adjusting one or more of the photosynthetic efficiency parameters to drive the photosynthesis by the delivery of an amount of light to <span class="hlt">optimize</span> light absorption of the photosynthetic organism while providing enough dark time between light pulses to prevent oversaturation of the chlorophyll reaction centers are disclosed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1026668','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/1026668"><span>Hybrid robust predictive <span class="hlt">optimization</span> <span class="hlt">method</span> of power system dispatch</span></a></p> <p><a target="_blank" href="http://www.osti.gov/doepatents">DOEpatents</a></p> 
<p>Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY</p> <p>2011-08-02</p> <p>A <span class="hlt">method</span> of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The <span class="hlt">method</span> employs a predictive algorithm to dynamically schedule different assets in order to achieve global <span class="hlt">optimization</span> and maintain the system normal operation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15011257','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15011257"><span>Ligand-protein docking using a quantum stochastic tunneling <span class="hlt">optimization</span> <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mancera, Ricardo L; Källblad, Per; Todorov, Nikolay P</p> <p>2004-04-30</p> <p>A novel hybrid <span class="hlt">optimization</span> <span class="hlt">method</span> called quantum stochastic tunneling has been recently introduced. Here, we report its implementation within a new docking program called EasyDock and a validation with the CCDC/Astex data set of ligand-protein complexes using the PLP score to represent the ligand-protein potential energy surface and ScreenScore to score the ligand-protein binding energies. When taking the top energy-ranked ligand binding mode pose, we were able to predict the correct crystallographic ligand binding mode in up to 75% of the cases. 
By using this novel <span class="hlt">optimization</span> <span class="hlt">method</span>, run times for typical docking simulations are significantly shortened. Copyright 2004 Wiley Periodicals, Inc. J Comput Chem 25: 858-864, 2004</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..136a2019N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..136a2019N"><span>Design <span class="hlt">optimization</span> of hydraulic turbine draft tube based on CFD and DOE <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin</p> <p>2018-03-01</p> <p>In this paper, in order to improve the performance of the hydraulic turbine draft tube during its design process, the draft tube is <span class="hlt">optimized</span> on a multi-disciplinary collaborative design <span class="hlt">optimization</span> platform combining computational fluid dynamics (CFD) and design of experiments (DOE). The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective function is to maximize the pressure recovery factor (Cp). Sample matrices required for the shape <span class="hlt">optimization</span> of the draft tube are generated by the <span class="hlt">optimal</span> Latin hypercube (OLH) <span class="hlt">method</span> of the DOE technique, and their performances are evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are accomplished. Then, the design <span class="hlt">optimization</span> of the geometrical design variables is determined using the response surface <span class="hlt">method</span>.
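The sample-then-surrogate workflow described here (Latin hypercube sampling, response-surface fitting, then optimization on the fitted surface instead of the expensive solver) can be sketched with numpy. Everything below is illustrative: `cp_simulated` is a hypothetical stand-in for the CFD evaluation of Cp, and the basic Latin hypercube stands in for an optimal LH design:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    """Basic Latin hypercube in [0, 1]^d: one sample per stratum per dim."""
    strata = np.tile(np.arange(n), (d, 1))
    return (rng.permuted(strata, axis=1).T + rng.random((n, d))) / n

def cp_simulated(x):
    """Hypothetical stand-in for a CFD evaluation of Cp."""
    return 0.8 - (x[:, 0] - 0.6) ** 2 - 0.5 * (x[:, 1] - 0.4) ** 2

X = latin_hypercube(20, 2)   # two normalized geometric design variables
y = cp_simulated(X)

# Quadratic response surface: y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Maximize the fitted surface on a dense grid instead of re-running CFD.
g = np.linspace(0.0, 1.0, 201)
G1, G2 = np.meshgrid(g, g)
P = np.column_stack([np.ones(G1.size), G1.ravel(), G2.ravel(),
                     G1.ravel() ** 2, G2.ravel() ** 2, (G1 * G2).ravel()])
best = P[np.argmax(P @ beta)]
print("design variables at maximum predicted Cp:", best[1], best[2])
```

With only 20 solver calls, the surrogate is cheap to probe exhaustively, which is the point of pairing DOE with a response surface.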
The <span class="hlt">optimization</span> result of the draft tube shows a marked performance improvement over the original.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MS%26E..314a2009S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MS%26E..314a2009S"><span><span class="hlt">Optimization</span> of tribological behaviour on Al- coconut shell ash composite at elevated temperature</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Siva Sankara Raju, R.; Panigrahi, M. K.; Ganguly, R. I.; Srinivasa Rao, G.</p> <p>2018-02-01</p> <p>In this study, the tribological behaviour of the composite is determined at elevated temperatures, i.e. 50-150 °C. Aluminium matrix composites (AMCs) are prepared by the compo-casting route with coconut shell ash (CSA) reinforcement volumes of 5, 10 and 15%. The mechanical properties of the composite improve with increasing CSA volume. This study focuses on <span class="hlt">optimization</span> of the wear behaviour of the composite at elevated temperatures. The influencing parameters considered are temperature, sliding velocity and sliding distance; the outcome responses are wear rate (mm3/m) and coefficient of friction. The experiments are designed based on a <span class="hlt">Taguchi</span> [L9] array, and all are conducted at a constant load of 10 N. Analysis of variance (ANOVA) revealed that temperature is the most influential factor on wear rate, followed by sliding velocity and sliding distance, whereas sliding velocity is the most influential factor on the coefficient of friction (COF), followed by temperature and sliding distance.
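The ANOVA factor ranking used in such L9 studies can be sketched as percent contributions to the total sum of squares. The L9 array below is the standard one; the wear-rate responses are made up for illustration and are not the paper's data:

```python
def anova_contributions(design, responses):
    """Percent contribution of each factor: for every factor,
    SS_factor = sum over levels of n_level * (level mean - grand mean)^2,
    reported as a percentage of the total sum of squares."""
    grand = sum(responses) / len(responses)
    ss_total = sum((y - grand) ** 2 for y in responses)
    contributions = []
    for f in range(len(design[0])):
        by_level = {}
        for row, y in zip(design, responses):
            by_level.setdefault(row[f], []).append(y)
        ss = sum(len(v) * (sum(v) / len(v) - grand) ** 2
                 for v in by_level.values())
        contributions.append(100.0 * ss / ss_total)
    return contributions

# Columns: temperature, sliding velocity, sliding distance (levels 1-3).
design = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
          (2, 1, 2), (2, 2, 3), (2, 3, 1),
          (3, 1, 3), (3, 2, 1), (3, 3, 2)]
responses = [2.1, 2.3, 2.6, 3.0, 3.2, 3.1, 4.0, 4.2, 4.4]  # toy wear rates
print(anova_contributions(design, responses))
```

In this toy data the first column (temperature) dominates the response, mirroring the kind of ranking the abstract reports.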
Finally, the analytical and regression-equation values are corroborated by confirmation tests.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..257a2007B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..257a2007B"><span>Efficient operation scheduling for adsorption chillers using predictive <span class="hlt">optimization</span>-based control <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz</p> <p>2017-10-01</p> <p>Within this work, the benefits of using predictive control <span class="hlt">methods</span> for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on <span class="hlt">optimized</span> scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based <span class="hlt">optimization</span> <span class="hlt">methods</span> is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, <span class="hlt">optimization</span>-based scheduling approach for the same exemplary scenario of load and driving energy occurrence.
The benefits of the latter approach are shown and future actions for applying these <span class="hlt">methods</span> to system control are addressed.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018MPLB...3250091W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018MPLB...3250091W"><span>A <span class="hlt">method</span> of network topology <span class="hlt">optimization</span> design considering application process characteristic</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo</p> <p>2018-03-01</p> <p>Communication networks are designed to meet the usage requirements of users for various network applications. Current studies of network topology <span class="hlt">optimization</span> design have mainly considered network traffic, which is the result of network application operation, rather than a design element of communication networks. A network application is a procedure by which users make use of services, with certain demanded performance requirements, and it has an obvious process characteristic. In this paper, we first propose a <span class="hlt">method</span> to <span class="hlt">optimize</span> the design of communication network topology considering the application process characteristic. Taking the minimum network delay as the objective, and the cost of network design and network connective reliability as constraints, an <span class="hlt">optimization</span> model of network topology design is formulated, and the <span class="hlt">optimal</span> solution of network topology design is searched by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under the background of multiple process-oriented applications, which can guide the generation of the initial population and thus improve the efficiency of the GA.
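A GA search over topologies of the kind described above can be sketched on a toy 5-node network: each gene switches one candidate link on or off, fitness trades edge cost against a crude delay proxy, and disconnected (unreliable) topologies are penalized. None of this reproduces the paper's delay model; it only illustrates the encoding-selection-crossover-mutation loop:

```python
import random

random.seed(1)

N = 5  # nodes in a toy network
EDGES = [(i, j) for i in range(N) for j in range(i + 1, N)]

def connected(bits):
    """Depth-first search check that the chosen edges connect all nodes."""
    adj = {i: set() for i in range(N)}
    for b, (i, j) in zip(bits, EDGES):
        if b:
            adj[i].add(j)
            adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:
        for k in adj[stack.pop()]:
            if k not in seen:
                seen.add(k)
                stack.append(k)
    return len(seen) == N

def fitness(bits):
    """Edge cost plus a crude delay proxy (fewer edges -> longer paths);
    disconnected topologies violate the reliability constraint."""
    if not connected(bits):
        return 1e6
    e = sum(bits)
    return e + 10.0 / e

def ga(pop_size=30, gens=40, pm=0.1):
    pop = [[random.randint(0, 1) for _ in EDGES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)              # minimization
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(EDGES))   # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < pm else g
                             for g in child])       # bit-flip mutation
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print(sum(best), "edges, fitness", fitness(best))
```

Seeding the initial population with topologies known to be well-connected, as the abstract suggests, would simply replace the uniform-random initializer here.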
Numerical simulations show the effectiveness and validity of our proposed <span class="hlt">method</span>. Network topology <span class="hlt">optimization</span> design considering applications can improve the reliability of applications, and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practices.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28113609','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28113609"><span>A Unified Fisher's Ratio Learning <span class="hlt">Method</span> for Spatial Filter <span class="hlt">Optimization</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng</p> <p></p> <p>To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design <span class="hlt">methods</span> address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter <span class="hlt">optimization</span> is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter <span class="hlt">optimization</span> is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. 
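The core of a Fisher's-ratio objective, maximizing a ratio of two quadratic forms, reduces to a generalized eigenproblem with no regularization parameter to tune. A minimal sketch follows; the two covariance matrices are randomly generated stand-ins, not the paper's EEG scatter matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite matrix (stand-in data)."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

# Hypothetical class-conditional covariance matrices.
C1, C2 = random_spd(6), random_spd(6)

# Maximizing w^T C1 w / w^T C2 w is the generalized eigenproblem
# C1 w = lambda C2 w; the top eigenvector of C2^{-1} C1 is the filter.
vals, vecs = np.linalg.eig(np.linalg.solve(C2, C1))
w = np.real(vecs[:, np.argmax(np.real(vals))])

ratio = (w @ C1 @ w) / (w @ C2 @ w)
print("objective at the optimal filter:", ratio)
```

Because the objective is scale-invariant in w, the attained ratio equals the largest generalized eigenvalue, which is why no separate regularization weight needs selecting for this ratio form.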
We evaluate the proposed <span class="hlt">method</span> on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed <span class="hlt">method</span> yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified <span class="hlt">methods</span>. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJSMD...8A..13B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJSMD...8A..13B"><span>Robust design <span class="hlt">optimization</span> using the price of robustness, robust least squares and regularization <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bukhari, Hassan J.</p> <p>2017-12-01</p> <p>In this paper a framework for robust <span class="hlt">optimization</span> of mechanical design problems and process systems that have parametric uncertainty is presented using three different approaches.
Robust <span class="hlt">optimization</span> problems are formulated so that the <span class="hlt">optimal</span> solution is robust, meaning it is minimally sensitive to any perturbations in the parameters. The first <span class="hlt">method</span> uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that may perturb. The second <span class="hlt">method</span> uses the robust least squares <span class="hlt">method</span> to determine the <span class="hlt">optimal</span> parameters when the data itself, rather than the parameters, is subject to perturbations. The last <span class="hlt">method</span> manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The <span class="hlt">methods</span> are implemented on two sets of problems: one linear and the other non-linear. The methodology is compared with a prior <span class="hlt">method</span> using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19820014371','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19820014371"><span>A linear decomposition <span class="hlt">method</span> for large <span class="hlt">optimization</span> problems. Blueprint for development</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Sobieszczanski-Sobieski, J.</p> <p>1982-01-01</p> <p>A <span class="hlt">method</span> is proposed for decomposing large <span class="hlt">optimization</span> problems encountered in the design of engineering systems such as an aircraft into a number of smaller subproblems.
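The third robust-design approach above, restricting parameter perturbations in the manner of Tikhonov regularization, can be sketched as a damped least-squares solve. The ill-conditioned design matrix and noise level below are hypothetical, chosen only to show how the damping term tames sensitivity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned fitting problem: near-collinear columns
# make the plain least-squares solution highly sensitive to perturbations.
A = np.vander(np.linspace(0.0, 1.0, 20), 8)
x_true = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(20)

def tikhonov(A, b, lam):
    """argmin_x ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_ls = tikhonov(A, b, 0.0)     # unregularized least squares
x_reg = tikhonov(A, b, 1e-3)   # perturbation-damped solution
print(np.linalg.norm(x_ls), np.linalg.norm(x_reg))
```

The solution norm is non-increasing in the damping weight, which is exactly the sensitivity restriction the abstract describes.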
The decomposition is achieved by organizing the problem and the subordinated subproblems in a tree hierarchy and <span class="hlt">optimizing</span> each subsystem separately. Coupling of the subproblems is accounted for by subsequent <span class="hlt">optimization</span> of the entire system based on sensitivities of the suboptimization problem solutions at each level of the tree to variables of the next higher level. A formalization of the procedure suitable for computer implementation is developed and the state of readiness of the implementation building blocks is reviewed showing that the ingredients for the development are on the shelf. The decomposition <span class="hlt">method</span> is also shown to be compatible with the natural human organization of the design process of engineering systems. The <span class="hlt">method</span> is also examined with respect to the trends in computer hardware and software progress to point out that its efficiency can be amplified by network computing using parallel processors.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJASE...9..397V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJASE...9..397V"><span>Topology <span class="hlt">optimization</span> analysis based on the direct coupling of the boundary element <span class="hlt">method</span> and the level set <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vitório, Paulo Cezar; Leonel, Edson Denner</p> <p>2017-12-01</p> <p>The structural design must ensure suitable working conditions by attending for safe and economic criteria. However, the <span class="hlt">optimal</span> solution is not easily available, because these conditions depend on the bodies' dimensions, materials strength and structural system configuration. 
In this regard, topology <span class="hlt">optimization</span> aims at achieving the <span class="hlt">optimal</span> structural geometry, i.e. the shape that minimizes the material requirement while respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the <span class="hlt">optimal</span> geometry of 2D structures using the coupling of the boundary element <span class="hlt">method</span> (BEM) and the level set <span class="hlt">method</span> (LSM). The proposed algorithm consists of mechanical modelling, a topology <span class="hlt">optimization</span> approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology <span class="hlt">optimization</span> is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing: because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, which is based on the direct coupling of such approaches, introduces internal cavities automatically during the <span class="hlt">optimization</span> process, according to the intensity of the Von Mises stress. The developed <span class="hlt">optimization</span> model was applied to two benchmarks available in the literature.
Good agreement was observed among the results, which demonstrates its efficiency and accuracy.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JTST...15..340G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JTST...15..340G"><span>Down-selection and <span class="hlt">optimization</span> of thermal-sprayed coatings for aluminum mould tool protection and upgrade</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gibbons, Gregory John; Hansell, Robert George</p> <p>2006-09-01</p> <p>This article details the down-selection procedure for thermally sprayed coatings for aluminum injection mould tooling. A down-selection metric was used to rank a wide range of coatings. A range of high-velocity oxyfuel (HVOF) and atmospheric plasma spray (APS) systems was used to identify the <span class="hlt">optimal</span> coating-process-system combinations. Three coatings were identified as suitable for further study; two CrC NiCr materials and one Fe Ni Cr alloy. No APS-deposited coatings were suitable for the intended application due to poor substrate adhesion (SA) and very high surface roughness (SR). The DJ2700 deposited coating properties were inferior to the coatings deposited using other HVOF systems and thus a <span class="hlt">Taguchi</span> L18 five parameter, three-level <span class="hlt">optimization</span> was used to <span class="hlt">optimize</span> SA of CRC-1 and FE-1. Significant mean increases in bond strength were achieved (147±30% for FE-1 [58±4 MPa] and 12±1% for CRC-1 [67±5 MPa]). An analysis of variance (ANOVA) indicated that the coating bond strengths were primarily dependent on powder flow rate and propane gas flow rate, and also secondarily dependent on spray distance. 
The <span class="hlt">optimal</span> deposition parameters identified were: (CRC-1/FE-1) O2 264/264 standard liters per minute (SLPM); C3H8 62/73 SLPM; air 332/311 SLPM; feed rate 30/28 g/min; and spray distance 150/206 mm.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19870000570','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19870000570"><span>Development of a turbomachinery design <span class="hlt">optimization</span> procedure using a multiple-parameter nonlinear perturbation <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stahara, S. S.</p> <p>1984-01-01</p> <p>An investigation was carried out to complete the preliminary development of a combined perturbation/<span class="hlt">optimization</span> procedure and associated computational code for designing <span class="hlt">optimized</span> blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation <span class="hlt">method</span> for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The <span class="hlt">method</span> combines the multiple parameter nonlinear perturbation <span class="hlt">method</span>, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN <span class="hlt">optimization</span> procedure into a user's code for designing <span class="hlt">optimized</span> blade-to-blade surface profiles of turbomachinery blades. 
Results of several design applications and a documented version of the code together with a user's manual are provided.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27421397','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27421397"><span>Automated property <span class="hlt">optimization</span> via ab initio O(N) elongation <span class="hlt">method</span>: Application to (hyper-)polarizability in DNA.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Orimoto, Yuuichi; Aoki, Yuriko</p> <p>2016-07-14</p> <p>An automated property <span class="hlt">optimization</span> <span class="hlt">method</span> was developed based on the
ab initio O(N) elongation (ELG) <span class="hlt">method</span> and applied to the <span class="hlt">optimization</span> of nonlinear optical (NLO) properties in DNA as a first test. The ELG <span class="hlt">method</span> mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) <span class="hlt">method</span> for calculating (hyper-)polarizabilities was used as the engine program of the <span class="hlt">optimization</span> <span class="hlt">method</span>, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF <span class="hlt">method</span> compared with a conventional <span class="hlt">method</span>, and it can lead to more feasible NLO property values in the FF treatment. The automated <span class="hlt">optimization</span> <span class="hlt">method</span> successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test <span class="hlt">optimizations</span> for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on <span class="hlt">optimization</span> conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). 
In contrast, there was an ambiguous difference between these conditions for <span class="hlt">optimizing</span> the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF <span class="hlt">method</span>. It can be concluded that the ab initio level property <span class="hlt">optimization</span> <span class="hlt">method</span> introduced here can be an effective step towards an advanced computer aided material design <span class="hlt">method</span> as long as the numerical limitation of the FF <span class="hlt">method</span> is taken into account.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12861612','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12861612"><span>Analytical <span class="hlt">method</span> for promoting process capability of shock absorption steel.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sung, Wen-Pei; Shih, Ming-Hsiang; Chen, Kuen-Suan</p> <p>2003-01-01</p> <p>Mechanical properties and low cycle fatigue are two factors that must be considered in developing new type steel for shock absorption. Process capability and process control are significant factors in achieving the purpose of research and development programs. Often-used evaluation <span class="hlt">methods</span> failed to measure process yield and process centering; so this paper uses <span class="hlt">Taguchi</span> loss function as basis to establish an evaluation <span class="hlt">method</span> and the steps for assessing the quality of mechanical properties and process control of an iron and steel manufacturer. 
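The Taguchi quadratic loss underlying such an evaluation method penalizes both off-target centering and spread, which a pass/fail yield count misses. A minimal sketch with made-up measurements (the target value and cost constant `k` are hypothetical):

```python
def taguchi_loss(values, target, k=1.0):
    """Average Taguchi quadratic loss k * E[(y - m)^2], which decomposes
    into (off-centering)^2 + variance; k is a hypothetical cost scale."""
    n = len(values)
    mean = sum(values) / n
    var = sum((y - mean) ** 2 for y in values) / n
    return k * ((mean - target) ** 2 + var)

# Two hypothetical processes, both centered on the target of 10.0:
# A holds the target tightly, B spreads more, so B incurs a larger loss
# even if both would pass a simple specification-limit yield check.
a = [9.9, 10.0, 10.1, 10.0, 10.0]
b = [9.6, 10.4, 9.7, 10.3, 10.0]
print(taguchi_loss(a, 10.0), taguchi_loss(b, 10.0))
```

Ranking candidate processes by this expected loss is what lets the method distinguish processes that a yield-only comparison would treat as equivalent.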
The establishment of this <span class="hlt">method</span> can serve research and development as well as the manufacturing industry, laying a foundation for stronger process control and for selecting manufacturing processes more reliably than decision making based on the other commonly used <span class="hlt">methods</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018HTMP...37..219S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018HTMP...37..219S"><span>Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.</p> <p>2018-03-01</p> <p>This research study attempts to create an <span class="hlt">optimized</span> parametric window by employing the <span class="hlt">Taguchi</span> algorithm for Plasma Arc Welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and <span class="hlt">optimization</span> are the welding current, welding speed and pilot arc length. The experimentation involves varying these parameters and recording the resulting depth of penetration and bead width. The welding current is varied between 60-70 A, the welding speed between 250-300 mm/min and the pilot arc length between 1-2 mm. Design of experiments is used for the experimental trials. Back-propagation neural network, genetic algorithm and <span class="hlt">Taguchi</span> techniques are used for predicting the bead width and depth of penetration, and the predictions agree well with the experimentally achieved results. Additionally, micro-structural characterizations are carried out to examine the weld quality.
The extrapolation of these <span class="hlt">optimized</span> parametric values yields enhanced weld strength with reduced cost and time.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19810044996&hterms=NLP&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DNLP','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19810044996&hterms=NLP&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DNLP"><span>The controlled growth <span class="hlt">method</span> - A tool for structural <span class="hlt">optimization</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hajela, P.; Sobieszczanski-Sobieski, J.</p> <p>1981-01-01</p> <p>An adaptive design variable linking scheme in an NLP-based <span class="hlt">optimization</span> algorithm is proposed and evaluated for feasibility of application. The present scheme, based on an intuitive effectiveness measure for each variable, differs from existing methodology in that a single dominant variable controls the growth of all others in a prescribed <span class="hlt">optimization</span> cycle. The proposed <span class="hlt">method</span> is implemented for truss assemblies and a wing box structure for stress, displacement and frequency constraints.
Substantial reduction in computational time, even more so for structures under multiple load conditions, coupled with a minimal accompanying loss in accuracy, vindicates the algorithm.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AAS...22724908S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AAS...22724908S"><span>The Value of <span class="hlt">Methodical</span> Management: <span class="hlt">Optimizing</span> Science Results</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Saby, Linnea</p> <p>2016-01-01</p> <p>As science progresses, making new discoveries in radio astronomy becomes increasingly complex. Instrumentation must be incredibly fine-tuned and well-understood, scientists must consider the skills and schedules of large research teams, and inter-organizational projects sometimes require coordination between observatories around the globe. Structured and <span class="hlt">methodical</span> management allows scientists to work more effectively in this environment and leads to <span class="hlt">optimal</span> science output. 
This report outlines the principles of <span class="hlt">methodical</span> project management in general, and describes how those principles are applied at the National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10448E..1ZW','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10448E..1ZW"><span>An <span class="hlt">optimized</span> <span class="hlt">method</span> to calculate error correction capability of tool influence function in frequency domain</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan</p> <p>2017-10-01</p> <p>An <span class="hlt">optimized</span> <span class="hlt">method</span> to calculate error correction capability of tool influence function (TIF) in certain polishing conditions will be proposed based on smoothing spectral function. The basic mathematical model for this <span class="hlt">method</span> will be established in theory. A set of polishing experimental data with rigid conformal tool is used to validate the <span class="hlt">optimized</span> <span class="hlt">method</span>. The calculated results can quantitatively indicate error correction capability of TIF for different spatial frequency errors in certain polishing conditions. 
A comparative analysis with the previous <span class="hlt">method</span> shows that the <span class="hlt">optimized</span> <span class="hlt">method</span> is simpler in form and achieves the same accuracy with less computation time than the previous <span class="hlt">method</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005JSASS..53..398Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005JSASS..53..398Y"><span>The Tool for Designing Engineering Systems Using a New <span class="hlt">Optimization</span> <span class="hlt">Method</span> Based on a Stochastic Process</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio</p> <p></p> <p>Conventional <span class="hlt">optimization</span> <span class="hlt">methods</span> are based on a deterministic approach, since their purpose is to find an exact solution. However, these <span class="hlt">methods</span> depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new <span class="hlt">optimization</span> <span class="hlt">method</span> based on a concept of the path integral <span class="hlt">method</span> used in quantum mechanics. The <span class="hlt">method</span> obtains a solution as an expected value (stochastic average) using a stochastic process. Its advantages are that it is not affected by initial conditions and does not require experience-based techniques. We applied the new <span class="hlt">optimization</span> <span class="hlt">method</span> to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were <span class="hlt">optimized</span>.
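The expected-value idea in the stochastic-process abstract above can be illustrated with a toy sketch (entirely an assumption of this note, not the authors' algorithm): sample candidate solutions at random, weight each by a Boltzmann factor of its cost, and report the weighted average as the estimated minimizer, so no initial guess is involved.

```python
import math
import random

def stochastic_expected_minimum(f, lo, hi, n_samples=20000, beta=30.0, seed=0):
    """Estimate a minimizer of f on [lo, hi] as a Boltzmann-weighted
    stochastic average of random samples: the answer is an expected
    value, so no initial guess is needed."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        w = math.exp(-beta * f(x))   # low-cost samples dominate the average
        num += w * x
        den += w
    return num / den

# Toy objective with a single minimum at x = 2 inside [0, 5].
x_star = stochastic_expected_minimum(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Sharpening `beta` concentrates the average near the global minimum; a gentler `beta` keeps the estimate robust to multimodality at the cost of some bias.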
The numerical calculation results showed that the <span class="hlt">method</span> performs satisfactorily.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010InvPr..26g4007C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010InvPr..26g4007C"><span>Subspace-based <span class="hlt">optimization</span> <span class="hlt">method</span> for inverse scattering problems with an inhomogeneous background medium</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Xudong</p> <p>2010-07-01</p> <p>This paper proposes a version of the subspace-based <span class="hlt">optimization</span> <span class="hlt">method</span> to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element <span class="hlt">method</span> to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based <span class="hlt">optimization</span> <span class="hlt">method</span> is that part of the contrast source is determined from the spectrum analysis without using any <span class="hlt">optimization</span>, whereas the orthogonally complementary part is determined by solving a lower-dimensional <span class="hlt">optimization</span> problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm.
The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21185025','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21185025"><span><span class="hlt">Optimizing</span> pressurized liquid extraction of microbial lipids using the response surface <span class="hlt">method</span>.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cescut, J; Severac, E; Molina-Jouve, C; Uribelarrea, J-L</p> <p>2011-01-21</p> <p>Response surface methodology (RSM) was used to determine the optimum extraction parameters for maximum lipid extraction yield from yeast. Total lipids were extracted from oleaginous yeast (Rhodotorula glutinis) using pressurized liquid extraction (PLE). The effects of extraction parameters on lipid extraction yield were studied by employing a second-order central composite design. The <span class="hlt">optimal</span> condition was obtained as three cycles of 15 min at 100°C with a ratio of 144 g of hydromatrix per 100 g of dry cell weight. Different analysis <span class="hlt">methods</span> were used to compare the <span class="hlt">optimized</span> PLE <span class="hlt">method</span> with two conventional <span class="hlt">methods</span> (the Soxhlet <span class="hlt">method</span> and a modification of the Bligh and Dyer <span class="hlt">method</span>) in terms of efficiency, selectivity and reproducibility, using gravimetric analysis, GC with flame ionization detector, High Performance Liquid Chromatography linked to Evaporative Light Scattering Detector (HPLC-ELSD) and thin-layer chromatographic analysis.
For each sample, the lipid extraction yield with <span class="hlt">optimized</span> PLE was higher than that obtained with the reference <span class="hlt">methods</span> (the Soxhlet and Bligh and Dyer <span class="hlt">methods</span> recovered, respectively, 78% and 85% of the yield of the PLE <span class="hlt">method</span>). Moreover, compared with traditional extraction <span class="hlt">methods</span>, PLE reduced analysis time by a factor of 10 and solvent consumption by 70%. Copyright © 2010 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930043208&hterms=sampling+methods&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dsampling%2Bmethods','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930043208&hterms=sampling+methods&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D90%26Ntt%3Dsampling%2Bmethods"><span><span class="hlt">Optimal</span> thresholds for the estimation of area rain-rate moments by the threshold <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Short, David A.; Shimizu, Kunio; Kedem, Benjamin</p> <p>1993-01-01</p> <p><span class="hlt">Optimization</span> of the threshold <span class="hlt">method</span>, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show <span class="hlt">optimal</span> thresholds of 5 and 27 mm/h for the first and second moments, respectively.
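The empirical procedure just described (scan candidate thresholds and keep the one that maximizes the correlation between the area-mean rain rate and the fractional coverage above the threshold) can be sketched roughly as follows; the lognormal snapshot generator and all parameter values here are illustrative assumptions, not GATE data.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0.0 or syy == 0.0:
        return 0.0
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(1)
# Synthetic "snapshots": lognormal rain rates (mm/h) whose distribution
# parameters vary from snapshot to snapshot, mimicking a parent
# distribution sampled differently each time.
snapshots = []
for _ in range(200):
    mu = rng.uniform(0.0, 1.0)
    snapshots.append([rng.lognormvariate(mu, 1.0) for _ in range(400)])

means = [sum(s) / len(s) for s in snapshots]          # area-mean rain rate
best_tau, best_r = None, -1.0
for tau in [0.5 * k for k in range(1, 40)]:           # candidate thresholds, mm/h
    cover = [sum(1 for r in s if r > tau) / len(s) for s in snapshots]
    r = pearson(means, cover)
    if r > best_r:
        best_tau, best_r = tau, r
```

The same scan applied to the second moment (replace `means` by mean squared rain rate) would pick a higher threshold, in the spirit of the 5 vs. 27 mm/h result quoted above.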
Theoretical <span class="hlt">optimization</span> of the threshold <span class="hlt">method</span> by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts <span class="hlt">optimal</span> thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the <span class="hlt">optimal</span> threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. <span class="hlt">Optimal</span> thresholds for gamma and inverse Gaussian distributions are also derived and compared.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3098747','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3098747"><span>Analytical and numerical analysis of inverse <span class="hlt">optimization</span> problems: conditions of uniqueness and computational <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zatsiorsky, Vladimir M.</p> <p>2011-01-01</p> <p>One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on <span class="hlt">optimization</span> of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. 
In this article, we analyze two <span class="hlt">methods</span> of finding additive cost functions in inverse <span class="hlt">optimization</span> problems with linear constraints, so-called linear-additive inverse <span class="hlt">optimization</span> problems. These <span class="hlt">methods</span> are based on the Uniqueness Theorem for inverse <span class="hlt">optimization</span> problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both <span class="hlt">methods</span> allow for determining the cost function. We analyze the influence of noise on both <span class="hlt">methods</span>. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse <span class="hlt">optimization</span> problem. PMID:21311907</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28788111','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28788111"><span>Fabrication of an Optical Fiber Micro-Sphere with a Diameter of Several Tens of Micrometers.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yu, Huijuan; Huang, Qiangxian; Zhao, Jian</p> <p>2014-06-25</p> <p>A new <span class="hlt">method</span> to fabricate an integrated optical fiber micro-sphere with a diameter within 100 µm, based on the optical fiber tapering technique and the <span class="hlt">Taguchi</span> <span class="hlt">method</span>, is proposed. Using a 125 µm diameter single-mode (SM) optical fiber, an optical fiber taper with a cone angle is formed with the tapering technique, and the fabrication <span class="hlt">optimization</span> of a micro-sphere with a diameter of less than 100 µm is achieved using the <span class="hlt">Taguchi</span> <span class="hlt">method</span>.
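A minimal sketch of the kind of Taguchi analysis used here, assuming an L9 orthogonal array, a smaller-the-better signal-to-noise ratio, and main-effect selection of the best level per factor; the response function and every number in it are invented for illustration.

```python
import math

# L9 orthogonal array: 3 factors at 3 levels, 9 runs, each level of each
# factor balanced against the levels of the others.
L9 = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1)]

def response(a, b, c, rep):
    # Invented smaller-the-better response (e.g. a roundness error, µm):
    # best at a=1, b=2, c=0, with a tiny replicate-to-replicate variation.
    return 1.0 + (a - 1) ** 2 + 0.5 * (b - 2) ** 2 + 0.3 * c + 0.01 * rep

def snr_smaller_better(ys):
    # Taguchi S/N ratio for smaller-the-better: -10 log10(mean(y^2)).
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# One S/N value per run, from three replicated measurements.
snr = [snr_smaller_better([response(a, b, c, r) for r in range(3)])
       for a, b, c in L9]

# Main effect of each factor level = mean S/N over the runs at that level;
# the optimum combination takes, for each factor, the level with highest S/N.
best = []
for f in range(3):
    level_means = [sum(s for s, run in zip(snr, L9) if run[f] == lv) / 3
                   for lv in range(3)]
    best.append(max(range(3), key=lambda lv: level_means[lv]))
```

Because the array is orthogonal, each level mean averages over a balanced set of the other factors' levels, which is what lets nine runs stand in for all twenty-seven combinations.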
The optimum combination of process-factor levels is obtained, with the signal-to-noise ratio (SNR) of three quality evaluation parameters and the significance of each process factor's influence on them selected as the two evaluation criteria. Using the minimum zone <span class="hlt">method</span> (MZM) to evaluate the quality of the fabricated optical fiber micro-sphere, a three-dimensional (3D) numerical fitting image of its surface profile and the true sphericity are subsequently realized. From the results, an optical fiber micro-sphere with a two-dimensional (2D) diameter less than 80 µm, 2D roundness error less than 0.70 µm, 2D offset distance between the micro-sphere center and the fiber stylus central line less than 0.65 µm, and true sphericity of about 0.5 µm, is fabricated.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015amos.confE..48J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015amos.confE..48J"><span>A Fast <span class="hlt">Method</span> for Embattling <span class="hlt">Optimization</span> of Ground-Based Radar Surveillance Network</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.</p> <p></p> <p>A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; given the limitations of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands.
How to <span class="hlt">optimize</span> the embattling of a ground-based radar surveillance network is a problem that needs to be solved. The traditional <span class="hlt">method</span> for embattling <span class="hlt">optimization</span> of a ground-based radar surveillance network is to run detection simulations of all possible stations against cataloged data, perform a comprehensive comparative analysis of the simulation results by combinatorial <span class="hlt">methods</span>, and then select an <span class="hlt">optimal</span> result as the station layout scheme. This <span class="hlt">method</span> is time consuming for a single simulation and computationally complex for the combinatorial analysis; as the number of stations increases, the complexity of the <span class="hlt">optimization</span> problem grows exponentially, and the problem cannot be solved by the traditional <span class="hlt">method</span>. Until now there has been no better way to solve this problem. In this paper, the target detection procedure was simplified. Firstly, the space coverage of ground-based radar was simplified and a space-coverage projection model of radar facilities at different orbit altitudes was built; then a simplified model of objects crossing the radar coverage was established according to the characteristics of the orbital motion of space objects. After these two simplifications, the computational complexity of target detection was greatly reduced, and simulation results showed the correctness of the simplified models.
In addition, the detection areas of ground-based radar network can be easily computed with the</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23927349','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23927349"><span>A collimator <span class="hlt">optimization</span> <span class="hlt">method</span> for quantitative imaging: application to Y-90 bremsstrahlung SPECT.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Rong, Xing; Frey, Eric C</p> <p>2013-08-01</p> <p>Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different than typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically <span class="hlt">optimized</span> for such tasks. <span class="hlt">Optimizing</span> a collimator for 90Y imaging is both novel and potentially important. Conventional <span class="hlt">optimization</span> <span class="hlt">methods</span> are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator <span class="hlt">optimization</span> <span class="hlt">method</span> for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed <span class="hlt">method</span> to develop an <span class="hlt">optimal</span> collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. 
To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the <span class="hlt">optimization</span> results for quantitative 90Y bremsstrahlung SPECT more
Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an <span class="hlt">optimal</span> or near-<span class="hlt">optimal</span> solution. It is relatively straightforward to implement and, unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm <span class="hlt">optimization</span> has been successfully employed in solving static <span class="hlt">optimization</span> problems, its application in dynamic <span class="hlt">optimization</span>, as posed in <span class="hlt">optimal</span> control theory, is still relatively new. In the first half of this thesis, particle swarm <span class="hlt">optimization</span> is used to generate near-<span class="hlt">optimal</span> solutions to several nontrivial trajectory <span class="hlt">optimization</span> problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm <span class="hlt">optimization</span> implementation in this work is the runtime selection of the <span class="hlt">optimal</span> solution structure. <span class="hlt">Optimal</span> trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved <span class="hlt">optimal</span> programming problem, the particle swarm <span class="hlt">optimization</span> result is compared with a nearly exact solution found via a direct <span class="hlt">method</span> using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions with very high accuracy.
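A bare-bones particle swarm optimizer of the kind just described might look like the sketch below; the function name and all parameter values are assumptions of this note, and the thesis's mixed-integer, trajectory-specific machinery is omitted.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200, seed=0):
    """Minimal PSO: each particle blends its own best-known position
    (cognitive term) with the swarm's best (social term); no gradient
    or initial guess is required."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]       # per-particle memory
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]   # swarm-wide memory
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(xs[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(xs[i]), val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best_x, best_val = pso_minimize(lambda x: sum(t * t for t in x), 3, (-5.0, 5.0))
```

For trajectory problems the decision vector would encode discretized controls or switching times rather than raw coordinates, but the update rule is unchanged.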
The second half of this research develops a new extremal-field approach for synthesizing nearly <span class="hlt">optimal</span> feedback controllers for <span class="hlt">optimal</span> control and two-player pursuit-evasion games described by general nonlinear differential equations. A notable revelation from this development</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20090036314','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20090036314"><span>Robust <span class="hlt">Optimal</span> Adaptive Control <span class="hlt">Method</span> with Large Adaptive Gain</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nguyen, Nhan T.</p> <p>2009-01-01</p> <p>In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations, which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an <span class="hlt">optimal</span> control problem. The <span class="hlt">optimality</span> condition is used to derive the modification using the gradient <span class="hlt">method</span>. The <span class="hlt">optimal</span> control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness.
Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive <span class="hlt">optimal</span> control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AdWR..110..310G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AdWR..110..310G"><span><span class="hlt">Optimal</span> estimation and scheduling in aquifer management using the rapid feedback control <span class="hlt">method</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric</p> <p>2017-12-01</p> <p>Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. <span class="hlt">Optimizing</span> the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook <span class="hlt">optimization</span> <span class="hlt">methods</span> are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for <span class="hlt">optimally</span> operating large-scale dynamical systems. 
The proposed <span class="hlt">method</span>, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and <span class="hlt">optimal</span> control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our <span class="hlt">method</span>, we compare our results with the linear quadratic Gaussian (LQG) <span class="hlt">method</span>, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC <span class="hlt">method</span> can obtain the <span class="hlt">optimal</span> control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005JCoAM.173..169H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005JCoAM.173..169H"><span>A time-domain decomposition iterative <span class="hlt">method</span> for the solution of distributed linear quadratic <span class="hlt">optimal</span> control problems</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Heinkenschloss, Matthias</p> <p>2005-01-01</p> <p>We study a class of time-domain decomposition-based <span class="hlt">methods</span> for the numerical solution of large-scale linear quadratic <span class="hlt">optimal</span> control problems. 
Our <span class="hlt">methods</span> are based on a multiple shooting reformulation of the linear quadratic <span class="hlt">optimal</span> control problem as a discrete-time <span class="hlt">optimal</span> control (DTOC) problem. The <span class="hlt">optimality</span> conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic <span class="hlt">optimal</span> control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type <span class="hlt">methods</span> for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS <span class="hlt">method</span> is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace <span class="hlt">methods</span>.
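As a small illustration of Gauss-Seidel sweeps (in scalar rather than block form, and on a deliberately diagonally dominant tridiagonal system where the iteration does converge; the values below are invented for illustration):

```python
def gauss_seidel(A, b, iters=100):
    """Plain forward Gauss-Seidel sweeps for A x = b: each unknown is
    updated in place using the freshest values of the others."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant tridiagonal test system with exact solution (1, 2, 3).
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = gauss_seidel(A, b)
```

For the indefinite optimality systems discussed above, such sweeps would diverge on their own, which is exactly why the authors use block GS as a preconditioner inside a Krylov method instead of as a standalone solver.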
This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS <span class="hlt">method</span> applied to the DTOC <span class="hlt">optimality</span> system.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/biblio/22675923-automated-property-optimization-via-ab-initio-elongation-method-application-hyper-polarizability-dna','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22675923-automated-property-optimization-via-ab-initio-elongation-method-application-hyper-polarizability-dna"><span>Automated property <span class="hlt">optimization</span> via ab initio O(N) elongation <span class="hlt">method</span>: Application to (hyper-)polarizability in DNA</span></a></p> <p><a target="_blank" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp; Aoki, Yuriko; Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012</p> <p></p> <p>An automated property <span class="hlt">optimization</span> <span class="hlt">method</span> was developed based on the ab initio O(N) elongation (ELG) <span class="hlt">method</span> and applied to the <span class="hlt">optimization</span> of nonlinear optical (NLO) properties in DNA as a first test. The ELG <span class="hlt">method</span> mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer.
The ELG-finite field (ELG-FF) <span class="hlt">method</span> for calculating (hyper-)polarizabilities was used as the engine program of the <span class="hlt">optimization</span> <span class="hlt">method</span>, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF <span class="hlt">method</span> compared with a conventional <span class="hlt">method</span>, and it can lead to more feasible NLO property values in the FF treatment. The automated <span class="hlt">optimization</span> <span class="hlt">method</span> successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test <span class="hlt">optimizations</span> for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on <span class="hlt">optimization</span> conditions between “choose-maximum” (choose a base pair giving the maximum β for each step) and “choose-minimum” (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for <span class="hlt">optimizing</span> the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF <span class="hlt">method</span>.
It can be concluded that the ab initio level property <span class="hlt">optimization</span> <span class="hlt">method</span> introduced here can be an effective step towards an advanced computer aided material design <span class="hlt">method</span> as long as the numerical limitation of the FF <span class="hlt">method</span> is taken into account.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015JTePh..60.1632O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015JTePh..60.1632O"><span>Efficiency of operation of wind turbine rotors <span class="hlt">optimized</span> by the Glauert and Betz <span class="hlt">methods</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Okulov, V. L.; Mikkelsen, R.; Litvinov, I. V.; Naumov, I. V.</p> <p>2015-11-01</p> <p>The models of two types of rotors with blades constructed using different <span class="hlt">optimization</span> <span class="hlt">methods</span> are compared experimentally. In the first case, the Glauert <span class="hlt">optimization</span> by the pulsed <span class="hlt">method</span> is used, which is applied independently for each individual blade cross section. This <span class="hlt">method</span> remains the main approach in designing rotors of various duties. The construction of the other rotor is based on the Betz idea about <span class="hlt">optimization</span> of rotors by determining a special distribution of circulation over the blade, which ensures the helical structure of the wake behind the rotor.
It is established for the first time as a result of direct experimental comparison that the rotor constructed using the Betz <span class="hlt">method</span> makes it possible to extract more kinetic energy from the homogeneous incoming flow.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container -->
<script type="text/javascript">
var lastDiv = "";
function showDiv(divName) {
  // hide the previously shown results page
  if (lastDiv) { document.getElementById(lastDiv).className = "hiddenDiv"; }
  // if the named div exists, make it visible
  if (divName && document.getElementById(divName)) { document.getElementById(divName).className = "visibleDiv"; lastDiv = divName; }
}
</script>
<script>
/* Opens an outbound link in a new window. */
var trackOutboundLink = function(url, collectionCode) {
  try { h = window.open(url); } catch(err) {}
};
</script>
<script> showDiv('page_1') </script> </body> </html>