NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
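To make the idea concrete, the sketch below treats total material, manufacturing, and assembly cost as the optimization objective subject to a performance constraint. The cost model, stiffness requirement, parameter names, and coefficients are all invented for illustration and are not the paper's models.

```python
import numpy as np
from scipy.optimize import minimize

def total_cost(x):
    """Hypothetical material + manufacturing + assembly cost of a wing design;
    x = [skin_thickness_mm, rib_count]."""
    thickness, ribs = x
    material = 120.0 * thickness        # material cost grows with thickness
    manufacturing = 80.0 * ribs         # each rib adds machining/tooling cost
    assembly = 15.0 * ribs * thickness  # joining cost grows with both
    return material + manufacturing + assembly

def performance_margin(x):
    """Hypothetical stiffness requirement; must be >= 0 for a feasible design."""
    thickness, ribs = x
    return thickness * np.sqrt(ribs) - 6.0

result = minimize(
    total_cost,
    x0=[3.0, 8.0],
    bounds=[(1.0, 10.0), (2.0, 20.0)],
    constraints=[{"type": "ineq", "fun": performance_margin}],
)
print(result.x, result.fun)  # cheapest design that still meets the requirement
```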
The impact of chief executive officer optimism on hospital strategic decision making.
Langabeer, James R; Yao, Emery
2012-01-01
Previous strategic decision making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (or the general tendency to expect positive future outcomes) will moderate the relationship between a comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism will moderate the relationship between the level of rationality and the organization's performance, and that this relationship will be more negative for those with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypothesis. We found evidence of a complex interplay of optimism in the rationality-organizational performance relationship. More specifically, we found that the two-way interactions between optimism and rational decision making were negatively associated with performance and that where optimism was highest, the rationality-performance relationship was the most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and the use of organizational development techniques that assess executive disposition and traits during recruitment, because CEO optimism influences hospital-level processes. Research implications include incorporating greater use of behavior and cognition constructs to better depict decision-making processes in complex organizations like hospitals.
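For readers unfamiliar with the method, a minimal sketch of a moderated hierarchical regression follows. The data are synthetic and the variable names illustrative; only the procedure (main effects first, then the rationality x optimism interaction) mirrors the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 168  # matches the study's sample size; the data here are synthetic
df = pd.DataFrame({
    "rationality": rng.normal(size=n),
    "optimism": rng.normal(size=n),
})
# Synthetic performance with a negative rationality x optimism interaction
df["performance"] = (0.3 * df.rationality + 0.2 * df.optimism
                     - 0.25 * df.rationality * df.optimism
                     + rng.normal(scale=0.5, size=n))

# Step 1: main effects only; Step 2: add the interaction (the moderation test)
step1 = smf.ols("performance ~ rationality + optimism", df).fit()
step2 = smf.ols("performance ~ rationality * optimism", df).fit()
print(step2.params["rationality:optimism"])  # the moderation coefficient
print(step2.rsquared - step1.rsquared)       # incremental R^2 of the interaction
```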
Improving scanner wafer alignment performance by target optimization
NASA Astrophysics Data System (ADS)
Leray, Philippe; Jehoul, Christiane; Socha, Robert; Menchtchikov, Boris; Raghunathan, Sudhar; Kent, Eric; Schoonewelle, Hielke; Tinnemans, Patrick; Tuffy, Paul; Belen, Jun; Wise, Rich
2016-03-01
At process nodes of 10 nm and below, patterning complexity along with the required processing and materials has created a need to optimize alignment targets in order to achieve the required precision, accuracy, and throughput performance. Recent industry publications on the metrology target optimization process have shown a move away from expensive and time-consuming empirical methodologies toward a faster computational approach. ASML's Design for Control (D4C) application, which is currently used to optimize YieldStar diffraction-based overlay (DBO) metrology targets, has been extended to support the optimization of scanner wafer alignment targets. This allows the process information and design methodology used for DBO target designs to be leveraged for the optimization of alignment targets. In this paper, we show how we applied this computational approach to wafer alignment target design. We verify the correlation between predictions and measurements for the key alignment performance metrics and finally show the potential alignment and overlay performance improvements that an optimized alignment target could achieve.
Meta-control of combustion performance with a data mining approach
NASA Astrophysics Data System (ADS)
Song, Zhe
Large-scale combustion processes are complex and pose challenges for performance optimization. Traditional approaches based on thermodynamics have limitations in finding optimal operating regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes and ultimately optimizing combustion performance. However, the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, so obtaining an accurate process model is nontrivial. The other is that a high-fidelity process model is usually highly nonlinear, so solving the optimization problem requires efficient heuristics. This dissertation is set to solve these two major challenges. The major contribution of this four-year research is a data-driven solution for optimizing the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.
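A minimal sketch of that two-step, data-driven loop follows: fit a model to historical process data, then search it with an evolutionary algorithm. The "boiler" variables, the synthetic data, and the bounds are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
# Synthetic history: [fuel_rate, air_flow, damper_position] -> efficiency
X = rng.uniform([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], size=(500, 3))
y = (0.8 - (X[:, 0] - 0.6) ** 2 - 0.5 * (X[:, 1] - X[:, 0]) ** 2
     - 0.1 * (X[:, 2] - 0.5) ** 2 + rng.normal(0.0, 0.01, 500))

# Step 1: identify a process model (surrogate) from the collected data
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Step 2: evolutionary search over the identified model (maximize efficiency)
result = differential_evolution(
    lambda x: -model.predict(x.reshape(1, -1))[0],
    bounds=[(0.0, 1.0)] * 3, seed=0)
print(result.x)  # candidate optimal operating region
```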
NASA Astrophysics Data System (ADS)
Welch, Kevin; Leonard, Jerry; Jones, Richard D.
2010-08-01
Increasingly stringent requirements on the performance of diffractive optical elements (DOEs) used in wafer scanner illumination systems are driving continuous improvements in their associated manufacturing processes. Specifically, these processes are designed to improve the output pattern uniformity of off-axis illumination systems to minimize degradation in the ultimate imaging performance of a lithographic tool. In this paper, we discuss performance improvements in both photolithographic patterning and RIE etching of fused silica diffractive optical structures. In summary, optimized photolithographic processes were developed to improve critical dimension uniformity and feature-size linearity across the substrate. The photoresist film thickness was also optimized for integration with an improved etch process. This etch process was itself optimized for pattern transfer fidelity, sidewall profile (wall angle, trench bottom flatness), and across-wafer etch depth uniformity. Improvements observed with these processes on idealized test structures (for ease of analysis) led to their implementation in product flows, with comparable increases in performance and yield on customer designs.
Optimization in the systems engineering process
NASA Technical Reports Server (NTRS)
Lemmerman, L. A.
1984-01-01
The objective is to look at optimization as it applies to the design process at a large aircraft company. The design process at Lockheed-Georgia is described, some examples of the impact that optimization has had on that process are given, and areas that must be considered if optimization is to be successful and supportive of the total design process are indicated. Optimization must continue to be sold, and this selling is best done through consistently good performance. For this good performance to occur, future approaches must be clearly thought out so that the optimization methods solve the problems that actually occur during design. The visibility of the design process must be maintained as further developments are proposed. Careful attention must be given to the management of data in the optimization process, both for technical and administrative purposes. Finally, to satisfy program needs, provisions must be included to supply data that support program decisions and to communicate with design processes outside the optimization process. If designers fail to adequately consider all of these needs, the future acceptance of optimization will be impeded.
Noise tolerant illumination optimization applied to display devices
NASA Astrophysics Data System (ADS)
Cassarly, William J.; Irving, Bruce
2005-02-01
Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective, but the number of iterations is limited by the time and cost of making the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Because Monte Carlo simulations are often used to calculate system performance and their predictions carry statistical uncertainty, noise-tolerant optimization algorithms are important. The use of noise-tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.
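The sketch below illustrates the core difficulty: the merit function is a Monte Carlo estimate, so it is noisy, and a noise-tolerant strategy (here, simple resampling inside a population-based optimizer) is needed. The stand-in "simulation", its parameters, and the noise model are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

def simulated_uniformity(params, n_rays=2000):
    """Hypothetical stand-in for a Monte Carlo illumination simulation:
    returns luminance non-uniformity with sampling noise ~ 1/sqrt(n_rays)."""
    pitch, depth = params
    true_merit = (pitch - 0.4) ** 2 + 2.0 * (depth - 0.7) ** 2
    return true_merit + rng.normal(scale=1.0 / np.sqrt(n_rays))

def averaged_merit(params, repeats=5):
    # Averaging repeated noisy evaluations trades simulation time for a
    # lower-variance estimate that the optimizer can trust.
    return np.mean([simulated_uniformity(params) for _ in range(repeats)])

result = differential_evolution(averaged_merit, bounds=[(0.0, 1.0), (0.0, 1.0)],
                                seed=0, tol=1e-3)
print(result.x)
```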
Optimization Of PVDF-TrFE Processing Conditions For The Fabrication Of Organic MEMS Resonators
Ducrot, Pierre-Henri; Dufour, Isabelle; Ayela, Cédric
2016-01-01
This paper reports a systematic optimization of processing conditions of PVDF-TrFE piezoelectric thin films, used as integrated transducers in organic MEMS resonators. Indeed, despite data on electromechanical properties of PVDF found in the literature, optimized processing conditions that lead to these properties remain only partially described. In this work, a rigorous optimization of parameters enabling state-of-the-art piezoelectric properties of PVDF-TrFE thin films has been performed via the evaluation of the actuation performance of MEMS resonators. Conditions such as annealing duration, poling field and poling duration have been optimized and repeatability of the process has been demonstrated. PMID:26792224
NASA Astrophysics Data System (ADS)
Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT
2018-02-01
Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto optimal solutions. This work focuses on optimal multi-response evaluation of process parameters for responses such as surface roughness (Ra), surface hardness (H), and tool vibration displacement amplitude (Vib) in tangential and orthogonal turn-mill processes on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate, and depth of cut are taken as the process parameters; brass material is machined under dry conditions with high-speed steel end-milling cutters using a Taguchi design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to optimize the multiple objectives 'Ra', 'H', and 'Vib' and identify the optimal multi-response process-parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with another multi-response optimization technique, viz. grey relational analysis (GRA).
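As a reference point for the GRA comparison mentioned above, here is a minimal grey relational analysis sketch for the three responses named in the abstract. The experimental matrix and measured values are invented for illustration, and equal response weights are assumed.

```python
import numpy as np

# rows = experiments; cols = [Ra (smaller better), H (larger better), Vib (smaller better)]
Y = np.array([[1.2, 210.0, 4.1],
              [0.9, 198.0, 3.6],
              [1.5, 225.0, 5.0],
              [1.1, 215.0, 3.9]])

norm = np.empty_like(Y)
norm[:, 1] = (Y[:, 1] - Y[:, 1].min()) / np.ptp(Y[:, 1])      # larger-the-better
for j in (0, 2):                                              # smaller-the-better
    norm[:, j] = (Y[:, j].max() - Y[:, j]) / np.ptp(Y[:, j])

delta = 1.0 - norm                     # deviation from the ideal sequence
zeta = 0.5                             # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grades = coeff.mean(axis=1)            # grey relational grade, equal weights
print(grades)
print(grades.argmax() + 1)             # experiment nearest the ideal (1-indexed)
```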
Piezoresistive Cantilever Performance—Part II: Optimization
Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.
2010-01-01
Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints using simulation results. The combined simulation and optimization approach is, in principle, extensible to doping methods beyond ion implantation. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution as well as the sensitivity-noise tradeoff in optimal cantilever performance. We also compared our optimization technique with existing models and demonstrated an eightfold improvement in force resolution over simplified models. PMID:20333323
General purpose graphic processing unit implementation of adaptive pulse compression algorithms
NASA Astrophysics Data System (ADS)
Cai, Jingxiao; Zhang, Yan
2017-07-01
This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines (cuBLAS) and the CUDA fast Fourier transform library (cuFFT), which are adopted from open source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and is investigated here. A statistical optimization approach is developed for this purpose that requires little knowledge of the physical configuration of the kernels. The kernel optimization approach was found to improve performance significantly. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems, including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
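The core operation these libraries accelerate is FFT-based pulse compression (matched filtering). Below is a minimal NumPy sketch of that operation; the chirp parameters, echo delay, and noise level are invented, and a GPU implementation would move the transforms onto cuFFT.

```python
import numpy as np

fs, T, B = 1e6, 100e-6, 200e3                 # sample rate, pulse width, chirp bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # LFM transmit pulse

# Received signal: a delayed, attenuated echo buried in noise
rx = np.zeros(4096, dtype=complex)
delay = 1200
rx[delay:delay + t.size] = 0.1 * chirp
rx += (np.random.randn(rx.size) + 1j * np.random.randn(rx.size)) * 0.01

# Matched filter in the frequency domain: multiply by the conjugate of the
# pulse spectrum, then inverse transform.
n = rx.size
H = np.conj(np.fft.fft(chirp, n))
compressed = np.fft.ifft(np.fft.fft(rx) * H)
print(np.abs(compressed).argmax())            # peak lands at the target delay
```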
NASA Astrophysics Data System (ADS)
Marchukov, E.; Egorov, I.; Popov, G.; Baturin, O.; Goriachkin, E.; Novikova, Y.; Kolmakova, D.
2017-08-01
The article presents an optimization method for improving the working process of an axial compressor of a gas turbine engine. The developed method automatically searches for the best compressor blade geometry using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. Optimization was performed by changing the shape of the middle line in three sections of each blade and by shifting three sections of the guide vanes in the circumferential and axial directions. The compressor parameters were calculated at the working and stall points of its performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and a three-stage low-pressure compressor. As a result of the optimization, efficiency improvements were achieved for all investigated compressors.
NASA Astrophysics Data System (ADS)
Medi, Bijan; Kazi, Monzure-Khoda; Amanullah, Mohammad
2013-06-01
Chromatography has been established as the method of choice for the separation and purification of optically pure drugs, a market of about 250 billion USD. Single-column chromatography (SCC) is commonly used in the development and testing phases of drug development, while multi-column simulated moving bed (SMB) chromatography is more suitable for large-scale production due to its continuous nature. In this study, the optimal performance of SCC and SMB processes for the separation of optical isomers under linear and overloaded separation conditions has been investigated. The performance indicators, namely productivity and desorbent requirement, have been compared under geometric similarity for the separation of guaifenesin and Tröger's base enantiomers. The SCC process has been analyzed under the equilibrium assumption (i.e., infinite column efficiency and zero dispersion), and its optimal performance parameters are compared with the optimal predictions of triangle theory for an SMB process. Simulation results obtained using actual experimental data indicate that SCC may compete with SMB in terms of productivity, depending on the molecules to be separated. In addition, insights into process performance in terms of degrees of freedom and the relationship between the optimal operating point and the solubility limit of the optical isomers have been ascertained. This investigation enables appropriate selection of single- or multi-column chromatographic processes based on column packing properties and isotherm parameters.
NASA Astrophysics Data System (ADS)
Raju, B. S.; Sekhar, U. Chandra; Drakshayani, D. N.
2017-08-01
The paper investigates optimization of the stereolithography process for SL5530 epoxy resin material to enhance part quality. The major performance characteristics selected to evaluate the process are tensile strength, flexural strength, impact strength, and density, and the corresponding process parameters are layer thickness, orientation, and hatch spacing. Because the process intrinsically involves tuning multiple parameters, grey relational analysis, which uses the grey relational grade as a performance index, is adopted to determine the optimal combination of process parameters. Moreover, principal component analysis is applied to evaluate the weighting values corresponding to the various performance characteristics so that their relative importance can be properly and objectively described. The results of confirmation experiments reveal that grey relational analysis coupled with principal component analysis can effectively acquire the optimal combination of process parameters. Hence, the proposed approach can be a useful tool for improving process parameters in stereolithography, which is valuable information for machine designers as well as RP machine users.
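A minimal sketch of the PCA weighting step follows: objective weights for the performance characteristics are taken as the squared loadings of the first principal component of the normalized response matrix, a common convention in PCA-weighted grey relational analysis. The response matrix below is invented for illustration.

```python
import numpy as np

# rows = experiments; cols = tensile, flexural, impact, density (normalized 0-1)
Z = np.array([[0.8, 0.7, 0.9, 0.6],
              [0.5, 0.9, 0.6, 0.7],
              [0.9, 0.4, 0.7, 0.8],
              [0.6, 0.8, 0.5, 0.9]])

cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                     # first principal component
weights = pc1**2 / np.sum(pc1**2)        # squared loadings sum to 1
print(weights)                           # relative importance per response
```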
Performance Optimization Control of ECH using Fuzzy Inference Application
NASA Astrophysics Data System (ADS)
Dubey, Abhay Kumar
Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing processes, provides controlled functional surface generation and fast material removal capabilities in a single operation. Multi-performance process optimization has become vital for utilizing the full potential of manufacturing processes to meet the challenging requirements placed on the surface quality, size, tolerances, and production rate of engineering components in this globally competitive scenario. This paper presents a strategy that integrates Taguchi matrix experimental design, analysis of variance, and a fuzzy inference system (FIS) to formulate a robust, practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using genetic-algorithm tuning of the FIS (GA-tuned FIS) and another using an adaptive network-based fuzzy inference system (ANFIS), have been evaluated for a multi-performance optimization case study of ECH. The experimental results confirm their potential for the wide range of machining conditions employed in ECH.
Hernandez, Wilmar
2007-01-01
This paper surveys recent applications of optimal signal processing techniques for improving the performance of mechanical sensors. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results. These results show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way forward. However, the switch from the traditional methods of designing automotive sensors to the new ones cannot be made overnight, because some open research issues remain to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
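As one concrete example of the optimal filtering such surveys contrast with classical fixed-gain filtering, here is a minimal scalar Kalman filter. The signal model, noise variances, and step count are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, q, r = 200, 1e-4, 0.05          # steps, process-noise var, sensor-noise var
truth = np.cumsum(rng.normal(0, np.sqrt(q), n))   # slowly drifting signal
meas = truth + rng.normal(0, np.sqrt(r), n)       # noisy sensor output

x, p = 0.0, 1.0                    # state estimate and its variance
est = np.empty(n)
for k in range(n):
    p += q                         # predict: variance grows by process noise
    gain = p / (p + r)             # optimal blend of prediction vs measurement
    x += gain * (meas[k] - x)      # update with the innovation
    p *= (1 - gain)
    est[k] = x

print(np.std(meas - truth), np.std(est - truth))  # filter should reduce error
```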
Process Optimization Assessment: Fort Leonard Wood, MO and Fort Carson, CO
2003-11-01
US Army Corps of Engineers, Engineer Research and Development Center. Process Optimization Assessment: Fort Leonard Wood, MO and Fort Carson, CO. Mike C.J. Lin and John Vavrin, Construction Engineering Research Laboratory, PO Box 9005... This work performed a Process Optimization Assessment (POA) on behalf of Fort Leonard Wood, MO and Fort Carson, CO to identify process, energy, and
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant because higher accuracy is required in machined components in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered, and its performance is evaluated using the black hole algorithm (BHA). BHA builds on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process, using a single objective at a time. The results obtained using BHA compare favorably with those of other metaheuristic algorithms attempted by previous researchers, such as the genetic algorithm (GA), artificial bee colony (ABC), and biogeography-based optimization (BBO).
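A minimal sketch of the BHA loop follows: the best candidate becomes the black hole, the remaining "stars" drift toward it, and stars crossing the event horizon are absorbed and replaced by fresh random candidates. The MRR model, its parameters, and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def mrr(x):
    """Hypothetical material-removal-rate model to maximize;
    x = [voltage, feed_rate, electrolyte_conc], scaled to [0, 1]."""
    v, f, c = x
    return v * f * (1.0 - 0.5 * (c - 0.6) ** 2)

dim, n_stars, iters = 3, 30, 200
stars = rng.random((n_stars, dim))
for _ in range(iters):
    fitness = np.array([mrr(s) for s in stars])
    best = fitness.argmax()
    bh = stars[best].copy()                            # best star = black hole
    stars += rng.random((n_stars, 1)) * (bh - stars)   # others drift toward it
    # Event horizon: stars that wander too close are absorbed and reborn
    radius = fitness[best] / fitness.sum()
    absorbed = np.linalg.norm(stars - bh, axis=1) < radius
    absorbed[best] = False                             # never absorb the black hole
    stars[absorbed] = rng.random((int(absorbed.sum()), dim))
print(bh, mrr(bh))
```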
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenhoover, W.A.; Stouffer, M.R.; Withum, J.A.
1994-12-01
The objective of this research project is to develop second-generation duct injection technology as a cost-effective SO2 control option for the 1990 Clean Air Act Amendments. Research is focused on the Advanced Coolside process, which has shown the potential for achieving the performance targets of 90% SO2 removal and 60% sorbent utilization. In Subtask 2.2, Design Optimization, process improvement was sought by optimizing sorbent recycle and by optimizing process equipment for reduced cost. The pilot plant recycle testing showed that 90% SO2 removal could be achieved at sorbent utilizations up to 75%. This testing also showed that the Advanced Coolside process has the potential to achieve very high removal efficiency (90 to greater than 99%). Two alternative contactor designs were developed, tested, and optimized through pilot plant testing; the improved designs will reduce process costs significantly, while maintaining the operability and performance essential to the process. Also, sorbent recycle handling equipment was optimized to reduce cost.
Kim, Baek-Chul; Park, S J; Cho, M S; Lee, Y; Nam, J D; Choi, H R; Koo, J C
2009-12-01
The present work delivers a systematic evaluation of the actuation efficiency of a nano-particle electrode conducting polymer actuator fabricated based on nitrile butadiene rubber (NBR). Attempts are made to maximize the mechanical functionality of the nano-particle electrode conducting polymer actuator, which can be driven in air. Because the actuator's conducting polymer, polypyrrole, is fabricated through a chemical oxidation polymerization process that may impose certain limitations on both the electrical and mechanical functionality of the actuator, a coordinated optimization study is necessary to maximize its performance. In this article, the actuation behavior of the nano-particle electrode polypyrrole conducting polymer is studied, and an optimization process for maximizing mechanical performance is performed.
Tchamna, Rodrigue; Lee, Moonyong
2018-01-01
This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
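The paper derives analytical design relations; as a rough illustration of the same goal, the sketch below instead picks PI gains by numerical optimization on a simulated closed loop of an unstable first-order plant, with quadratic penalties standing in for the three operational constraints. The plant, limits, and penalty weights are all invented.

```python
import numpy as np
from scipy.optimize import minimize

a, b, dt, T = 0.5, 1.0, 0.01, 10.0      # unstable pole at +0.5 (illustrative)
u_max, du_max, y_max = 2.0, 5.0, 1.4    # constraint levels (illustrative)

def closed_loop_cost(gains):
    kc, ki = gains
    x = integ = cost = 0.0
    u_prev = None
    for _ in range(int(T / dt)):
        e = 1.0 - x                     # unit setpoint step
        u = kc * e + ki * integ
        du = 0.0 if u_prev is None else (u - u_prev) / dt
        # tracking error plus penalties on the three operational constraints
        cost += dt * (e * e
                      + 100.0 * max(0.0, abs(u) - u_max) ** 2
                      + 100.0 * max(0.0, abs(du) - du_max) ** 2
                      + 100.0 * max(0.0, x - y_max) ** 2)
        integ += e * dt
        x += (a * x + b * u) * dt       # Euler step of the unstable plant
        u_prev = u
    return cost

res = minimize(closed_loop_cost, x0=[2.0, 1.0], method="Nelder-Mead")
print(res.x)   # PI gains that respect the constraints
```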
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control of the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
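A minimal sketch of the idea, using JAX (which can run the same code on a GPU) to differentiate a gate-fidelity objective through a piecewise-constant single-qubit evolution. The Hamiltonians, target gate, step count, and learning rate are illustrative; this is not the authors' implementation.

```python
import jax
import jax.numpy as jnp
from jax.scipy.linalg import expm

sx = jnp.array([[0.0, 1.0], [1.0, 0.0]], dtype=jnp.complex64)
sz = jnp.array([[1.0, 0.0], [0.0, -1.0]], dtype=jnp.complex64)
H0, Hc = 0.5 * sz, sx                    # drift and control Hamiltonians
U_target = sx                            # target: an X gate
n_steps, dt = 50, 0.1

def infidelity(pulse):
    U = jnp.eye(2, dtype=jnp.complex64)
    for c in pulse:                      # piecewise-constant control amplitudes
        U = expm(-1j * (H0 + c * Hc) * dt) @ U
    overlap = jnp.trace(U_target.conj().T @ U) / 2.0
    return 1.0 - jnp.abs(overlap) ** 2

grad_fn = jax.jit(jax.grad(infidelity))  # autodiff through the full evolution
pulse = 0.1 * jnp.ones(n_steps)
for _ in range(300):                     # plain gradient descent on the pulse
    pulse = pulse - 1.0 * grad_fn(pulse)
print(infidelity(pulse))
```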
Performance Management and Optimization of Semiconductor Design Projects
NASA Astrophysics Data System (ADS)
Hinrichs, Neele; Olbrich, Markus; Barke, Erich
2010-06-01
The semiconductor industry is characterized by fast technological change and small time-to-market windows. Improving productivity is the key factor in standing up to competitors and thus successfully persisting in the market. In this paper, a Performance Management System for analyzing, optimizing, and evaluating chip design projects is presented. A task graph representation is used to optimize the design process with regard to time, cost, and workload of resources. Key Performance Indicators are defined in the main areas of cost, profit, resources, process, and technical output to appraise the project.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function; nonetheless, industry still uses traditional techniques to obtain those values. Lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple, easy-to-implement optimization technique that gives accurate results quickly.
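For reference, a minimal particle swarm optimization loop is sketched below, minimizing a stand-in roughness model. In the system described above the objective would be the trained ELM surrogate; the model, bounds, and PSO constants here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def roughness(x):
    """Hypothetical Ra model; x = [cutting_speed, feed, depth] in [0, 1]."""
    return (x[0] - 0.7) ** 2 + 2 * (x[1] - 0.3) ** 2 + 0.5 * (x[2] - 0.5) ** 2

n, dim, iters = 20, 3, 100
pos = rng.random((n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.apply_along_axis(roughness, 1, pos)
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration constants
for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = np.apply_along_axis(roughness, 1, pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()
print(gbest)                           # optimal cutting parameters found
```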
Optimal design of leak-proof SRAM cell using MCDM method
NASA Astrophysics Data System (ADS)
Wang, Qi; Kang, Sung-Mo
2003-04-01
As deep-submicron CMOS technology advances, on-chip cache has become a bottleneck for microprocessor performance. Meanwhile, it also occupies a large percentage of processor area and consumes considerable power. SRAM speed, power, and area are mutually conflicting requirements that are difficult to satisfy simultaneously. Many existing leakage-suppression techniques have been proposed, but they limit the circuit's performance. We apply a Multi-Criteria Decision Making strategy to perform a minimum delay-power-area optimization of the SRAM circuit under certain constraints. Based on an integrated device- and circuit-level approach, we search for a process that yields a targeted composite performance. Given the huge simulation workload involved in seeking the optimal design, most of this process is automated. With varying emphasis placed on delay, power, or area, different optimal SRAM designs are derived and a gate-oxide thickness scaling limit is projected. The results indicate that a better composite performance could be achieved with a thinner oxide. Under the derived optimal oxide thickness, static leakage power contributes less than 1% of the total power dissipation.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems involve multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), which is considered a Neuro-Response Surface Method (NRSM). The optimization is performed on the generated response surface by the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (a floating offshore wind turbine substructure considering hydrodynamic performance, and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems.
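At the heart of NSGA-II is the Pareto dominance relation. A minimal non-dominated filter is sketched below on an invented two-objective (both minimized) example; NSGA-II layers fast sorting, crowding distance, and genetic operators on top of this relation.

```python
import numpy as np

def nondominated(F):
    """Boolean mask of rows of F (objectives, minimized) on the Pareto front."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # row i is dominated if some row is <= everywhere and < somewhere
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(nondominated(F))   # [ True  True False  True ]
```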
An Integrated Framework for Parameter-based Optimization of Scientific Workflows.
Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel
2009-01-01
Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
Optimization of A(2)O BNR processes using ASM and EAWAG Bio-P models: model performance.
El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L
2013-12-01
This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A(2)O) process. The formulated model simulates removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) and a Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effects of influent conditions, effluent limits, and selected parameters of various optimal solutions, with the following results: an increase of influent temperature from 10 degrees C to 25 degrees C decreases the annual cost by about 8.5%; an increase of influent flow from 500 to 2500 m(3)/h triples the annual cost; the A(2)O BNR system is more sensitive to variations in influent ammonia than phosphorus concentration; and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.
Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes
Dobos, László; Király, András; Abonyi, János
2012-01-01
Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
Modeling of pulsed propellant reorientation
NASA Technical Reports Server (NTRS)
Patag, A. E.; Hochstein, J. I.; Chato, D. J.
1989-01-01
Optimization of the propellant reorientation process can provide increased payload capability and extend the service life of spacecraft. The use of pulsed propellant reorientation to optimize the reorientation process is proposed. The ECLIPSE code was validated for modeling the reorientation process and is used to study pulsed reorientation in small-scale and full-scale propellant tanks. A dimensional analysis of the process is performed and the resulting dimensionless groups are used to present and correlate the computational predictions for reorientation performance.
Optimal diabatic dynamics of Majorana-based quantum gates
NASA Astrophysics Data System (ADS)
Rahmani, Armin; Seradjeh, Babak; Franz, Marcel
2017-08-01
In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. In the adiabatic regime, with timescales set by the inverse gap of the system, the errors can be made arbitrarily small by performing the process more slowly. To enhance the performance of quantum information processing with Majorana zero modes, we apply the theory of optimal control to the diabatic dynamics of Majorana-based qubits. While we sacrifice complete topological protection, we impose constraints on the optimal protocol to take advantage of the nonlocal nature of topological information and increase the robustness of our gates. By using Pontryagin's maximum principle, we show that robust equivalent gates to perfect adiabatic braiding can be implemented in finite times through optimal pulses. In our implementation, modifications to the device Hamiltonian are avoided. Focusing on thermally isolated systems, we study the effects of calibration errors and external white and 1/f (pink) noise on Majorana-based gates. While a noise-induced antiadiabatic behavior, where a slower process creates more diabatic excitations, prohibits indefinite enhancement of the robustness of the adiabatic scheme, our fast optimal protocols exhibit remarkable stability to noise and have the potential to significantly enhance the practical performance of Majorana-based information processing.
NASA Technical Reports Server (NTRS)
Welstead, Jason
2014-01-01
This research focused on incorporating stability and control into a multidisciplinary design optimization on a Boeing 737-class advanced concept called the D8.2b. A new method of evaluating aircraft handling performance using quantitative evaluation of the system response to disturbances, including perturbations, continuous turbulence, and discrete gusts, is presented. A multidisciplinary design optimization was performed using the D8.2b transport aircraft concept. The configuration was optimized for minimum fuel burn using a design range of 3,000 nautical miles. Optimization cases were run using fixed tail volume coefficients, static trim constraints, and static trim and dynamic response constraints. A Cessna 182T model was used to test the various dynamic analysis components, ensuring the analysis behaved as expected. Results of the optimizations show that including stability and control in the design process drastically alters the optimal design, indicating that stability and control should be included in conceptual design to avoid system-level penalties later in the design process.
Bioreactor performance: a more scientific approach for practice.
Lübbert, A; Bay Jørgensen, S
2001-02-13
In practice, the performance of a biochemical conversion process, i.e. the bioreactor performance, is essentially determined by the benefit/cost ratio. The benefit is generally defined in terms of the amount of the desired product produced and its market price. Cost reduction is the major objective in biochemical engineering. There are two essential engineering approaches to minimizing the cost of creating a particular product in an existing plant. One is to find a control path or operational procedure that optimally uses the dynamics of the process and copes with the many constraints restricting production. The other is to remove or lower the constraints by constructive improvements of the equipment and/or the microorganisms. This paper focuses on the first approach, dealing with optimization of the operational procedure and the measures by which one can ensure that the process adheres to the predetermined path. In practice, feedforward control is the predominant control mode applied. However, as it is frequently inadequate for optimal performance, feedback control may also be employed. Relevant aspects of such performance optimization are discussed.
Optical performance of random anti-reflection structured surfaces (rARSS) on spherical lenses
NASA Astrophysics Data System (ADS)
Taylor, Courtney D.
Random anti-reflection structured surfaces (rARSS) have been reported to improve the transmittance of optical-grade fused silica planar substrates to values greater than 99%. These textures are fabricated directly on the substrates using reactive-ion/inductively-coupled plasma etching (RIE/ICP) techniques and often result in transmitted spectra with no measurable interference effects (fringes) over a wide range of wavelengths. The RIE/ICP processes used to etch the rARSS are anisotropic and thus well suited for planar components. The improvement in spectral transmission has been found to be independent of the optical incidence angle for values from 0° to +/-30°. Qualifying and quantifying rARSS performance on curved substrates, such as convex lenses, is required to optimize the fabrication of the desired AR effect on optical-power elements. In this work, rARSS was fabricated on fused silica plano-convex (PCX) and plano-concave (PCV) lenses using a planar-substrate-optimized RIE process to maximize optical transmission in the range from 500 to 1100 nm. An additional set of lenses was etched in a non-optimized ICP process to provide additional comparisons. Results are presented from optical transmission and beam propagation tests (optimized lenses only) of rARSS lenses for both TE and TM incident polarizations at a wavelength of 633 nm and over a 70° full field of view in both singlet and doublet configurations. These results suggest that optimization of the fabrication process is not required, mainly due to the wide angle-of-incidence AR tolerance of the rARSS lenses. Non-optimized-recipe lenses showed low transmission enhancement, confirming the need to optimize etch recipes prior to process transfer to PCX/PCV lenses. Beam propagation tests indicated no major beam degradation through the optimized lens elements. Scanning electron microscopy (SEM) images confirmed different structures on optimized and non-optimized samples, and indicated isotropically oriented surface structures on both types of lenses.
Additive manufacturing: Toward holistic design
Jared, Bradley H.; Aguilo, Miguel A.; Beghini, Lauren L.; ...
2017-03-18
Here, additive manufacturing offers unprecedented opportunities to design complex structures optimized for performance envelopes inaccessible under conventional manufacturing constraints. Additive processes also promote realization of engineered materials with microstructures and properties that are impossible via traditional synthesis techniques. Enthused by these capabilities, optimization design tools have experienced a recent revival. The current capabilities of additive processes and optimization tools are summarized briefly, while an emerging opportunity is discussed to achieve a holistic design paradigm whereby computational tools are integrated with stochastic process and material awareness to enable the concurrent optimization of design topologies, material constructs and fabrication processes.
A design optimization process for Space Station Freedom
NASA Technical Reports Server (NTRS)
Chamberlain, Robert G.; Fox, George; Duquette, William H.
1990-01-01
The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.
Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J
2017-07-14
In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available microliter-sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench scale and the clinical manufacturing scale. Further, all measured product quality attributes are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model for purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
When teams shift among processes: insights from simulation and optimization.
Kennedy, Deanna M; McComb, Sara A
2014-09-01
This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zacarro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved, (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved, (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
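A minimal sketch of the Bayesian optimization step follows, using scikit-optimize's gp_minimize over the two pre-processing parameters named above. The objective here is a smooth stand-in; in the study it would be the cross-validated error of the age-prediction model retrained at each (voxel size, smoothing kernel) setting.

```python
from skopt import gp_minimize

def cv_error(params):
    voxel, fwhm = params
    # Hypothetical smooth response surface for cross-validated MAE (years);
    # in practice this would train and validate the SVM pipeline.
    return 5.0 + 0.1 * (voxel - 3.7) ** 2 + 0.05 * (fwhm - 3.7) ** 2

result = gp_minimize(
    cv_error,
    dimensions=[(1.0, 12.0),   # resampled voxel size (mm)
                (0.0, 8.0)],   # smoothing kernel FWHM (mm)
    n_calls=30, random_state=0)
print(result.x, result.fun)    # parameters minimizing predicted MAE
```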
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048
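A minimal sketch of the local, single-parameter sensitivity step follows, using normalized central differences around a nominal operating point. The tensile-strength model, parameter names, and nominal values are invented for illustration.

```python
import numpy as np

def tensile_strength(p):
    """Hypothetical winding-strength model; p = [temp, tension, speed]."""
    temp, tension, speed = p
    return (900.0 - 0.02 * (temp - 320.0) ** 2
            - 0.5 * (tension - 80.0) ** 2 / 80.0 - 2.0 * speed)

p0 = np.array([320.0, 80.0, 10.0])        # nominal winding parameters
names = ["hot-roller temp", "tape tension", "winding speed"]
for i, name in enumerate(names):
    h = 0.01 * p0[i]                      # 1% perturbation
    p_hi, p_lo = p0.copy(), p0.copy()
    p_hi[i] += h
    p_lo[i] -= h
    # central-difference sensitivity, normalized to be scale-free
    s = ((tensile_strength(p_hi) - tensile_strength(p_lo)) / (2 * h)
         * p0[i] / tensile_strength(p0))
    print(f"{name}: relative sensitivity = {s:+.4f}")
```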
Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N
2005-06-20
The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). To optimize the process as well as to evaluate the influence of different factors on it, the uncontrollable (noise) factors have been considered. The Taguchi methodology adopting a dynamic approach is the first of its kind for studying anaerobic process evaluation and optimization. The designed experimental methodology consisted of four phases--planning, conducting, analysis, and validation--connected sequence-wise to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, were considered at two levels for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effects of individual factors, the interactions between individual factors, and the signal-to-noise (S/N) ratio. Attempts were also made to achieve optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In multiple-factor interaction studies, biodegradability with other factors, such as temperature, pH, and sulfate, showed maximum influence over process performance. The optimum conditions for efficient performance of the anaerobic system in treating complex wastewater, obtained by considering the dynamic (noise) factors, are a higher organic loading rate of 3.5 kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), a mesophilic temperature range (40 degrees C), and a low sulfate concentration (700 mg/L). The optimization resulted in enhanced anaerobic performance (56.7%), from a substrate degradation rate (SDR) of 1.99 to 3.13 kg COD/m3 day. Considering the obtained optimum factors, further validation experiments were carried out, which showed enhanced process performance (3.04 kg COD/m3 day from 1.99 kg COD/m3 day), accounting for a 52.13% improvement with the optimized process conditions. The proposed method facilitated a systematic mathematical approach to understanding the complex multi-species anaerobic process treating complex chemical wastewater while considering the uncontrollable factors. Copyright (c) 2005 Wiley Periodicals, Inc.
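The S/N analysis used above can be illustrated with a short Python sketch; the larger-the-better formula is the standard Taguchi definition, while the replicate values and the factor-to-run assignment are invented for illustration.

    import numpy as np

    # Larger-the-better signal-to-noise ratio used in Taguchi analysis:
    # S/N = -10 * log10( mean(1 / y_i^2) ), with y_i the replicate responses.
    def sn_larger_better(y):
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # Hypothetical substrate degradation rates (kg COD/m3 day) for four runs,
    # each measured under two noise conditions (VFA and alkalinity levels).
    runs = np.array([[1.8, 2.1], [2.6, 2.9], [3.0, 3.2], [2.2, 2.4]])
    sn = np.array([sn_larger_better(r) for r in runs])

    # Main effect of a two-level factor (e.g. OLR) assigned to runs 1-2 vs 3-4.
    print("level-1 mean S/N:", sn[:2].mean(), "level-2 mean S/N:", sn[2:].mean())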
Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition
NASA Technical Reports Server (NTRS)
Hui, A.; Blosiu, J. O.; Wiberg, D. V.
1998-01-01
Taguchi Methods of Robust Design present a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the controllable process parameters to the realization of the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.
Neuroimaging markers associated with maintenance of optimal memory performance in late-life.
Dekhtyar, Maria; Papp, Kathryn V; Buckley, Rachel; Jacobs, Heidi I L; Schultz, Aaron P; Johnson, Keith A; Sperling, Reisa A; Rentz, Dorene M
2017-06-01
Age-related memory decline has been well-documented; however, some individuals reach their 8th-10th decade while maintaining strong memory performance. We sought to determine which demographic and biomarker factors differentiated top memory performers (aged 75+, top 20% for memory) from their peers, and whether top memory performance was maintained over 3 years. Clinically normal adults (n=125, CDR=0; age: 79.5±3.57 years) from the Harvard Aging Brain Study underwent cognitive testing and neuroimaging (amyloid PET, MRI) at baseline and 3-year follow-up. Participants were grouped into Optimal (n=25) vs. Typical (n=100) performers using performance on 3 challenging memory measures. Non-parametric tests were used to compare groups. There were no differences in age, sex, or education between Optimal vs. Typical performers. The Optimal group performed better in Processing Speed (p=0.016) and Executive Functioning (p<0.001). Optimal performers had larger hippocampal volumes at baseline compared with Typical performers (p=0.027) but no differences in amyloid burden (p=0.442). Twenty-three of the 25 Optimal performers had longitudinal data; 16 maintained top memory performance while 7 declined. Non-Maintainers additionally declined in Executive Functioning but not Processing Speed. Longitudinally, there were no hippocampal volume differences between Maintainers and Non-Maintainers; however, Non-Maintainers exhibited higher amyloid burden at baseline than Maintainers (p=0.008). Excellent memory performance in late life does not guarantee protection against cognitive decline. Those who maintain an optimal memory into the 8th and 9th decades may have lower levels of AD pathology. Copyright © 2017. Published by Elsevier Ltd.
Linear-Quadratic-Gaussian Regulator Developed for a Magnetic Bearing
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.
2002-01-01
Linear-Quadratic-Gaussian (LQG) control is a modern state-space technique for designing optimal dynamic regulators. It enables us to trade off regulation performance and control effort, and to take into account process and measurement noise. The Structural Mechanics and Dynamics Branch at the NASA Glenn Research Center has developed an LQG control for a fault-tolerant magnetic bearing suspension rig to optimize system performance and to reduce the sensor and processing noise. The LQG regulator consists of an optimal state-feedback gain and a Kalman state estimator. The first design step is to seek a state-feedback law that minimizes the cost function of regulation performance, which is measured by a quadratic performance criterion with user-specified weighting matrices, and to define the tradeoff between regulation performance and control effort. The next design step is to derive a state estimator using a Kalman filter because the optimal state feedback cannot be implemented without full state measurement. Since the Kalman filter is an optimal estimator when dealing with Gaussian white noise, it minimizes the asymptotic covariance of the estimation error.
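A minimal Python sketch of the two LQG design steps described above, using SciPy's Riccati solver; the plant matrices are illustrative placeholders, not the magnetic-bearing model.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Generic plant x' = Ax + Bu + w, y = Cx + v (illustrative numbers only).
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q = np.diag([10.0, 1.0])   # state weighting (regulation performance)
    R = np.array([[1.0]])      # control-effort weighting
    W = np.diag([0.1, 0.1])    # process-noise covariance
    V = np.array([[0.01]])     # measurement-noise covariance

    # Step 1: optimal state-feedback gain, u = -K x_hat.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    # Step 2: Kalman estimator gain, obtained from the dual Riccati equation.
    S = solve_continuous_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(V)
    print("LQR gain K =", K, "\nKalman gain L =", L)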
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalysts developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Accomplishments are reported for Tasks 2 through 5.
Optimal nonlinear information processing capacity in delay-based reservoir computers
NASA Astrophysics Data System (ADS)
Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo
2015-09-01
Reservoir computing is a recently introduced, brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of the associated training scheme, but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
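A toy discrete-time caricature of a delay-based reservoir in Python, assuming a single tanh node, a random input mask creating N virtual nodes, and a ridge-regression readout; eta and gamma play the role of the architecture parameters whose tuning the paper addresses. This is a sketch for intuition, not the authors' physical implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    N, eta, gamma = 50, 0.5, 0.8          # virtual nodes, input gain, feedback gain
    mask = rng.uniform(-1, 1, N)          # random input mask

    def reservoir_states(u):
        x, states = np.zeros(N), []
        for s in u:
            prev = x.copy()               # state one delay loop ago
            for i in range(N):
                x[i] = np.tanh(eta * mask[i] * s + gamma * prev[i])
            states.append(x.copy())
        return np.array(states)

    # Task: one-step-ahead prediction of a noisy sine wave.
    u = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.standard_normal(600)
    X, y = reservoir_states(u[:-1]), u[1:]
    w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)   # ridge readout
    print("train NMSE:", np.mean((X @ w - y) ** 2) / np.var(y))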
Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani
2015-01-01
The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm, named JA-ABC5, to enhance convergence speed and improve the ability to reach the global optimum by balancing the exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase exploitation. Besides that, modified mutation equations have also been introduced in the employed- and onlooker-bee phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested on the reactive power optimization problem. The results clearly show that the newly proposed algorithm outperforms the compared algorithms in terms of convergence speed and global optimum achievement.
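For orientation, here is the standard ABC skeleton (employed, onlooker, and scout phases) in Python, minimizing a benchmark sphere function; JA-ABC5's additional stages and modified mutation equations are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):                               # benchmark objective (f >= 0)
        return float(np.sum(x ** 2))

    SN, D, limit, max_iter = 20, 10, 30, 200     # sources, dims, scout limit
    lo, hi = -5.0, 5.0
    X = rng.uniform(lo, hi, (SN, D))             # food-source positions
    f = np.array([sphere(x) for x in X])
    trials = np.zeros(SN, dtype=int)

    def try_move(i):
        k = rng.choice([j for j in range(SN) if j != i])
        j = rng.integers(D)
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        v = np.clip(v, lo, hi)
        fv = sphere(v)
        if fv < f[i]:                            # greedy selection
            X[i], f[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(SN):                      # employed-bee phase
            try_move(i)
        fit = 1.0 / (1.0 + f)                    # fitness for roulette selection
        p = fit / fit.sum()
        for _ in range(SN):                      # onlooker-bee phase
            try_move(int(rng.choice(SN, p=p)))
        worn = int(np.argmax(trials))            # scout phase
        if trials[worn] > limit:
            X[worn] = rng.uniform(lo, hi, D)
            f[worn], trials[worn] = sphere(X[worn]), 0

    print("best value:", f.min())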
EUV process establishment through litho and etch for N7 node
NASA Astrophysics Data System (ADS)
Kuwahara, Yuhei; Kawakami, Shinichiro; Kubota, Minoru; Matsunaga, Koichi; Nafus, Kathleen; Foubert, Philippe; Mao, Ming
2016-03-01
Extreme ultraviolet lithography (EUVL) technology is steadily approaching high-volume manufacturing for the 16 nm half-pitch node and beyond. However, some challenges remain, for example scanner availability and resist performance (resolution, CD uniformity (CDU), LWR, etch behavior, and so on). Advanced EUV patterning on the ASML NXE:3300/CLEAN TRACK LITHIUS Pro Z EUV litho cluster has been launched at imec, allowing finer-pitch patterns for L/S and CH. Tokyo Electron Ltd. and imec are continuously collaborating to develop manufacturing-quality POR processes for the NXE:3300. TEL's technologies to enhance CDU, defectivity, and LWR/LER can improve patterning performance. The patterning is characterized and optimized in both litho and etch for a more complete understanding of the final patterning performance. This paper reports on post-litho CDU improvement by litho process optimization, and on post-etch LWR reduction by litho and etch process optimization.
Increase of Gas-Turbine Plant Efficiency by Optimizing Operation of Compressors
NASA Astrophysics Data System (ADS)
Matveev, V.; Goriachkin, E.; Volkov, A.
2018-01-01
The article presents an optimization method for improving the working process of axial compressors in gas turbine engines. The developed method automatically searches for the best compressor blade geometry using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. The compressor parameters were calculated for the working and stall points of its performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and three-stage low-pressure compressors. As a result of the optimization, improved efficiency was achieved for all investigated compressors.
A Data-Driven Solution for Performance Improvement
NASA Technical Reports Server (NTRS)
2002-01-01
Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user-friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.
Design of optimal buffer layers for CuInGaSe2 thin-film solar cells (Conference Presentation)
NASA Astrophysics Data System (ADS)
Lordi, Vincenzo; Varley, Joel B.; He, Xiaoqing; Rockett, Angus A.; Bailey, Jeff; Zapalac, Geordie H.; Mackie, Neil; Poplavskyy, Dmitry; Bayman, Atiye
2016-09-01
Optimizing the buffer layer in manufactured thin-film PV is essential to maximizing device efficiency. Here, we describe a combined synthesis, characterization, and theory effort to design optimal buffers based on the (Cd,Zn)(O,S) alloy system for CIGS devices. Optimization of the buffer composition and absorber/buffer interface properties in light of several competing requirements for maximum device efficiency was performed, along with process variations to control the film and interface quality. The most relevant buffer properties controlling performance include the band gap, the conduction band offset with the absorber, dopability, interface quality, and film crystallinity. Control of an all-PVD deposition process enabled variation of buffer composition, crystallinity, doping, and the quality of the absorber/buffer interface. Analytical electron microscopy was used to characterize film composition and morphology, while hybrid density functional theory was used to predict optimal compositions and growth parameters based on computed material properties. Process variations were developed to produce layers with controlled crystallinity, varying from amorphous to fully epitaxial, depending primarily on oxygen content. Elemental intermixing between buffer and absorber, particularly involving Cd and Cu, is also controlled and significantly affects device performance. Secondary phase formation at the interface is observed under some conditions and may be detrimental depending on the morphology. Theoretical calculations suggest optimal composition ranges for the buffer based on a suite of computed properties and drive process optimizations connected with observed film properties. Prepared by LLNL under Contract DE-AC52-07NA27344.
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés
2015-09-28
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
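A hedged illustration in Python: a candidate field built as a linear combination of linearly chirped Gaussian pulses, plus a PCA step of the kind applied to the recorded population history; all numbers, including the stand-in history data, are invented.

    import numpy as np

    t = np.linspace(-5, 5, 512)

    def chirped(t, a, t0, w, b):
        # Linearly chirped Gaussian: instantaneous frequency grows linearly in t.
        return a * np.exp(-((t - t0) / w) ** 2) * np.cos(b * t * t)

    # One candidate field as a combination of three chirped pulses; the 12
    # numbers form the "chromosome" a genetic algorithm would evolve.
    genes = np.array([1.0, -1.0, 1.5, 0.8, 0.5, 0.0, 2.0, -0.4, 0.3, 1.2, 1.0, 0.2])
    E = sum(chirped(t, *genes[4 * i:4 * i + 4]) for i in range(3))
    print("field energy:", np.sum(E ** 2))

    # PCA over the recorded GA population history (random placeholder data
    # here) to extract dominant directions, echoing the paper's post-analysis.
    history = np.random.default_rng(2).standard_normal((200, 12))
    centered = history - history.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    print("variance captured by first component:", s[0] ** 2 / (s ** 2).sum())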
Fuel consumption optimization for smart hybrid electric vehicle during a car-following process
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Xiangyu; Song, Jian
2017-03-01
Hybrid electric vehicles (HEVs) offer great potential to save energy and reduce emissions, while smart vehicles bring convenience and safety to drivers. By combining these two technologies, vehicles may achieve excellent performance in terms of dynamics, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as the platform in this paper to study a car-following process with optimized fuel consumption. The whole process is a multi-objective optimal control problem, whose solution is not simply an energy management strategy (EMS) added to an adaptive cruise control (ACC), but a deep fusion of the two methods. The problem involves more constraints, objectives, and system states, which may result in a larger computational burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed, and search heuristics are adopted in the receding-horizon optimization to reduce the computational burden. Simulations are carried out, and the results indicate that the fuel consumption of the proposed method is lower than that of the ACC+EMS method while car-following performance is maintained.
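A minimal receding-horizon sketch in Python of the car-following idea, with a quadratic stand-in for the fuel cost and a constant-speed lead vehicle; the paper's actual EMS/ACC fusion and its search accelerations are not reproduced.

    import numpy as np
    from scipy.optimize import minimize

    dt, H = 0.5, 10                        # time step (s), prediction horizon
    v_lead, d_ref = 15.0, 20.0             # lead-vehicle speed (m/s), desired gap (m)

    def rollout(u_seq, d, v):
        # Simulate gap d and own speed v under acceleration sequence u_seq;
        # u^2 is a crude stand-in for instantaneous fuel use.
        cost = 0.0
        for u in u_seq:
            d += (v_lead - v) * dt
            v += u * dt
            cost += (d - d_ref) ** 2 + 0.1 * (v - v_lead) ** 2 + 1.0 * u ** 2
        return cost

    d, v = 30.0, 10.0
    for step in range(40):                 # receding-horizon loop
        res = minimize(rollout, np.zeros(H), args=(d, v),
                       bounds=[(-3.0, 3.0)] * H, method="L-BFGS-B")
        u0 = res.x[0]                      # apply only the first control move
        d += (v_lead - v) * dt
        v += u0 * dt
    print(f"final gap {d:.1f} m, final speed {v:.1f} m/s")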
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With the growth of cloud services and technology, geographically distributed data centers increasingly store large amounts of data. Analysis of geo-distributed data is required in various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation, and communication, the key ones being time efficiency, cost minimization, and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques like MapReduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE, and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving quality of service and reducing computation and communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.
2012-03-13
Enormous military and commercial interest exists in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates the design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. The design and development of a portable TE power system using a JP-8 combustor as a high-temperature heat source are explored; optimal process flows depend on efficient heat generation, transfer, and recovery within the system. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD(TM) with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL(TM) software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper discusses this simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den
Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testing and texture analysis, and efficiently implemented in a metamodel-based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.
A Complete Procedure for Predicting and Improving the Performance of HAWT's
NASA Astrophysics Data System (ADS)
Al-Abadi, Ali; Ertunç, Özgür; Sittig, Florian; Delgado, Antonio
2014-06-01
A complete procedure for predicting and improving the performance of horizontal axis wind turbines (HAWTs) has been developed. The first process predicts the power extracted by the turbine and the resulting rotor torque, which should be identical to that of the drive unit. The BEM method and a developed post-stall treatment for resolving stall-regulated HAWTs are incorporated in the prediction. For that purpose, a modified stall-regulated prediction model, which can predict HAWT performance over the operating range of oncoming wind velocity, is derived from existing models. The model involves radius and chord, making it more general for predicting the performance of HAWTs of different scales and rotor shapes. The second process modifies the rotor shape through an optimization process, which can be applied to any existing HAWT, to improve its performance. A gradient-based optimization is used to adjust the chord and twist angle distributions of the rotor blade to increase the power extraction while keeping the drive torque constant, so that the same drive unit can be kept. The final process is testing the modified turbine to predict its enhanced performance. The procedure is applied to the NREL Phase VI 10 kW turbine as a baseline. The study has proven the applicability of the developed model in predicting the performance of the baseline as well as the optimized turbine. In addition, the optimization method has shown that the power coefficient can be increased while keeping the same design rotational speed.
NASA Astrophysics Data System (ADS)
Wang, Hongyan
2017-04-01
This paper addresses the waveform optimization problem of improving the detection performance of multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) radar-based space-time adaptive processing (STAP) in complex environments. By maximizing the output signal-to-interference-plus-noise ratio (SINR), the waveform optimization problem for improving the detection performance of STAP, subject to the constant modulus constraint, is derived. To tackle the resulting nonlinear and complicated optimization problem, a diagonal-loading-based method is proposed to reformulate it as a semidefinite programming problem, which can then be solved very efficiently. The optimized waveform is thereby obtained to maximize the output SINR of MIMO-OFDM such that the detection performance of STAP is improved. Simulation results show that the proposed method improves the output SINR detection performance considerably compared with uncorrelated waveforms and the existing MIMO-based STAP method.
Performance Review of Harmony Search, Differential Evolution and Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Mohan Pandey, Hari
2017-08-01
Metaheuristic algorithms are effective in the design of intelligent systems. These algorithms are widely applied to solve complex optimization problems, including image processing, big data analytics, language processing, pattern recognition, and others. This paper presents a performance comparison of three metaheuristic algorithms, namely Harmony Search, Differential Evolution, and Particle Swarm Optimization. The three algorithms originate from different branches of metaheuristics yet share a common objective. Standard benchmark functions are used for the simulation, and statistical tests are conducted to draw conclusions about performance. The key motivation for this research is to categorize the computational capabilities of these algorithms, which may be useful to researchers.
NASA Astrophysics Data System (ADS)
Shahbudin, S. N. A.; Othman, M. H.; Amin, Sri Yulis M.; Ibrahim, M. H. I.
2017-08-01
This article reviews the optimization of the metal injection molding and microwave sintering processes for tungsten cemented carbide produced by metal injection molding. In the work reviewed, the process parameters for metal injection molding were optimized using the Taguchi method. Taguchi methods have been widely used in engineering analysis to optimize performance characteristics through the setting of design parameters. Microwave sintering is a process generally used in powder metallurgy in place of the conventional method. It has characteristics such as an accelerated heating rate, a shortened processing cycle, high energy efficiency, a fine and homogeneous microstructure, and enhanced mechanical performance, which are beneficial for preparing nanostructured cemented carbides in metal injection molding. Besides that, as an advanced and promising technology, metal injection molding has proven capable of producing cemented carbides. Cemented tungsten carbide hard metal has been used widely in various applications due to its desirable combination of mechanical, physical, and chemical properties. Moreover, common defects in metal injection molding and applications of microwave sintering itself are also discussed in this paper.
Strengthening the revenue cycle: a 4-step method for optimizing payment.
Clark, Jonathan J
2008-10-01
Four steps for enhancing the revenue cycle to ensure optimal payment are: (1) establish key performance indicator dashboards in each department that compare current with targeted performance; (2) create proper organizational structures for each department; (3) ensure that high-performing leaders are hired in all management and supervisory positions; and (4) implement efficient processes in underperforming operations.
Optimization of High-Dimensional Functions through Hypercube Evaluation
Abiyev, Rahib H.; Tunay, Mustafa
2015-01-01
A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intense stochastic search method based on the evaluation and optimization of a hypercube, called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes an initial solution and evaluates the solutions in a given hypercube. The displacement-shrink process determines the displacement and evaluates the objective functions using new points, and the searching space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for the optimization of functions of 1000, 5000, or even 10000 dimensions. Comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low- and high-dimensional functions. PMID:26339237
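A loose Python reading of the three HO phases on the sphere function; the sampling, displacement, and shrink rules below are simplified assumptions, not the published update rules.

    import numpy as np

    rng = np.random.default_rng(3)

    def sphere(x):
        return np.sum(x ** 2, axis=1)

    dim, n_pts = 1000, 200
    center, half = np.zeros(dim), 5.0
    for it in range(100):
        # Initialization/evaluation: sample and score points in the hypercube.
        pts = center + rng.uniform(-half, half, (n_pts, dim))
        vals = sphere(pts)
        # Displacement-shrink: move the center to the best point found.
        center = pts[np.argmin(vals)]
        # Searching space: shrink the hypercube for the next iteration.
        half *= 0.95

    print("best value:", sphere(center[None, :])[0])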
NASA Astrophysics Data System (ADS)
Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng
2016-09-01
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of the inner flow between the original pump and the optimized one illustrates the improvement in performance. The optimization process can provide a useful reference for performance improvement of other pumps, and even for the reduction of pressure fluctuations.
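The sampling step can be sketched in a few lines of Python with SciPy's quasi-Monte Carlo module (plain LHS here as an approximation of the "optimal" variant); the variable ranges are illustrative, not the paper's.

    import numpy as np
    from scipy.stats import qmc

    # Latin hypercube sample of the three impeller design variables
    # (requires SciPy >= 1.7; ranges below are invented for illustration).
    sampler = qmc.LatinHypercube(d=3, seed=0)
    unit = sampler.random(n=30)
    lower = [10.0, 20.0, 90.0]     # b2 (mm), beta2 (deg), phi (deg)
    upper = [16.0, 35.0, 130.0]
    designs = qmc.scale(unit, lower, upper)
    print(designs[:3])             # candidate impellers to evaluate in CFD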
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem, limits its wide application to the engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve optimization efficiency for industrial dynamic processes; the costate gradient formula is employed, and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Optimization of polymer electrolyte membrane fuel cell flow channels using a genetic algorithm
NASA Astrophysics Data System (ADS)
Catlin, Glenn; Advani, Suresh G.; Prasad, Ajay K.
The design of the flow channels in PEM fuel cells directly impacts the transport of reactant gases to the electrodes and affects cell performance. This paper presents results from a study to optimize the geometry of the flow channels in a PEM fuel cell. The optimization process implements a genetic algorithm to rapidly converge on the channel geometry that provides the highest net power output from the cell. In addition, this work implements a method for the automatic generation of parameterized channel domains that are evaluated for performance using a commercial computational fluid dynamics package from ANSYS. The software package includes GAMBIT as the solid modeling and meshing software, the solver FLUENT, and a PEMFC Add-on Module capable of modeling the relevant physical and electrochemical mechanisms that describe PEM fuel cell operation. The result of the optimization process is a set of optimal channel geometry values for the single-serpentine channel configuration. The performance of the optimal geometry is contrasted with a sub-optimal one by comparing contour plots of current density, oxygen and hydrogen concentration. In addition, the role of convective bypass in bringing fresh reactant to the catalyst layer is examined in detail. The convergence to the optimal geometry is confirmed by a bracketing study which compares the performance of the best individual to those of its neighbors with adjacent parameter values.
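A compact Python sketch of the GA loop, with a smooth placeholder standing in for the expensive FLUENT evaluation of net power; the bounds, population sizes, and fitness surface are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(4)

    # Placeholder for the real objective: net power from the CFD model of a
    # single-serpentine cell as a function of channel width and depth (mm).
    def net_power(genome):
        w, d = genome
        return -((w - 1.1) ** 2 + 0.5 * (d - 0.8) ** 2)   # peak at w=1.1, d=0.8

    bounds = np.array([[0.5, 2.0], [0.3, 1.5]])
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], (24, 2))
    for gen in range(60):
        fit = np.array([net_power(g) for g in pop])
        parents = pop[np.argsort(fit)[-8:]]                # truncation selection
        kids = []
        for _ in range(16):
            a, b = parents[rng.integers(8, size=2)]
            child = np.where(rng.random(2) < 0.5, a, b)    # uniform crossover
            child += 0.02 * rng.standard_normal(2)         # Gaussian mutation
            kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([parents, kids])

    print("best geometry:", pop[np.argmax([net_power(g) for g in pop])])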
Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.
Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas
2011-06-24
This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise and provides the capability to improve transient performance while fulfilling product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.
[Imaging center - optimization of the imaging process].
Busch, H-P
2013-04-01
Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success, but also of the costs, of treatment. In routine work, an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient; they are useless and threaten the financial situation and existence of the hospital. In recent years, the focus of process optimization has been exclusively on the quality and efficiency of single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will become more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new organizational structures (Imaging Center), and a new kind of thinking on the part of the medical staff. Motivation has to be shifted from the gratification of performed exams to the gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.
Real-time parameter optimization based on neural network for smart injection molding
NASA Astrophysics Data System (ADS)
Lee, H.; Liau, Y.; Ryu, K.
2018-03-01
The manufacturing industry has been facing several challenges, including sustainability, performance, and quality of production. Manufacturers attempt to enhance the competitiveness of companies by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics, and medical devices. The injection molding process involves a mixture of discrete and continuous variables. In order to optimize quality, the variables generated in the injection molding process must be considered. Furthermore, optimal parameter setting is time-consuming work, since process parameters cannot be easily corrected during process execution. In this research, we propose a neural network based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast with the pre-production parameter optimization typical of previous studies.
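One hedged way to realize the idea in Python with scikit-learn: train a neural network on logged (parameter, quality) pairs, then score candidate settings with the model; the two parameters and the synthetic data are placeholders for the mold, machine, and response signals named above.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)

    # Synthetic logged data: (melt temperature C, injection pressure MPa) -> quality.
    X = rng.uniform([200, 50], [280, 120], size=(300, 2))
    y = (-((X[:, 0] - 245) ** 2) / 500 - ((X[:, 1] - 90) ** 2) / 200
         + rng.normal(0, 0.05, 300))

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X, y)

    # Score candidate settings on a grid; in real-time use this would be
    # re-run as new cycle data arrives.
    grid = np.array(np.meshgrid(np.linspace(200, 280, 81),
                                np.linspace(50, 120, 71))).reshape(2, -1).T
    best = grid[np.argmax(model.predict(grid))]
    print("suggested setting (temp, pressure):", best)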
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalyst developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under the Task 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for Tasks 2 through 5. 232 figs., 19 tabs.
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material type, structural design, and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. Material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions, and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical-field simulation. The stiffness and modal constraint conditions were obtained from numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon-fiber-reinforced polymer. Compared with the metal benchmark, the weight of the composite part is reduced by 38.8%, while its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first frequency also increases by 44.78%.
Procedure for minimizing the cost per watt of photovoltaic systems
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
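The fractional-loss result can be restated compactly; in the LaTeX sketch below, C is the total array cost, P the nominal power, and \delta the fractional power loss (our notation, not the paper's).

    \[
      \frac{C}{P(1-\delta)} \;=\; \frac{C}{P}\cdot\frac{1}{1-\delta}
      \;\approx\; \frac{C}{P}\,(1+\delta), \qquad \delta \ll 1,
    \]
    % so a fractional performance loss \delta at any cell- or array-fabrication
    % step raises the cost per watt of the complete array by the same fraction.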
NASA Astrophysics Data System (ADS)
Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa
2015-10-01
A fully automated optimization process is provided for the design of ducted propellers under open-water conditions, including 3D geometry modeling, meshing, optimization algorithms, and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open-water tests were performed, confirming that the optimized ducted propeller improves hydrodynamic performance as predicted.
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm is proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed optimization algorithm provided improved results compared to the other existing methodologies for finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed optimization algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
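For intuition, a bare cyclic coordinate-descent skeleton in Python with a per-coordinate golden-section search; HMCD's hybrid modifications and the Aspen Hysys process evaluation are replaced here by a toy two-variable surrogate.

    import numpy as np

    def coordinate_descent(f, x, lo, hi, sweeps=50, tol=1e-6):
        # Refine one decision variable at a time while the others stay fixed.
        x = x.copy()
        for _ in range(sweeps):
            moved = 0.0
            for i in range(len(x)):
                a, b = lo[i], hi[i]
                for _ in range(40):               # golden-section line search
                    m1, m2 = a + 0.382 * (b - a), a + 0.618 * (b - a)
                    x1, x2 = x.copy(), x.copy()
                    x1[i], x2[i] = m1, m2
                    if f(x1) < f(x2):
                        b = m2
                    else:
                        a = m1
                new = 0.5 * (a + b)
                moved = max(moved, abs(new - x[i]))
                x[i] = new
            if moved < tol:
                break
        return x

    # Toy surrogate for specific compression power (kW) vs. two variables.
    f = lambda x: (x[0] - 0.3) ** 2 + 2 * (x[1] - 1.2) ** 2 + 0.26
    x_opt = coordinate_descent(f, np.array([1.0, 0.5]), lo=[0.0, 0.0], hi=[2.0, 3.0])
    print(x_opt, f(x_opt))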
Optimization of MLS receivers for multipath environments
NASA Technical Reports Server (NTRS)
Mcalpine, G. A.; Highfill, J. H., III
1976-01-01
The design of a microwave landing system (MLS) aircraft receiver, capable of optimal performance in the multipath environments found in air terminal areas, is reported. Special attention is given to the receiver's angle-tracking problem, including tracking-system design considerations; the study and application of locally optimum estimation involving multipath-adaptive reception followed by envelope processing; and microcomputer system design. Results show that this processing is competitive with i-f signal processing in this application in terms of performance, while being much simpler and cheaper.
Performance Optimization of Irreversible Air Heat Pumps Considering Size Effect
NASA Astrophysics Data System (ADS)
Bi, Yuehong; Chen, Lingen; Ding, Zemin; Sun, Fengrui
2018-06-01
Considering the size of an irreversible air heat pump (AHP), heating load density (HLD) is taken as the thermodynamic optimization objective using finite-time thermodynamics. Based on a model of an irreversible AHP with infinite reservoir thermal-capacitance rate, an expression for the HLD of the AHP is put forward. The HLD optimization processes are studied analytically and numerically, consisting of two aspects: (1) choosing the pressure ratio and (2) distributing the heat-exchanger inventory. Heat reservoir temperatures, the heat transfer performance of the heat exchangers, and the irreversibility of the compression and expansion processes are important factors influencing the performance of an irreversible AHP; they are characterized by the temperature ratio, heat exchanger inventory, and isentropic efficiencies, respectively. The impacts of these parameters on the maximum HLD are thoroughly studied. The results show that HLD optimization can reduce the size of the AHP system and improve its compactness.
NASA Astrophysics Data System (ADS)
Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud
2014-02-01
The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.
An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.
Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir
2013-01-01
DNA sequence alignment is a cardinal process in computational biology, but it is computationally expensive when performed on traditional platforms such as CPUs. Of the many off-the-shelf platforms explored for speeding up the computation, the FPGA stands out as the best candidate due to its performance per dollar and performance per watt. These two advantages make the FPGA the most appropriate choice for realizing the aim of personal genomics. Previous FPGA implementations of DNA sequence alignment did not take into consideration the price of the device on which the optimization was performed. This paper presents optimizations over a previous FPGA implementation that improve both the overall speed-up achieved and the price incurred by the platform being optimized. The optimizations are: (1) the array of processing elements is made to run on changes in input value rather than on the clock, eliminating the need for tight clock synchronization; (2) the implementation is unconstrained by the size of the sequences to be aligned; (3) the waiting time required to load the sequences onto the FPGA is reduced to the minimum possible; and (4) an efficient method is devised for storing the output matrix that makes it possible to save the diagonal elements for use in the next pass, in parallel with the computation of the output matrix. Implemented on a Spartan-3 FPGA, the design achieved a 20-fold performance improvement in terms of CUPS over a GPP implementation.
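For reference, here is the classic Smith-Waterman local-alignment recurrence commonly mapped to FPGA systolic arrays, in plain Python with a linear gap penalty (the paper's exact scoring scheme may differ); on the FPGA, an array of processing elements evaluates one anti-diagonal of H per cycle, which is why persisting the diagonal elements between passes (optimization 4) matters.

    import numpy as np

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i, j] = max(0,
                              H[i - 1, j - 1] + s,   # diagonal (substitution)
                              H[i - 1, j] + gap,     # up (gap in b)
                              H[i, j - 1] + gap)     # left (gap in a)
        return H.max()                               # best local-alignment score

    print(smith_waterman("GATTACA", "GCATGCU"))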
Cooperative optimization of reconfigurable machine tool configurations and production process plan
NASA Astrophysics Data System (ADS)
Xie, Nan; Li, Aiping; Xue, Wei
2012-09-01
The design of the production process plan and the configurations of a reconfigurable machine tool (RMT) interact with each other. Reasonable process plans with suitable RMT configurations help to improve product quality and reduce production cost. Therefore, a cooperative strategy is needed to solve both issues concurrently. In this paper, a cooperative optimization model for RMT configurations and the production process plan is presented; its objectives take into account the impacts of both process and configuration. Moreover, a novel genetic algorithm is developed to provide optimal or near-optimal solutions: first, its chromosome is redesigned to comprise three parts: operations, process plan, and RMT configurations; second, new selection, crossover, and mutation operators are developed to handle the process constraints from the operation processes (OP) graph, which these operators would otherwise violate by generating illegal solutions; finally, the optimal RMT configurations under the optimal process plan design are obtained. A case study of a manufacturing line composed of three RMTs shows that the optimal process plan and RMT configurations are obtained concurrently, production cost decreases by 6.28%, and non-monetary performance increases by 22%. The proposed method can determine both RMT configurations and the production process, and improve production capacity, functionality, and equipment utilization for RMTs.
Efforts are currently underway at the USEPA to develop information technology applications to improve the environmental performance of the chemical process industry. These efforts include the use of genetic algorithms to optimize different process options for minimal environmenta...
Optimal teaching strategy in periodic impulsive knowledge dissemination system.
Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang; Liu, Jian-Guo
2017-01-01
Accurately describing the knowledge dissemination process is significant for enhancing the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to regenerate the knowledge dissemination process. We put forward learning effectiveness, the outcome of a trade-off between the benefits and costs of knowledge dissemination, as the objective function. Further, we investigate the optimal teaching strategy that maximizes learning effectiveness, to obtain the optimal effect of knowledge dissemination under the influence of teaching activities. We solve this dynamic optimization problem using optimal control theory and obtain the optimized system. Finally, we numerically solve this system for several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be widely applied to optimization problems in personalized education and is beneficial for enhancing the effect of knowledge dissemination.
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely genetic programming (GP), is used to develop a process model solely from the historical process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data, without invoking detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally, resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
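Of the two optimizers, SPSA is simple enough to sketch compactly: it approximates the gradient from just two function evaluations per iteration, regardless of dimension. The surrogate below is a stand-in quadratic, not a trained GP model, and the gain schedules are commonly recommended defaults:

```python
import numpy as np

def spsa_minimize(f, x0, a=0.1, c=0.1, alpha=0.602, gamma=0.101, iters=500):
    """Simultaneous perturbation stochastic approximation (minimization).
    Two evaluations per iteration approximate the full gradient."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**alpha                    # standard gain schedules
        ck = c / k**gamma
        delta = np.random.choice([-1.0, 1.0], size=x.shape)  # Rademacher
        ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck * delta)
        x = x - ak * ghat
    return x

# Stand-in for a GP-derived process model: optimize two operating conditions.
surrogate = lambda x: (x[0] - 1.2)**2 + 2 * (x[1] + 0.5)**2
print(spsa_minimize(surrogate, [0.0, 0.0]))  # converges near (1.2, -0.5)
```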
Gaussian process regression for geometry optimization
NASA Astrophysics Data System (ADS)
Denzel, Alexander; Kästner, Johannes
2018-03-01
We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a two times differentiable form of the Matérn kernel and the squared exponential kernel. The Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
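A minimal flavor of GPR-driven minimization, using scikit-learn's Matern kernel with nu=2.5 (the twice-differentiable form the abstract refers to) on a toy one-dimensional surface; the real optimizer also exploits gradients and an overshooting strategy that are omitted here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy 1-D "potential energy surface"; the paper works on real PES data.
pes = lambda x: 0.5 * x**2 + np.sin(3 * x)

X = [[-2.0], [0.5], [2.0]]                 # initial sampled geometries
y = [pes(x[0]) for x in X]
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):
    gpr.fit(np.array(X), np.array(y))      # refit surrogate to all samples
    grid = np.linspace(-3, 3, 601).reshape(-1, 1)
    xn = grid[np.argmin(gpr.predict(grid))]  # step to the surrogate minimum
    X.append(list(xn))
    y.append(pes(xn[0]))

print(f"GPR-located minimum near x = {X[np.argmin(y)][0]:.3f}")
```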
Optimization of Gas Metal Arc Welding Process Parameters
NASA Astrophysics Data System (ADS)
Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.
2016-09-01
This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steels for multiple quality characteristics (bead width, bead height, weld penetration and heat affected zone). An L9 orthogonal array has been used for the fabrication of joints. The experiments have been conducted according to combinations of voltage (V), current (A) and welding speed (Ws). The results revealed that welding speed is the most significant process parameter. By analyzing the grey relational grades, the optimal parameters are obtained, and the significant factors are identified using ANOVA. The welding parameters speed, current and voltage have been optimized for AISI 1020 using the GMAW process. To verify the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter setting. Observations from this method may be useful for automotive sub-assemblies, shipbuilding and vessel fabricators and operators to obtain optimal welding conditions.
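The grey relational grade computation itself is short; a sketch with a random stand-in response matrix (the larger/smaller-the-better assignments are assumptions, and zeta = 0.5 is the customary distinguishing coefficient):

```python
import numpy as np

def grey_relational_grades(R, larger_better, zeta=0.5):
    """Grade each experimental run by its grey relational coefficients
    against the ideal (normalized) reference sequence."""
    cols = []
    for j in range(R.shape[1]):
        lo, hi = R[:, j].min(), R[:, j].max()
        cols.append((R[:, j] - lo) / (hi - lo) if larger_better[j]
                    else (hi - R[:, j]) / (hi - lo))
    dev = 1.0 - np.column_stack(cols)        # deviation from the ideal = 1
    coef = (dev.min() + zeta * dev.max()) / (dev + zeta * dev.max())
    return coef.mean(axis=1)                 # grade: mean coefficient per run

# Stand-in L9 response matrix: bead width, bead height, penetration, HAZ
# (penetration assumed larger-the-better, the rest smaller-the-better).
R = np.random.default_rng(1).uniform(1.0, 10.0, size=(9, 4))
g = grey_relational_grades(R, larger_better=[False, False, True, False])
print("optimal run:", int(np.argmax(g)) + 1, g.round(3))
```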
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods, so current studies on the JSSP concentrate mainly on improving heuristics for its optimization. However, efficient optimization of the JSSP still suffers from low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features of the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints by means of consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
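A compact ACO skeleton for a toy JSSP instance, where building sequences only from currently eligible operations plays the role of constraint satisfaction (the paper's consistency and constraint-spreading machinery is richer than this):

```python
import random

# Toy 3-jobs x 2-machines JSSP: each job is a list of (machine, time) in order.
JOBS = [[(0, 3), (1, 2)], [(1, 2), (0, 2)], [(0, 2), (1, 3)]]
OPS = [(j, k) for j, job in enumerate(JOBS) for k in range(len(job))]
tau = {op: 1.0 for op in OPS}              # pheromone per operation choice

def build_sequence(alpha=1.0, beta=1.0):
    """Pick the next eligible operation with prob ~ tau^alpha * (1/t)^beta,
    so job-internal precedence constraints can never be violated."""
    done, seq = [0] * len(JOBS), []
    while len(seq) < len(OPS):
        elig = [(j, done[j]) for j in range(len(JOBS)) if done[j] < len(JOBS[j])]
        w = [tau[op] ** alpha * (1.0 / JOBS[op[0]][op[1]][1]) ** beta
             for op in elig]
        op = random.choices(elig, weights=w)[0]
        seq.append(op)
        done[op[0]] += 1
    return seq

def makespan(seq):
    mach_free, job_free = [0, 0], [0] * len(JOBS)
    for j, k in seq:
        m, t = JOBS[j][k]
        start = max(mach_free[m], job_free[j])
        mach_free[m] = job_free[j] = start + t
    return max(mach_free)

best, best_seq = float("inf"), None
for _ in range(100):                       # ACO main loop
    for s in [build_sequence() for _ in range(10)]:
        if makespan(s) < best:
            best, best_seq = makespan(s), s
    for op in tau:
        tau[op] *= 0.9                     # evaporation
    for op in best_seq:
        tau[op] += 1.0 / best              # deposit on the best-so-far tour
print("best makespan:", best)
```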
Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene
2016-11-01
A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from the anaerobic digestion (AD) and the anaerobic ammonia oxidation. The equipment, reactor design, investment costs, and operational costs were considered. The volume and cost objective functions optimized the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The processes with and without water recycle were compared; savings of 70% and 43% in the annual fresh water consumption and the heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.
Rodríguez-Yáñez, Alicia Berenice; Méndez-Vázquez, Yaileen; Cabrera-Ríos, Mauricio
2014-01-01
Process windows in injection molding are habitually built with only one performance measure in mind. In reality, a more realistic picture can be obtained when considering multiple performance measures at a time, especially in the presence of conflict. In this work, the construction of process windows for injection molding (IM) is undertaken considering two and three performance measures in conflict simultaneously. The best compromises between the criteria involved are identified through the direct application of the concept of Pareto-dominance in multiple criteria optimization. The aim is to provide a formal and realistic strategy to set processing conditions in IM operations. The resulting optimization approach is easily implementable in MS Excel. The solutions are presented graphically to facilitate their use in manufacturing plants.
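The Pareto-dominance filter at the core of this kind of process-window construction fits in a few lines; the run data below are invented placeholders for measured responses:

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated rows (all objectives minimized):
    a row is dropped if some other row is <= everywhere and < somewhere."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Stand-in IM process windows: rows = machine settings, columns = two
# conflicting performance measures (e.g., shrinkage and cycle time).
runs = np.array([[1.2, 30], [0.9, 42], [1.0, 33], [1.5, 28], [0.9, 45]])
print("best compromises (row indices):", pareto_front(runs))
```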
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
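Both papers wrap analytic sensitivity derivatives in a gradient-based optimizer. The generic shape of such a problem, with toy stand-in functions for weight, a flutter margin, and a performance equality constraint, might look like this with SciPy's SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins, not the papers' aeroservoelastic models: minimize weight
# subject to a flutter-margin inequality and a performance equality, with
# an analytic gradient supplied for the objective.
weight = lambda x: x[0] + 2 * x[1]
weight_grad = lambda x: np.array([1.0, 2.0])
flutter_margin = lambda x: x[0] * x[1] - 1.0   # must stay >= 0
performance = lambda x: x[0] + x[1] - 3.0      # must equal 0

res = minimize(weight, x0=[2.0, 1.0], jac=weight_grad, method="SLSQP",
               constraints=[{"type": "ineq", "fun": flutter_margin},
                            {"type": "eq", "fun": performance}],
               bounds=[(0.1, 5.0), (0.1, 5.0)])
print(res.x, res.fun)   # active constraints shape the optimum
```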
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance, and then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.
Applying simulation to optimize plastic molded optical parts
NASA Astrophysics Data System (ADS)
Jaworski, Matthew; Bakharev, Alexander; Costa, Franco; Friedl, Chris
2012-10-01
Optical injection molded parts are used in many different industries, including electronics, consumer, medical and automotive, due to their cost and performance advantages compared to alternative materials such as glass. The injection molding process, however, induces elastic (residual stress) and viscoelastic (flow orientation stress) deformation in the molded article, which makes the material's refractive index anisotropic in different directions. Being able to predict and correct optical performance issues associated with birefringence early in the design phase is a huge competitive advantage. This paper reviews how to apply simulation analysis of the entire molding process to optimize manufacturability and part performance.
NASA Astrophysics Data System (ADS)
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology that identifies the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to recalculate product direct cost as the quality sigma level of the process changes. This is a benefit because a complete cost estimation does not need to be performed every time the process yield changes. This cost estimation model is then used in the QC strategy optimization. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) performed on seven initial factors identified three significant factors and showed that one response variable was not required for the optimization; a full factorial DOE was then performed to verify the significant factors. The QC strategy optimization is performed with a Genetic Algorithm (GA), which evaluates candidate solutions based on cost, cycle time, reworkability and rework benefit and, because this is a multi-objective optimization problem, returns several feasible optimal solutions. The solutions are presented as chromosomes that clearly state the number and location of the rework stations. The user selects among these solutions by deciding which of the four factors is most important for the product being manufactured or the company's objective. The major contribution of this study is a methodology for identifying an effective and optimal QC strategy that incorporates the number and location of rework substations so as to minimize direct product cost and cycle time and maximize reworkability and rework benefit.
A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1989-01-01
Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
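The paper's optimal QHS estimator is derived analytically; a common baseline clutterlock estimator of the kind it is compared against is the lag-1 correlation (pulse-pair) method, sketched here on synthetic clutter with a known centroid:

```python
import numpy as np

def doppler_centroid(signal, prf):
    """Correlation (pulse-pair) Doppler centroid estimate over azimuth.
    signal: complex array (azimuth samples x range bins); prf in Hz.
    The phase of the lag-1 azimuth autocorrelation encodes the centroid."""
    acf1 = np.sum(signal[1:] * np.conj(signal[:-1]))
    return prf * np.angle(acf1) / (2 * np.pi)

# Synthetic check: white clutter shifted to a known centroid of 120 Hz.
prf, n = 1000.0, 4096
rng = np.random.default_rng(0)
clutter = rng.standard_normal((n, 8)) + 1j * rng.standard_normal((n, 8))
t = (np.arange(n) / prf)[:, None]
print(doppler_centroid(clutter * np.exp(2j * np.pi * 120.0 * t), prf))
```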
Design optimization of hydraulic turbine draft tube based on CFD and DOE method
NASA Astrophysics Data System (ADS)
Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin
2018-03-01
In order to improve the performance of the hydraulic turbine draft tube in its design process, an optimization of the draft tube is performed in this paper on a multi-disciplinary collaborative design optimization platform combining computational fluid dynamics (CFD) and design of experiments (DOE). The geometrical design variables are the median section of the draft tube and the cross section of its exit diffuser, and the objective is to maximize the pressure recovery factor (Cp). The sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performances are evaluated through CFD numerical simulation. Subsequently, the main-effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are accomplished. The optimal geometrical design variables are then determined using the response surface method. The optimized draft tube shows a marked performance improvement over the original.
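The DOE-plus-response-surface loop is easy to sketch: sample the shape variables with a Latin hypercube, evaluate them (here a stand-in analytic Cp instead of CFD), fit a quadratic surface, and take its optimum:

```python
import numpy as np
from scipy.stats import qmc

# Stand-in for CFD: pressure recovery Cp as a function of two normalized
# draft-tube shape variables (median section, exit diffuser section).
cfd_cp = lambda x: 0.8 - (x[:, 0] - 0.6)**2 - 0.5 * (x[:, 1] - 0.4)**2

X = qmc.LatinHypercube(d=2, seed=1).random(20)   # DOE sample matrix
y = cfd_cp(X)

def quad_features(X):
    """Full quadratic response-surface basis in two variables."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
g = np.stack(np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101)),
             axis=-1).reshape(-1, 2)
best = g[np.argmax(quad_features(g) @ beta)]
print("RSM optimum near:", best.round(2))        # true optimum is (0.6, 0.4)
```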
ERIC Educational Resources Information Center
Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.
2017-01-01
A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…
Xu, Hongyi; Li, Yang; Zeng, Danielle
2017-01-02
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structure preformation simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
2014-11-01
[Garbled report fragment; recoverable topics: a collaborative brain-computer interface (BCI) paradigm for improving overall performance, simulator development, and per-subject ROC curves after combining two trials.]
Robust optimization of front members in a full frontal car impact
NASA Astrophysics Data System (ADS)
Aspenberg (né Lönn), David; Jergeus, Johan; Nilsson, Larsgunnar
2013-03-01
In the search for lightweight automobile designs, it is necessary to assure that robust crashworthiness performance is achieved. Structures that are optimized to handle a finite number of load cases may perform poorly when subjected to various dispersions. Thus, uncertainties must be accounted for in the optimization process. This article presents an approach to optimization in which every design evaluation includes an evaluation of the robustness. Metamodel approximations are applied both to the design space and to the robustness evaluations, using artificial neural networks and polynomials, respectively. The features of the robust optimization approach are displayed in an analytical example, and further demonstrated in a large-scale design example of the front side members of a car. Different optimization formulations are applied and it is shown that the proposed approach works well. It is also concluded that a robust optimization puts higher demands on the finite element model performance than usual.
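The essential move, evaluating robustness inside every design evaluation, can be sketched as a mean-plus-k-sigma objective under sampled dispersions; the crash response and constants below are toy assumptions, not the article's metamodels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy crash response: intrusion as a function of two member thicknesses,
# with manufacturing scatter added to the design before each evaluation.
intrusion = lambda x: (x[0] - 2.0)**2 + 0.3 * abs(x[1] - 1.0) + 0.1 * x[0] * x[1]

def robust_objective(x, sigma=0.05, n=64, k=3.0):
    """Every design evaluation includes a robustness evaluation:
    score = mean + k * std over sampled dispersions of the design."""
    noisy = np.asarray(x) + rng.normal(0.0, sigma, size=(n, 2))
    vals = np.array([intrusion(p) for p in noisy])
    return vals.mean() + k * vals.std()

cands = rng.uniform([1.0, 0.5], [3.0, 1.5], size=(500, 2))  # design-space scan
best = min(cands, key=robust_objective)
print("robust design:", best.round(3))
```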
Multidisciplinary Design Optimization of a Full Vehicle with High Performance Computing
NASA Technical Reports Server (NTRS)
Yang, R. J.; Gu, L.; Tho, C. H.; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
Multidisciplinary design optimization (MDO) of a full vehicle under the constraints of crashworthiness, NVH (Noise, Vibration and Harshness), durability, and other performance attributes is one of the imperative goals for the automotive industry. However, it is often infeasible due to the lack of computational resources, robust simulation capabilities, and efficient optimization methodologies. This paper intends to move closer towards that goal by using parallel computers for the intensive computation and by combining different approximations for dissimilar analyses in the MDO process. The MDO process presented in this paper is an extension of the previous work reported by Sobieski et al. In addition to the roof crush, two full vehicle crash modes are added: full frontal impact and 50% frontal offset crash. Instead of using an adaptive polynomial response surface method, this paper employs a DOE/RSM method for exploring the design space and constructing highly nonlinear crash functions. Two MDO strategies are used and their results are compared. This paper demonstrates that with high performance computing, a conventionally intractable real-world full vehicle multidisciplinary optimization problem considering all performance attributes with a large number of design variables becomes feasible.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.
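The core Gaussian-EDA-with-elitism loop is compact; this sketch refits the Gaussian to the elite set each generation and carries the champion forward (the paper's specific fast learning rule is not reproduced here):

```python
import numpy as np

def gaussian_eda(f, dim, pop=100, elite_frac=0.3, iters=100, seed=0):
    """Gaussian EDA sketch: fit mean/covariance to the best individuals,
    resample from the model, and keep the best-so-far individual (elitism)."""
    rng = np.random.default_rng(seed)
    mu, cov = np.zeros(dim), np.eye(dim) * 4.0
    champion = None
    for _ in range(iters):
        X = rng.multivariate_normal(mu, cov, size=pop)
        X = X[np.argsort([f(x) for x in X])]          # best first
        if champion is None or f(X[0]) < f(champion):
            champion = X[0]
        elite = np.vstack([X[: int(pop * elite_frac)], champion])
        mu = elite.mean(axis=0)                       # refit the Gaussian
        cov = np.cov(elite.T) + 1e-6 * np.eye(dim)    # keep it non-singular
    return champion

sphere = lambda x: float(np.sum(x**2))
print(gaussian_eda(sphere, dim=5))   # converges toward the origin
```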
Optimal design of an alignment-free two-DOF rehabilitation robot for the shoulder complex.
Galinski, Daniel; Sapin, Julien; Dehez, Bruno
2013-06-01
This paper presents the optimal design of an alignment-free exoskeleton for the rehabilitation of the shoulder complex. The robot structure consists of two actuated joints and is linked to the arm through passive degrees of freedom (DOFs) to drive the flexion-extension and abduction-adduction movements of the upper arm. The optimal design of this structure is performed in two steps. The first step is a multi-objective optimization process aiming to find the best parameters characterizing the robot and its position relative to the patient. The second step is a comparison process aiming to select the best solution from the optimization results on the basis of several criteria related to practical considerations. The optimal design process leads to a solution that outperforms an existing solution in aspects such as kinematics and ergonomics while being simpler.
Comparison of DNQ/novolac resists for e-beam exposure
NASA Astrophysics Data System (ADS)
Fedynyshyn, Theodore H.; Doran, Scott P.; Lind, Michele L.; Lyszczarz, Theodore M.; DiNatale, William F.; Lennon, Donna; Sauer, Charles A.; Meute, Jeff
1999-12-01
We have surveyed the commercial resist market with the dual purpose of identifying diazoquinone/novolac based resists that have potential for use as e-beam mask making resists and baselining these resists for comparison against future mask making resist candidates. For completeness, this survey would require that each resist be compared with an optimized developer and development process. To accomplish this task in an acceptable time period, e-beam lithography modeling was employed to quickly identify the resist and developer combinations that lead to superior resist performance. We describe the verification of a method to quickly screen commercial i-line resists with different developers, by determining modeling parameters for i-line resists from e-beam exposures, modeling the resist performance, and comparing predicted performance against actual performance. We determined the lithographic performance of several DNQ/novolac resists whose modeled performance suggests that sensitivities of less than 40 µC/cm² coupled with less than 10-nm CD change per percent change in dose are possible for target 600-nm features. This was accomplished by performing a series of statistically designed experiments on the leading resist candidates to optimize processing variables, followed by comparing the experimentally determined resist sensitivities, latitudes, and profiles of the DNQ/novolac resists at their optimized process.
NASA Technical Reports Server (NTRS)
Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.
1974-01-01
A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.
Distributed query plan generation using multiobjective genetic algorithm.
Panicker, Shina; Kumar, T V Vijay
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
NASA Astrophysics Data System (ADS)
Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith
2005-11-01
Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components of a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited by the current imaging tools when using standard illumination sources. Illumination optimization is a process in which the source shape is varied, in both profile and intensity distribution, to enhance the final image contrast compared to non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces "free-form" source shapes that can be too complex to be directly applicable for production and that lack the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet the production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include the minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades are used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by NILS-based metrics, we show that the optimal and production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other recent imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.
NASA Astrophysics Data System (ADS)
El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel
This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) techno-economic studies that will supplement those that are presently being carried out by MITRE; (3) optimization of the most promising catalysts developed under the prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for Tasks 1, 3, 4, and 5.
NASA Astrophysics Data System (ADS)
Marcozzi, Michael D.
2008-12-01
We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.
Box-Behnken statistical design to optimize thermal performance of energy storage systems
NASA Astrophysics Data System (ADS)
Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid
2018-05-01
Latent heat thermal storage (LHTS) is a technology that can help reduce energy consumption in cooling applications, where cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the PCM. Optimization of the experimental conditions (inlet air temperature, air velocity, and number of slabs) was carried out by means of the Box-Behnken design (BBD) of response surface methodology (RSM). Two responses were chosen: cooling time and COP value. Both responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. The maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e., an inlet air temperature of 32.5, an air velocity of 1.98, and 7 slabs.
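Constructing the Box-Behnken design matrix itself is mechanical; for the study's three factors it yields 12 edge mid-points plus center replicates (the number of center runs is an assumption here):

```python
import itertools
import numpy as np

def box_behnken(k, center_runs=3):
    """Box-Behnken design in coded units: each factor pair takes all four
    +/-1 combinations with the remaining factors at 0, plus center points."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k] * center_runs
    return np.array(runs, dtype=float)

# 3 factors as in the study: inlet air temperature, air velocity, slab count.
D = box_behnken(3)
print(D.shape)   # (15, 3): 12 edge mid-points + 3 center replicates
print(D)
```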
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems.
Sun, Laixi; Shao, Ting; Shi, Zhaohua; Huang, Jin; Ye, Xin; Jiang, Xiaodong; Wu, Weidong; Yang, Liming; Zheng, Wanguo
2018-01-01
The reactive ion etching (RIE) process of fused silica is often accompanied by surface contamination, which seriously degrades the ultraviolet laser damage performance of the optics. In this study, we find that the contamination behavior on the fused silica surface is very sensitive to the RIE process which can be significantly optimized by changing the plasma generating conditions such as discharge mode, etchant gas and electrode material. Additionally, an optimized RIE process is proposed to thoroughly remove polishing-introduced contamination and efficiently prevent the introduction of other contamination during the etching process. The research demonstrates the feasibility of improving the damage performance of fused silica optics by using the RIE technique. PMID:29642571
Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul
2014-01-01
This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem. PMID:25054184
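Two of the enhancements are easy to sketch: opposition-based initialization (keep the better of each point and its opposite) and an inertia-weighted firefly move. The exact update form used by EOFA may differ from the assumed one below:

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition_init(f, n, lo, hi):
    """Opposition-based learning: evaluate each random point and its
    opposite lo + hi - x, then keep the better half of the union."""
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    U = np.vstack([X, lo + hi - X])
    return U[np.argsort([f(u) for u in U])][:n]

def firefly_step(X, f, w=0.7, beta0=1.0, gamma=1.0, alpha=0.05):
    """One firefly sweep: each firefly moves toward every brighter one;
    w is the inertia weight damping the carried position (assumed form)."""
    I = np.array([f(x) for x in X])
    Xn = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if I[j] < I[i]:                        # j is brighter (minimizing)
                beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j])**2))
                Xn[i] = (w * Xn[i] + beta * (X[j] - Xn[i])
                         + alpha * rng.normal(size=X.shape[1]))
    return Xn

f = lambda x: float(np.sum(x**2))                  # stand-in objective
lo, hi = np.full(4, -5.0), np.full(4, 5.0)
X = opposition_init(f, 20, lo, hi)
for _ in range(50):
    X = firefly_step(X, f)
print(min(map(f, X)))
```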
Hybrid cryptosystem RSA - CRT optimization and VMPC
NASA Astrophysics Data System (ADS)
Rahmadani, R.; Mawengkang, H.; Sutarman
2018-03-01
Hybrid cryptosystems combine symmetric and asymmetric algorithms, exploiting the encryption/decryption speed of symmetric algorithms while using asymmetric algorithms to secure the symmetric keys. In this paper we propose a hybrid cryptosystem that combines the symmetric algorithm VMPC with an optimized asymmetric algorithm, RSA-CRT. The RSA-CRT optimization speeds up the decryption process by obtaining the plaintext with the dp and p keys only, so there is no need to perform the full CRT process. The VMPC algorithm is more efficient in software implementations and reduces the known weaknesses of RC4 key generation. The results show that the hybrid cryptosystem combining RSA-CRT optimization with VMPC is faster than the hybrid cryptosystems RSA-VMPC and RSA-CRT-VMPC. Keywords: cryptography, RSA, RSA-CRT, VMPC, hybrid cryptosystem.
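The CRT shortcut in RSA decryption is standard and small enough to show end to end; toy textbook primes only, not production cryptography (note the abstract's dp-and-p-only claim is stronger than the classic two-prime recombination sketched here):

```python
# Minimal RSA-CRT decryption sketch with toy primes (illustration only).
p, q = 61, 53                       # toy primes; n = 3233
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

dp, dq = d % (p - 1), d % (q - 1)   # CRT exponents
q_inv = pow(q, -1, p)               # q^{-1} mod p

def decrypt_crt(c):
    """Garner's recombination: two half-size exponentiations instead of
    one full-size pow, which is where the CRT speedup comes from."""
    m1, m2 = pow(c, dp, p), pow(c, dq, q)
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q

m = 65
c = pow(m, e, n)                    # encrypt with the public key
assert decrypt_crt(c) == pow(c, d, n) == m
print(decrypt_crt(c))
```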
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
He, Jianlong; Zhang, Wenbo; Liu, Xiaoyan; Xu, Ning; Xiong, Peng
2016-11-01
Ethanol is a very important industrial chemical. In order to improve ethanol productivity using Saccharomyces cerevisiae in fermentation of furfural process residue, we developed a process of simultaneous saccharification and fermentation (SSF) of furfural process residue, optimizing the prehydrolysis cellulase loading, prehydrolysis time, and substrate feeding strategy. The ethanol concentration obtained from the optimized process was 19.3 g/L, corresponding to a 76.5% ethanol yield, achieved by running SSF for 48 h on 10% furfural process residue with prehydrolysis at 50°C for 4 h and a cellulase loading of 15 FPU/g furfural process residue. For higher ethanol concentrations, fed-batch fermentation was performed. The optimized fed-batch process increased the ethanol concentration to 37.6 g/L (74.5% yield), obtained from 10% furfural process residue with two additions of 5% substrate at 12 and 24 h.
NASA Astrophysics Data System (ADS)
Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt
2017-09-01
A metaheuristic algorithm called Harmony Search (HS) is widely applied to parameter optimization in many areas. HS is a derivative-free real-parameter optimization algorithm that draws its inspiration from the musical improvisation process of searching for a perfect state of harmony. In this paper we propose a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization methods for generating new solution vectors, enhancing the performance of the HS algorithm. The performances of MHS and HS are investigated on ten benchmark optimization problems in order to make a comparison reflecting the efficiency of MHS in terms of final accuracy, convergence speed, and robustness.
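For reference, the baseline HS improvisation loop that MHS modifies looks like this (the HMCR, PAR and bandwidth values are typical defaults; the GA/PSO-style modifications of MHS are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def harmony_search(f, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Basic Harmony Search: improvise a new vector per iteration from the
    harmony memory (HM) and replace the worst harmony if the new one wins."""
    dim = len(lo)
    HM = rng.uniform(lo, hi, size=(hms, dim))
    fit = np.array([f(x) for x in HM])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:              # memory consideration
                new[d] = HM[rng.integers(hms), d]
                if rng.random() < par:           # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:                                # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        worst = np.argmax(fit)
        if f(new) < fit[worst]:
            HM[worst], fit[worst] = new, f(new)
    return HM[np.argmin(fit)]

rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
lo, hi = np.full(3, -5.12), np.full(3, 5.12)
print(harmony_search(rastrigin, lo, hi))
```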
Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem
NASA Astrophysics Data System (ADS)
Skakov, E. S.; Malysh, V. N.
2018-03-01
The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem, and an evolutionary metaheuristic is chosen to perform this meta-optimization; thus, the approach proposed in this work can be called a "meta-metaheuristic". A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
NASA Technical Reports Server (NTRS)
Hulcher, A. B.; Tiwari, S. N.; Marchello, J. M.; Johnston, Norman J. (Technical Monitor)
2001-01-01
Experiments were carried out at the NASA Langley Research Center automated fiber placement facility to determine an optimal process for the fabrication of composite materials having polymer film interleaves. A series of experiments was first conducted to determine an optimal process for the composite itself, prior to investigating a process for fabricating laminates with polymer films. The results of the composite tests indicated that a well-consolidated, void-free laminate could be attained. Preliminary interleaf processing trials were then conducted to establish broad guidelines for film processing. The primary finding of these initial studies was that a two-stage process was necessary to process these materials adequately. A screening experiment was then performed to determine the relative influence of the process variables on the quality of the film interface, as determined by the wedge peel test method. Parameters found to have minor influence on specimen quality were subsequently held at fixed values, enabling a more rapid determination of an optimal process. Optimization studies were then performed by varying the remaining parameters at three film melt processing rates, and the resulting peel data were fitted with quadratic response surfaces. Additional specimens were fabricated at levels of high peel strength as predicted by the regression models, in order to gauge the accuracy of the predicted response and to assess the repeatability of the process. The overall results indicate that quality laminates having film interleaves can be successfully and repeatably fabricated by automated fiber placement.
Continuous Fiber Ceramic Composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fareed, Ali; Craig, Phillip A.
2002-09-01
Fiber-reinforced ceramic composites demonstrate the high-temperature stability of ceramics with an increased fracture toughness resulting from the fiber reinforcement of the composite. The material optimization performed under the continuous fiber ceramic composites (CFCC) program included a series of systematic optimizations. The overall goals were to define the processing window, to increase the robustness of the process, to increase process yield while reducing costs, and to define the complexity of parts that could be fabricated.
Process and Energy Optimization Assessment, Tobyhanna Army Depot, PA
2006-04-17
In addition to the assembly of electronic-communication components, different welding processes are performed at TYAD: shielded arc, metal inert gas (MIG), tungsten inert gas (TIG), and silver brazing, as well as oxygen/acetylene cutting and plasma arc methods, to complete mission requirements. [Garbled report fragment; ERDC/CERL TR-06-11, by Mike C.J. Lin and Alexander M. Zhivov]
Optimization of locations of diffusion spots in indoor optical wireless local area networks
NASA Astrophysics Data System (ADS)
Eltokhey, Mahmoud W.; Mahmoud, K. R.; Ghassemlooy, Zabih; Obayya, Salah S. A.
2018-03-01
In this paper, we present a novel optimization of the locations of the diffusion spots in indoor optical wireless local area networks, based on the central force optimization (CFO) scheme. Uniformity of the users' performance is addressed by using the CFO algorithm with different objective-function configurations, maximizing the signal-to-noise ratio and minimizing the delay spread, respectively. We also investigate the effect of varying the objective function's weights on system and user performance as part of the adaptation process. The results show that the proposed objective-function-based optimization procedure offers an improvement of 65% in the standard deviation of individual receivers' performance.
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) that adds a Levy flight algorithm for the queen's mating flight and a neighboring operator for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
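The Levy flight ingredient is usually implemented with Mantegna's algorithm for heavy-tailed step lengths; a sketch (the 0.01 step scale and its use for the queen's walk are illustrative assumptions):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm: draw approximately Levy-stable step lengths,
    the usual way a Levy flight is grafted onto swarm metaheuristics."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)       # occasional very long jumps

queen = np.zeros(2)
for _ in range(5):                           # a few mating-flight moves
    queen = queen + 0.01 * levy_step(2)
    print(queen)
```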
Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel
2011-01-01
The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations of the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at a 49,152-process problem size. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at a 24,576-process problem size. Cheetah's Barrier performs 10% better than the native MPI implementation at a 12,288-process problem size.
NASA Astrophysics Data System (ADS)
Brown, G. J.; Haugan, H. J.; Mahalingam, K.; Grazulis, L.; Elhamri, S.
2015-01-01
The objective of this work is to establish molecular beam epitaxy (MBE) growth processes that can produce high quality InAs/GaInSb superlattice (SL) materials specifically tailored for very long wavelength infrared (VLWIR) detection. To accomplish this goal, several series of MBE growth optimization studies, using a SL structure of 47.0 Å InAs/21.5 Å Ga0.75In0.25Sb, were performed to refine the MBE growth process and optimize growth parameters. Experimental results demonstrated that our "slow" MBE growth process can consistently produce an energy gap near 50 meV. This is an important factor in narrow band gap SLs. However, there are other growth factors that also impact the electrical and optical properties of the SL materials. The SL layers are particularly sensitive to the anion incorporation condition formed during the surface reconstruction process. Since antisite defects are potentially responsible for the inherent residual carrier concentrations and short carrier lifetimes, the optimization of anion incorporation conditions, by manipulating anion fluxes, anion species, and deposition temperature, was systematically studied. Optimization results are reported in the context of comparative studies on the influence of the growth temperature on the crystal structural quality and surface roughness performed under a designed set of deposition conditions. The optimized SL samples produced an overall strong photoresponse signal with a relatively sharp band edge that is essential for developing VLWIR detectors. A quantitative analysis of the lattice strain, performed at the atomic scale by aberration corrected transmission electron microscopy, provided valuable information about the strain distribution at the GaInSb-on-InAs interface and in the InAs layers, which was important for optimizing the anion conditions.
Layout optimization of DRAM cells using rigorous simulation model for NTD
NASA Astrophysics Data System (ADS)
Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe
2014-03-01
DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and the edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography, such as source mask optimization (SMO) to find the optimal off-axis illumination, and optical proximity correction (OPC) combined with model-based SRAF placement, are applied to print patterns on target. For 20nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, which allows only two spectral beams to interfere. We show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch with the highest susceptibility to defocus. These challenges make layout correction a complex optimization task, demanding a layout optimization that finds a solution with optimal process stability, taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD-at-bottom characterization by scanning electron microscope (SEM) measurements. The high resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We discuss the need for a rigorous mask optimization process for DRAM contact cell layouts, yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we show the step-by-step process from analytical illumination source derivation, through NTD- and application-tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally, the work is verified with simulation and experimental results on wafer.
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen
2015-10-01
Upstream processes are rather complex to design, and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimal system is not necessarily the one with the highest product titer, but the one resulting in superior overall process performance across up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The in silico-optimized operational modes suggested for product capture were subsequently validated. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.
Optimal Signal Processing of Frequency-Stepped CW Radar Data
NASA Technical Reports Server (NTRS)
Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.
1995-01-01
An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
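As a hedged illustration of the two-step procedure described above (toy code, not the authors' implementation), the following sketch solves the overdetermined least-squares problem for echo amplitudes at candidate delays, then scans a delay grid for the global minimum of the residual:

```python
import numpy as np

def echo_fit(H, freqs, taus):
    """Given candidate delays, solve the overdetermined LS problem for amplitudes."""
    # One complex-exponential column per echo: H(f) = sum_i a_i exp(-j 2 pi f tau_i)
    A = np.exp(-2j * np.pi * np.outer(freqs, taus))
    amps, *_ = np.linalg.lstsq(A, H, rcond=None)
    return amps, np.linalg.norm(H - A @ amps)

def scan_two_echoes(H, freqs, tau_grid):
    """Organized scan of the nonlinear objective over pairs of delays."""
    best = (np.inf, None, None)
    for i, t1 in enumerate(tau_grid):
        for t2 in tau_grid[i + 1:]:
            amps, r = echo_fit(H, freqs, np.array([t1, t2]))
            if r < best[0]:
                best = (r, (t1, t2), amps)
    return best

# Synthetic two-echo channel for demonstration (delays 3.0 ns and 3.4 ns).
freqs = np.linspace(2e9, 4e9, 201)
H = 0.8 * np.exp(-2j*np.pi*freqs*3e-9) + 0.5 * np.exp(-2j*np.pi*freqs*3.4e-9)
r, taus, amps = scan_two_echoes(H, freqs, np.linspace(2e-9, 5e-9, 61))
print(taus, np.abs(amps))
```

Because the delays enter the model nonlinearly while the amplitudes enter linearly, separating the two steps this way keeps the hard search low-dimensional.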
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Hongyi; Li, Yang; Zeng, Danielle
Process integration and optimization is the key enabler of Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: sheet molding compound (SMC) short-fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structural performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
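A minimal sketch of this surrogate-then-optimize pattern, with made-up toy data and scikit-learn/SciPy standing in for the NASA Ames Levenberg-Marquardt variant and gradient optimizer used in the paper (the design variables and lift function below are invented):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

# Toy training set: rigging parameters (flap deflection deg, gap, overlap, alpha deg)
rng = np.random.default_rng(0)
X = rng.uniform([20, 0.01, 0.0, 0], [40, 0.04, 0.02, 12], size=(200, 4))
# Invented lift response with an interior optimum, in place of Navier-Stokes data.
y = -0.002 * (X[:, 0] - 32) ** 2 - 500 * (X[:, 1] - 0.02) ** 2 + 0.1 * X[:, 3]

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000).fit(X, y)

# Maximize predicted lift = minimize its negative over the design box;
# the cheap surrogate replaces a CFD call inside the optimizer loop.
res = minimize(lambda x: -surrogate.predict(x.reshape(1, -1))[0],
               x0=np.array([30.0, 0.02, 0.01, 6.0]),
               bounds=[(20, 40), (0.01, 0.04), (0.0, 0.02), (0, 12)])
print("optimized rigging:", res.x)
```

The 83% savings quoted above comes precisely from this substitution: once trained, the network is evaluated thousands of times at negligible cost.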
Combining Simulation and Optimization Models for Hardwood Lumber Production
G.A. Mendoza; R.J. Meimban; W.G. Luppold; Philip A. Araman
1991-01-01
Published literature contains a number of optimization and simulation models dealing with the primary processing of hardwood and softwood logs. Simulation models have been developed primarily as descriptive models for characterizing the general operations and performance of a sawmill. Optimization models, on the other hand, were developed mainly as analytical tools for...
NASA Astrophysics Data System (ADS)
Sumesh, A.; Sai Ramnadh, L. V.; Manish, P.; Harnath, V.; Lakshman, V.
2016-09-01
Welding is one of the most common metal joining techniques used in industry for decades. In the global manufacturing scenario, products must be cost effective, so selecting the right process with optimal parameters helps industry minimize production cost. SA 106 Grade B steel has wide application in automobile chassis structures, boiler tubes and pressure vessel industries. Employing a central composite design, the process parameters for gas tungsten arc welding (GTAW) were optimized. The input parameters chosen were weld current, peak current and frequency. The joint tensile strength was the response considered in this study. Analysis of variance was performed to determine the statistical significance of the parameters, and a regression analysis was performed to determine the effect of the input parameters on the response. From the experiment, the maximum tensile strength obtained was 95 kN, reported for a weld current of 95 A, a frequency of 50 Hz and a peak current of 100 A. With the aim of maximizing joint strength, a target value of 100 kN was selected in the response optimizer and the regression models were optimized. The output results are achievable with a weld current of 62.6148 A, a frequency of 23.1821 Hz, and a peak current of 65.9104 A. Using the dye penetrant test, the weld joints were also classified into two categories: good welds and welds with defects. This will also help in obtaining a defect-free joint when welding is performed using the GTAW process.
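The response-optimizer step can be imitated as follows; the full quadratic model form is standard for central composite designs, but the coefficient vector below is a placeholder, not the paper's fitted model:

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(x):
    """Linear, squared, and two-way interaction terms for a 3-factor CCD model."""
    i, f, p = x  # weld current (A), frequency (Hz), peak current (A)
    return np.array([1, i, f, p, i*i, f*f, p*p, i*f, i*p, f*p])

# beta would come from regressing tensile strength on the CCD runs;
# this vector is a made-up placeholder for illustration only.
beta = np.array([20.0, 0.6, 0.4, 0.3, -0.004, -0.006, -0.002, 0.001, 0.0, 0.0])

def strength(x):
    """Predicted joint tensile strength (kN) from the quadratic model."""
    return quad_features(x) @ beta

target = 100.0  # kN, as in the response-optimizer step of the study
res = minimize(lambda x: (strength(x) - target) ** 2,
               x0=[80.0, 40.0, 80.0],
               bounds=[(50, 110), (10, 60), (50, 110)])
print(res.x, strength(res.x))
```

Minimizing the squared deviation from the target reproduces the "hit a target value" behavior of a response optimizer rather than simple maximization.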
Co-optimization of lithographic and patterning processes for improved EPE performance
NASA Astrophysics Data System (ADS)
Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane
2017-03-01
Complementary lithography is already being used for advanced logic patterns. The tight pitches for 1D metal layers are expected to be created using spacer-based multiple-patterning ArF-i exposures, and the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements for CDU, pattern shift and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability, such as Line Edge Roughness (LER), local CDU, and Local Placement Error (LPE), are dominant factors in the total edge placement error budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic-level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single-layer processes. Process control of advanced-pitch nanoscale multi-patterning techniques as described above is exceedingly complicated in a high-volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations. In this work we show the EPE performance improvement for a 32nm pitch SAQP + block patterned Metal 2 layer by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes are given.
2010-11-10
Woods Hole Oceanographic Institution, Woods Hole, MA 02543. ...consider an alternate means of finding the minima of ⟨|θ|²⟩. We perform a two-part optimization process based on Matlab's built-in nonlinear...
Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra
2013-03-01
Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize the performance of these processes. Manufacturing costs were itemized using an adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
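The abstract mentions a closing equation for final product cost without stating it; one plausible reading of the two-level CTAT split (fixed core costs amortized over output, plus variable supporting costs per unit), with invented numbers, is:

```python
def unit_cost(core_fixed_annual, annual_output, supporting_variable_per_unit):
    """Sketch of a CTAT-style final product cost: level-one (core) fixed costs
    amortized over yearly output, plus level-two (supporting) variable costs.
    This is an assumed form, not the paper's published equation."""
    return core_fixed_annual / annual_output + supporting_variable_per_unit

# Hypothetical figures: 1.2 M EUR/year facility overhead, 40 batches/year,
# 18 kEUR consumables and QC per batch.
print(unit_cost(1_200_000, 40, 18_000))   # -> 48000.0 per batch
```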
Zheng, Bei; Ge, Xiao-peng; Yu, Zhi-yong; Yuan, Sheng-guang; Zhang, Wen-jing; Sun, Jing-fang
2012-08-01
Atomic force microscope (AFM) fluid imaging was applied to the study of the micro-flocculation filtration process and the optimization of micro-flocculation time and agitation intensity (G value). It can be concluded that AFM fluid imaging is a promising tool for the observation and characterization of floc morphology and dynamic coagulation processes under aqueous environmental conditions. Through the use of the AFM fluid imaging technique, optimized conditions of a micro-flocculation time of 2 min and an agitation intensity (G value) of 100 s⁻¹ were obtained in the treatment of dye-printing industrial tailing wastewater by the micro-flocculation filtration process, with good performance.
FPGA-based protein sequence alignment : A review
NASA Astrophysics Data System (ADS)
Isa, Mohd. Nazrin Md.; Muhsen, Ku Noor Dhaniah Ku; Saiful Nurdin, Dayana; Ahmad, Muhammad Imran; Anuar Zainol Murad, Sohiful; Nizam Mohyar, Shaiful; Harun, Azizi; Hussin, Razaidi
2017-11-01
Sequence alignment has been optimized using several techniques that accelerate the computation of the optimal score by implementing DP-based algorithms in hardware such as FPGA-based platforms. During hardware implementation, there are performance challenges such as frequent memory accesses and the highly data-dependent computation process. Therefore, the processing element (PE) configuration, which involves memory accesses to load the configuration data (substitution matrix, query sequence characters), and the PE configuration time are the main focus of this paper. Various approaches to enhance PE configuration performance have been proposed in previous works, such as serial and parallel configuration chains, in which the configuration data are loaded into the PEs sequentially or simultaneously, respectively. Some researchers have shown that a parallel configuration chain improves both configuration time and area.
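The data dependency that makes PE design hard is visible in the DP recurrence itself; here is a score-only Smith-Waterman sketch in Python (a software stand-in for the FPGA PE array, with illustrative scoring parameters):

```python
def smith_waterman(q, s, match=2, mismatch=-1, gap=-2):
    """Score-only Smith-Waterman with linear gaps. Each cell depends on its
    upper, left, and upper-left neighbors -- the three-way dependency that
    motivates systolic chains of PEs in hardware."""
    prev = [0] * (len(s) + 1)      # previous DP row
    best = 0
    for qi in q:
        curr = [0]                 # first column is always 0 (local alignment)
        for j, sj in enumerate(s, start=1):
            diag = prev[j - 1] + (match if qi == sj else mismatch)
            curr.append(max(0, diag, prev[j] + gap, curr[j - 1] + gap))
            best = max(best, curr[j])
        prev = curr
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```

In an FPGA, one PE per query character evaluates this recurrence in a pipelined wavefront; the substitution scores and query characters loaded into each PE are exactly the "configuration data" whose serial or parallel delivery the paper analyzes.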
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
Manenti, Diego R; Módenes, Aparecido N; Soares, Petrick A; Boaventura, Rui A R; Palácio, Soraya M; Borba, Fernando H; Espinoza-Quiñones, Fernando R; Bergamasco, Rosângela; Vilar, Vítor J P
2015-01-01
In this work, the application of an iron electrode-based electrocoagulation (EC) process on the treatment of a real textile wastewater (RTW) was investigated. In order to perform an efficient integration of the EC process with a biological oxidation one, an enhancement in the biodegradability and low toxicity of final compounds was sought. Optimal values of EC reactor operation parameters (pH, current density and electrolysis time) were achieved by applying a full factorial 3³ experimental design. Biodegradability and toxicity assays were performed on treated RTW samples obtained at the optimal values of: pH of the solution (7.0), current density (142.9 A m⁻²) and different electrolysis times. As response variables for the biodegradability and toxicity assessment, the Zahn-Wellens test (Dt), the ratio values of dissolved organic carbon (DOC) relative to low-molecular-weight carboxylates anions (LMCA) and lethal concentration 50 (LC50) were used. According to the Dt, the DOC/LMCA ratio and LC50, an electrolysis time of 15 min along with the optimal values of pH and current density were suggested as suitable for a next stage of treatment based on a biological oxidation process.
Design optimization for active twist rotor blades
NASA Astrophysics Data System (ADS)
Mok, Ji Won
This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process were studied; when limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation in the mesh generator, and the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed significant impact on vibration reduction, the proposed optimization process showed that the design could be improved significantly. The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to explore the nonlinear design space of a complex planform. Especially for this case, detailed design was carried out to make the actual blade manufacturable. The proposed optimization framework is shown to be an effective tool for designing high-authority active twist blades to reduce vibration in future helicopter rotor blades.
An investigation of squeeze-cast alloy 718
NASA Technical Reports Server (NTRS)
Gamwell, W. R.
1993-01-01
Alloy 718 billets produced by the squeeze-cast process have been evaluated for use as potential replacements for propulsion engine components which are normally produced from forgings. Alloy 718 billets were produced using various processing conditions. Structural characterizations were performed on 'as-cast' billets. As-cast billets were then homogenized and solution treated and aged according to conventional heat-treatment practices for this alloy. Mechanical property evaluations were performed on heat-treated billets. As-cast macrostructures and microstructures varied with squeeze-cast processing parameters. Mechanical properties varied with squeeze-cast processing parameters and heat treatments. One billet exhibited a defect free, refined microstructure, with mechanical properties approaching those of wrought alloy 718 bar, confirming the feasibility of squeeze-casting alloy 718. However, further process optimization is required, and further structural and mechanical property improvements are expected with process optimization.
An evaluation of MPI message rate on hybrid-core processors
Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...
2014-11-01
Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.
NASA Technical Reports Server (NTRS)
Connolly, Janis H.; Arch, M.; Elfezouaty, Eileen Schultz; Novak, Jennifer Blume; Bond, Robert L. (Technical Monitor)
1999-01-01
Design and Human Engineering (HE) processes strive to ensure that the human-machine interface is designed for optimal performance throughout the system life cycle. Each component can be tested and assessed independently to assure optimal performance, but it is not until full integration that the system and the inherent interactions between the system components can be assessed as a whole. HE processes (which define and apply requirements for human interaction with missions/systems) are included in space flight activities, but also need to be included in ground activities and, specifically, ground facility testbeds such as Bio-Plex. A unique aspect of the Bio-Plex Facility is the integral issue of Habitability, which includes qualities of the environment that allow humans to work and live. HE is a process by which Habitability and system performance can be assessed.
Optimization of Refining Craft for Vegetable Insulating Oil
NASA Astrophysics Data System (ADS)
Zhou, Zhu-Jun; Hu, Ting; Cheng, Lin; Tian, Kai; Wang, Xuan; Yang, Jun; Kong, Hai-Yang; Fang, Fu-Xin; Qian, Hang; Fu, Guang-Pan
2016-05-01
Vegetable insulating oil, because of its environmental friendliness, is considered an ideal material to replace mineral oil for the insulation and cooling of transformers. The main steps of the traditional refining process are alkali refining, bleaching and distillation. This kind of refining process gives satisfactory results for small batches of insulating oil, but cannot be applied directly to a large-capacity reaction kettle. In this paper, using rapeseed oil as the crude oil, the refining process is optimized for a large-capacity reaction kettle. The optimized refining process adds an acid degumming step. The alkali compound adds a sodium silicate component in the alkali refining process, and the ratio of each component is optimized. Activated clay and activated carbon are added in a 10:1 proportion in the decolorization process, which can effectively reduce the oil's acid value and dielectric loss. Using vacuum gas pumping instead of the distillation process can further reduce the acid value. Comparing some of the performance parameters of the refined oil products with mineral insulating oil, the dielectric loss of the vegetable insulating oil is still high, and further optimization measures need to be taken in the future.
Automated Sensitivity Analysis of Interplanetary Trajectories for Optimal Mission Design
NASA Technical Reports Server (NTRS)
Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno
2017-01-01
This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software and simplify the process of performing sensitivity analysis, and was ultimately found to outperform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oswald, R.; Morris, J.
1994-11-01
The objective of this subcontract over its three-year duration is to advance Solarex's photovoltaic manufacturing technologies, reduce its a-Si:H module production costs, increase module performance and expand the Solarex commercial production capacity. Solarex shall meet these objectives by improving the deposition and quality of the transparent front contact, optimizing the laser patterning process, scaling up the semiconductor deposition process, improving the back contact deposition, and scaling up and improving the encapsulation and testing of its a-Si:H modules. In the Phase 2 portion of this subcontract, Solarex focused on improving deposition of the front contact, investigating alternate feedstocks for the front contact, maximizing throughput and area utilization for all laser scribes, optimizing a-Si:H deposition equipment to achieve uniform deposition over large areas, optimizing the triple-junction module fabrication process, evaluating the materials used to deposit the rear contact, and optimizing the combination of isolation scribe and encapsulant to pass the wet high-potential test. Progress is reported on the following: front contact development; laser scribe process development; amorphous silicon based semiconductor deposition; rear contact deposition process; frit/bus/wire/frame; materials handling; and environmental test, yield and performance analysis.
NASA Astrophysics Data System (ADS)
Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming
2017-11-01
Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms in regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either made the object boundaries weak or added extra information to it. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For the case of additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixel demonstrated optimal performance for compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. Conclusively, to solve real-world problems effectively, more robust superpixel algorithms must be developed.
Orbit design and optimization based on global telecommunication performance metrics
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.
2006-01-01
The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, and 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and differing features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm developed by inspiration from the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding the Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416
Tuning of PID controller using optimization techniques for a MIMO process
NASA Astrophysics Data System (ADS)
Thulasi dharan, S.; Kavyarasan, K.; Bagyaveereswaran, V.
2017-11-01
In this paper, two processes are considered: the quadruple tank process and the CSTR (Continuous Stirred Tank Reactor) process. These are widely used in industrial applications across various domains; the CSTR, especially, in chemical plants. First, mathematical models of both processes are developed, followed by linearization of the system, since these are MIMO processes. Controllers are the major components that drive the whole process to the desired operating point for a given application, so tuning of the controller plays a major role in the overall process. For tuning the parameters we use two optimization techniques, Particle Swarm Optimization and the Genetic Algorithm, both widely used across applications; we use them to obtain the best tuned values. Finally, we compare the performance of each process with both techniques.
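A self-contained sketch of PSO-based PID tuning on a toy first-order plant (the paper's quadruple-tank and CSTR models are more elaborate; the plant, bounds, and PSO constants below are assumptions):

```python
import numpy as np

def pid_cost(gains, dt=0.01, T=5.0):
    """Integral of absolute error for a unit step on a first-order plant."""
    kp, ki, kd = gains
    y, integ, e_prev, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-y + u)          # plant: tau*dy/dt = -y + u, with tau = 1
        iae += abs(e) * dt
    return iae

def pso(cost, bounds, n=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()

print(pso(pid_cost, [(0, 20), (0, 10), (0, 2)]))  # tuned (kp, ki, kd) and IAE
```

Swapping `pid_cost` for a simulation of the actual MIMO plant (or swapping `pso` for a genetic algorithm) reproduces the comparison the paper describes.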
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
NASA Astrophysics Data System (ADS)
Göll, S.; Samsun, R. C.; Peters, R.
Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.
Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.
Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei
2015-06-25
Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on inertial reference frame are discussed in this paper. Both of them are based on gravity vector integration, therefore, the performance of these algorithms is determined by integration time. In previous works, integration time is selected by experience. In order to give a criterion for the selection process, and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on the analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that in different operational conditions, the coarse alignment algorithms adopted for FOG INS are different in order to achieve better performance. Lastly, the experiment results validate the effectiveness of the proposed algorithm.
Sriram, Vinay K; Montgomery, Doug
2017-07-01
The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes various optimization algorithms for validation of digitally signed BGP updates and evaluates their performance and efficiency. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
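A toy rendering of the Cache Common Segments idea (the hash check stands in for real ECDSA verification, and the cache policy is simplified relative to the schemes the study evaluates):

```python
import hashlib

def verify_segment(segment: bytes, signature: str, key: bytes) -> bool:
    """Toy stand-in for one expensive per-segment signature verification;
    a real BGPSEC implementation would call an ECDSA routine here."""
    return hashlib.sha256(segment + key).hexdigest() == signature

class CCSValidator:
    """Cache verification results keyed by (segment, signature), so path
    segments shared by many updates are cryptographically checked only once."""
    def __init__(self):
        self.cache = {}

    def validate_update(self, segments):
        """segments: iterable of (segment_bytes, signature_hex, key_bytes),
        one entry per AS along the announced path."""
        for seg, sig, key in segments:
            result = self.cache.get((seg, sig))
            if result is None:
                result = self.cache[(seg, sig)] = verify_segment(seg, sig, key)
            if not result:
                return False   # update is invalid as soon as one segment fails
        return True
```

The win comes from BGP's structure: after a peering reset, thousands of updates share long common path prefixes, so the cache absorbs most of the signature workload that drives the peak-second metric.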
Development of SiC/SiC composites by PIP in combination with RS
NASA Astrophysics Data System (ADS)
Kotani, Masaki; Kohyama, Akira; Katoh, Yutai
2001-02-01
In order to improve the mechanical performance of SiC/SiC composites, process improvement and modification of the polymer impregnation and pyrolysis (PIP) and reaction sintering (RS) processes were investigated. The fibrous prepregs were prepared by a polymeric intra-bundle densification technique using Tyranno-SA™ fiber. For the inter-bundle matrix, four kinds of process options utilizing polymer pyrolysis and reaction sintering were studied. The process conditions were systematically optimized through fabricating monoliths. Then, SiC/SiC composites were fabricated using optimized inter-bundle matrix slurries in each process for the first inspection of process requirements.
GPU computing in medical physics: a review.
Pratx, Guillem; Xing, Lei
2011-05-01
The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.
NASA Technical Reports Server (NTRS)
Axdahl, Erik L.
2015-01-01
Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and applying that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
NASA Astrophysics Data System (ADS)
Nath, Nayani Kishore
2017-08-01
Throat backup liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat backup liners are made with E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. In this method, four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat backup liners. The quality characteristic identified was back-wall temperature. Experiments were carried out using an L9 (3⁴) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed and successfully used to achieve the minimum back-wall temperature of the throat backup liners. The enhancement in performance of the throat backup liners was observed by carrying out oxy-acetylene tests. The influence of back-wall temperature on the performance of the throat backup liners was verified by a ground firing test.
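The smaller-the-better signal-to-noise ratio used in such an analysis is standard, S/N = -10·log10(mean(y²)); a short sketch with invented temperature readings (the run data below are placeholders, not the paper's measurements):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-the-better response
    (here, back-wall temperature)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical replicated back-wall temperatures (deg C) for three L9 runs:
runs = {1: [182, 190], 2: [165, 171], 3: [174, 169]}
sn = {run: sn_smaller_is_better(v) for run, v in runs.items()}
best = max(sn, key=sn.get)   # higher S/N is better in Taguchi analysis
print(sn, "best run:", best)
```

Averaging these S/N values per factor level then identifies the level combination that minimizes the response, which is the selection step the abstract summarizes.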
A multiple objective optimization approach to quality control
NASA Technical Reports Server (NTRS)
Seaman, Christopher Michael
1991-01-01
The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that, since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must then include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if a new operating point can be reached without making a tradeoff. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives, and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
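Step (1) of the operator algorithm asks whether all criteria can improve at once; a heuristic sketch (not the dissertation's method) tests the negated sum of normalized criterion gradients as a candidate common-descent direction, assuming the gradients are nonzero:

```python
import numpy as np

def common_improvement_direction(grads, tol=1e-9):
    """Try the negated sum of normalized gradients as a direction that
    improves every quality criterion at once (heuristic sketch).
    grads: list of gradient vectors, one per quality criterion (minimized)."""
    G = np.array([g / np.linalg.norm(g) for g in grads])
    d = -G.sum(axis=0)
    if np.all(G @ d < -tol):       # all directional derivatives negative
        return d / np.linalg.norm(d)
    return None                    # no common improvement: a tradeoff is needed

print(common_improvement_direction([np.array([1.0, 0.2]), np.array([0.5, 1.0])]))
print(common_improvement_direction([np.array([1.0, 0.0]), np.array([-1.0, 0.0])]))
```

Returning `None` corresponds to the Pareto-like situation in case (2), where the operator must pick which criterion to sacrifice.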
System, apparatus and methods to implement high-speed network analyzers
Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E
2015-11-10
Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
Influence of signal processing strategy in auditory abilities.
Melo, Tatiana Mendes de; Bevilacqua, Maria Cecília; Costa, Orozimbo Alves; Moret, Adriane Lima Mortari
2013-01-01
The signal processing strategy is a parameter that may influence the auditory performance of cochlear implant users, and it is important to optimize this parameter to provide better speech perception, especially in difficult listening situations. The aim was to evaluate individuals' auditory performance using two different signal processing strategies. This was a prospective study with 11 prelingually deafened children with open-set speech recognition. A within-subjects design was used to compare performance with standard HiRes and HiRes 120 at three different moments. During test sessions, each subject's performance was evaluated by warble-tone sound-field thresholds and speech perception evaluation, in quiet and in noise. In quiet, children S1, S4, S5 and S7 showed better performance with the HiRes 120 strategy, and children S2, S9 and S11 showed better performance with the HiRes strategy. In noise, it was also observed that some children performed better using the HiRes 120 strategy and others with HiRes. Not all children presented the same pattern of response to the different strategies used in this study, which reinforces the need to look at optimizing cochlear implant clinical programming individually.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott
2012-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in the textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in the presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of the multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against other multi-objective optimization techniques, such as the desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and the constraints of the non-linear optimization problem, it can be successfully applied to other processes in the textile industry to determine their optimal parametric settings.
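A simple multivariate quadratic (Taguchi-style) loss is one plausible form of the technique named in the title; the yarn properties, limits, and costs below are invented for illustration:

```python
import numpy as np

def multivariate_quality_loss(y, target, lsl, usl, cost_at_limit):
    """Sum of quadratic losses over several yarn properties. Each k_i is
    chosen so the loss equals cost_at_limit when y_i sits at a spec limit."""
    y, target = np.asarray(y, float), np.asarray(target, float)
    half_width = (np.asarray(usl, float) - np.asarray(lsl, float)) / 2.0
    k = np.asarray(cost_at_limit, float) / half_width ** 2
    return float(np.sum(k * (y - target) ** 2))

# Hypothetical yarn responses: tenacity (cN/tex), unevenness (%), hairiness index
loss = multivariate_quality_loss(y=[15.2, 11.8, 5.1],
                                 target=[16.0, 10.0, 4.5],
                                 lsl=[14.0, 8.0, 3.5], usl=[18.0, 12.0, 5.5],
                                 cost_at_limit=[1.0, 1.0, 1.0])
print(loss)
```

Collapsing the conflicting responses into one scalar loss is what lets a single-objective optimizer search the spinning parameter space directly.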
150-nm DR contact holes die-to-database inspection
NASA Astrophysics Data System (ADS)
Kuo, Shen C.; Wu, Clare; Eran, Yair; Staud, Wolfgang; Hemar, Shirley; Lindman, Ofer
2000-07-01
A failure-analysis-driven yield enhancement concept, based on optimization of the mask manufacturing process and UV reticle inspection, is studied and shown to improve contact layer quality. This is achieved by relating the various manufacturing processes to very finely tuned contact defect detection. In this way, an optimized manufacturing process with a fine-tuned inspection setup is selected in a controlled manner. This paper presents a study, performed on a specially designed test reticle, which simulates production contact layers of design rules 250 nm, 180 nm, and 150 nm. The paper focuses on the use of advanced UV reticle inspection techniques as part of the process optimization cycle. Current inspection equipment uses traditional and insufficient methods of small contact-hole inspection and review.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are assigned to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
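For concreteness, a minimal sketch of the K-S envelope follows, assuming two toy weighted objectives and the numerically stable shifted form of the function; minimizing with SciPy's BFGS mirrors the unconstrained-solve step described above, but the objectives themselves are illustrative placeholders, not the paper's CFD-based functions.

```python
import numpy as np
from scipy.optimize import minimize

# Kreisselmeier-Steinhauser envelope: a smooth, differentiable upper bound on
# max(values); the shift by vmax avoids overflow in the exponentials.
def ks(values, rho=50.0):
    v = np.asarray(values)
    vmax = v.max()
    return vmax + np.log(np.sum(np.exp(rho * (v - vmax)))) / rho

def composite(x, weights=(1.0, 2.0)):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2       # stand-in for a drag-like objective
    f2 = x[0] ** 2 + (x[1] + 2.0) ** 2       # stand-in for a boom-like objective
    return ks([weights[0] * f1, weights[1] * f2])

result = minimize(composite, x0=np.zeros(2), method="BFGS")
print(result.x, result.fun)
```

Raising the weight on one objective pulls the optimum toward that objective, which is the "emphasize specific design objectives" mechanism the abstract describes.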
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and to expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced here, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field simulation, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization has the better performance, achieving a 1.7-fold speedup over the basic multi-GPU model when 21 GPUs are used.
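The overlap scheme above hinges on posting non-blocking halo exchanges and computing while messages are in flight. The following is a minimal sketch of that pattern, assuming a 1-D ring decomposition with mpi4py and a toy diffusion-style update standing in for the phase-field kernel (the GPU side is omitted for brevity).

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

phi = np.random.rand(1024)                 # local slab of the phase field
halo_l, halo_r = np.empty(1), np.empty(1)

# post non-blocking halo exchange: edges go out, neighbor edges come in
reqs = [comm.Isend(phi[:1],  dest=left,  tag=0),
        comm.Isend(phi[-1:], dest=right, tag=1),
        comm.Irecv(halo_l, source=left,  tag=1),
        comm.Irecv(halo_r, source=right, tag=0)]

# update the interior while the halos are still in flight
new = phi.copy()
new[1:-1] = phi[1:-1] + 0.1 * (phi[:-2] - 2 * phi[1:-1] + phi[2:])

MPI.Request.Waitall(reqs)                  # halos are now valid
new[0]  = phi[0]  + 0.1 * (halo_l[0] - 2 * phi[0]  + phi[1])
new[-1] = phi[-1] + 0.1 * (phi[-2] - 2 * phi[-1] + halo_r[0])
phi = new
```

The interior update dominates the runtime, so hiding the message latency behind it is what yields the reported speedup over the blocking baseline.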
Conceptual comparison of population based metaheuristics for engineering problems.
Adekanmbi, Oluwole; Green, Paul
2015-01-01
Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.
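A minimal sketch of the DE/rand/1/bin operator that GDE3 extends is given below, assuming a sphere objective as a placeholder for the engineering design problems; GDE3's modified multiobjective selection is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
F, CR, NP, D = 0.5, 0.9, 20, 5             # scale factor, crossover rate, sizes
pop = rng.uniform(-5, 5, (NP, D))
fit = np.sum(pop ** 2, axis=1)             # toy objective to minimize

for _ in range(200):
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True                # binomial crossover, >=1 gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = np.sum(trial ** 2)
        if f_trial <= fit[i]:                        # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial

print(fit.min())
```

GDE3's contribution lies in replacing the greedy selection with Pareto-dominance rules so the same operator handles constrained and multiobjective problems.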
A hybrid optimization approach in non-isothermal glass molding
NASA Astrophysics Data System (ADS)
Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz
2016-10-01
Intensively growing demands for complex yet low-cost precision glass optics from today's photonics market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Compared with state-of-the-art replication-based methods, non-isothermal glass molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption, and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precise requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the outset, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach for non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. The hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first applied to identify the optimal process parameters and ensure the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in non-isothermal glass molding.
Optimizing the Compressive Strength of Strain-Hardenable Stretch-Formed Microtruss Architectures
NASA Astrophysics Data System (ADS)
Yu, Bosco; Abu Samk, Khaled; Hibbard, Glenn D.
2015-05-01
The mechanical performance of stretch-formed microtrusses is determined by both the internal strut architecture and the accumulated plastic strain during fabrication. The current study addresses the question of optimization, by taking into consideration the interdependency between fabrication path, material properties and architecture. Low carbon steel (AISI1006) and aluminum (AA3003) material systems were investigated experimentally, with good agreement between measured values and the analytical model. The compressive performance of the microtrusses was then optimized on a minimum weight basis under design constraints such as fixed starting sheet thickness and final microtruss height by satisfying the Karush-Kuhn-Tucker condition. The optimization results were summarized as carpet plots in order to meaningfully visualize the interdependency between architecture, microstructural state, and mechanical performance, enabling material and processing path selection.
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize the particle swarm optimizer (PSO) and the opposition-based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients, and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBIRIS dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
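As a sketch of the core AAPSO idea (fitness-driven acceleration coefficients replacing the usual random ones), consider the following; the specific adaptation rule and the sphere stand-in for the SVM cross-validation error are our illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
NP, D, w = 30, 2, 0.7
x = rng.uniform(-5, 5, (NP, D)); v = np.zeros((NP, D))
f = lambda p: np.sum(p ** 2, axis=-1)      # stand-in for SVM validation error
fit = f(x); pbest, pfit = x.copy(), fit.copy()
g = pbest[pfit.argmin()]

for _ in range(100):
    # fitness-based coefficients: worse particles are pulled harder toward
    # their personal best, better particles lean more on the global best
    norm = (pfit - pfit.min()) / (pfit.ptp() + 1e-12)
    c1, c2 = (1.0 + norm)[:, None], (2.0 - norm)[:, None]
    v = w * v + c1 * (pbest - x) + c2 * (g - x)   # no random multipliers
    x = x + v
    fit = f(x)
    improved = fit < pfit
    pbest[improved], pfit[improved] = x[improved], fit[improved]
    g = pbest[pfit.argmin()]
print(pfit.min())
```

In the paper's setting, each particle position would encode the SVM hyperparameters, and the fitness would be the recognition error on a validation split.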
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In this paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) are adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
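Among the CUDA optimizations listed (thread, shared memory, register, SFU), the shared-memory staging pattern is the easiest to show compactly. Below is a minimal sketch, assuming Numba's CUDA backend is available; the 1-D three-point averaging kernel is an illustrative stand-in for the QPSTM imaging kernel, not the authors' code.

```python
import numpy as np
from numba import cuda, float32

TILE = 128  # threads per block; tile staged in shared memory plus one halo cell per side

@cuda.jit
def smooth(src, dst):
    s = cuda.shared.array(TILE + 2, dtype=float32)
    i = cuda.grid(1)
    t = cuda.threadIdx.x
    if i < src.size:
        s[t + 1] = src[i]                      # stage this thread's element
        if t == 0 and i > 0:
            s[0] = src[i - 1]                  # left halo of the tile
        if t == TILE - 1 and i < src.size - 1:
            s[TILE + 1] = src[i + 1]           # right halo of the tile
    cuda.syncthreads()
    if 0 < i < src.size - 1:
        dst[i] = (s[t] + s[t + 1] + s[t + 2]) / 3.0  # reads hit fast shared memory

x = np.arange(1024, dtype=np.float32)
out = np.zeros_like(x)
smooth[(x.size + TILE - 1) // TILE, TILE](x, out)
print(out[:5])
```

Each global-memory element is loaded once per tile instead of three times per output point, which is the essence of the shared-memory optimization the abstract credits.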
NASA Astrophysics Data System (ADS)
Natarajan, S.; Pitchandi, K.; Mahalakshmi, N. V.
2018-02-01
The performance and emission characteristics of a PPCCI engine fuelled with ethanol-diesel blends were investigated on a single-cylinder, air-cooled CI engine. In order to achieve the optimal process response with a limited number of experimental cycles, multi-objective grey relational analysis was applied to solve the multiple-response optimization problem. Using the grey relational grade and the signal-to-noise ratio as a performance index, a combination of input parameters was determined so as to achieve optimum response characteristics. It was observed that a 20% premixed ratio of blend was most suitable for use in a PPCCI engine without significantly affecting the engine performance and emission characteristics.
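A minimal sketch of the grey-relational-grade computation described above follows, assuming equal response weights and the usual distinguishing coefficient ζ = 0.5; the small response matrix is illustrative, not the paper's data.

```python
import numpy as np

# Grey relational analysis: normalize each response, measure deviation from
# the ideal sequence, convert to grey relational coefficients, then average
# into a single grade per experiment.
def grey_relational_grade(Y, larger_is_better, zeta=0.5):
    Y = np.asarray(Y, float)
    lo, hi = Y.min(0), Y.max(0)
    norm = np.where(larger_is_better, (Y - lo) / (hi - lo), (hi - Y) / (hi - lo))
    delta = 1.0 - norm                         # deviation from the ideal (=1)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                  # equal-weight grade per experiment

Y = [[34.1, 0.82], [36.5, 0.91], [33.2, 0.78]]   # e.g., efficiency (max) and NOx index (min)
grades = grey_relational_grade(Y, larger_is_better=[True, False])
print(grades.argmax(), grades)                   # experiment with the best grade
```

The experiment (or parameter combination) with the highest grade is taken as the multi-response optimum, collapsing the conflicting objectives into one ranking.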
NASA Astrophysics Data System (ADS)
Tan, Yang; Srinivasan, Vasudevan; Nakamura, Toshio; Sampath, Sanjay; Bertrand, Pierre; Bertrand, Ghislaine
2012-09-01
The properties and performance of plasma-sprayed thermal barrier coatings (TBCs) are strongly dependent on the microstructural defects, which are affected by starting powder morphology and processing conditions. Of particular interest is the use of hollow powders, which not only allow for efficient melting of zirconia ceramics but also produce lower-conductivity and more compliant coatings. Typical industrial hollow spray powders have an assortment of densities, which masks the potential advantages of the hollow morphology. In this study, we have conducted process mapping strategies using a novel uniform-shell-thickness hollow powder to control the defect microstructure and properties. Correlations among coating properties, microstructure, and processing reveal the feasibility of producing highly compliant, low-conductivity TBCs through a combination of optimized feedstock and processing conditions. The results are presented within the framework of process maps, establishing correlations among process, microstructure, and properties and providing opportunities for optimization of TBCs.
Nguyen, Dinh Duc; Yoon, Yong Soo; Bui, Xuan Thanh; Kim, Sung Su; Chang, Soon Woong; Guo, Wenshan; Ngo, Huu Hao
2017-11-01
Performance of an electrocoagulation (EC) process in batch and continuous operating modes was thoroughly investigated and evaluated for enhancing wastewater phosphorus removal under various operating conditions, individually or combined, including initial phosphorus concentration, wastewater conductivity, current density, and electrolysis time. The results revealed excellent phosphorus removal (72.7-100%) for both processes within 3-6 min of electrolysis, with relatively low energy requirements, i.e., less than 0.5 kWh/m³ of treated wastewater. However, the phosphorus removal efficiency in the continuous EC operation mode was better than that in batch mode within the scope of the study. Additionally, the rate and efficiency of phosphorus removal strongly depended on the operational parameters, including wastewater conductivity, initial phosphorus concentration, current density, and electrolysis time. Based on the experimental data, a statistical model using response surface methodology (RSM; multiple-factor optimization) was also established and verified to provide further insights and accurately describe the interactive relationships between the process variables, thus optimizing EC process performance. The EC process using iron electrodes is promising for improving wastewater phosphorus removal efficiency, and RSM can be a sustainable tool for predicting the performance of the EC process and explaining the influence of the process variables.
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and the construction of the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure error balancing is proposed. This method is based on precise ray tracing, not approximate calculation, to compute the wavefront error under a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of each optical element. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than a previous approximate analytical method.
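To make the search loop concrete, here is a minimal sketch of simulated annealing over element clocking angles; the wavefront_rms function is a hypothetical placeholder for the precise ray-tracing evaluation described above.

```python
import numpy as np

rng = np.random.default_rng(2)

def wavefront_rms(angles):
    # placeholder: in practice this traces rays through the rotated elements
    # and returns the composite wavefront error RMS
    return float(np.sum(np.sin(angles) ** 2) + 0.1 * np.sum(np.cos(3 * angles)))

n_elems, T, cooling = 4, 1.0, 0.995
angles = rng.uniform(0, 2 * np.pi, n_elems)
best, best_rms = angles.copy(), wavefront_rms(angles)

for _ in range(2000):
    cand = (angles + rng.normal(0, 0.2, n_elems)) % (2 * np.pi)
    d = wavefront_rms(cand) - wavefront_rms(angles)
    if d < 0 or rng.random() < np.exp(-d / T):   # Metropolis acceptance
        angles = cand
        if wavefront_rms(angles) < best_rms:
            best, best_rms = angles.copy(), wavefront_rms(angles)
    T *= cooling                                  # geometric cooling schedule
print(best, best_rms)
```

Because annealing occasionally accepts worse configurations, it can escape the local minima that make manual, user-dependent clocking unreliable.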
Comprehensive optimization process of paranasal sinus radiography.
Saarakkala, S; Nironen, K; Hermunen, H; Aarnio, J; Heikkinen, J O
2009-04-01
The optimization of radiological examinations is important in order to reduce unnecessary patient radiation exposure. To perform a comprehensive optimization process for paranasal sinus radiography at Mikkeli Central Hospital, Finland. Patients with suspicion of acute sinusitis were imaged with a Kodak computed radiography (CR) system (n=20) and with a Philips digital radiography (DR) system (n=30) using focus-detector distances (FDDs) of 110 cm, 150 cm, or 200 cm. Patients' radiation exposure was determined in terms of entrance surface dose and dose-area product. Furthermore, an anatomical phantom was used for the estimation of point doses inside the head. Clinical image quality was evaluated by an experienced radiologist, and physical image quality was evaluated from the digital radiography phantom. Patient doses were significantly lower and image quality better with the DR system compared to the CR system. The differences in patient dose and physical image quality were small with varying FDD. Clinical image quality of the DR system was lowest with FDD of 200 cm. Further, imaging with FDD of 150 cm was technically easier for the technologist to perform than with FDD of 110 cm. After optimization, it was recommended that the DR system with FDD of 150 cm should always be used at Mikkeli Central Hospital. We recommend this kind of comprehensive approach in all optimization processes of radiological examinations.
Simulative design and process optimization of the two-stage stretch-blow molding process
NASA Astrophysics Data System (ADS)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-01
The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
Optimization of the performance of the polymerase chain reaction in silicon-based microstructures.
Taylor, T B; Winn-Deen, E S; Picozza, E; Woudenberg, T M; Albin, M
1997-01-01
We have demonstrated the ability to perform real-time, homogeneous, sequence-specific detection of PCR products in silicon microstructures. Optimal design and processing result in equivalent performance (yield and specificity) for high surface-to-volume silicon structures as compared to larger-volume reactions in polypropylene tubes. Amplifications in volumes as small as 0.5 μl and thermal cycling times reduced as much as 5-fold from those of conventional systems have been demonstrated for the microstructures. PMID:9224619
Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2003-01-01
The objective for this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface; and (2) the use of Cartesian methods in the development of automated optimization tools, including a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.
NASA Astrophysics Data System (ADS)
Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.
2016-05-01
One of the major challenges in civil, mechanical, and aerospace engineering is to develop vibration suppression systems with high efficiency and low cost. Recent studies have shown that high damping performance at broadband frequencies can be achieved by incorporating periodic inserts with tunable dynamic properties as internal resonators in structural systems. Structures featuring these kinds of inserts are referred to as metamaterials inspired structures or metastructures. Chiral lattice inserts exhibit unique characteristics such as frequency bandgaps which can be tuned by varying the parameters that define the lattice topology. Recent analytical and experimental investigations have shown that broadband vibration attenuation can be achieved by including chiral lattices as internal resonators in beam-like structures. However, these studies have suggested that the performance of chiral lattice inserts can be maximized by utilizing an efficient optimization technique to obtain the optimal topology of the inserted lattice. In this study, an automated optimization procedure based on a genetic algorithm is applied to obtain the optimal set of parameters that will result in chiral lattice inserts tuned properly to reduce the global vibration levels of a finite-sized beam. Genetic algorithms are considered in this study due to their capability of dealing with complex and insufficiently understood optimization problems. In the optimization process, the basic parameters that govern the geometry of periodic chiral lattices including the number of circular nodes, the thickness of the ligaments, and the characteristic angle are considered. Additionally, a new set of parameters is introduced to enable the optimization process to explore non-periodic chiral designs. Numerical simulations are carried out to demonstrate the efficiency of the optimization process.
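As a sketch of the genetic-algorithm loop over the lattice parameters named above (number of circular nodes, ligament thickness, characteristic angle), consider the following; the vibration_level objective is a hypothetical stand-in for the finite-element evaluation of the beam, and the node count is treated as continuous for simplicity (it would be rounded in practice).

```python
import numpy as np

rng = np.random.default_rng(7)
# gene bounds: (number of nodes, ligament thickness in mm, characteristic angle in deg)
lo, hi = np.array([3.0, 0.5, 10.0]), np.array([12.0, 3.0, 60.0])

def vibration_level(g):                    # hypothetical stand-in for the FE model
    n, t, theta = g
    return np.sin(theta / 15.0) ** 2 + (t - 1.2) ** 2 + 0.05 * abs(n - 7)

NP, gens = 30, 80
pop = rng.uniform(lo, hi, (NP, 3))
for _ in range(gens):
    fit = np.array([vibration_level(g) for g in pop])
    parents = pop[fit.argsort()[:NP // 2]]                 # truncation selection
    kids = []
    while len(kids) < NP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        alpha = rng.random(3)
        child = alpha * a + (1 - alpha) * b                # blend crossover
        child += rng.normal(0, 0.05, 3) * (hi - lo)        # Gaussian mutation
        kids.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, kids])

best = pop[np.argmin([vibration_level(g) for g in pop])]
print(best)
```

The non-periodic designs mentioned in the abstract would simply enlarge the gene vector, one parameter set per lattice region, without changing this loop.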
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
The industrial applications of the Coupled Tank System (CTS) are widespread, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank; the liquid level in each tank needs to be controlled, and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two methods based on the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE), and overshoot (OS). It is demonstrated that implementation of PSO via the Priority-based Fitness Scheme (PFPSO) is a potential technique for controlling the desired liquid level and improving system performance compared with standard PSO.
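A minimal sketch of PSO-tuned PID gains on a simplified two-tank level model follows; the plant coefficients and the ITAE-style cost are illustrative assumptions, and the priority-based fitness scheme itself is not reproduced — this is the standard global-best PSO baseline.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(gains, dt=0.1, T=60.0, setpoint=1.0):
    """Simulate PID level control of a linearized two-tank plant; return ITAE."""
    kp, ki, kd = gains
    h1 = h2 = integ = prev_e = 0.0
    J = 0.0
    for k in range(int(T / dt)):
        e = setpoint - h2
        integ += e * dt
        u = np.clip(kp * e + ki * integ + kd * (e - prev_e) / dt, 0.0, 5.0)
        prev_e = e
        h1 += dt * (u - 0.8 * (h1 - h2))           # pump feeds tank 1
        h2 += dt * (0.8 * (h1 - h2) - 0.3 * h2)    # tank 2 drains to outlet
        J += (k * dt) * abs(e) * dt                 # ITAE accumulation
    return J

NP, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
pos = rng.uniform(0, 10, (NP, 3)); vel = np.zeros((NP, 3))
pbest = pos.copy(); pcost = np.array([cost(p) for p in pos])
g = pbest[pcost.argmin()]
for _ in range(iters):
    r1, r2 = rng.random((NP, 3)), rng.random((NP, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 10)
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    g = pbest[pcost.argmin()]
print("Kp, Ki, Kd =", g, "cost =", pcost.min())
```

The priority-based variant would replace the single ITAE cost with an ordered comparison of Ts, SSE, and OS; the swarm mechanics stay the same.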
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. Those simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one dimensional and can be used for optimization processes in order to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two and three-dimensional analyses are performed in order to validate those simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.
NASA Astrophysics Data System (ADS)
Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah
2017-04-01
Radio Frequency Identification (RFID) systems have multiple benefits which can improve the operational efficiency of an organization: the ability to record data systematically and quickly, reduced human and system errors, and automatic, efficient database updates. Often, several readers are needed for the installation of an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works perfectly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, fast simulation time, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient usage; failure to do so may degrade performance and worsen the optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the number of iterations. In addition, the study recommends the most suitable setting for both parameters, namely 200 iterations and a swarm size of 800. These results will enable PSO to operate more efficiently in optimizing the RFID network planning system.
Key Processes of Silicon-On-Glass MEMS Fabrication Technology for Gyroscope Application.
Ma, Zhibo; Wang, Yinan; Shen, Qiang; Zhang, Han; Guo, Xuetao
2018-04-17
MEMS fabrication based on the silicon-on-glass (SOG) process requires many steps, including patterning, anodic bonding, deep reactive ion etching (DRIE), and chemical mechanical polishing (CMP). The effects of the process parameters of CMP and DRIE are investigated in this study. The CMP process parameters, such as abrasive size, load pressure, and pH value of the SF1 solution, are examined to optimize the total thickness variation of the structure and the surface quality. The ratio of etching to passivation cycle time and the process pressure are also adjusted to achieve satisfactory performance during DRIE. The process is optimized to avoid both notching and lag effects on the fabricated silicon structures. To demonstrate the capability of the modified CMP and DRIE processes, a z-axis micro gyroscope is fabricated based on the SOG process. Initial test results show that the average surface roughness of the silicon is below 1.13 nm and the thickness of the silicon is measured to be 50 μm. All of the structures are well defined, without the footing effect, by use of the modified DRIE process. The initial performance test results for the resonant frequency of the drive and sense modes are 4.048 and 4.076 kHz, respectively. The demands for this kind of SOG MEMS device can be fulfilled using the optimized process.
Characterization and multivariate analysis of physical properties of processing peaches
USDA-ARS?s Scientific Manuscript database
Characterization of physical properties of fruits represents the first vital step to ensure optimal performance of fruit processing operations and is also a prerequisite in the development of new processing equipment. In this study, physical properties of engineering significance to processing of th...
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted for the image-processing simulation, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost while simultaneously obtaining high-resolution images, which gives it a promising perspective for industrial application.
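The image-processing half of the design relies on Wiener filtering; a minimal sketch follows, assuming a Gaussian PSF and a scalar signal-to-noise ratio, with MSE as the evaluation criterion as in the paper. The scene and noise level are synthetic placeholders.

```python
import numpy as np

# Frequency-domain Wiener deconvolution: W = H* / (|H|^2 + 1/SNR).
def wiener_deconvolve(blurred, psf, snr=100.0):
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))

# toy demo: blur a random scene with a Gaussian PSF, add noise, then restore
rng = np.random.default_rng(4)
img = rng.random((64, 64))
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
noisy = blurred + 0.01 * rng.standard_normal(img.shape)
restored = wiener_deconvolve(noisy, psf)
print(float(np.mean((restored - img) ** 2)))       # MSE evaluation criterion
```

In the integral design, the optical merit function and this restoration step are optimized jointly, so the lens is allowed to stay "imperfect" wherever the filter can recover the loss.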
NASA Astrophysics Data System (ADS)
Aittokoski, Timo; Miettinen, Kaisa
2008-07-01
Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing industrial processes environmental burden, at both design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking for optimal solutions trading-off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive intricate process-modelling simulator of the DWPP and has to be solved with limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performances in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms, suitable for such simulation-based optimization, namely Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that good reduction in both operating cost and environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms while MOEA/D and DE perform unexpectedly poorly.
Analysis of grinding of superalloys and ceramics for off-line process optimization
NASA Astrophysics Data System (ADS)
Sathyanarayanan, G.
The present study compared the performance of resinoid, vitrified, and electroplated CBN wheels in creep feed grinding of M42 and D2 tool steels. Responses such as specific energy, normal and tangential forces, and surface roughness were used as measures of performance. It was found that creep feed grinding with resinoid, vitrified, and electroplated CBN wheels each has its own advantages, but no single wheel could provide good finish, lower specific energy, and high material removal rates simultaneously. To optimize CBN grinding with the different bonded wheels, a Multiple Criteria Decision Making (MCDM) methodology was used. Creep feed grinding of the superalloys Ti-6Al-4V and Inconel 718 was modeled by utilizing neural networks to optimize the grinding process. A parallel effort was directed at creep feed grinding of alumina ceramics with diamond wheels to investigate the influence of process variables on responses based on experimental results and statistical analysis. A conflicting influence of the variables was observed, which led to the formulation of the ceramic grinding process as a multi-objective nonlinear mixed integer problem.
Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm
NASA Astrophysics Data System (ADS)
Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim
2016-05-01
A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at the Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization, performance is calculated using the ray-tracing tool SolCal. The costs of the heliostats are calculated by means of a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts. For each configuration a cost-performance ratio is calculated. Based on that, the best geometry and field layout can be selected in each optimization step. In order to find the best configuration, this step is repeated until no significant improvement in the results is observed.
Hu, Rui; Liu, Shutian; Li, Quhao
2017-05-20
For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are the key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as the dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. The pattern repetition constraint is added, which can ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes that are linked to the mounting flexures, to reduce the computation effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method deduces new mounting structures that significantly enhance the optical performance of the mirror system compared to the traditional methods, which only focus on the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.
Performance evaluation of an asynchronous multisensor track fusion filter
NASA Astrophysics Data System (ADS)
Alouani, Ali T.; Gray, John E.; McCabe, D. H.
2003-08-01
Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communication delays between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.
Complex optimization for big computational and experimental neutron datasets
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program
NASA Technical Reports Server (NTRS)
Strain, D.; Levy, R.
1986-01-01
The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
NASA Technical Reports Server (NTRS)
Miller, David W.; Uebelhart, Scott A.; Blaurock, Carl
2004-01-01
This report summarizes work performed by the Space Systems Laboratory (SSL) for NASA Langley Research Center in the field of performance optimization for systems subject to uncertainty. The objective of the research is to develop design methods and tools to the aerospace vehicle design process which take into account lifecycle uncertainties. It recognizes that uncertainty between the predictions of integrated models and data collected from the system in its operational environment is unavoidable. Given the presence of uncertainty, the goal of this work is to develop means of identifying critical sources of uncertainty, and to combine these with the analytical tools used with integrated modeling. In this manner, system uncertainty analysis becomes part of the design process, and can motivate redesign. The specific program objectives were: 1. To incorporate uncertainty modeling, propagation and analysis into the integrated (controls, structures, payloads, disturbances, etc.) design process to derive the error bars associated with performance predictions. 2. To apply modern optimization tools to guide in the expenditure of funds in a way that most cost-effectively improves the lifecycle productivity of the system by enhancing the subsystem reliability and redundancy. The results from the second program objective are described. This report describes the work and results for the first objective: uncertainty modeling, propagation, and synthesis with integrated modeling.
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 KB of data RAM, 1 MB of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio, the TMS320C6712, and a large FLASH, the system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially for the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a perfect platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted to a biometric identification system with high identification precision. The results reveal that this hardware easily interfaces with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
Under-Track CFD-Based Shape Optimization for a Low-Boom Demonstrator Concept
NASA Technical Reports Server (NTRS)
Wintzer, Mathias; Ordaz, Irian; Fenbert, James W.
2015-01-01
The detailed outer mold line shaping of a Mach 1.6, demonstrator-sized low-boom concept is presented. Cruise trim is incorporated a priori as part of the shaping objective, using an equivalent-area-based approach. Design work is performed using a gradient-driven optimization framework that incorporates a three-dimensional, nonlinear flow solver, a parametric geometry modeler, and sensitivities derived using the adjoint method. The shaping effort is focused on reducing the under-track sonic boom level using an inverse design approach, while simultaneously satisfying the trim requirement. Conceptual-level geometric constraints are incorporated in the optimization process, including the internal layout of fuel tanks, landing gear, engine, and crew station. Details of the model parameterization and design process are documented for both flow-through and powered states, and the performance of these optimized vehicles presented in terms of inviscid L/D, trim state, pressures in the near-field and at the ground, and predicted sonic boom loudness.
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
Optimization design of turbo-expander gas bearing for a 500W helium refrigerator
NASA Astrophysics Data System (ADS)
Li, S. S.; Fu, B.; Zhang, Q. Y.
2017-12-01
The turbo-expander is the core machinery of the helium refrigerator, and the bearing, as its supporting element, is the core technology affecting turbo-expander design. Careful design and performance study of the gas bearing are essential to ensure the stability of the turbo-expander. In this paper, numerical simulation is used to analyze the performance of the gas bearing for a 500 W helium refrigerator turbine, and the optimization design of the gas bearing has been completed. The results of the gas bearing optimization also guide the machining process. Finally, turbine experiments verify that the gas bearing has good performance and ensures stable operation of the turbine.
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left-ventricle images shows that the ACO for image segmentation is more effective than the GA approach, and the new pheromone updating strategy shows good time performance in the optimization process.
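A minimal sketch of the finite-grade pheromone idea follows: pheromone on each candidate contour position is an integer grade, promoted and demoted by fixed steps independent of the objective value. The contour energy here is a hypothetical stand-in for the ACM image term, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_points, n_levels, n_grades = 30, 8, 5     # contour nodes, radial choices, grade cap
grade = np.ones((n_points, n_levels), int)  # every option starts at grade 1

def energy(path):                            # placeholder for the snake energy
    return float(np.sum(np.diff(path) ** 2) + np.sum((path - n_levels / 2) ** 2))

best, best_E = None, np.inf
for _ in range(200):
    tau = grade.astype(float)
    prob = tau / tau.sum(axis=1, keepdims=True)          # grade-proportional choice
    path = np.array([rng.choice(n_levels, p=prob[i]) for i in range(n_points)])
    E = energy(path)
    if E < best_E:
        best, best_E = path, E
    # grade update: demote everything by one grade (floor 1), then promote the
    # best-so-far path by a fixed step (cap n_grades) -- no E-dependent amount
    grade -= (grade > 1).astype(int)
    idx = np.arange(n_points)
    grade[idx, best] = np.minimum(grade[idx, best] + 2, n_grades)
print(best_E)
```

Because the update amounts are fixed grade steps rather than functions of the energy, the pheromone dynamics stay bounded, which is what makes the finite-Markov-chain convergence argument tractable.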
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Visual Perceptual Learning and Models.
Dosher, Barbara; Lu, Zhong-Lin
2017-09-15
Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
Hussein, Husnah; Williams, David J; Liu, Yang
2015-07-01
A systematic design of experiments (DOE) approach was used to optimize the perfusion process of a tri-axial bioreactor designed for translational tissue engineering exploiting mechanical stimuli and mechanotransduction. Four controllable design parameters affecting the perfusion process were identified in a cause-effect diagram as potential improvement opportunities. A screening process was used to separate out the factors that have the largest impact from the insignificant ones. DOE was employed to find the settings of the platen design, return tubing configuration and the elevation difference that minimise the load on the pump and variation in the perfusion process and improve the controllability of the perfusion pressures within the prescribed limits. DOE was very effective for gaining increased knowledge of the perfusion process and optimizing the process for improved functionality. It is hypothesized that the optimized perfusion system will result in improved biological performance and consistency.
Reduced state feedback gain computation. [optimization and control theory for aircraft control
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model that was developed is presented which accounts for aircraft parameter and initial uncertainty, measurement noise, turbulence, pilot command and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
NASA Astrophysics Data System (ADS)
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life cycle cost risk, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology to revise those plans based upon monitoring data using Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on the initial conditions and run the risk of becoming trapped in local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.
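A minimal sketch of obtaining a solution as a stochastic average, assuming a Boltzmann-style weighting in place of the paper's path-integral formulation; the objective and the temperature value are illustrative.

```python
import numpy as np

# Sketch: candidates are sampled over the whole domain, weighted by
# exp(-f/T), and the estimate is the weighted mean -- no initial guess
# is required, so there is no initial-condition dependence.

def f(x):                       # toy objective with many local minima
    return x**2 + 10 * np.sin(3 * x)

rng = np.random.default_rng(0)
T = 0.5                         # "temperature" controlling selectivity
xs = rng.uniform(-5, 5, 20000)  # independent samples, no start point
w = np.exp(-f(xs) / T)
x_hat = np.sum(w * xs) / np.sum(w)   # expected value under the weights
print(x_hat, f(x_hat))
```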
NASA Astrophysics Data System (ADS)
Mejid Elsiti, Nagwa; Noordin, M. Y.; Idris, Ani; Saed Majeed, Faraj
2017-10-01
This paper presents an optimization of the process parameters of micro-electrical discharge machining (micro-EDM) with (γ-Fe2O3) nano-powder-mixed dielectric using the multi-response Grey Relational Analysis (GRA) method instead of single-response optimization. The parameters were optimized based on a 2-level factorial design combined with Grey Relational Analysis. The machining parameters peak current, gap voltage, and pulse-on time were chosen for experimentation. The performance characteristics chosen for this study are material removal rate (MRR), tool wear rate (TWR), taper, and overcut. Experiments were conducted using electrolytic copper as the tool and CoCrMo as the workpiece. The experimental results were improved through this approach.
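The standard GRA computation the abstract refers to can be sketched as follows; the response values below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Minimal Grey Relational Analysis sketch (standard GRA formulas).
# MRR is larger-the-better; TWR, taper and overcut are smaller-the-better.

X = np.array([[0.80, 0.12, 0.05, 0.20],    # each row: one experiment
              [0.95, 0.20, 0.04, 0.15],
              [0.60, 0.08, 0.07, 0.25]])
larger_better = [True, False, False, False]

norm = np.empty_like(X)
for j, lb in enumerate(larger_better):
    c = X[:, j]
    norm[:, j] = (c - c.min()) / (c.max() - c.min()) if lb \
        else (c.max() - c) / (c.max() - c.min())

delta = 1.0 - norm                   # deviation from the ideal sequence
zeta = 0.5                           # distinguishing coefficient
coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coef.mean(axis=1)            # grey relational grade per run
print("best experiment:", int(grade.argmax()), grade)
```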
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
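The WSPT ordering at the heart of the asymptotic-optimality result can be sketched in a few lines; the job data are invented, and the consistency condition is reduced here to a single processing time per job.

```python
# Minimal sketch of the weighted-shortest-processing-time (WSPT) order
# used as the asymptotically optimal permutation. Job data are made up.

jobs = [  # (job id, processing time, weight)
    ("J1", 12.0, 3.0), ("J2", 7.0, 1.0), ("J3", 9.0, 4.0)]

# Sort by processing-time-to-weight ratio: smallest p/w first.
schedule = sorted(jobs, key=lambda j: j[1] / j[2])
print([j[0] for j in schedule])     # -> ['J3', 'J1', 'J2']
```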
Sarkar, Saurabh; Minatovicz, Bruna; Thalberg, Kyrre; Chaudhuri, Bodhisattwa
2017-01-01
The purpose of the present study was to develop guidance toward rational choice of blenders and processing conditions to make robust and high performing adhesive mixtures for dry-powder inhalers and to develop quantitative experimental approaches for optimizing the process. Mixing behavior of carrier (LH100) and AstraZeneca fine lactose in high-shear and low-shear double cone blenders was systematically investigated. Process variables impacting the mixing performance were evaluated for both blenders. The performance of the blenders with respect to the mixing time, press-on forces, static charging, and abrasion of carrier fines was monitored, and for some of the parameters, distinct differences could be detected. A comparison table is presented, which can be used as a guidance to enable rational choice of blender and process parameters based on the user requirements. Segregation of adhesive mixtures during hopper discharge was also investigated. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Knight, Travis W.; Anghaie, Samim
2002-11-01
Optimization of powder processing techniques was sought for the fabrication of single-phase, solid-solution mixed uranium/refractory metal carbide nuclear fuels, namely (U, Zr, Nb)C. These advanced, ultra-high temperature nuclear fuels have great potential for improved performance over the graphite matrix, dispersed fuels tested in the Rover/NERVA program of the 1960s and early 1970s. Hypostoichiometric fuel samples with carbon-to-metal ratios of 0.98, uranium metal mole fractions of 5% and 10%, and porosities less than 5% were fabricated. These qualities should provide the longest life and highest performance capability for these fuels. Study and optimization of the processing methods were necessary to provide the quality assurance of samples needed for meaningful testing and assessment of performance for nuclear thermal propulsion applications. The processing parameters and benefits of enhanced consolidation by uranium carbide liquid-phase sintering were established for the rapid and effective consolidation and formation of a solid-solution mixed carbide nuclear fuel.
Lithographic performance of recent DUV photoresists
NASA Astrophysics Data System (ADS)
Streefkerk, Bob; van Ingen Schenau, Koen; Buijk, Corine
1998-06-01
Commercially available photoresists from the major photoresist vendors are investigated using a PAS 5500/300 wafer stepper, a 31.1 mm diameter field size, high-throughput wafer stepper with variable NA capability up to 0.63. The critical dimension (CD) investigated is 0.25 micrometers and below for dense and isolated lines and 0.25 micrometers for dense contact holes. The photoresist process performance is quantified by measuring exposure-defocus windows for a specific resolution using a CD SEM. Photoresists that are comparable with or better than APEX-E with RTC top coat, which is the current baseline process for lines-and-spaces imaging performance, are Clariant AZ-DX1300 and Shin-Etsu SEPR-4103PB50. Most recent photoresists have much improved delay performance when compared to APEX without top coat. The improvement when an organic BARC is applied depends on the actual photoresist characteristics. The optimal photoresist found for 0.25 micrometer contact holes is TOK DP015 C. This process operates at optimal conditions.
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
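As a rough illustration of automated model-to-test correlation, the sketch below fits two parameters of a toy lumped thermal model to noisy synthetic "test" data; the model form and parameters are assumptions for illustration, not the JWST thermal model.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: find the parameter set that minimizes the model-test
# discrepancy, replacing manual tuning with an optimizer.

t_data = np.linspace(0, 10, 50)

def model(p, t):                 # p = (coupling, sink temperature)
    k, t_sink = p
    return t_sink + (300 - t_sink) * np.exp(-k * t)

truth = model((0.35, 120.0), t_data)
meas = truth + np.random.default_rng(2).normal(0, 1.0, t_data.size)

res = least_squares(lambda p: model(p, t_data) - meas, x0=(0.1, 100.0))
print("recovered parameters:", res.x)
```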
Model-based thermal system design optimization for the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-10-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Application of Particle Swarm Optimization in Computer Aided Setup Planning
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen; Abedini, Vahid
2011-01-01
Recent research seeks to integrate computer aided design (CAD) and computer aided manufacturing (CAM) environments. The role of process planning is to convert the design specification into manufacturing instructions. Setup planning plays a basic role in computer aided process planning (CAPP) and significantly affects the overall cost and quality of the machined part. This research focuses on the automatic generation of setups and on finding the best setup plan under feasible conditions. To computerize the setup planning process, the proposed system performs three major steps: (a) extraction of the machining data of the part; (b) analysis and generation of all possible setups; and (c) optimization to reach the best setup plan based on cost functions. Considering workshop resources such as machine tools, cutters and fixtures, all feasible setups can be generated. The problem is then constrained by technological considerations such as TAD (tool approach direction), tolerance relationships and feature precedence relationships to keep the approach realistic and practical. The optimal setup plan results from applying the PSO (particle swarm optimization) algorithm to the system using cost functions; a generic sketch of this algorithm is given below. A real sample part illustrates the performance and productivity of the system.
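A generic PSO sketch with the standard velocity/position update; the setup-planning cost functions of the paper are not given in the abstract, so a toy cost stands in.

```python
import numpy as np

# Minimal generic particle swarm optimization (PSO) sketch.

def pso(cost, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()                          # personal best positions
    pcost = np.apply_along_axis(cost, 1, x)
    g = pbest[pcost.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

print(pso(lambda p: (p ** 2).sum()))          # toy cost: sphere function
```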
Experimental test of an online ion-optics optimizer
NASA Astrophysics Data System (ADS)
Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.
2018-07-01
A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well defined set of initial multipole settings.
An optimal repartitioning decision policy
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
A central problem to parallel processing is the determination of an effective partitioning of workload to processors. The effectiveness of any given partition is dependent on the stochastic nature of the workload. The problem of determining when and if the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition is treated. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.
Gholikandi, Gagik Badalians; Kazemirad, Khashayar
2018-03-01
In this study, the performance of the electrochemical peroxidation (ECP) process for removing the volatile suspended solids (VSS) content of waste-activated sludge was evaluated. The Fe2+ ions required by the process were obtained directly from iron electrodes in the system. The performance of the ECP process was investigated under various operational conditions employing a laboratory-scale pilot setup and optimized by response surface methodology (RSM). According to the results, the ECP process showed its best performance when the pH value, current density, H2O2 concentration and retention time were 3, 3.2 mA/cm2, 1,535 mg/L and 240 min, respectively. In these conditions, the introduced Fe2+ concentration was approximately 500 mg/L and the VSS removal efficiency about 74%. Moreover, the microbial characterization of the raw and the stabilized sludge demonstrated that the ECP process is able to remove close to 99.9% of the coliforms in the raw sludge during the stabilization process. The energy consumption evaluation showed that the energy required by the ECP reactor (about 1.8-2.5 kWh (kg VSS removed)-1) is considerably lower than for aerobic digestion, the conventional waste-activated sludge stabilization method (about 2-3 kWh (kg VSS removed)-1). The RSM optimization showed that the best operational conditions of the ECP process comply with the experimental results, and the actual and predicted results are in good conformity with each other. This feature makes it possible to precisely predict the introduced Fe2+ concentration and the VSS removal efficiency of the process.
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. Conversely, there is no system level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is extremely challenging when dealing with the complex behavior of an integrated system. The complexity and the number of possible system configurations make the design parameters that must be traded off unmanageable when manual techniques are used. Existing multi-disciplinary optimization approaches generally rely on estimating ratios and correlations rather than utilizing mathematical models. The developed system level model utilizes the genetic algorithm to perform the population searches needed to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, is achievable. The process of developing the motor performance estimate and the system level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.
Persson, Oliver; Andersson, Niklas; Nilsson, Bernt
2018-01-05
Preparative liquid chromatography is a separation technique widely used in the manufacturing of fine chemicals and pharmaceuticals. A major drawback of the traditional single-column batch chromatography step is the trade-off between product purity and process performance. Recirculation of impure product can be utilized to make this trade-off more favorable. The aim of the present study was to investigate the use of a two-column batch-to-batch recirculation process step to increase performance compared to single-column batch chromatography at a high purity requirement. The separation of a ternary protein mixture on ion-exchange chromatography columns was used to evaluate the proposed process. The investigation comprised modelling and simulation of the process step, experimental validation and optimization of the simulated process. In the presented case the yield increases from 45.4% to 93.6% and the productivity increases 3.4 times compared to the performance of a batch run for a nominal case. A rapid build-up of product concentration can be seen during the first cycles, before the process reaches a cyclic steady-state with reoccurring concentration profiles. The optimization of the simulation model predicts that the recirculated salt can be used as a "flying start" for the elution, which would enhance the process performance. The proposed process is more complex than a batch process, but may improve separation performance, especially while operating at cyclic steady-state. The recirculation of impure fractions reduces product losses and ensures separation of the product to a high degree of purity. Copyright © 2017 Elsevier B.V. All rights reserved.
Arbitrary Shape Deformation in CFD Design
NASA Technical Reports Server (NTRS)
Landon, Mark; Perry, Ernest
2014-01-01
Sculptor(R) is a commercially available software tool, based on an Arbitrary Shape Design (ASD), which allows the user to perform shape optimization for computational fluid dynamics (CFD) design. The developed software tool provides important advances in the state of the art of automatic CFD shape deformation and optimization software. CFD is an analysis tool that is used by engineering designers to help gain a greater understanding of the fluid flow phenomena involved in the components being designed. The next step in the engineering design process is then to modify the design to improve the components' performance. This step has traditionally been performed manually via trial and error. Two major problems that have, in the past, hindered the development of automated CFD shape optimization are (1) inadequate shape parameterization algorithms, and (2) inadequate algorithms for CFD grid modification. The ASD that has been developed as part of the Sculptor(R) software tool is a major advancement in solving these two issues. First, the ASD allows the CFD designer to freely create his own shape parameters, thereby eliminating the restriction of only being able to use the CAD model parameters. Then, the software performs a smooth volumetric deformation, which eliminates the extremely costly process of having to remesh the grid for every shape change (which is how this process had previously been achieved). Sculptor(R) can be used to optimize shapes for aerodynamic and structural design of spacecraft, aircraft, watercraft, ducts, and other objects that affect and are affected by flows of fluids and heat. Sculptor(R) makes it possible to perform, in real time, a design change that would manually take hours or days if remeshing were needed.
Bustillo-Lecompte, Ciro Fernando; Mehrvar, Mehrab; Quiñones-Bolaños, Edgar
2014-02-15
The objective of this study is to evaluate the operating costs of treating slaughterhouse wastewater (SWW) using combined biological and advanced oxidation processes (AOPs). This study compares the performance and the treatment capability of an anaerobic baffled reactor (ABR), an aerated completely mixed activated sludge reactor (AS), and a UV/H2O2 process, as well as their combination, for the removal of total organic carbon (TOC). Overall efficiencies are found to be up to 75.22, 89.47, 94.53, 96.10, 96.36, and 99.98% for the UV/H2O2, ABR, AS, combined AS-ABR, combined ABR-AS, and combined ABR-AS-UV/H2O2 processes, respectively. Due to the consumption of electrical energy and reagents, operating costs are calculated at the optimal conditions of each process. A cost-effectiveness analysis (CEA) is performed at optimal conditions for the SWW treatment by optimizing the total electricity cost, H2O2 consumption, and hydraulic retention time (HRT). The combined ABR-AS-UV/H2O2 process has an optimal TOC removal of 92.46% at an HRT of 41 h, at a cost of $1.25/kg of TOC removed and $11.60/m3 of treated SWW. This process reaches a maximum TOC removal of 99% in 76.5 h with an estimated cost of $2.19/kg of TOC removed and $21.65/m3 of treated SWW, equivalent to $6.79/m3 per day. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Fengwen; Jensen, Jakob S.; Sigmund, Ole
2012-10-01
Photonic crystal waveguides are optimized for modal confinement and loss related to slow light with high group index. A detailed comparison between optimized circular-hole based waveguides and optimized waveguides with free topology is performed. Design robustness with respect to manufacturing imperfections is enforced by considering different design realizations generated from under-, standard- and over-etching processes in the optimization procedure. A constraint ensures a certain modal confinement, and loss related to slow light with high group index is indirectly treated by penalizing field energy located in air regions. It is demonstrated that slow light with a group index up to ng = 278 can be achieved by topology optimized waveguides with promising modal confinement and restricted group-velocity-dispersion. All the topology optimized waveguides achieve a normalized group-index bandwidth of 0.48 or above. The comparisons between circular-hole based designs and topology optimized designs illustrate that the former can be efficient for dispersion engineering but that larger improvements are possible if irregular geometries are allowed.
NASA Astrophysics Data System (ADS)
Li, Shuang; Zhu, Yongsheng; Wang, Yukai
2014-02-01
Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered as one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and genetic algorithm (GA) are adopted to perform preliminary design, which provides a reasonable initial guess for subsequent accurate optimization. Secondly, Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.
Application of high-performance computing to numerical simulation of human movement
NASA Technical Reports Server (NTRS)
Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.
1995-01-01
We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
Optimization of an optically implemented on-board FDMA demultiplexer
NASA Technical Reports Server (NTRS)
Fargnoli, J.; Riddle, L.
1991-01-01
Performance of a 30 GHz frequency division multiple access (FDMA) uplink to a processing satellite is modelled for the case where the onboard demultiplexer is implemented optically. Included in the performance model are the effects of adjacent channel interference, intersymbol interference, and spurious signals associated with the optical implementation. Demultiplexer parameters are optimized to provide the minimum bit error probability at a given bandwidth efficiency when filtered QPSK modulation is employed.
Experiences in autotuning matrix multiplication for energy minimization on GPUs
Anzt, Hartwig; Haugen, Blake; Kurzak, Jakub; ...
2015-05-20
In this study, we report extensive results and analysis of autotuning the computationally intensive graphics processing unit kernel for dense matrix-matrix multiplication in double precision. In contrast to traditional autotuning and/or optimization for runtime performance only, we also take energy efficiency into account. For kernels achieving equal performance, we show significant differences in their energy balance. We also identify memory throughput as the most influential metric that trades off performance and energy efficiency. As a result, the performance-optimal kernel ends up not being the most efficient in overall resource use.
Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks
NASA Technical Reports Server (NTRS)
Greenman, Roxana M.
1998-01-01
The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
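The surrogate-plus-optimizer loop described above can be illustrated with a named simplification: a quadratic least-squares surrogate stands in for the trained neural networks, and plain gradient ascent stands in for the gradient-based optimizer. The design variables and "CFD" data below are synthetic.

```python
import numpy as np

# Sketch of surrogate-based design optimization: fit a cheap model to
# expensive analysis data, then optimize on the surrogate.

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (40, 2))                   # e.g. (flap gap, overlap)
lift = 1.0 - (X[:, 0] - 0.2) ** 2 - 2 * (X[:, 1] + 0.1) ** 2  # mock CFD data

# Fit lift ~ quadratic in the two design variables.
A = np.c_[np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]]
coef, *_ = np.linalg.lstsq(A, lift, rcond=None)

def surrogate_grad(x):
    b = coef
    return np.array([b[1] + 2 * b[3] * x[0] + b[5] * x[1],
                     b[2] + 2 * b[4] * x[1] + b[5] * x[0]])

x = np.zeros(2)
for _ in range(500):                              # gradient ascent on lift
    x += 0.05 * surrogate_grad(x)
print("predicted optimum near", x)                # true optimum: (0.2, -0.1)
```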
Process control systems: integrated for future process technologies
NASA Astrophysics Data System (ADS)
Botros, Youssry; Hajj, Hazem M.
2003-06-01
Process Control Systems (PCS) are becoming more crucial to the success of integrated circuit makers due to their direct impact on product quality, cost, and fab output. The primary objective of PCS is to minimize variability by detecting and correcting non-optimal performance. Current PCS implementations are disparate: each PCS application is designed, deployed and supported separately, and each targets a specific area of control such as equipment performance, wafer manufacturing, or process health monitoring. With Intel entering the nanometer technology era, tighter process specifications are required for higher yields and lower cost. This requires the areas of control to be tightly coupled and integrated to achieve optimal performance, which can be accomplished via consistent design and deployment of an integrated PCS. PCS integration will result in several benefits, such as leveraging commonalities, avoiding redundancy, and facilitating sharing between implementations. This paper addresses PCS implementations and focuses on the benefits and requirements of the integrated PCS. The Intel integrated PCS architecture is then presented and its components briefly discussed. Finally, industry direction and efforts to standardize PCS interfaces that enable PCS integration are presented.
Adaptive hybrid optimal quantum control for imprecisely characterized systems.
Egger, D J; Wilhelm, F K
2014-06-20
Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
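A minimal sketch of the two-phase idea described in the abstract (not the Ad-HOC implementation itself): a gradient search on a slightly wrong model gets near the optimum, then a gradient-free simplex search polishes against the "experimental" objective, which here carries a systematic model error as a stand-in for imprecise system parameters.

```python
import numpy as np
from scipy.optimize import minimize

def model_infid(u):              # model with slightly wrong parameters
    return (u[0] - 1.0) ** 2 + (u[1] + 0.5) ** 2

def exper_infid(u):              # "true" system, unknown to the model
    return (u[0] - 1.1) ** 2 + (u[1] + 0.4) ** 2

u = np.zeros(2)
for _ in range(200):             # phase 1: gradient descent on the model
    g = np.array([2 * (u[0] - 1.0), 2 * (u[1] + 0.5)])
    u -= 0.1 * g

res = minimize(exper_infid, u, method="Nelder-Mead")  # phase 2: gradient-free
print("after model step:", u, "after experiment step:", res.x)
```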
Contrast research of CDMA and GSM network optimization
NASA Astrophysics Data System (ADS)
Wu, Yanwen; Liu, Zehong; Zhou, Guangyue
2004-03-01
With the development of mobile telecommunication networks, CDMA users have raised their expectations of network service quality, while operators have shifted their network management objective from signal coverage to performance improvement. Consequently, reasonable layout and optimization of the mobile telecommunication network, reasonable configuration of network resources, improvement of service quality, and strengthening of the enterprise's core competitiveness have all become concerns of the operating companies. This paper first examines the workflow of CDMA network optimization. It then discusses some key points of CDMA network optimization, such as PN code assignment and the calculation of soft handover. Since GSM is a cellular mobile telecommunication system similar to CDMA, the paper also presents a detailed comparative study of CDMA and GSM network optimization, covering both their similarities and their differences. In conclusion, network optimization is a long-term job that runs through the whole process of network construction. By adjusting network hardware (such as BTS equipment and RF systems) and network software (such as parameter, configuration, and capacity optimization), network optimization can improve the performance and service quality of the network.
Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.
Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric
2018-03-01
Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered as a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that with the proposed algorithm, a more suitable modeling framework, as well as computational time savings and better optimization performance are obtained than that reported in the literature on this subject.
Long-Run Savings and Investment Strategy Optimization
Gerrard, Russell; Guillén, Montserrat; Pérez-Marín, Ana M.
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration. PMID:24711728
Long-run savings and investment strategy optimization.
Gerrard, Russell; Guillén, Montserrat; Nielsen, Jens Perch; Pérez-Marín, Ana M
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration.
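A toy simulation contrasting the constant-amount strategy discussed above with a constant-proportion strategy; all market parameters below are illustrative assumptions, not values from the article.

```python
import numpy as np

# Sketch: simulate a lifetime saving process under two strategies and
# compare median wealth and downside (5th percentile) outcomes.

rng = np.random.default_rng(0)
years, paths, contrib = 30, 10000, 1.0
r_safe, mu, sigma = 0.02, 0.06, 0.18
risky_amount = 5.0               # constant cash amount in the risky asset
risky_share = 0.4                # constant proportion alternative

def simulate(constant_amount):
    w = np.zeros(paths)
    for _ in range(years):
        w += contrib
        risky = np.minimum(risky_amount, w) if constant_amount \
            else risky_share * w
        ret = rng.normal(mu, sigma, paths)
        w = (w - risky) * (1 + r_safe) + risky * (1 + ret)
    return w

for name, flag in (("constant amount", True), ("constant proportion", False)):
    w = simulate(flag)
    print(name, "median", round(np.median(w), 1),
          "5th pct", round(np.percentile(w, 5), 1))
```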
NASA Astrophysics Data System (ADS)
Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.
2016-07-01
In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached the intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed in Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and it can be concluded that the model performs satisfactorily on this parameter optimization problem. Future work could compare GSA with other nature-inspired algorithms and test them with various sets of data.
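A generic GSA sketch with the standard mass/force/velocity updates; the mapping from the four deposition parameters to film quality is not in the abstract, so a toy objective over normalized parameters stands in.

```python
import numpy as np

# Minimal Gravitational Search Algorithm sketch (standard update rules).

def gsa(cost, dim=4, n=20, iters=100, g0=100.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n, dim))                # normalized parameters
    v = np.zeros((n, dim))
    for t in range(iters):
        f = np.apply_along_axis(cost, 1, x)
        worst, best = f.max(), f.min()
        m = (worst - f) / (worst - best + 1e-12)   # masses from fitness
        m = m / m.sum()
        G = g0 * np.exp(-20 * t / iters)           # decaying constant
        a = np.zeros((n, dim))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = np.linalg.norm(x[j] - x[i]) + 1e-12
                a[i] += rng.random() * G * m[j] * (x[j] - x[i]) / d
        v = rng.random((n, dim)) * v + a           # Newtonian motion
        x = np.clip(x + v, 0, 1)
    f = np.apply_along_axis(cost, 1, x)
    return x[f.argmin()], f.min()

print(gsa(lambda p: ((p - 0.3) ** 2).sum()))       # toy quality target
```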
Carius, Lisa; Rumschinski, Philipp; Faulwasser, Timm; Flockerzi, Dietrich; Grammel, Hartmut; Findeisen, Rolf
2014-04-01
Microaerobic (oxygen-limited) conditions are critical for inducing many important microbial processes in industrial or environmental applications. At very low oxygen concentrations, however, the process performance often suffers from technical limitations. Available dissolved oxygen measurement techniques are not sensitive enough, and control techniques that can reliably handle these conditions are thus lacking. Recently, we proposed a microaerobic process control strategy which overcomes these restrictions and allows different degrees of oxygen limitation to be assessed in bioreactor batch cultivations. Here, we focus on the design of a control strategy for the automation of oxygen-limited continuous cultures using the microaerobic formation of photosynthetic membranes (PM) in Rhodospirillum rubrum as a model phenomenon. We draw upon R. rubrum since the considered phenomenon depends on the optimal availability of mixed-carbon sources, hence on boundary conditions which make the process performance challenging. Empirically assessing these specific microaerobic conditions is scarcely practicable, as such a process reacts highly sensitively to changes in the substrate composition and the oxygen availability in the culture broth. Therefore, we propose a model-based process control strategy which allows steady-states of cultures grown under these conditions to be stabilized. As designing the appropriate strategy requires detailed knowledge of the system behavior, we begin by deriving and validating an unstructured process model. This model is used to optimize the experimental conditions and to identify properties of the system which are critical for process performance. The derived model facilitates good process performance via the proposed optimal control strategy. In summary, the presented model-based control strategy allows microaerobic steady-states of interest to be accessed and maintained and the culture to be precisely and efficiently transferred from one stable microaerobic steady-state into another. Therefore, the presented approach is a valuable tool to study regulatory mechanisms of microaerobic phenomena in response to oxygen limitation alone. Biotechnol. Bioeng. 2014;111: 734-747. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to assess the process improvement, quality management and analytical techniques taught to students in U.S. colleges' and universities' undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical and quantitative management techniques and process performance modeling to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The research study identifies and provides a detailed discussion of the gap analysis findings on the process improvement and quantitative analysis techniques taught in U.S. universities' systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
Self-adaptive multimethod optimization applied to a tailored heating forging process
NASA Astrophysics Data System (ADS)
Baldan, M.; Steinberg, T.; Baake, E.
2018-05-01
The presented paper describes an innovative self-adaptive multi-objective optimization code. The investigation aims to prove the superiority of this code over NSGA-II and to apply it to a case study on inductor design for a "tailored" heating forging application. The choice of the frequency and the heating time is followed by the determination of the number of turns and their positions. Finally, a straightforward optimization is performed in order to minimize energy consumption using "optimal control".
Design and Manufacturing of Composite Tower Structure for Wind Turbine Equipment
NASA Astrophysics Data System (ADS)
Park, Hyunbum
2018-02-01
This study proposes a composite tower design process for large wind turbine equipment. In this work, the structural design of the tower and analysis using the finite element method were performed. After the structural design, prototype manufacturing and testing were performed. The material used is a glass fiber and epoxy resin composite, with sand used in the middle part. An optimized structural design and analysis was carried out, with weight reduction and structural safety as the design objectives. Finally, the tower structure was verified by structural testing.
Green Schools as High Performance Learning Facilities
ERIC Educational Resources Information Center
Gordon, Douglas E.
2010-01-01
In practice, a green school is the physical result of a consensus process of planning, design, and construction that takes into account a building's performance over its entire 50- to 60-year life cycle. The main focus of the process is to reinforce optimal learning, a goal very much in keeping with the parallel goals of resource efficiency and…
Optimization of Straight Cylindrical Turning Using Artificial Bee Colony (ABC) Algorithm
NASA Astrophysics Data System (ADS)
Prasanth, Rajanampalli Seshasai Srinivasa; Hans Raj, Kandikonda
2017-04-01
Artificial bee colony (ABC) algorithm, that mimics the intelligent foraging behavior of honey bees, is increasingly gaining acceptance in the field of process optimization, as it is capable of handling nonlinearity, complexity and uncertainty. Straight cylindrical turning is a complex and nonlinear machining process which involves the selection of appropriate cutting parameters that affect the quality of the workpiece. This paper presents the estimation of optimal cutting parameters of the straight cylindrical turning process using the ABC algorithm. The ABC algorithm is first tested on four benchmark problems of numerical optimization and its performance is compared with genetic algorithm (GA) and ant colony optimization (ACO) algorithm. Results indicate that, the rate of convergence of ABC algorithm is better than GA and ACO. Then, the ABC algorithm is used to predict optimal cutting parameters such as cutting speed, feed rate, depth of cut and tool nose radius to achieve good surface finish. Results indicate that, the ABC algorithm estimated a comparable surface finish when compared with real coded genetic algorithm and differential evolution algorithm.
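A compact sketch of the ABC loop with its employed, onlooker and scout phases; the turning cost model linking speed, feed, depth of cut and nose radius to surface finish is not given in the abstract, so a toy objective over normalized parameters stands in.

```python
import numpy as np

# Minimal artificial bee colony (ABC) sketch.

def abc(cost, dim=4, n_sources=15, iters=200, limit=20, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_sources, dim))        # food sources
    f = np.apply_along_axis(cost, 1, x)
    trials = np.zeros(n_sources, dtype=int)
    for _ in range(iters):
        for phase in ("employed", "onlooker"):
            if phase == "employed":
                order = range(n_sources)
            else:                                  # onlookers prefer richer sources
                p = f.max() - f + 1e-12
                p = p / p.sum()
                order = rng.choice(n_sources, n_sources, p=p)
            for i in order:
                k = rng.integers(n_sources)        # random partner source
                j = rng.integers(dim)              # perturb one dimension
                cand = x[i].copy()
                cand[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
                cand = np.clip(cand, 0, 1)
                fc = cost(cand)
                if fc < f[i]:                      # greedy replacement
                    x[i], f[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        worn = trials > limit                      # scouts replace exhausted sources
        if worn.any():
            x[worn] = rng.uniform(0, 1, (int(worn.sum()), dim))
            f[worn] = np.apply_along_axis(cost, 1, x[worn])
            trials[worn] = 0
    return x[f.argmin()], f.min()

print(abc(lambda p: ((p - 0.5) ** 2).sum()))       # toy surface-finish target
```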
Optimal cure cycle design of a resin-fiber composite laminate
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Sheen, Jeenson
1987-01-01
A unified computer aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of the composite cure process. Preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived using the direct differentiation method and are also solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized, and various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.
Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H
2015-01-01
Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error introduced during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error in the alignment parameters of interest compared to the standard approach, which rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to registration process error. Navigation can be used during MIS TKR to improve alignment without the approach reducing accuracy.
Contributions to optimization of storage and transporting industrial goods
NASA Astrophysics Data System (ADS)
Babanatsas, T.; Babanatis Merce, R. M.; Glăvan, D. O.; Glăvan, A.
2018-01-01
Optimization of the storage and transport of industrial goods in a factory, whether from a constructive, functional, or technological point of view, is a determining parameter in programming the manufacturing process; the performance of the whole process depends on the correlation between these two factors (optimization and process programming). It is imperative to take into consideration each type of production program (range), to reduce the floor area used as much as possible, and to minimize execution times, all in order to satisfy clients' needs and to classify those needs so that a global software tool (with general rules) can be defined that fulfils each client's requirements.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
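As a hedged illustration of the occupation-measure linear program, the sketch below solves a discrete-time, finite state-action analog (not the paper's continuous-time impulsive model): it minimizes one expected discounted cost subject to a bound on a second, with scipy.optimize.linprog. All transition probabilities and costs are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# variables y[s, a] are discounted occupation measures
gamma = 0.9
nS, nA = 2, 2
P = np.zeros((nS, nA, nS))               # P[s, a, s'] transition kernel (assumed)
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.3, 0.7]; P[1, 1] = [0.6, 0.4]
c = np.array([[1.0, 2.0], [4.0, 0.5]])   # main cost rate c(s, a)
d = np.array([[0.0, 1.0], [1.0, 0.0]])   # secondary cost, bounded by B
B = 3.0
mu0 = np.array([1.0, 0.0])               # initial distribution

# equality: sum_a y(s',a) - gamma * sum_{s,a} P(s'|s,a) y(s,a) = mu0(s')
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (sp == s) - gamma * P[s, a, sp]

res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[B],
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))
y = res.x.reshape(nS, nA)
print("optimal discounted cost:", res.fun)
print("occupation measure:\n", y)
```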
Optimization of the production process using virtual model of a workspace
NASA Astrophysics Data System (ADS)
Monica, Z.
2015-11-01
Optimization of the production process is an element of the design cycle consisting of: problem definition, modelling, simulation, optimization and implementation. Without the use of simulation techniques, the only thing that can be achieved is a larger or smaller improvement of the process, not true optimization (i.e., the best result that can be obtained for the conditions under which the process works). Optimization generally comprises management actions that ultimately bring savings in time, resources and raw materials and improve the performance of a specific process, whether it is a service or a manufacturing process; the savings are generated by improving the process and increasing its efficiency. Optimization consists primarily of organizational activities that require very little investment, or rely solely on changing the organization of work. Modern companies operating in a market economy show a significant increase in interest in modern methods of production management and services. This trend is due to high competitiveness: companies that want to achieve success are forced to continually modify their management methods and respond flexibly to changing demand. Modern methods of production management not only imply a stable position of the company in the sector, but also influence the improvement of health and safety within the company and contribute to the implementation of more efficient rules for standardizing work in the company. This is why the paper presents the application of an environment such as Siemens NX to create a virtual model of a production system and to simulate as well as optimize its work. The analyzed system is a robotized workcell consisting of machine tools, industrial robots, conveyors, auxiliary equipment and buffers. The control program realizing the main task in the virtual workcell can be defined in the software. Using this tool, it is possible to optimize both the object trajectory and the cooperation process.
Viscous Aerodynamic Shape Optimization with Installed Propulsion Effects
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Seidel, Jonathan A.; Rallabhandi, Sriram K.
2017-01-01
Aerodynamic shape optimization is demonstrated to tailor the under-track pressure signature of a conceptual low-boom supersonic aircraft. Primarily, the optimization reduces nearfield pressure waveforms induced by propulsion integration effects. For computational efficiency, gradient-based optimization is used, coupled to the discrete adjoint formulation of the Reynolds-averaged Navier-Stokes equations. The engine outer nacelle, nozzle, and vertical tail fairing are axisymmetrically parameterized, while the horizontal tail is shaped using a wing-based parameterization. Overall, 48 design variables are coupled to the geometry and used to deform the outer mold line. During the design process, an inequality drag constraint is enforced to avoid major compromise in aerodynamic performance. Linear elastic mesh morphing is used to deform volume grids between design iterations. The optimization is performed at Mach 1.6 cruise, assuming standard-day conditions at an altitude of 51,707 ft. To reduce uncertainty, a coupled thermodynamic engine cycle model is employed that captures installed inlet performance effects on engine operation.
Optimal design of zero-water discharge rinsing systems.
Thöming, Jorg
2002-03-01
This paper is about zero liquid discharge in processes that use water for rinsing. Emphasis was given to those systems that contaminate process water with valuable process liquor and compounds. The approach involved the synthesis of optimal rinsing and recycling networks (RRN) that had a priori excluded water discharge. The total annualized costs of the RRN were minimized by the use of a mixed-integer nonlinear program (MINLP). This MINLP was based on a hyperstructure of the RRN and contained eight counterflow rinsing stages and three regenerator units: electrodialysis, reverse osmosis, and ion exchange columns. A "large-scale nickel plating process" case study showed that by means of zero-water discharge and optimized rinsing the total waste could be reduced by 90.4% at a revenue of $448,000/yr. Furthermore, with the optimized RRN, the rinsing performance can be improved significantly at a low-cost increase. In all the cases, the amount of valuable compounds reclaimed was above 99%.
GilPavas, Edison; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Ángel
2012-01-01
The Response Surface Methodology (RSM) was applied as a tool for optimizing the operational conditions of the photo-degradation of highly concentrated PY12 wastewater from a textile industry located in the suburbs of Medellin (Colombia). The Box-Behnken design (BBD) was chosen for the purpose of response optimization. The photo-Fenton process was carried out in a laboratory-scale batch photo-reactor. A multifactorial experimental design was proposed, including the following variables: the initial dyestuff concentration, the H2O2 and Fe(2+) concentrations, and the UV wavelength radiation. The photo-Fenton process performed at the optimized conditions resulted in ca. 100% dyestuff decolorization, 92% COD and 82% TOC degradation. A kinetic study was carried out, including the identification of some intermediate compounds generated during the oxidation process. The water biodegradability reached a final BOD5/COD value of 0.86.
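A minimal sketch of the statistical machinery named above, assuming synthetic responses: it generates a three-factor Box-Behnken design, fits a full quadratic response surface by ordinary least squares, and locates the optimum of the fitted surface on a grid of coded levels.

```python
import numpy as np
from itertools import combinations

def box_behnken(k=3, center=3):
    # Box-Behnken design: edge midpoints of the cube plus center points
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * center
    return np.array(runs, dtype=float)

def quad_features(X):
    # full quadratic model: 1, x_i, x_i^2, x_i * x_j
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

X = box_behnken()
# responses would be measured in the lab; these are synthetic, for illustration
rng = np.random.default_rng(0)
y = 90 - 5 * X[:, 0] ** 2 - 3 * X[:, 1] ** 2 + 2 * X[:, 2] \
    + rng.normal(0, 0.5, len(X))

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# locate the optimum of the fitted surface on a fine grid of coded levels
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 41)] * 3)).T.reshape(-1, 3)
best = grid[np.argmax(quad_features(grid) @ beta)]
print("coded optimum:", best)
```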
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA
2006-03-21
A method for minimizing the life-cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process; identifying and measuring the parameters necessary to characterize the actual operating condition of the process; validating the data generated by measuring those parameters; characterizing the actual condition of the process; identifying the optimal condition corresponding to the actual condition; comparing said optimal condition with the actual condition and identifying variances between the two; and drawing from a set of pre-defined algorithms, created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.
NASA Astrophysics Data System (ADS)
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
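To make the paper's point concrete, here is a minimal particle filter for a monotonic Paris-law crack-growth state. A unit-mean lognormal noise is applied to the growth increment, so particles can never shrink the crack; this is one way to realize an unbiased, monotonicity-respecting process noise of the kind argued for above. The kinetic constants and noise levels are illustrative, not the paper's full-scale aeronautical data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Paris-law crack growth; constants are illustrative
C, m, dsig = 1e-10, 3.0, 80.0
sigma_meas = 2e-4                              # measurement noise std [m]

def increment(a, dN=1000.0):
    # deterministic growth increment over a block of dN cycles
    dK = dsig * np.sqrt(np.pi * a)
    return C * dK ** m * dN

def pf_update(particles, weights, z, s=0.2):
    # unit-mean lognormal noise on the increment: unbiased, and particles
    # can never shrink the crack, preserving the monotonic dynamics
    noise = rng.lognormal(mean=-0.5 * s ** 2, sigma=s, size=len(particles))
    particles = particles + increment(particles) * noise
    weights = weights * np.exp(-0.5 * ((z - particles) / sigma_meas) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:   # ESS degenerated
        u = (rng.random() + np.arange(len(particles))) / len(particles)
        idx = np.searchsorted(np.cumsum(weights), u)      # systematic resampling
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.uniform(1e-3, 3e-3, 2000)      # initial crack-length particles [m]
weights = np.ones(2000) / 2000
a_true = 2e-3
for k in range(10):                            # synthetic measurement sequence
    a_true = a_true + increment(a_true)
    z = a_true + rng.normal(0.0, sigma_meas)
    particles, weights = pf_update(particles, weights, z)
    print(k, "estimated crack length:", np.sum(weights * particles))
```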
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2015-09-01
An optimal trade-off design for a fractional order (FO)-PID controller is proposed with a Linear Quadratic Regulator (LQR) based technique using two conflicting time-domain objectives. A class of delayed FO systems with a single non-integer-order element, exhibiting both sluggish and oscillatory open-loop responses, has been controlled here. The FO time-delay processes are handled within a multi-objective optimization (MOO) formalism of LQR-based FOPID design. A comparison is made between two contemporary approaches for stabilizing time-delay systems within LQR. The MOO control design methodology yields Pareto optimal trade-off solutions between the tracking performance and the total variation (TV) of the control signal. Tuning rules are formed for the optimal LQR-FOPID controller parameters, using the median of the non-dominated Pareto solutions, to handle delayed FO processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.
2017-09-01
Electric discharge machining (EDM) is one of the widely used nonconventional machining processes for hard and difficult-to-machine materials. Due to the large number of machining parameters in EDM and its complicated structure, the selection of the optimal machining parameters for the best machining performance remains a challenging task for researchers. This paper proposes an experimental investigation and optimization of machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed based on a regression approach with four input parameters (pulse-on time, peak current, servo voltage and servo speed) and one output response, dimensional accuracy (DA). The optimal result of the HS approach was compared with the regression analysis, and HS was found to give the better result, yielding the minimum DA value compared with the regression approach.
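A sketch of the Harmony Search loop applied to a stand-in regression model for dimensional accuracy (DA). The polynomial coefficients, parameter bounds, and HS hyperparameters (HMS, HMCR, PAR, bandwidth) are assumptions for illustration, not the fitted model from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def DA(x):
    # illustrative second-order regression model; placeholder coefficients
    Ton, Ip, SV, SS = x
    return (0.3 + 0.004 * Ton + 0.02 * Ip - 0.001 * SV - 0.0005 * SS
            + 0.0001 * Ton * Ip + 0.0002 * Ip ** 2)

bounds = np.array([[50, 150],    # pulse-on time
                   [1, 12],      # peak current
                   [30, 90],     # servo voltage
                   [100, 300]])  # servo speed

HMS, HMCR, PAR, bw, iters = 20, 0.9, 0.3, 0.05, 5000
hm = rng.uniform(bounds[:, 0], bounds[:, 1], (HMS, 4))   # harmony memory
fit = np.apply_along_axis(DA, 1, hm)

for _ in range(iters):
    new = np.empty(4)
    for j in range(4):
        if rng.random() < HMCR:                     # memory consideration
            new[j] = hm[rng.integers(HMS), j]
            if rng.random() < PAR:                  # pitch adjustment
                new[j] += bw * (bounds[j, 1] - bounds[j, 0]) * rng.uniform(-1, 1)
        else:                                       # random selection
            new[j] = rng.uniform(bounds[j, 0], bounds[j, 1])
    new = np.clip(new, bounds[:, 0], bounds[:, 1])
    worst = np.argmax(fit)
    f = DA(new)
    if f < fit[worst]:                              # replace the worst harmony
        hm[worst], fit[worst] = new, f

best = np.argmin(fit)
print("optimal parameters:", np.round(hm[best], 2), "min DA:", round(fit[best], 4))
```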
On processing development for fabrication of fiber reinforced composite, part 2
NASA Technical Reports Server (NTRS)
Hou, Tan-Hung; Hou, Gene J. W.; Sheen, Jeen S.
1989-01-01
Fiber-reinforced composite laminates are used in many aerospace and automobile applications. The magnitudes and durations of the cure temperature and the cure pressure applied during the curing process have significant consequences for the performance of the finished product. The objective of this study is to exploit the potential of applying optimization techniques to the cure cycle design. Using the compression molding of a filled polyester sheet molding compound (SMC) as an example, a unified Computer Aided Design (CAD) methodology, consisting of three uncoupled modules (optimization, analysis, and sensitivity calculations), is developed to systematically generate optimal cure cycle designs. Various optimization formulations for the cure cycle design are investigated. The uniformity of the temperature distribution and of the degree of cure obtained with the optimized cure cycles is compared with that resulting from conventional isothermal processing conditions with pre-warmed platens. Recommendations with regard to further research in the computerization of cure cycle design are also addressed.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not need experience-based tuning techniques. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical results showed that the method has sufficient performance.
Pulse shape optimization for electron-positron production in rotating fields
NASA Astrophysics Data System (ADS)
Fillion-Gourdeau, François; Hebenstreit, Florian; Gagnon, Denis; MacLean, Steve
2017-07-01
We optimize the pulse shape and polarization of time-dependent electric fields to maximize the production of electron-positron pairs via strong field quantum electrodynamics processes. The pulse is parametrized in Fourier space by a B-spline polynomial basis, which results in a relatively low-dimensional parameter space while still allowing for a large number of electric field modes. The optimization is performed by using a parallel implementation of the differential evolution, one of the most efficient metaheuristic algorithms. The computational performance of the numerical method and the results on pair production are compared with a local multistart optimization algorithm. These techniques allow us to determine the pulse shape and field polarization that maximize the number of produced pairs in computationally accessible regimes.
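The parametrization-plus-optimizer pairing can be sketched compactly: B-spline coefficients define the pulse and differential evolution searches over them. The figure of merit below is a toy surrogate; the real objective would be the pair yield returned by a strong-field QED solver, which is far beyond a short example.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import differential_evolution

k, n_coef = 3, 8   # cubic B-spline with 8 coefficients (the design variables)
t_knots = np.concatenate([[0] * k, np.linspace(0, 1, n_coef - k + 1), [1] * k])
tt = np.linspace(0, 1, 400)

def pulse(c):
    # field envelope E(t) defined by the spline coefficients
    return BSpline(t_knots, c, k)(tt)

def neg_merit(c):
    E = pulse(c)
    E = E / (np.max(np.abs(E)) + 1e-12)     # fix the peak field (intensity budget)
    w = np.exp(-((tt - 0.5) / 0.15) ** 2)   # toy temporal weight, not QED physics
    return -np.trapz(E ** 2 * w, tt)        # maximize the weighted fluence

res = differential_evolution(neg_merit, bounds=[(-1, 1)] * n_coef,
                             maxiter=200, seed=3)
print("optimal coefficients:", np.round(res.x, 3))
print("merit:", -res.fun)
```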
It looks easy! Heuristics for combinatorial optimization problems.
Chronicle, Edward P; MacGregor, James N; Ormerod, Thomas C; Burr, Alistair
2006-04-01
Human performance on instances of computationally intractable optimization problems, such as the travelling salesperson problem (TSP), can be excellent. We have proposed a boundary-following heuristic to account for this finding. We report three experiments with TSPs where the capacity to employ this heuristic was varied. In Experiment 1, participants free to use the heuristic produced solutions significantly closer to optimal than did those prevented from doing so. Experiments 2 and 3 together replicated this finding in larger problems and demonstrated that a potential confound had no effect. In all three experiments, performance was closely matched by a boundary-following model. The results implicate global rather than purely local processes. Humans may have access to simple, perceptually based, heuristics that are suited to some combinatorial optimization tasks.
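One concrete way to operationalize a boundary-following heuristic, offered as an interpretation rather than the authors' exact model, is to seed the tour with the convex hull of the point set (the boundary) and then insert interior points where they lengthen the tour least:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(4)
pts = rng.random((30, 2))                 # random TSP instance in the unit square

hull = ConvexHull(pts)
tour = list(hull.vertices)                # start from the boundary cycle
remaining = [i for i in range(len(pts)) if i not in tour]

def d(i, j):
    return float(np.linalg.norm(pts[i] - pts[j]))

# cheapest insertion of the interior points into the boundary tour
while remaining:
    best = None
    for p in remaining:
        for pos in range(len(tour)):
            i, j = tour[pos], tour[(pos + 1) % len(tour)]
            cost = d(i, p) + d(p, j) - d(i, j)
            if best is None or cost < best[0]:
                best = (cost, p, pos + 1)
    _, p, pos = best
    tour.insert(pos, p)
    remaining.remove(p)

length = sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
print("tour:", tour, "\nlength:", round(length, 3))
```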
Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU
NASA Astrophysics Data System (ADS)
Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis
2016-06-01
Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but requires at the same time more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulation, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.
NASA Technical Reports Server (NTRS)
Steele, John W.; Rector, Tony; Gazda, Daniel; Lewis, John
2011-01-01
An EMU water processing kit (Airlock Coolant Loop Recovery -- A/L CLR) was developed as a corrective action to Extravehicular Mobility Unit (EMU) coolant flow disruptions experienced on the International Space Station (ISS) in May of 2004 and thereafter. A conservative duty cycle and set of use parameters for A/L CLR use and component life were initially developed and implemented based on prior analysis results and analytical modeling. Several initiatives were undertaken to optimize the duty cycle and use parameters of the hardware. Examination of post-flight samples and EMU Coolant Loop hardware provided invaluable information on the performance of the A/L CLR and has allowed for an optimization of the process. The intent of this paper is to detail the evolution of the A/L CLR hardware, efforts to optimize the duty cycle and use parameters, and the final recommendations for implementation in the post-Shuttle retirement era.
NASA Astrophysics Data System (ADS)
Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui
2018-04-01
Selective laser melting (SLM) provides a feasible way to manufacture complex thin-walled parts directly; however, the energy input during the SLM process, which derives from the laser power, scanning speed, layer thickness, scanning space, etc., has a great influence on the thin wall's quality. The aim of this work is to relate the thin wall's parameters (responses), namely track width, surface roughness and hardness, to the process parameters considered in this research (laser power, scanning speed and layer thickness) and to find the optimal manufacturing conditions. Design of experiments (DoE) was used, implementing a central composite design to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses. Also, the effects of the process parameters on each response were determined. Then, a numerical optimization was performed to find the optimal process set at which the quality features are at their desired values. Based on this study, the relationship between the process parameters and the SLMed thin-walled structure was revealed; thus, the corresponding optimal process parameters can be used to manufacture thin-walled parts with high quality.
A novel multireceiver communications system configuration based on optimal estimation theory
NASA Technical Reports Server (NTRS)
Kumar, R.
1990-01-01
A multireceiver configuration for the purpose of carrier arraying and/or signal arraying is presented. Such a problem arises, for example, in the NASA Deep Space Network, where the same data-modulated signal from a spacecraft is received by a number of geographically separated antennas and the data detection must be efficiently performed on the basis of the various received signals. The proposed configuration is arrived at by formulating the carrier and/or signal arraying problem as an optimal estimation problem. Two specific solutions are proposed. The first solution is to simultaneously and optimally estimate the various phase processes received at different receivers with coupled phase locked loops (PLLs), wherein the individual PLLs acquire and track their respective receivers' phase processes, but are aided by each other in an optimal manner. However, when the phase processes are relatively weakly correlated, and for the case of relatively high values of symbol energy-to-noise spectral density ratio, a novel configuration for combining the data-modulated, loop-output signals is proposed. The scheme can be extended to the low symbol energy-to-noise ratio case by performing the combining/detection process over a multisymbol period. Such a configuration results in the minimization of the effective radio loss at the combiner output, and thus a maximization of the energy per bit to noise-power spectral density ratio is achieved.
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple-source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time; this constitutes a first approach to automating mapping processes such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored over different images and parameter values, finding optimal values which are corroborated using independent validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
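OpenCV's SIFT implementation happens to expose five tunable parameters; treating them as the five referred to above is an assumption. The sketch biases detection toward fewer, larger, more stable features by loosening the contrast threshold and enlarging sigma, then extracts tie-points with Lowe's ratio test. File names and parameter values are hypothetical.

```python
import cv2

# the five tunables exposed by OpenCV's SIFT; values here are assumptions
sift = cv2.SIFT_create(nfeatures=0,
                       nOctaveLayers=3,
                       contrastThreshold=0.02,
                       edgeThreshold=12,
                       sigma=2.4)

img1 = cv2.imread("epoch1.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("epoch2.tif", cv2.IMREAD_GRAYSCALE)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only unambiguous correspondences (tie-points)
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
print(len(kp1), len(kp2), "keypoints;", len(good), "tie-points")
```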
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek
2016-10-06
We profile and optimize calculations performed with the BerkeleyGW code on the Xeon-Phi architecture. BerkeleyGW depends both on hand-tuned critical kernels as well as on BLAS and FFT libraries. We describe the optimization process and performance improvements achieved. We discuss a layered parallelization strategy to take advantage of vector, thread and node-level parallelism. We discuss locality changes (including the consequence of the lack of L3 cache) and effective use of the on-package high-bandwidth memory. We show preliminary results on Knights-Landing including a roofline study of code performance before and after a number of optimizations. We find that the GW method is particularly well-suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-wave components, band-pairs, and frequencies.
An Optimized Trajectory Planning for Welding Robot
NASA Astrophysics Data System (ADS)
Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao
2018-03-01
In order to improve welding efficiency and quality, this paper studies the combined planning of welding parameters and spatial trajectory for a welding robot and proposes a trajectory planning method with high real-time performance, strong controllability and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual joint model is established and the welding process parameters are represented by the virtual joint variables. The trajectory planning is carried out in the robot joint space, which makes the control of the welding process parameters more intuitive and convenient. By using the virtual joint model combined with the affine invariance of B-spline curves, the welding process parameters are indirectly controlled by controlling the motion curves of the real joints. With minimum time as the goal, the welding process parameters and the joint-space trajectory are then jointly optimized.
Structural Performance’s Optimally Analysing and Implementing Based on ANSYS Technology
NASA Astrophysics Data System (ADS)
Han, Na; Wang, Xuquan; Yue, Haifang; Sun, Jiandong; Wu, Yongchun
2017-06-01
Computer-aided Engineering (CAE) is a hotspot both in the academic field and in modern engineering practice. The ANSYS simulation software has become an outstanding member of the CAE family for its excellent performance; it is committed to innovation in engineering simulation to help users shorten the design process and improve product innovation and performance. Aiming to explore a structural-performance optimal analysis model for engineering enterprises, this paper introduces CAE and its development, analyzes the necessity of structural optimal analysis as well as the framework of structural optimal analysis based on ANSYS technology, and uses ANSYS to implement an optimal analysis of the structural performance of a reinforced concrete slab, displaying the chart of the displacement vector and the chart of stress intensity. Finally, this paper compares the ANSYS simulation results with the measured results, showing that ANSYS is an indispensable engineering calculation tool.
Online adaptation and over-trial learning in macaque visuomotor control.
Braun, Daniel A; Aertsen, Ad; Paz, Rony; Vaadia, Eilon; Rotter, Stefan; Mehring, Carsten
2011-01-01
When faced with unpredictable environments, the human motor system has been shown to develop optimized adaptation strategies that allow for online adaptation during the control process. Such online adaptation is to be contrasted to slower over-trial learning that corresponds to a trial-by-trial update of the movement plan. Here we investigate the interplay of both processes, i.e., online adaptation and over-trial learning, in a visuomotor experiment performed by macaques. We show that simple non-adaptive control schemes fail to perform in this task, but that a previously suggested adaptive optimal feedback control model can explain the observed behavior. We also show that over-trial learning as seen in learning and aftereffect curves can be explained by learning in a radial basis function network. Our results suggest that both the process of over-trial learning and the process of online adaptation are crucial to understand visuomotor learning.
NASA Astrophysics Data System (ADS)
Chen, Shuming; Wang, Dengfeng; Liu, Bo
This paper investigates the optimal design of the thickness of the sound package of a passenger automobile. The major performance characteristics selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package, and the corresponding parameters of the sound package are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. The process fundamentally involves multiple performance characteristics; thus, grey relational analysis, which uses the grey relational grade as a performance index, is employed to determine the optimal combination of thicknesses of the different layers for the designed sound package. Additionally, in order to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to represent their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of the thickness of each layer of the sound package material. Therefore, the presented method can be an effective tool to improve vehicle exterior noise and lower the weight of the sound package. In addition, it will also be helpful for other applications in the automotive industry, such as the First Automobile Works in China, Changan Automobile in China, etc.
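A minimal sketch of the grey relational analysis with PCA-derived weights, assuming a synthetic two-response matrix (exterior SPL and package weight) in place of the paper's measurements:

```python
import numpy as np

# rows = thickness combinations (runs), cols = [exterior SPL (dB), weight (kg)]
Y = np.array([[72.1, 4.8], [70.5, 5.6], [71.2, 5.1],
              [69.8, 6.0], [70.9, 5.3], [71.7, 4.9]])

# smaller-the-better normalization for both responses
Z = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))

# grey relational coefficients against the ideal sequence (all ones)
delta = np.abs(1.0 - Z)
zeta = 0.5
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# weights from PCA on the coefficient matrix (squared loadings of PC1),
# replacing subjective equal weighting
corr = np.corrcoef(xi, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
w = eigvec[:, -1] ** 2          # first principal component's squared loadings
w /= w.sum()

grade = xi @ w                  # grey relational grade per run
print("weights:", np.round(w, 3))
print("best run:", int(np.argmax(grade)), "grades:", np.round(grade, 3))
```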
A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
1998-01-01
This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.
Development of optimized, graded-permeability axial groove heat pipes
NASA Technical Reports Server (NTRS)
Kapolnek, Michael R.; Holmes, H. Rolland
1988-01-01
Heat pipe performance can usually be improved by uniformly varying or grading wick permeability from end to end. A unique and cost effective method for grading the permeability of an axial groove heat pipe is described - selective chemical etching of the pipe casing. This method was developed and demonstrated on a proof-of-concept test article. The process improved the test article's performance by 50 percent. Further improvement is possible through the use of optimally etched grooves.
Switching and optimizing control for coal flotation process based on a hybrid model
Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang
2017-01-01
Flotation is an important part of coal preparation, and the flotation column is widely applied as efficient flotation equipment. This process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model is designed to optimize the reagent dosages based on expert system. Finally, the least squares-support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305
Derived heuristics-based consistent optimization of material flow in a gold processing plant
NASA Astrophysics Data System (ADS)
Myburgh, Christie; Deb, Kalyanmoy
2018-01-01
Material flow in a chemical processing plant often follows complicated control laws and involves plant capacity constraints. Importantly, the process involves discrete scenarios which, when modelled in a programming format, involve if-then-else statements. Therefore, the formulation of an optimization problem for such processes becomes complicated, with nonlinear and non-differentiable objective and constraint functions. In handling such problems using classical point-based approaches, users often have to resort to modifications and indirect ways of representing the problem to suit the restrictions associated with classical methods. In a particular gold processing plant optimization problem, these facts are demonstrated by showing results from MATLAB®'s well-known fmincon routine. Thereafter, a customized evolutionary optimization procedure capable of handling all the complexities offered by the problem is developed. Although the evolutionary approach already produced results with comparatively less variance over multiple runs, its performance has been further enhanced by introducing derived heuristics associated with the problem. In this article, the development and usage of derived heuristics in a practical problem are presented, and their importance for quick convergence of the overall algorithm is demonstrated.
Diaz, Luis A.; Clark, Gemma G.; Lister, Tedd E.
2017-06-08
The rapid growth of electronic waste can be viewed both as an environmental threat and as an attractive source of minerals that can reduce the mining of natural resources and stabilize the market for critical materials such as rare earths. In this article, response surface methodology was used to optimize a previously developed electrochemical recovery process for base metals from electronic waste using a mild oxidant (Fe(3+)). Through this process an effective extraction of base metals can be achieved, enriching the concentration of precious metals and significantly reducing the environmental impacts and operational costs associated with waste generation and chemical consumption. The optimization was performed using a bench-scale system specifically designed for this process. Operational parameters such as flow rate, applied current density and iron concentration were optimized to reduce the specific energy consumption of the electrochemical recovery process to 1.94 kWh per kg of metal recovered at a processing rate of 3.3 g of electronic waste per hour.
Implementation of new pavement performance prediction models in PMIS : report
DOT National Transportation Integrated Search
2012-08-01
Pavement performance prediction models and maintenance and rehabilitation (M&R) optimization processes enable managers and engineers to plan and prioritize pavement M&R activities in a cost-effective manner. This report describes TxDOT's effort...
Optimizing construction quality management of pavements using mechanistic performance analysis.
DOT National Transportation Integrated Search
2004-08-01
This report presents a statistically based algorithm that was developed to reconcile the results from several pavement performance models used in the state of practice with systematic process control techniques. These algorithms identify project-specif...
Development of a parameter optimization technique for the design of automatic control systems
NASA Technical Reports Server (NTRS)
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
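A compact sketch of the loop the report describes: choose a simple controller structure, define a model performance index as the integrated squared deviation of the closed-loop step response from a reference model, and let a numerical optimizer select the parameter values that minimize it. The double-integrator plant and reference-model constants are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# reference model: the desired second-order step response (zeta=0.7, wn=2)
zeta, wn = 0.7, 2.0
def ref_model(t):
    wd = wn * np.sqrt(1 - zeta ** 2)
    return 1 - np.exp(-zeta * wn * t) * (np.cos(wd * t)
            + zeta / np.sqrt(1 - zeta ** 2) * np.sin(wd * t))

# plant: double integrator under PD control; the gains are design parameters
def closed_loop(t, x, kp, kd):
    pos, vel = x
    u = kp * (1.0 - pos) - kd * vel
    return [vel, u]

t_eval = np.linspace(0.0, 8.0, 400)
def model_index(gains):
    sol = solve_ivp(closed_loop, (0, 8), [0, 0], t_eval=t_eval, args=tuple(gains))
    return np.trapz((sol.y[0] - ref_model(t_eval)) ** 2, t_eval)

res = minimize(model_index, x0=[1.0, 1.0], method="Nelder-Mead")
print("kp, kd =", np.round(res.x, 3), "index =", res.fun)
```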
Optimal cure cycle design of a resin-fiber composite laminate
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Hou, Tan H.; Sheen, Jeen S.
1987-01-01
Fiber-reinforced composites are used in many applications. The composite parts and structures are often manufactured by curing the prepreg or unmolded material. The magnitudes and durations of the cure temperature and the cure pressure applied during the cure process have significant consequences for the performance of the finished product. The goal of this study is to exploit the potential of applying optimization techniques to the cure cycle design. The press molding process of a polyester is used as an example. Various optimization formulations for the cure cycle design are investigated. Recommendations are given for further research in computerizing the cure cycle design.
Throughput Optimization of Continuous Biopharmaceutical Manufacturing Facilities.
Garcia, Fernando A; Vandiver, Michael W
2017-01-01
In order to operate profitably under different product demand scenarios, biopharmaceutical companies must design their facilities with mass output flexibility in mind. Traditional biologics manufacturing technologies pose operational challenges in this regard due to their high costs and slow equipment turnaround times, restricting the types of products and mass quantities that can be processed. Modern plant design, however, has facilitated the development of lean and efficient bioprocessing facilities through footprint reduction and adoption of disposable and continuous manufacturing technologies. These development efforts have proven to be crucial in seeking to drastically reduce the high costs typically associated with the manufacturing of recombinant proteins. In this work, mathematical modeling is used to optimize annual production schedules for a single-product commercial facility operating with a continuous upstream and discrete batch downstream platform. Utilizing cell culture duration and volumetric productivity as process variables in the model, and annual plant throughput as the optimization objective, 3-D surface plots are created to understand the effect of process and facility design on expected mass output. The model shows that once a plant has been fully debottlenecked it is capable of processing well over a metric ton of product per year. Moreover, the analysis helped to uncover a major limiting constraint on plant performance, the stability of the neutralized viral inactivated pool, which may indicate that this should be a focus of attention during future process development efforts. LAY ABSTRACT: Biopharmaceutical process modeling can be used to design and optimize manufacturing facilities and help companies achieve a predetermined set of goals. One way to perform optimization is by making the most efficient use of process equipment in order to minimize the expenditure of capital, labor and plant resources. To that end, this paper introduces a novel mathematical algorithm used to determine the most optimal equipment scheduling configuration that maximizes the mass output for a facility producing a single product. The paper also illustrates how different scheduling arrangements can have a profound impact on the availability of plant resources, and identifies limiting constraints on the plant design. In addition, simulation data is presented using visualization techniques that aid in the interpretation of the scientific concepts discussed. © PDA, Inc. 2017.
Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal
2016-11-04
Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using developed regression models and multiple regression analysis. The surface characteristics are studied using scanning electron microscope (SEM). Furthermore, performance of optimum conditions was determined and validated by conducting confirmation experiment. The comparison between the experimental values and the predicted values by IV-Optimal RSM and MFNN was conducted for each experimental run and results indicate that the MFNN provides better predictions than IV-Optimal RSM.
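As a sketch of the neural-network half of the comparison, the snippet below fits a small multilayer feed-forward network mapping six FDM process parameters to a compliance-like response with scikit-learn. The data are synthetic stand-ins; the paper's measured storage and loss compliance values are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# six process parameters per run (slice thickness, air gap, deposition angle,
# print direction, bead width, perimeters); ranges are assumptions
X = rng.uniform([0.127, 0.0, 0, 0, 0.3, 1],
                [0.330, 0.5, 90, 90, 0.8, 10], size=(60, 6))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.002 * X[:, 2]
     + 0.1 * X[:, 4] + rng.normal(0, 0.01, 60))   # toy compliance response

# multilayer feed-forward network; input scaling matters for MLP convergence
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   activation="tanh",
                                   max_iter=5000,
                                   random_state=0))
model.fit(X[:48], y[:48])
print("holdout R^2:", round(model.score(X[48:], y[48:]), 3))
```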
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Links, Jonathan M.; Frey, Eric
2015-03-01
In this work, we used the ideal observer (IO) and IO with model mismatch (IO-MM) applied in the projection domain and an anthropomorphic Channelized Hotelling Observer (CHO) applied to reconstructed images to optimize the acquisition energy window width and evaluate various scatter compensation methods in the context of a myocardial perfusion SPECT defect detection task. The IO has perfect knowledge of the image formation process and thus reflects performance with perfect compensation for image-degrading factors. Thus, using the IO to optimize imaging systems could lead to suboptimal parameters compared to those optimized for humans interpreting SPECT images reconstructed with imperfect or no compensation. The IO-MM allows incorporating imperfect system models into the IO optimization process. We found that with near-perfect scatter compensation, the optimal energy window for the IO and CHO were similar; in its absence the IO-MM gave a better prediction of the optimal energy window for the CHO using different scatter compensation methods. These data suggest that the IO-MM may be useful for projection-domain optimization when model mismatch is significant, and that the IO is useful when followed by reconstruction with good models of the image formation process.
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
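A minimal version of the tuning idea, assuming a first-order plant and a three-rule Sugeno-style controller far simpler than the spacecraft pointing problem: the membership functions are parameterized by a single width plus an output gain, the closed loop is simulated, and an integral-squared-error objective is minimized over the design vector.

```python
import numpy as np
from scipy.optimize import minimize

def tri(x, a, b, c):
    # triangular membership function peaking at b
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_u(e, w, umax):
    # three rules on the error with zero-order Sugeno consequents
    mN = np.clip(-e / w, 0.0, 1.0)      # Negative error (shoulder set)
    mZ = tri(e, -w, 0.0, w)             # Zero error (triangle)
    mP = np.clip(e / w, 0.0, 1.0)       # Positive error (shoulder set)
    return umax * (mP - mN) / (mN + mZ + mP + 1e-12)

def ise(params, dt=0.01, T=5.0):
    # Euler simulation of the plant dx/dt = -x + u tracking a unit step
    w, umax = params
    x, cost = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - x
        x += dt * (-x + fuzzy_u(e, w, umax))
        cost += dt * e ** 2
    return cost

res = minimize(ise, x0=[1.0, 1.0], method="Nelder-Mead",
               bounds=[(0.05, 5.0), (0.1, 10.0)])
print("tuned [width, gain]:", np.round(res.x, 3), "ISE:", round(res.fun, 4))
```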
Vaccaro, G; Pelaez, J I; Gil, J A
2016-07-01
Objective masticatory performance assessment using two-coloured specimens relies on image processing techniques; however, just a few approaches have been tested and no comparative studies are reported. The aim of this study was to present a selection procedure of the optimal image analysis method for masticatory performance assessment with a given two-coloured chewing gum. Dentate participants (n = 250; 25 ± 6·3 years) chewed red-white chewing gums for 3, 6, 9, 12, 15, 18, 21 and 25 cycles (2000 samples). Digitalised images of retrieved specimens were analysed using 122 image processing methods (IPMs) based on feature extraction algorithms (pixel values and histogram analysis). All IPMs were tested following the criteria of: normality of measurements (Kolmogorov-Smirnov), ability to detect differences among mixing states (ANOVA with post hoc Bonferroni correction) and moderate-to-high correlation with the number of cycles (Spearman's rho). The optimal IPM was chosen using multiple criteria decision analysis (MCDA). Measurements provided by all IPMs proved to be normally distributed (P < 0·05), 116 proved sensitive to mixing states (P < 0·05), and 35 showed moderate-to-high correlation with the number of cycles (|ρ| > 0·5; P < 0·05). The variance of the histogram of the hue showed the highest correlation with the number of cycles (ρ = 0·792; P < 0·0001) and the highest MCDA score (optimal). The proposed procedure proved to be reliable and able to select the optimal approach among multiple IPMs. This experiment may be reproduced to identify the optimal approach for each case of locally available test foods. © 2016 John Wiley & Sons Ltd.
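The winning measure, the variance of the hue histogram, can be computed in a few lines with OpenCV. One plausible reading of the measure is sketched below (the variance of the specimen's hue distribution); the file name is hypothetical.

```python
import cv2
import numpy as np

def hue_histogram_variance(path):
    """Variance of the hue distribution of a chewed two-coloured specimen."""
    img = cv2.imread(path)                          # BGR image of the specimen
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    hist /= hist.sum()                              # normalize to a distribution
    bins = np.arange(180)                           # OpenCV hue range: 0..179
    mean = np.sum(bins * hist)
    return np.sum(hist * (bins - mean) ** 2)

print(hue_histogram_variance("specimen_12_cycles.png"))
```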
Real-time CT-video registration for continuous endoscopic guidance
NASA Astrophysics Data System (ADS)
Merritt, Scott A.; Rai, Lav; Higgins, William E.
2006-03-01
Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-Video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to a current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real-time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per sec. with minimal user-intervention.
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
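For illustration, a 2^(5-2) fractional factorial for the five incubation-time variables can be generated from a full 2^3 base design plus generators. The choice D = AB, E = AC is one common resolution-III option; the paper's exact generators and its split-plot restriction are not specified here.

```python
import numpy as np
from itertools import product

# full 2^3 factorial in factors A, B, C (coded -1 / +1 levels)
base = np.array(list(product([-1, 1], repeat=3)))

# generators define the two extra columns from interactions of the base
D = base[:, 0] * base[:, 1]          # D = AB
E = base[:, 0] * base[:, 2]          # E = AC
design = np.column_stack([base, D, E])

names = ["A", "B", "C", "D=AB", "E=AC"]
print("  ".join(f"{n:>5}" for n in names))
for row in design:
    print("  ".join(f"{int(v):>5}" for v in row))
```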
JPL-ANTOPT antenna structure optimization program
NASA Technical Reports Server (NTRS)
Strain, D. M.
1994-01-01
New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subjected to constraints during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.
Virtually optimized insoles for offloading the diabetic foot: A randomized crossover study.
Telfer, S; Woodburn, J; Collier, A; Cavanagh, P R
2017-07-26
Integration of objective biomechanical measures of foot function into the design process for insoles has been shown to provide enhanced plantar tissue protection for individuals at risk of plantar ulceration. The use of virtual simulations utilizing numerical modeling techniques offers a potential approach to further optimize these devices. In a patient population at risk of foot ulceration, we aimed to compare the pressure offloading performance of insoles that were optimized via numerical simulation techniques against shape-based devices. Twenty participants with diabetes and at-risk feet were enrolled in this study. Three pairs of personalized insoles were produced: one based on shape data and manufactured via direct milling, and two based on a design derived from shape, pressure, and ultrasound data that underwent a finite element analysis-based virtual optimization procedure. For the latter insole design, one pair was manufactured via direct milling and a second pair through 3D printing. The offloading performance of the insoles was analyzed for forefoot regions identified as having elevated plantar pressures. In 88% of the regions of interest, the use of virtually optimized insoles resulted in lower peak plantar pressures compared to the shape-based devices. Overall, the virtually optimized insoles significantly reduced peak pressures by a mean of 41.3 kPa (p<0.001, 95% CI [31.1, 51.5]) for milled and 40.5 kPa (p<0.001, 95% CI [26.4, 54.5]) for printed devices compared to shape-based insoles. The integration of virtual optimization into the insole design process resulted in improved offloading performance compared to standard, shape-based devices. ISRCTN19805071, www.ISRCTN.org. Copyright © 2017 Elsevier Ltd. All rights reserved.
Warehouse stocking optimization based on dynamic ant colony genetic algorithm
NASA Astrophysics Data System (ADS)
Xiao, Xiaoxu
2018-04-01
To handle the varied orders of FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the enterprise's warehousing units, thereby optimizing warehouse logistics and improving order processing speed. In addition, intelligent algorithms for optimizing the stocking route problem are analyzed; the ant colony algorithm and the genetic algorithm, which both have good applicability, are studied in depth. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem is used as an example to demonstrate the effectiveness of the parameter optimization.
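As a rough illustration of the tuning idea, the sketch below wraps a small ant colony optimizer for a toy routing instance inside a genetic algorithm that searches over the pheromone weight alpha, heuristic weight beta, and evaporation rate rho. The instance, parameter bounds, and GA settings are illustrative stand-ins, not the paper's warehouse model.

```python
import math
import random

random.seed(0)

# Toy stocking-route instance: random pick locations standing in for the
# warehouse described in the paper (its real layout and orders are not given).
CITIES = [(random.random(), random.random()) for _ in range(15)]
DIST = [[math.dist(a, b) for b in CITIES] for a in CITIES]

def aco_tour_length(alpha, beta, rho, n_ants=8, n_iters=15):
    """Run a small ant colony optimizer; return the best tour length found."""
    n = len(CITIES)
    tau = [[1.0] * n for _ in range(n)]               # pheromone levels
    best = float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha / DIST[i][j] ** beta for j in cand]
                j = random.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(DIST[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            best = min(best, length)
        tau = [[t * (1.0 - rho) for t in row] for row in tau]   # evaporation
        for length, tour in tours:                              # deposition
            for k in range(n):
                tau[tour[k]][tour[(k + 1) % n]] += 1.0 / length
    return best

def clamp(p):
    """Keep GA offspring inside sensible ACO parameter ranges."""
    alpha, beta, rho = p
    return (min(max(alpha, 0.1), 3.0),
            min(max(beta, 0.1), 6.0),
            min(max(rho, 0.05), 0.9))

def ga_tune(pop_size=8, generations=4):
    """GA over (alpha, beta, rho); fitness is the ACO's best tour length."""
    pop = [clamp((random.uniform(0.5, 2.0), random.uniform(1.0, 5.0),
                  random.uniform(0.1, 0.8))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: aco_tour_length(*p))
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            child = tuple((a + b) / 2 + random.gauss(0, 0.05)
                          for a, b in zip(p1, p2))     # blend crossover + mutation
            children.append(clamp(child))
        pop = parents + children
    return min(pop, key=lambda p: aco_tour_length(*p))

print("tuned (alpha, beta, rho):", ga_tune())
```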
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, M.; Klimeck, G.; Hanks, D.
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.
Elimination of Bimodal Size in InAs/GaAs Quantum Dots for Preparation of 1.3-μm Quantum Dot Lasers
NASA Astrophysics Data System (ADS)
Su, Xiang-Bin; Ding, Ying; Ma, Ben; Zhang, Ke-Lu; Chen, Ze-Sheng; Li, Jing-Lun; Cui, Xiao-Ran; Xu, Ying-Qiang; Ni, Hai-Qiao; Niu, Zhi-Chuan
2018-02-01
The device characteristics of semiconductor quantum dot lasers have improved with progress in active layer structures. Self-assembled InAs quantum dots grown on GaAs have been intensively pursued in order to achieve quantum dot lasers with superior device performance. In the growth of high-density InAs/GaAs quantum dots, a bimodal size distribution occurs due to the large lattice mismatch and other factors. Here, the bimodal size distribution in the InAs/GaAs quantum dot system is eliminated by high-temperature annealing, and the in situ annealing temperature is optimized. The annealing temperature is taken as the key optimization parameter, and an optimal annealing temperature of 680 °C was obtained. In this process, the quantum dot growth temperature, InAs deposition, and arsenic (As) pressure are also optimized to improve quantum dot quality and emission wavelength. A 1.3-μm high-performance F-P quantum dot laser with a threshold current density of 110 A/cm2 was demonstrated.
A novel optimization algorithm for MIMO Hammerstein model identification under heavy-tailed noise.
Jin, Qibing; Wang, Hehe; Su, Qixin; Jiang, Beiyan; Liu, Qie
2018-01-01
In this paper, we study the system identification of multi-input multi-output (MIMO) Hammerstein processes under typical heavy-tailed noise. To the best of our knowledge, there is no general analytical method to solve this identification problem. Motivated by this, we propose a general identification method based on a Gaussian-Mixture Distribution intelligent optimization algorithm (GMDA). The nonlinear part of the Hammerstein process is modeled by a Radial Basis Function (RBF) neural network, and the identification problem is converted into an optimization problem. To overcome the drawbacks of analytical identification methods in the presence of heavy-tailed noise, a meta-heuristic optimizer, the Cuckoo Search (CS) algorithm, is used, and the Gaussian-Mixture Distribution (GMD) and GMD sequences are introduced to improve the performance of the standard CS algorithm for this identification problem. Numerical simulations for different MIMO Hammerstein models are carried out, and the simulation results verify the effectiveness of the proposed GMDA.
Taguchi experimental design to determine the taste quality characteristic of candied carrot
NASA Astrophysics Data System (ADS)
Ekawati, Y.; Hapsari, A. A.
2018-03-01
Robust parameter design is used to design a product that is robust to noise factors, so that the product's performance fits the target and delivers better quality. In designing and developing an innovative candied carrot product, robust parameter design was carried out using the Taguchi method to determine an optimal quality design, based on the process and the composition of product ingredients that accord with consumer needs and requirements. According to the identification of consumer needs in previous research, the quality dimensions that need to be assessed are the taste and texture of the product; the dimension assessed in this research is limited to taste. Organoleptic testing was used for this assessment, specifically hedonic testing, in which assessments are based on consumer preferences. The data processing uses mean and signal-to-noise ratio calculations and optimal level selection to determine the optimal process and composition of product ingredients. The optimal settings were then analyzed with confirmation experiments to verify that the proposed product matches consumer needs and requirements. The results of this research are the identification of factors that affect the product's taste and the optimal quality of the product according to the Taguchi method.
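A minimal sketch of the Taguchi signal-to-noise step follows, assuming hypothetical hedonic scores and two made-up factors; the study's real factors, levels, and panel data are not reproduced.

```python
import numpy as np

# Hypothetical hedonic scores for four runs over two 2-level factors
# (sugar concentration A, drying time B); since taste scores should be
# maximized, the larger-is-better S/N ratio is used.
scores = {
    # (A_level, B_level): replicated panel scores
    (1, 1): [5.2, 5.0, 5.5],
    (1, 2): [6.1, 5.8, 6.0],
    (2, 1): [4.8, 5.1, 4.9],
    (2, 2): [6.8, 6.5, 6.9],
}

def sn_larger_is_better(y):
    """Taguchi larger-is-better S/N ratio: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

sn = {k: sn_larger_is_better(v) for k, v in scores.items()}

# Main effect of each factor level = average S/N over runs at that level;
# the optimal setting takes the level with the highest mean S/N.
for factor, idx in (("A", 0), ("B", 1)):
    for level in (1, 2):
        vals = [s for k, s in sn.items() if k[idx] == level]
        print(f"factor {factor} level {level}: mean S/N = {np.mean(vals):.2f}")
```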
A method of network topology optimization design considering application process characteristic
NASA Astrophysics Data System (ADS)
Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo
2018-03-01
Communication networks are designed to meet the usage requirements of users for various network applications. Current studies of network topology optimization design mainly consider network traffic, which is the result of network application operation rather than a design element of communication networks. A network application is a procedure by which users consume services under demanded performance requirements, and it has a clear process characteristic. In this paper, we first propose a method to optimize the design of communication network topology considering the application process characteristic. Taking minimum network delay as the objective, and the cost of network design and network connectivity reliability as constraints, an optimization model of network topology design is formulated, and the optimal solution is searched by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thereby improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of our proposed method. Network topology optimization design that considers applications can improve the reliability of applications and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practice.
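A minimal sketch of what the GA's fitness evaluation might look like is given below, assuming a hypothetical delay proxy (mean hop count), a uniform per-link cost budget, and simple connectivity as the reliability constraint; none of these specifics are taken from the paper.

```python
import itertools
import networkx as nx

# Sketch of a GA fitness evaluation for topology design, with invented
# delay proxy, per-link cost, and connectivity constraint.
N_NODES = 6
LINK_COST = 1.0
COST_BUDGET = 10.0
POSSIBLE_EDGES = list(itertools.combinations(range(N_NODES), 2))

def fitness(bitstring):
    """Decode a bitstring into a topology and score it (lower is better)."""
    g = nx.Graph()
    g.add_nodes_from(range(N_NODES))
    g.add_edges_from(e for e, bit in zip(POSSIBLE_EDGES, bitstring) if bit)
    cost = LINK_COST * g.number_of_edges()
    if not nx.is_connected(g) or cost > COST_BUDGET:
        return float("inf")                     # infeasible design
    # Delay proxy: mean hop count between all node pairs.
    return nx.average_shortest_path_length(g)

# Example: score a simple ring topology.
ring = [1 if (j - i) % N_NODES == 1 or (i, j) == (0, N_NODES - 1) else 0
        for i, j in POSSIBLE_EDGES]
print("ring topology delay proxy:", fitness(ring))
```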
NASA Astrophysics Data System (ADS)
Soni, Sourabh Kumar; Thomas, Benedict
2018-04-01
The term "weldability" has been used to describe a wide variety of characteristics when a material is subjected to welding. In our analysis we perform experimental investigation to estimate the tensile strength of welded joint strength and then optimization of welding process parameters by using taguchi method and Artificial Neural Network (ANN) tool in MINITAB and MATLAB software respectively. The study reveals the influence on weldability of steel by varying composition of steel by mechanical characterization. At first we prepare the samples of different grades of steel (EN8, EN 19, EN 24). The samples were welded together by metal inert gas welding process and then tensile testing on Universal testing machine (UTM) was conducted for the same to evaluate the tensile strength of the welded steel specimens. Further comparative study was performed to find the effects of welding parameter on quality of weld strength by employing Taguchi method and Neural Network tool. Finally we concluded that taguchi method and Neural Network Tool is much efficient technique for optimization.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
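The equivalence between dynamic imaging and a static linear light source can be illustrated numerically: the sketch below integrates a moving Gaussian point-spread function over the exposure, adds photon shot noise, and measures the centroid error along the motion direction. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of the dynamic imaging idea: a smeared star spot modeled as a
# Gaussian PSF integrated along the spot's path during the exposure.
SIZE, FLUX, SIGMA = 32, 1e4, 1.2        # pixels, photons, Gaussian radius
VELOCITY, T_EXP = 4.0, 1.0              # pixels/s along +x, exposure time

y, x = np.mgrid[0:SIZE, 0:SIZE]
x0, y0 = SIZE / 2 - VELOCITY * T_EXP / 2, SIZE / 2

# Numerically integrate the moving Gaussian over the exposure.
img = np.zeros((SIZE, SIZE))
for t in np.linspace(0, T_EXP, 200):
    cx = x0 + VELOCITY * t
    img += np.exp(-((x - cx)**2 + (y - y0)**2) / (2 * SIGMA**2))
img *= FLUX / img.sum()                        # scale to total incident flux
img = np.random.poisson(img).astype(float)     # photon shot noise

# Centroid estimate vs. true mid-exposure position.
cx_est = (img * x).sum() / img.sum()
cx_true = x0 + VELOCITY * T_EXP / 2
print(f"centroid error along track: {cx_est - cx_true:+.3f} px")
```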
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. Contrast enhancement is obtained by global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered. PMID:25784928
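A minimal sketch of the incomplete-Beta intensity transform follows; the shape parameters (a, b) are what the CS-PSO search would adapt, and the values and synthetic image used here are illustrative only.

```python
import numpy as np
from scipy.special import betainc

# Sketch of the incomplete-Beta global intensity transform; (a, b) are
# the parameters an optimizer such as CS-PSO would search over.
def beta_transform(image, a, b):
    """Map normalized intensities through the regularized incomplete
    Beta function I_x(a, b), then rescale to the 8-bit range."""
    u = (image - image.min()) / (np.ptp(image) + 1e-12)  # normalize to [0, 1]
    return (255 * betainc(a, b, u)).astype(np.uint8)

rng = np.random.default_rng(0)
low_contrast = rng.uniform(90, 160, size=(64, 64))     # synthetic dull image
enhanced = beta_transform(low_contrast, a=2.0, b=2.0)  # S-shaped stretch
print("input range:", low_contrast.min().round(1), low_contrast.max().round(1))
print("output range:", enhanced.min(), enhanced.max())
```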
Fan, Mingyi; Hu, Jiwei; Cao, Rensheng; Xiong, Kangning; Wei, Xionghui
2017-12-21
Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) magnetic nanocomposites were prepared and then applied to Cu(II) removal from aqueous solutions. Scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy, and superconducting quantum interference device (SQUID) magnetometry were performed to characterize the nZVI/rGO nanocomposites. In order to reduce the number of experiments and the economic cost, response surface methodology (RSM) combined with artificial intelligence (AI) techniques, such as artificial neural networks (ANN), genetic algorithms (GA), and particle swarm optimization (PSO), was utilized as a major tool to model and optimize the removal processes, given the recent rapid advances in AI and its increasingly broad applications. Based on RSM, ANN-GA and ANN-PSO were employed to model the Cu(II) removal process and optimize the operating parameters, e.g., operating temperature, initial pH, initial concentration, and contact time. The ANN-PSO model proved to be an effective tool for modeling and optimizing the Cu(II) removal, with a low absolute error and a high removal efficiency. Furthermore, isotherm, kinetic, and thermodynamic studies and XPS analysis were performed to explore the mechanisms of the Cu(II) removal process.
NASA Astrophysics Data System (ADS)
Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.
2017-09-01
Plastic injection moulding is a popular manufacturing method: it is reliable, efficient, and cost saving, and it is able to produce plastic parts with detailed features and complex geometry. However, defects in the injection moulding process degrade the quality and aesthetics of the moulded product. The most common defect in the process is warpage, and inappropriate process parameter settings of the injection moulding machine are one cause of its occurrence. The aims of this study were to improve the quality of the injection moulded part by finding the optimal parameters that minimize warpage, using Response Surface Methodology (RSM) and Glowworm Swarm Optimization (GSO). Subsequently, the most significant parameter was identified, and the recommended parameter setting was compared with the parameter settings optimized by RSM and GSO. A mobile phone case was selected as the case study. Mould temperature, melt temperature, packing pressure, packing time, and cooling time were selected as variables, and warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight 2012; RSM was performed using Design Expert 7.0, and GSO was implemented in MATLAB. The warpage in the y-direction recommended by RSM was reduced by 70%, and that recommended by GSO was reduced by 61%. The resulting warpages under the optimal parameter settings from RSM and GSO were validated by simulation in AMI 2012. RSM performed better than GSO in solving the warpage issue.
Flight-Test Validation and Flying Qualities Evaluation of a Rotorcraft UAV Flight Control System
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Tuschler, Mark B.; Kanade, Takeo
2000-01-01
This paper presents a process of design, flight-test validation, and flying qualities evaluation of a flight control system for a rotorcraft-based unmanned aerial vehicle (RUAV). The keystone of this process is an accurate flight-dynamic model of the aircraft, derived by using system identification modeling. The model captures the most relevant dynamic features of our unmanned rotorcraft and explicitly accounts for the presence of a stabilizer bar. Using the identified model, we were able to determine the performance margins of our original control system and identify limiting factors. The performance limitations were addressed, and the attitude control system was optimized for three different performance levels: slow, medium, and fast. The optimized control laws will be implemented in our RUAV. We will first determine the validity of our control design approach by flight-test validating our optimized controllers. Subsequently, we will fly a series of maneuvers with the three optimized controllers to determine the level of flying qualities that can be attained. The outcomes will enable us to draw important conclusions on the flying qualities requirements for small-scale RUAVs.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that also preserve the physics being simulated in the model. In order to effectively simulate real-world processes, the model's outputs must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
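As a flavor of such an interface, the sketch below pairs a toy heat flow misfit function with an invented wrapper class; the names DakotaStudy, add_parameter, and optimize are hypothetical illustrations, not the actual package's or Dakota's API, and the coordinate-descent optimizer is a stand-in for Dakota's methods.

```python
# Hypothetical usage sketch of a Python wrapper around Dakota; every
# class and method name below is invented for illustration.

def heat_flow_model(params):
    """Stand-in objective: misfit between simulated and observed
    temperatures for given layer conductivities (toy quadratic)."""
    k1, k2 = params["k1"], params["k2"]
    return (k1 - 1.8) ** 2 + (k2 - 0.9) ** 2   # known optimum at (1.8, 0.9)

class DakotaStudy:                              # illustrative shim only
    def __init__(self, objective):
        self.objective = objective
        self.params = {}

    def add_parameter(self, name, lower, upper, initial):
        # Bounds kept for API flavor; unused in this toy shim.
        self.params[name] = initial

    def optimize(self, n_iter=200, step=0.05):
        """Toy coordinate descent standing in for a Dakota gradient method."""
        best = dict(self.params)
        for _ in range(n_iter):
            for name in best:
                for delta in (-step, step):
                    trial = dict(best, **{name: best[name] + delta})
                    if self.objective(trial) < self.objective(best):
                        best = trial
        return best

study = DakotaStudy(heat_flow_model)
study.add_parameter("k1", lower=0.1, upper=4.0, initial=1.0)
study.add_parameter("k2", lower=0.1, upper=4.0, initial=1.0)
print("recovered conductivities:", study.optimize())
```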
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design becomes more multifaceted, integrated, and complex, traditional single-objective approaches to optimal design are becoming less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the worth of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the assurance of reaching a globally optimal solution. Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as the low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy, and injecting feasible solutions significantly alters the search direction and adds population diversity, helping accomplish the pre-defined goals of the proposed scheme.
Reengineering the JPL Spacecraft Design Process
NASA Technical Reports Server (NTRS)
Briggs, C.
1995-01-01
This presentation describes the factors that have emerged in the evolved process of reengineering the unmanned spacecraft design process at the Jet Propulsion Laboratory in Pasadena, California. Topics discussed include: New facilities, new design factors, new system-level tools, complex performance objectives, changing behaviors, design integration, leadership styles, and optimization.
Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat
2013-01-01
Lipase-catalyzed production of a triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling and statistical evaluation of the findings. Five independent process variables, including enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the conditions designed by Design Expert software. Experimental data were examined with a normality test before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate in predicting the optimum conversion of product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole has proven adequate for the design and optimization of the enzymatic process.
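A central composite design of the kind described can be laid out in coded units as follows; the face-centered axial distance and center-point count below are assumptions, since the actual Design Expert configuration is not given.

```python
import numpy as np
from itertools import product

# Minimal sketch of a face-centered central composite design for five
# coded process variables; axial distance and center-point count are
# assumed, not taken from the study.
def ccd(n_factors=5, alpha=1.0, n_center=6):
    factorial = np.array(list(product((-1, 1), repeat=n_factors)))
    axial = np.zeros((2 * n_factors, n_factors))
    for i in range(n_factors):
        axial[2 * i, i] = -alpha       # star point below center
        axial[2 * i + 1, i] = alpha    # star point above center
    center = np.zeros((n_center, n_factors))
    return np.vstack([factorial, axial, center])

design = ccd()
print("total runs:", len(design))      # 2^5 + 2*5 + 6 = 48 coded runs
```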
Li, Zheng; Qi, Rong; Wang, Bo; Zou, Zhe; Wei, Guohong; Yang, Min
2013-01-01
A full-scale oxidation ditch process for treating sewage was simulated with the ASM2d model and optimized for minimal cost with acceptable performance in terms of ammonium and phosphorus removal. A unified index was introduced by integrating operational costs (aeration energy and sludge production) with effluent violations for performance evaluation. Scenario analysis showed that, in comparison with the baseline (all of the 9 aerators activated), the strategy of activating 5 aerators could save aeration energy significantly with an ammonium violation below 10%. Sludge discharge scenario analysis showed that a sludge discharge flow of 250-300 m3/day (solid retention time (SRT), 13-15 days) was appropriate for the enhancement of phosphorus removal without excessive sludge production. The proposed optimal control strategy was: activating 5 rotating disks operated with a mode of "111100100" ("1" represents activation and "0" represents inactivation) for aeration and sludge discharge flow of 200 m3/day (SRT, 19 days). Compared with the baseline, this strategy could achieve ammonium violation below 10% and TP violation below 30% with substantial reduction of aeration energy cost (46%) and minimal increment of sludge production (< 2%). This study provides a useful approach for the optimization of process operation and control.
NASA Astrophysics Data System (ADS)
Norcahyo, Rachmadi; Soepangkat, Bobby O. P.
2017-06-01
Research was conducted to optimize the end milling process of ASSAB XW-42 tool steel with multiple performance characteristics, based on an orthogonal array with the Taguchi-grey relational analysis method. Liquid nitrogen was applied as a coolant. The experimental studies were conducted by varying the liquid nitrogen cooling flow rate (FL) and the end milling process variables, i.e., cutting speed (Vc), feeding speed (Vf), and axial depth of cut (Aa). The optimized multiple performance characteristics were surface roughness (SR), flank wear (VB), and material removal rate (MRR). An orthogonal array, signal-to-noise (S/N) ratio, grey relational analysis, grey relational grade, and analysis of variance were employed to study the multiple performance characteristics. Experimental results showed that flow rate gave the highest contribution to reducing the total variation of the multiple responses, followed by cutting speed, feeding speed, and axial depth of cut. The minimum surface roughness and flank wear and the maximum material removal rate could be obtained using a flow rate, cutting speed, feeding speed, and axial depth of cut of 0.5 l/minute, 109.9 m/minute, 440 mm/minute, and 0.9 mm, respectively.
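The grey relational analysis step can be sketched as below, using hypothetical response data (SR and VB treated as smaller-the-better, MRR as larger-the-better) and the conventional distinguishing coefficient of 0.5; the experiment's real measurements are not reproduced.

```python
import numpy as np

# Sketch of grey relational analysis on hypothetical response data.
responses = np.array([            # columns: SR (um), VB (mm), MRR (mm^3/min)
    [0.82, 0.21, 1500.0],
    [0.65, 0.18, 1900.0],
    [0.71, 0.25, 2400.0],
    [0.90, 0.30, 2800.0],
])
smaller_better = [True, True, False]

# Step 1: grey relational normalization to [0, 1].
norm = np.empty_like(responses)
for j, smaller in enumerate(smaller_better):
    col = responses[:, j]
    if smaller:
        norm[:, j] = (col.max() - col) / (col.max() - col.min())
    else:
        norm[:, j] = (col - col.min()) / (col.max() - col.min())

# Step 2: grey relational coefficient, distinguishing coefficient 0.5.
delta = 1.0 - norm                               # deviation from the ideal
coef = (delta.min() + 0.5 * delta.max()) / (delta + 0.5 * delta.max())

# Step 3: grey relational grade = mean coefficient across responses.
grade = coef.mean(axis=1)
print("best run:", int(grade.argmax()) + 1, "grades:", grade.round(3))
```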
A method to evaluate process performance by integrating time and resources
NASA Astrophysics Data System (ADS)
Wang, Yu; Wei, Qingjie; Jin, Shuang
2017-06-01
The purpose of process mining is to improve an enterprise's existing processes, so measuring process performance is particularly important. However, current research on performance evaluation methods is still insufficient: most evaluations rely on time or resource statistics alone, and such basic statistics cannot evaluate process performance well. In this paper, a method of evaluating process performance based on both the time dimension and the resource dimension is proposed. This method can measure the utilization and redundancy of resources in a process. This paper introduces the design principle and formula of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, we use the evaluation method to analyse the event log of a telephone maintenance process and propose an optimization plan.
On optima: the case of myoglobin-facilitated oxygen diffusion.
Wittenberg, Jonathan B
2007-08-15
The process of myoglobin/leghemoglobin-facilitated oxygen diffusion is adapted to function in different environments in diverse organisms. We enquire how the functional parameters of the process are optimized in particular organisms. The ligand-binding properties of the proteins, myoglobin and plant symbiotic hemoglobins, we discover, suggest that they have been adapted under genetic selection pressure for optimal performance. Since carrier-mediated oxygen transport has probably evolved independently many times, adaptation of diverse proteins for a common functionality exemplifies the process of convergent evolution. The progenitor proteins may be built on the myoglobin scaffold or may be very different.
Dynamic Systems Analysis for Turbine Based Aero Propulsion Systems
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.
2016-01-01
The aircraft engine design process seeks to optimize the overall system-level performance, weight, and cost for a given concept. Steady-state simulations and data are used to identify trade-offs that should be balanced to optimize the system in a process known as systems analysis. These systems analysis simulations and data may not adequately capture the true performance trade-offs that exist during transient operation. Dynamic systems analysis provides the capability for assessing the dynamic trade-offs at an earlier stage of the engine design process. The dynamic systems analysis concept, developed tools, and potential benefits are presented in this paper. To provide this capability, the Tool for Turbine Engine Closed-loop Transient Analysis (TTECTrA) was developed to provide the user with an estimate of the closed-loop performance (response time) and operability (high pressure compressor surge margin) for a given engine design and set of control design requirements. TTECTrA, along with engine deterioration information, can be used to develop a more generic relationship between performance and operability that can impact the engine design constraints and potentially lead to a more efficient engine.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if Simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques which enable automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell Method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1975-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh order process show the proposed procedures to be very effective.
On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending
NASA Astrophysics Data System (ADS)
Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong
2017-11-01
A real-time optimization control method is proposed to extend turbo-fan engine service life. The method is based on an on-board engine model devised with MRR-LSSVR (a multi-input multi-output recursive reduced least squares support vector regression method). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process, and a thermal mechanical fatigue model of the engine acceleration process is established to describe engine life decay. The objective function contains not only a term that yields fast engine response, but also a term for the total mechanical strain range, which is positively related to engine fatigue life. Finally, simulations of both the conventional optimization control, which considers only engine acceleration performance, and the proposed optimization method were conducted. The simulations demonstrate that the times of the two control methods from idle to 99.5% of maximum power are equal; however, the engine life using the proposed optimization method is increased by 36.17% compared with conventional optimization control.
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation comes from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
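The filtered Poisson view of speckle generation can be demonstrated directly: draw a Poisson-distributed number of speckles, place them uniformly at random, and filter each with a Gaussian spot profile. All parameters below are illustrative, not the paper's optimal values.

```python
import numpy as np

# Sketch of speckle pattern generation as a filtered Poisson process.
rng = np.random.default_rng(1)
SIZE, DENSITY, RADIUS = 128, 0.02, 3.0   # px, speckles per px^2, speckle radius

n_speckles = rng.poisson(DENSITY * SIZE * SIZE)   # Poisson point count
centers = rng.uniform(0, SIZE, size=(n_speckles, 2))

y, x = np.mgrid[0:SIZE, 0:SIZE]
pattern = np.zeros((SIZE, SIZE))
for cx, cy in centers:                    # superpose Gaussian speckle spots
    pattern += np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * RADIUS**2))
pattern = (255 * pattern / pattern.max()).astype(np.uint8)
print(n_speckles, "speckles; mean intensity", round(float(pattern.mean()), 1))
```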
Effect of nucleation time on bending response of ionic polymer–metal composite actuators
Kim, Suran; Hong, Seungbum; Choi, Yoon-Young; ...
2013-07-02
We attempted autocatalytic electroless plating of nickel to replace the electroless impregnation-reduction (IR) method in ionic polymer–metal composite (IPMC) actuators, in order to reduce cost and processing time. Because the nucleation time of Pd–Sn colloids is the determining factor in overall processing time, we used the nucleation time as our control parameter. In order to optimize nucleation time and investigate its effect on the performance of IPMC actuators, we analyzed the relationship between nucleation time, interface morphology, and electrical properties. The optimized nucleation time was 10 h. Furthermore, the trends in performance and electrical properties as a function of nucleation time were attributed to the fact that the Ni penetration depth was determined by the minimum diffusion length of either the Pd–Sn colloids or the reducing agent ions. The Ni-IPMC actuators can be fabricated in less than 14 h of processing time without deteriorating actuator performance, which is comparable to Pt-IPMC prepared by the IR method.
Hosseini, Zahra; Liu, Junmin; Solovey, Igor; Menon, Ravi S; Drangova, Maria
2017-04-01
To implement and optimize a new approach for susceptibility-weighted image (SWI) generation from multi-echo multi-channel image data and compare its performance against optimized traditional SWI pipelines. Five healthy volunteers were imaged at 7 Tesla. The inter-echo-variance (IEV) channel combination, which uses the variance of the local frequency shift at multiple echo times as a weighting factor during channel combination, was used to calculate multi-echo local phase shift maps. Linear phase masks were combined with the magnitude to generate IEV-SWI. The performance of the IEV-SWI pipeline was compared with that of two accepted SWI pipelines: channel combination followed by (i) Homodyne filtering (HPH-SWI) and (ii) unwrapping and high-pass filtering (SVD-SWI). The filtering steps of each pipeline were optimized. Contrast-to-noise ratio was used as the comparison metric. Qualitative assessment of artifact and vessel conspicuity was performed, and the processing time of the pipelines was evaluated. The optimized IEV-SWI pipeline (σ = 7 mm) resulted in continuous vessel visibility throughout the brain. IEV-SWI had significantly higher contrast compared with HPH-SWI and SVD-SWI (P < 0.001, Friedman nonparametric test). Residual background fields and phase wraps in HPH-SWI and SVD-SWI corrupted the vessel signal and/or generated vessel-mimicking artifact. Optimized implementation of the IEV-SWI pipeline processed a six-echo 16-channel dataset in under 10 min. IEV-SWI benefits from channel-by-channel processing of phase data and results in high contrast images with an optimal balance between contrast and background noise removal, thereby presenting evidence of the importance of the order in which postprocessing techniques are applied for multi-channel SWI generation. J. Magn. Reson. Imaging 2017;45:1113-1124. © 2016 International Society for Magnetic Resonance in Medicine.
Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei
2017-01-01
Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
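A minimal sketch of the Kriging surrogate idea follows, using scikit-learn's Gaussian process regressor and an analytic toy function standing in for the expensive CFD evaluations; the kernel choice, sample count, and objective are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Sketch of a Kriging (Gaussian-process) surrogate for an expensive CFD
# objective; the analytic "drag" below is a toy stand-in for simulations.
rng = np.random.default_rng(0)

def cfd_drag(x):
    """Toy stand-in: drag as a function of two shape design variables."""
    return (x[:, 0] - 0.3) ** 2 + 0.5 * np.sin(3 * x[:, 1]) + 1.0

X_train = rng.uniform(0, 1, size=(20, 2))        # sampled design points
y_train = cfd_drag(X_train)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Cheap surrogate search over a dense grid instead of running more CFD.
grid = np.array(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50))).reshape(2, -1).T
pred, std = gp.predict(grid, return_std=True)
best = grid[pred.argmin()]
print("surrogate optimum at:", best.round(3),
      "predicted drag:", pred.min().round(3))
```

In a multi-objective setting like the paper's, one surrogate per objective (drag and lift) would be trained and searched jointly; cross-validation, as the authors note, guides the surrogate's hyperparameter selection.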
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution; it efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
A Big Data-driven Model for the Optimization of Healthcare Processes.
Koufi, Vassiliki; Malamateniou, Flora; Vassilacopoulos, George
2015-01-01
Healthcare organizations increasingly navigate a highly volatile, complex environment in which technological advancements and new healthcare delivery business models are the only constants. In their effort to outperform in this environment, healthcare organizations need to be agile enough to respond to these constantly changing conditions. To act with agility, healthcare organizations need to discover new ways to optimize their operations. To this end, they focus on the healthcare processes that guide healthcare delivery and on the technologies that support them. Business process management (BPM) and Service-Oriented Architecture (SOA) can provide a flexible, dynamic, cloud-ready infrastructure where business process analytics can be utilized to extract useful insights from mountains of raw data. This paper presents a framework that helps healthcare professionals gain better insight within and across their business processes. In particular, it performs real-time analysis on process-related data in order to reveal areas of potential process improvement.
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability, and weight before committing to building hardware.
DSP code optimization based on cache
NASA Astrophysics Data System (ADS)
Xu, Chengfa; Li, Chengcheng; Tang, Bin
2013-03-01
A DSP program often runs less efficiently on the target board than in software simulation during development, mainly because of improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization; the processor achieves its best performance when these methods are applied. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.
Design optimization of a prescribed vibration system using conjoint value analysis
NASA Astrophysics Data System (ADS)
Malinga, Bongani; Buckner, Gregory D.
2016-12-01
This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.
Ultra-slim flexible glass for roll-to-roll electronic device fabrication
NASA Astrophysics Data System (ADS)
Garner, Sean; Glaesemann, Scott; Li, Xinghua
2014-08-01
As displays and electronics evolve to become lighter, thinner, and more flexible, the choice of substrate continues to be critical to their overall optimization. The substrate directly affects improvements in the designs, materials, fabrication processes, and performance of advanced electronics. With their inherent benefits such as surface quality, optical transmission, hermeticity, and thermal and dimensional stability, glass substrates enable high-quality and long-life devices. As substrate thicknesses are reduced below 200 μm, ultra-slim flexible glass continues to provide these inherent benefits to high-performance flexible electronics such as displays, touch sensors, photovoltaics, and lighting. In addition, the reduction in glass thickness also allows for new device designs and high-throughput, continuous manufacturing enabled by R2R processes. This paper provides an overview of ultra-slim flexible glass substrates and how they enable flexible electronic device optimization. Specific focus is put on flexible glass' mechanical reliability. For this, a combination of substrate design and process optimizations has been demonstrated that enables R2R device fabrication on flexible glass. Demonstrations of R2R flexible glass processes such as vacuum deposition, photolithography, laser patterning, screen printing, slot die coating, and lamination have been made. Compatibility with these key process steps has resulted in the first demonstration of a fully functional flexible glass device fabricated completely using R2R processes.
Su, Weixing; Chen, Hanning; Liu, Fang; Lin, Na; Jing, Shikai; Liang, Xiaodan; Liu, Wei
2017-03-01
Many real-world optimization problems are dynamic, demanding both convergence and searching ability in ways that differ markedly from static optimization. This requires an optimization algorithm to adaptively seek the changing optima over dynamic environments, instead of only finding the global optimal solution in a static environment. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of optimal foraging strategies to balance the exploration and exploitation tradeoff. The main motive of CLABC is to enrich artificial bee foraging behaviors in the ABC model by combining Powell's pattern search method, a life-cycle mechanism, and a crossover-based social learning strategy. The proposed CLABC is a more realistic bee-colony model in which bees can reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. The experiments for evaluating CLABC are conducted on the dynamic moving peaks benchmark. Furthermore, the proposed algorithm is applied to a real-world application of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.
Optomechanical study and optimization of cantilever plate dynamics
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1995-06-01
Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes and located at arbitrary positions on the plate are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques in order to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show that good agreement between theory and test is obtained. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques complements each approach and proves to be a very efficient tool for performing optimization studies of mechanical components.
Studies on Hot-Melt Prepregging of PMR-II-50 Polyimide Resin with Graphite Fibers
NASA Technical Reports Server (NTRS)
Shin, E. Eugene; Sutter, James K.; Juhas, John; Veverka, Adrienne; Klans, Ojars; Inghram, Linda; Scheiman, Dan; Papadopoulos, Demetrios; Zoha, John; Bubnick, Jim
2004-01-01
A second generation PMR (in situ Polymerization of Monomer Reactants) polyimide resin, PMR-II-50, has been considered for high temperature and high stiffness space propulsion composites applications for its improved high temperature performance. As part of composite processing optimization, two commercial prepregging methods, solution vs. hot-melt processes, were investigated with M40J fabrics from Toray. In a previous study, a systematic chemical, physical, thermal, and mechanical characterization of these composites indicated that poor resin-fiber interfacial wetting, especially for the hot-melt process, resulted in poor composite quality. In order to improve the interfacial wetting, optimization of the resin viscosity and process variables were attempted in a commercial hot-melt prepregging line. In addition to presenting the results from the prepreg quality optimization trials, the combined effects of the prepregging method and two different composite cure methods, i.e., hot press vs. autoclave, on composite quality and properties are discussed.
Studies on Hot-Melt Prepregging of PMR-II-50 Polyimide Resin with Graphite Fibers
NASA Technical Reports Server (NTRS)
Shin, E. Eugene; Sutter, James K.; Juhas, John; Veverka, Adrienne; Klans, Ojars; Inghram, Linda; Scheiman, Dan; Papadopoulos, Demetrios; Zoha, John; Bubnick, Jim
2003-01-01
A Second generation PMR (in situ Polymerization of Monomer Reactants) polyimide resin, PMR-II-50, has been considered for high temperature and high stiffness space propulsion composites applications for its improved high temperature performance. As part of composite processing optimization, two commercial prepregging methods: solution vs. hot-melt processes were investigated with M40J fabrics from Toray. In a previous study a systematic chemical, physical, thermal and mechanical characterization of these composites indicated that poor resin-fiber interfacial wetting, especially for the hot-melt process, resulted in poor composite quality. In order to improve the interfacial wetting, optimization of the resin viscosity and process variables were attempted in a commercial hot-melt prepregging line. In addition to presenting the results from the prepreg quality optimization trials, the combined effects of the prepregging method and two different composite cure methods, i.e., hot press vs. autoclave on composite quality and properties are discussed.
Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.
2007-01-01
Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
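A toy, grid-based version of the Bayesian forecasting idea for a single trait parameter is sketched below. The linear homeostatic build-up plus cosine circadian term and every numeric value are deliberate simplifications for illustration, not the authors' implementation of the two-process model.

```python
import numpy as np

# Grid-based Bayesian update of one trait (homeostatic build-up rate)
# from noisy performance measurements; all numbers are illustrative.
rng = np.random.default_rng(0)

def predicted_lapses(t_awake, rate, amp=3.0, phase=4.0, basal=2.0):
    """Toy performance model: homeostatic build-up + circadian rhythm."""
    return basal + rate * t_awake + amp * np.cos(2 * np.pi * (t_awake - phase) / 24)

# Simulate a subject with an unknown true build-up rate, tested every 2 h.
true_rate, noise_sd = 0.5, 1.5
t_obs = np.arange(0, 40, 2.0)
y_obs = predicted_lapses(t_obs, true_rate) + rng.normal(0, noise_sd, t_obs.size)

# Population prior over the trait, sharpened as each measurement arrives.
rates = np.linspace(0.0, 1.5, 301)
posterior = np.exp(-0.5 * ((rates - 0.4) / 0.3) ** 2)   # Gaussian prior
posterior /= posterior.sum()

for t, y in zip(t_obs, y_obs):
    like = np.exp(-0.5 * ((y - predicted_lapses(t, rates)) / noise_sd) ** 2)
    posterior *= like                        # Bayes update with each test
    posterior /= posterior.sum()

est = rates[posterior.argmax()]
print(f"true rate {true_rate:.2f}, posterior mode {est:.2f}")
# A 24-h-ahead forecast would then use predicted_lapses(t_now + 24, est).
```

As in the paper, the posterior narrows as data accumulate, so individualized predictions improve over the population-prior baseline.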
Optimization of a chemical identification algorithm
NASA Astrophysics Data System (ADS)
Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren
2010-04-01
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
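The Matthews Correlation Coefficient mentioned above condenses a 2-category confusion matrix into one figure of merit; a minimal sketch with hypothetical confusion counts follows.

```python
import math

# Matthews Correlation Coefficient as a single figure of merit trading
# off detections and false alarms in a 2-class detection problem.
def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical confusion counts from a detection test suite.
print(f"MCC = {mcc(tp=87, tn=940, fp=12, fn=13):.3f}")
```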
NASA Technical Reports Server (NTRS)
Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the (somewhat arbitrary) design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multi-point methods.
Wang, Chia-Chi; Yang, Ming-Ta; Lu, Kang-Hao; Chan, Kuei-Hui
2016-03-04
Creatine plays an important role in muscle energy metabolism. Postactivation potentiation (PAP) is a phenomenon that can acutely increase muscle power, but it is an individualized process that is influenced by muscle fatigue. This study examined the effects of creatine supplementation on explosive performance and the optimal individual PAP time during a set of complex training bouts. Thirty explosive athletes performed tests of back squat for one-repetition maximum (1RM) strength and complex training bouts for determining the individual optimal timing of PAP, as well as the height and peak power of a countermovement jump, before and after the supplementation. Subjects were assigned to a creatine or placebo group and then consumed 20 g of creatine or carboxymethyl cellulose per day for six days. After the supplementation, the 1RM strength in the creatine group significantly increased (p < 0.05). The optimal individual PAP time in the creatine group was also significantly earlier than before supplementation and earlier than that of the placebo group after supplementation (p < 0.05). There was no significant difference in jump performance between the groups. This study demonstrates that creatine supplementation improves maximal muscle strength and the optimal individual PAP time of complex training but has no effect on explosive performance.
Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)
USDA-ARS?s Scientific Manuscript database
Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost-effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...
Optimization of subsurface flow and associated treatment processes.
DOT National Transportation Integrated Search
2006-02-01
The objective of this study was to examine the use and performance of synthetic media (growth substrate) in a rock filter waste treatment system located at the Grand Prairie Rest Area. Specifically, this study examined the performance of the syntheti...
An expert system for integrated structural analysis and design optimization for aerospace structures
NASA Technical Reports Server (NTRS)
1992-01-01
The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software were then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of expert-system modules for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software so developed, AutoDesign, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies and used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from these users, conclusions were developed and are provided.
Radar Doppler Processing with Nonuniform Sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.
2017-07-01
Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing. This is for the convenience of the processing. More recent performance enhancements in processor capability allow optimal processing of nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
Optimizing a Laser Process for Making Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Arepalli, Sivaram; Nikolaev, Pavel; Holmes, William
2010-01-01
A systematic experimental study has been performed to determine the effects of each of the operating conditions in a double-pulse laser ablation process that is used to produce single-wall carbon nanotubes (SWCNTs). The comprehensive data compiled in this study have been analyzed to recommend conditions for optimizing the process and scaling up the process for mass production. The double-pulse laser ablation process for making SWCNTs was developed by Rice University researchers. Of all currently known nanotube-synthesizing processes (arc and chemical vapor deposition), this process yields the greatest proportion of SWCNTs in the product material. The aforementioned process conditions are important for optimizing the production of SWCNTs and scaling up production. Reports of previous research (mostly at Rice University) toward optimization of process conditions mention effects of oven temperature and briefly mention effects of flow conditions, but no systematic, comprehensive study of the effects of process conditions was done prior to the study described here. This was a parametric study, in which several production runs were carried out, changing one operating condition for each run. The study involved variation of a total of nine parameters: the sequence of the laser pulses, pulse-separation time, laser pulse energy density, buffer gas (helium or nitrogen instead of argon), oven temperature, pressure, flow speed, inner diameter of the flow tube, and flow-tube material.
NASA Astrophysics Data System (ADS)
Suarez, Hernan; Zhang, Yan R.
2015-05-01
New radar applications need to perform complex algorithms and process large quantities of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression for real-time transceiver optimization is presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential units to solve the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.
Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems.
Scholze, Sebastian; Barata, Jose; Stokic, Dragan
2017-02-24
Highly flexible manufacturing systems require continuous run-time (self-)optimization of processes with respect to diverse parameters, e.g., efficiency, availability, energy consumption, etc. A promising approach for achieving (self-)optimization in manufacturing systems is the use of context sensitivity based on data streaming from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information to achieve context sensitivity. Cyber-physical systems can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-)optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated into manufacturing systems/processes. A generic approach for context sensitivity, based on self-learning algorithms, is proposed, aimed at a variety of manufacturing systems. The new solution encompasses a run-time context extractor and an optimizer. Based on the self-learning module, both the context extractor and the optimizer continuously learn and improve their performance. The solution follows Service Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes.
Impact of Aerodynamics and Structures Technology on Heavy Lift Tiltrotors
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
2006-01-01
Rotor performance and aeroelastic stability are presented for a 124,000-lb Large Civil Tilt Rotor (LCTR) design. It was designed to carry 120 passengers for 1200 nm, with performance of 350 knots at 30,000 ft altitude. Design features include a low-mounted wing and hingeless rotors, with a very low cruise tip speed of 350 ft/sec. The rotor and wing design processes are described, including rotor optimization methods and wing/rotor aeroelastic stability analyses. New rotor airfoils were designed specifically for the LCTR; the resulting performance improvements are compared to current technology airfoils. Twist, taper and precone optimization are presented, along with the effects of blade flexibility on performance. A new wing airfoil was designed and a composite structure was developed to meet the wing load requirements for certification. Predictions of aeroelastic stability are presented for the optimized rotor and wing, along with summaries of the effects of rotor design parameters on stability.
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
Optimization of hole generation in Ti/CFRP stacks
NASA Astrophysics Data System (ADS)
Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.
2018-03-01
The article aims to describe methods for improving the surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting methods and drill geometry. The research is based on the fundamentals of machine building, probability theory, mathematical statistics, experiment planning, and manufacturing process optimization theories. Statistical processing of experiment data was carried out by means of Statistica 6 and Microsoft Excel 2010. Surface geometry in Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series profilometer, and in CFRP stacks using a Bruker ContourGT-K1 optical microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 measuring machine, and temperatures in cutting zones were recorded with a FLIR SC7000 Series infrared camera. Models of multivariate analysis of variance were developed; they show the effects of drilling modes on the surface quality and accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved. Optimal cutting technologies which improve performance were developed, along with methods for assessing the effects of thermal tool and material expansion on the accuracy of holes in Ti/CFRP/Ti stacks.
Image gathering and processing - Information and fidelity
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.
1985-01-01
In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
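A minimal frequency-domain Wiener restoration of the kind referred to here can be sketched as follows (Python); note that this omits the aliasing terms the paper's formulation adds, and the Gaussian OTF and noise levels are placeholders.

    import numpy as np

    def wiener_restore(degraded, otf, noise_psd, signal_psd):
        # Classic Wiener restoration filter: W = H* / (|H|^2 + Pn/Ps).
        # The paper's formulation also folds in aliasing from insufficient
        # sampling, which this sketch omits.
        W = np.conj(otf) / (np.abs(otf) ** 2 + noise_psd / signal_psd)
        return np.real(np.fft.ifft2(np.fft.fft2(degraded) * W))

    n = 128
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    otf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.05 ** 2))   # placeholder blur
    scene = np.random.default_rng(0).random((n, n))        # random scene
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
    noisy = blurred + 0.01 * np.random.default_rng(1).standard_normal((n, n))
    restored = wiener_restore(noisy, otf, noise_psd=1e-4, signal_psd=1.0)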
An intelligent factory-wide optimal operation system for continuous production process
NASA Astrophysics Data System (ADS)
Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping
2016-03-01
In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, undoubtedly the most critical parameters of a hydrocarbon reservoir, can be obtained by processing nuclear magnetic resonance (NMR) logs. Unlike conventional well logs (CWLs), NMR logging is very expensive and time-consuming. Therefore, the idea of synthesizing the NMR log from CWLs holds great appeal among reservoir engineers. For this purpose, three optimization strategies are followed. Firstly, an artificial neural network (ANN) is optimized using a hybrid genetic algorithm-pattern search (GA-PS) technique, then fuzzy logic (FL) is optimized by means of GA-PS, and eventually an alternating conditional expectation (ACE) model is constructed using the concept of a committee machine to combine the outputs of optimized and non-optimized FL and ANN models. Results indicated that optimization of the traditional ANN and FL models using the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results compared with a singular model performing alone.
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm for further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.
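For orientation, the underlying quantum-behaved PSO update that EB-QPSO builds on can be sketched as below (Python); the elitist-breeding transposon step described in the abstract is deliberately omitted, and the contraction-expansion coefficient beta is a placeholder value.

    import numpy as np

    def qpso(f, dim=10, n=30, iters=500, lo=-5.0, hi=5.0, beta=0.75):
        # Plain quantum-behaved PSO; EB-QPSO additionally breeds the
        # elitists (pbest/gbest) with transposon operators when criteria
        # are met, which is not reproduced here.
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n, dim))
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            mbest = pbest.mean(axis=0)              # mean best position
            phi = rng.random((n, dim))
            p = phi * pbest + (1.0 - phi) * g       # local attractors
            u = rng.random((n, dim))
            sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
            x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, float(pval.min())

    best_x, best_f = qpso(lambda v: float(np.sum(v ** 2)))  # sphere function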
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
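The flavor of these approximations can be reproduced with a short numerical sketch (Python) using the classical field-free, power-limited payload-fraction expression, maximized over exhaust velocity with an assumed efficiency curve; the delta-v, trip time, specific mass, and efficiency model below are placeholders, not values from the study.

    import numpy as np

    g0 = 9.81

    def payload_fraction(isp, dv=10e3, trip_t=300 * 86400.0, alpha=0.02):
        # Field-free, power-limited approximation:
        #   m_pl/m_0 = exp(-dv/c) - (c^2/v_ch^2) * (1 - exp(-dv/c)),
        #   v_ch^2 = 2 * eta * T / alpha, with alpha in kg/W.
        # Hypothetical efficiency curve rising with Isp toward 0.8.
        c = g0 * isp
        eta = 0.8 * c ** 2 / (c ** 2 + (g0 * 2000.0) ** 2)
        v_ch2 = 2.0 * eta * trip_t / alpha
        r = np.exp(-dv / c)
        return r - (c ** 2 / v_ch2) * (1.0 - r)

    isps = np.linspace(1000.0, 10000.0, 500)
    fracs = payload_fraction(isps)
    print("optimum Isp ~ %.0f s, payload fraction %.2f"
          % (isps[fracs.argmax()], fracs.max()))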
Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra
2018-01-15
The objective of the present work was to establish optimal biological and physicochemical parameters in order to simultaneously remove lindane and Cr(VI) at high and/or low pollutant concentrations from soil using an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. The final aim was to treat real soils contaminated with Cr(VI) and/or lindane from the northwest of Argentina employing the optimal biological and physicochemical conditions. In this sense, after determining the optimal inoculum concentration (2 g kg⁻¹), an experimental design model with four factors (temperature, moisture, initial concentration of Cr(VI), and lindane) was employed for predicting the system behavior during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature was different for each situation: for low initial concentrations of both pollutants, the optimal temperature was 25°C; for low initial concentrations of Cr(VI) and high initial concentrations of lindane, the optimal temperature was 30°C; and for high initial concentrations of Cr(VI), the optimal temperature was 35°C. In order to confirm the model adequacy and the validity of the optimization procedure, experiments were performed on six real contaminated soil samples. The defined actinobacteria consortium reduced the contaminant concentrations in five of the six samples, working at laboratory scale and employing the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal design of a novel remote center-of-motion mechanism for minimally invasive surgical robot
NASA Astrophysics Data System (ADS)
Sun, Jingyuan; Yan, Zhiyuan; Du, Zhijiang
2017-06-01
Surgical robots with a remote center-of-motion (RCM) play an important role in the minimally invasive surgery (MIS) field. To give the mechanism high flexibility and meet the movement demands of an operation, an optimized RCM mechanism is proposed in this paper. The kinematic performance and workspace are then analyzed. Finally, a new optimization objective function is built using the condition number index and the workspace index.
The trade-off between morphology and control in the co-optimized design of robots.
Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya
2017-01-01
Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in the face of new search techniques.
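The core loop, Bayesian optimization of a design/control parameter vector against real-world trials, can be sketched as follows (Python with scikit-learn and SciPy); evaluate_locomotion is a hypothetical stand-in for a physical trial, and the kernel, candidate pool size, and iteration budget are arbitrary.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def evaluate_locomotion(params):
        # Hypothetical stand-in for a real-world locomotion trial scoring
        # a morphology+control parameter vector in [0, 1]^d.
        return -float(np.sum((params - 0.6) ** 2))

    rng = np.random.default_rng(1)
    d, n_init, n_iter = 4, 5, 25
    X = rng.random((n_init, d))
    y = np.array([evaluate_locomotion(x) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.random((2000, d))               # random candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        z = (mu - y.max()) / np.maximum(sd, 1e-9)
        ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        x_next = cand[ei.argmax()]
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate_locomotion(x_next))
    print("best score after BO:", y.max())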
Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis
NASA Astrophysics Data System (ADS)
Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao
2016-08-01
Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in order to obtain an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method from the point of view of robot kinematics validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumptions of different nozzle mounting methods were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, so achieving a constant nozzle speed. Thus, it is possible optimize robot performance and to economize robot energy.
Stochastic simulation and robust design optimization of integrated photonic filters
NASA Astrophysics Data System (ADS)
Weng, Tsui-Wei; Melati, Daniele; Melloni, Andrea; Daniel, Luca
2017-01-01
Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%-35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
Kheifets, Aaron; Gallistel, C R
2012-05-29
Animals successfully navigate the world despite having only incomplete information about behaviorally important contingencies. It is an open question to what degree this behavior is driven by estimates of stochastic parameters (brain-constructed models of the experienced world) and to what degree it is directed by reinforcement-driven processes that optimize behavior in the limit without estimating stochastic parameters (model-free adaptation processes, such as associative learning). We find that mice adjust their behavior in response to a change in probability more quickly and abruptly than can be explained by differential reinforcement. Our results imply that mice represent probabilities and perform calculations over them to optimize their behavior, even when the optimization produces negligible material gain.
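The contrast drawn here can be made concrete with a toy comparison (Python): a model-free delta rule drifts gradually toward a new Bernoulli rate, while a model-based observer comparing marginal likelihoods with and without a change point switches abruptly once the evidence is in. The learning rate, Bayes-factor threshold, and probabilities are arbitrary choices.

    import numpy as np
    from scipy.special import betaln

    def log_marginal(seq):
        # Log marginal likelihood of a Bernoulli sequence under a
        # uniform Beta(1, 1) prior on its rate.
        k, n = float(seq.sum()), seq.size
        return betaln(k + 1.0, n - k + 1.0) - betaln(1.0, 1.0)

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.random(200) < 0.25,    # p = 0.25, then an
                           rng.random(200) < 0.75])   # abrupt step to 0.75

    delta, p_hat = 0.02, 0.5
    for t in range(1, data.size + 1):
        p_hat += delta * (data[t - 1] - p_hat)        # model-free estimate
        seq = data[:t].astype(float)
        no_change = log_marginal(seq)
        with_change = max((log_marginal(seq[:k]) + log_marginal(seq[k:])
                           for k in range(1, t)), default=-np.inf)
        detected = (with_change - no_change) > np.log(10.0)  # Bayes factor
    print("final delta-rule estimate:", round(p_hat, 3),
          "change detected:", bool(detected))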
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, three-level discrete wavelet transformation is applied to the luminance component Y, generating four different frequency sub-bands. After that, singular value decomposition is performed on these sub-bands. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm has better performance in terms of invisibility and robustness.
Lot sizing and unequal-sized shipment policy for an integrated production-inventory system
NASA Astrophysics Data System (ADS)
Giri, B. C.; Sharma, S.
2014-05-01
This article develops a single-manufacturer single-retailer production-inventory model in which the manufacturer delivers the retailer's ordered quantity in unequal shipments. The manufacturer's production process is imperfect and it may produce some defective items during a production run. The retailer performs a screening process immediately after receiving the order from the manufacturer. The expected average total cost of the integrated production-inventory system is derived using renewal theory and a solution procedure is suggested to determine the optimal production and shipment policy. An extensive numerical study based on different sets of parameter values is conducted and the optimal results so obtained are analysed to examine the relative performance of the models under equal and unequal shipment policies.
Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell method
NASA Technical Reports Server (NTRS)
Kaufman, H.
1977-01-01
Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell.
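The computation described, minimizing a quadratic cost over a restricted output-feedback gain without gradients, can be reproduced in miniature with SciPy's Powell method (a descendant of the same family of derivative-free procedures). The plant matrices and weights below are arbitrary placeholders, not the systems studied in the report.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov
    from scipy.optimize import minimize

    # Placeholder plant: dx = Ax + Bu, output y = Cx, control u = -K y.
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R, W = np.eye(2), np.eye(1), np.eye(2)  # weights and noise covariance

    def cost(k_flat):
        K = k_flat.reshape(1, 1)
        Acl = A - B @ K @ C
        if np.max(np.linalg.eigvals(Acl).real) >= 0.0:
            return 1e9                          # penalize unstable gains
        Qcl = Q + C.T @ K.T @ R @ K @ C
        # J = trace(P W), where Acl' P + P Acl + Qcl = 0.
        P = solve_continuous_lyapunov(Acl.T, -Qcl)
        return float(np.trace(P @ W))

    res = minimize(cost, x0=np.array([1.0]), method="Powell")
    print("output feedback gain:", res.x, "cost:", res.fun)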
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
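The core idea, taking the dominant principal component of the trace ensemble as the stacked trace instead of a plain mean, fits in a few lines (Python). The fast variant in the paper is emulated here with a generic randomized range finder, which is an assumption, not the authors' exact algorithm.

    import numpy as np

    def pca_stack(gather, oversample=8, seed=0):
        # Stack an (n_traces, n_samples) gather via its first principal
        # component, computed with a cheap randomized projection rather
        # than a full SVD.
        rng = np.random.default_rng(seed)
        X = gather - gather.mean(axis=0)
        Y = X @ rng.standard_normal((X.shape[1], 1 + oversample))
        Qm, _ = np.linalg.qr(Y)                 # approximate range of X
        _, s, Vt = np.linalg.svd(Qm.T @ X, full_matrices=False)
        pc = Vt[0]
        if pc @ gather.mean(axis=0) < 0.0:      # fix the sign convention
            pc = -pc
        return s[0] * pc / np.sqrt(gather.shape[0])

    # Noisy copies of one reflectivity series: PCA stack vs plain mean.
    n_tr, n_s = 60, 500
    t = np.linspace(0.0, 3.0, n_s)
    signal = np.sin(20.0 * t) * np.exp(-t)
    gather = signal + 2.0 * np.random.default_rng(1).standard_normal((n_tr, n_s))
    stacked_pca = pca_stack(gather)
    stacked_mean = gather.mean(axis=0)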
Optimizing spacecraft design - optimization engine development : progress and plans
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim
2003-01-01
At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of the MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10⁵ in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
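Generically, the tuning loop looks like the sketch below (Python): sweep a small set of peak-detection parameters and score each combination by how many spiked targets are recovered. Everything here is hypothetical; count_recovered_targets stands in for a run of the real peak picker plus matching against the 78 spiked compounds, the parameter names are not actual MZmine 2 settings, and the paper uses a more economical DoE design rather than this full factorial.

    import math
    from itertools import product

    def count_recovered_targets(noise_level, min_height, tol_ppm):
        # Hypothetical stand-in for running the peak-detection pipeline
        # and counting recovered spiked targets; this toy response surface
        # just has an interior optimum for demonstration.
        return (78.0
                - 10.0 * abs(math.log10(noise_level) - 4.0)
                - 5.0 * abs(math.log10(min_height) - 4.0)
                - 1.0 * abs(tol_ppm - 10.0))

    levels = {
        "noise_level": [1e3, 1e4, 1e5],
        "min_height":  [1e4, 1e5],
        "tol_ppm":     [5.0, 10.0, 20.0],
    }
    best = None
    for combo in product(*levels.values()):
        setting = dict(zip(levels.keys(), combo))
        score = count_recovered_targets(**setting)
        if best is None or score > best[0]:
            best = (score, setting)
    print("best setting:", best)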
Integrated multidisciplinary optimization of rotorcraft: A plan for development
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Editor); Mantay, Wayne R. (Editor)
1989-01-01
This paper describes a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed, validation strategies are described, and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, significant progress has been made, principally in the areas of single discipline optimization. Accomplishments are described in areas of rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.
Improving 130nm node patterning using inverse lithography techniques for an analog process
NASA Astrophysics Data System (ADS)
Duan, Can; Jessen, Scott; Ziger, David; Watanabe, Mizuki; Prins, Steve; Ho, Chi-Chien; Shu, Jing
2018-03-01
Developing a new lithographic process routinely involves the use of lithographic toolsets and much engineering time for data analysis. Process transfers between fabs occur quite often. A key assumption is that lithographic settings are equivalent from one fab to another and that the transfer is fluid. In some cases, that is far from the truth. Differences in tools can change the proximity effect seen in low-k1 imaging processes. With model-based optical proximity correction (MBOPC), a model built in one fab will not work under the same conditions at another fab. This results in many wafers being patterned to try to match a baseline response. Even if matching is achieved, there is no guarantee that optimal lithographic responses are met. In this paper, we discuss the approach used to transfer and develop new lithographic processes and define MBOPC builds for the new lithographic process in Fab B, which was transferred from a similar lithographic process in Fab A. By using PROLITH simulations to match OPC models for each level, minimal downtime in wafer processing was observed. Source mask optimization (SMO) was also used to optimize lithographic processes using novel inverse lithography techniques (ILT) to simultaneously optimize mask bias, depth of focus (DOF), exposure latitude (EL), and mask error enhancement factor (MEEF) for critical designs at each level.
Search asymmetries: parallel processing of uncertain sensory information.
Vincent, Benjamin T
2011-08-01
What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
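The max-rule account in particular is easy to state computationally: every item yields a noisy internal response in parallel, the decision variable is the largest of them, and an asymmetry falls out as soon as the two stimulus types carry unequal internal uncertainty. A toy simulation (Python; the noise values and set sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    def max_resp(n_items, target_present, target_sd, distractor_sd,
                 d_prime=1.0, trials=2000):
        # Max-rule parallel observer: the decision variable is the largest
        # of the noisy per-item responses on each trial.
        k = n_items - 1 if target_present else n_items
        r = rng.normal(0.0, distractor_sd, (trials, k))
        if target_present:
            t = rng.normal(d_prime, target_sd, (trials, 1))
            r = np.concatenate([r, t], axis=1)
        return r.max(axis=1)

    def detectability(n_items, target_sd, distractor_sd):
        # Criterion-free performance: probability that a target-present
        # trial yields a larger max response than a target-absent one.
        p = max_resp(n_items, True, target_sd, distractor_sd)
        a = max_resp(n_items, False, target_sd, distractor_sd)
        return (p[:, None] > a[None, :]).mean()

    # The noisier stimulus is easier to find as a target than to reject
    # as a distractor, reproducing a search asymmetry with no serial stage.
    for n in (4, 8, 16):
        print(n, round(detectability(n, 1.5, 0.5), 3),
              round(detectability(n, 0.5, 1.5), 3))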
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process-specific device models. The system has been used in the design of time-to-digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed-signal, and digital circuits such as charge sensitive amplifiers, comparators, delay elements, radiation-tolerant dual interlocked (DICE) flip-flops, and two-of-three voter gates.
NASA Astrophysics Data System (ADS)
Li, J. C.; Gong, B.; Wang, H. G.
2016-08-01
Optimal development of shale gas fields involves designing the most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design (well placement, number of fracturing stages, and fracture lengths) is defined by specifying a set of integer-ordered blocks in which to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables, such as bottom hole pressures or production rates for well operations, are real valued. Shale gas development problems, therefore, can be mathematically formulated as mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance for a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed-integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied to the mixed-integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.
Vitre-Graf Coating on Mullite. Low Cost Silicon Array Project: Large Area Silicon Sheet Task
NASA Technical Reports Server (NTRS)
Rossi, R. C.
1979-01-01
The processing parameters of the Vitre-Graf coating for optimal performance and economy when applied to mullite and graphite substrates are presented. A minor effort was also performed on slip-cast fused silica substrates.
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays quality plays a vital role in all products. Hence, development in manufacturing processes focuses on fabricating composites with high dimensional accuracy while incurring low manufacturing cost. In this work, an investigation of machining parameters was performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized using three machining input parameters: drill bit diameter, spindle speed, and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and analysis of variance (ANOVA) is used to find the significance of individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
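The grey relational step can be sketched as follows (Python): normalize each response (larger-the-better for material removal rate, smaller-the-better for roughness), convert deviations from the ideal sequence into grey relational coefficients with the customary distinguishing coefficient of 0.5, and rank runs by the mean grade. The response values below are placeholders, not the measured L16 data.

    import numpy as np

    def grey_relational_grade(responses, larger_better, zeta=0.5):
        # responses: (n_runs, n_responses). Normalize each response, then
        # average the grey relational coefficients across responses.
        R = np.asarray(responses, dtype=float)
        norm = np.empty_like(R)
        for j, lb in enumerate(larger_better):
            lo, hi = R[:, j].min(), R[:, j].max()
            norm[:, j] = ((R[:, j] - lo) / (hi - lo) if lb
                          else (hi - R[:, j]) / (hi - lo))
        delta = 1.0 - norm                  # deviation from ideal sequence
        coeff = ((delta.min() + zeta * delta.max())
                 / (delta + zeta * delta.max()))
        return coeff.mean(axis=1)

    # Placeholder runs: [material removal rate, surface roughness].
    runs = [[12.1, 3.2], [14.8, 2.9], [11.3, 2.4], [16.2, 3.8]]
    grades = grey_relational_grade(runs, larger_better=[True, False])
    print("run ranking, best first:", np.argsort(-grades) + 1)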
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter, and downfeed on each objective function value.
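For reference, a generic two-stage lexicographic weighted Tchebycheff scalarization of min {f_1(x), ..., f_k(x)} has the form below (LaTeX), where z* is the ideal (utopia) point and the w_i are positive weights; the second stage breaks ties among weakly Pareto-optimal solutions of the first.

    \alpha^{*} \;=\; \min_{x \in X} \; \max_{1 \le i \le k} \; w_i \left( f_i(x) - z_i^{*} \right)

    \min_{x \in X} \; \sum_{i=1}^{k} \left( f_i(x) - z_i^{*} \right)
    \quad \text{s.t.} \quad
    \max_{1 \le i \le k} \; w_i \left( f_i(x) - z_i^{*} \right) \;\le\; \alpha^{*}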
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained, and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of the nonlinear system and provides a feasible way to monitor industrial processes.
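At its core, fitting an LS-SVM regressor amounts to solving a single linear system in the dual variables; a minimal sketch follows (Python, RBF kernel), with the regularization parameter gamma, kernel width sigma, and the toy feature-to-concentration mapping all placeholders.

    import numpy as np

    def rbf(X1, X2, sigma=1.0):
        # Gaussian (RBF) kernel matrix between two sample sets.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        # LS-SVM regression dual: solve
        #   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
        n = X.shape[0]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]              # bias b, dual weights alpha

    def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
        return rbf(X_new, X_train, sigma) @ alpha + b

    # Toy use: map three echo-derived features to a concentration.
    rng = np.random.default_rng(0)
    X = rng.random((80, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.standard_normal(80)
    b, alpha = lssvm_fit(X, y)
    pred = lssvm_predict(X, alpha, b, X[:5])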
Optimization of freeform lightpipes for light-emitting-diode projectors.
Fournier, Florian; Rolland, Jannick
2008-03-01
Standard nonimaging components used to collect and integrate light in light-emitting-diode-based projector light engines such as tapered rods and compound parabolic concentrators are compared to optimized freeform shapes in terms of transmission efficiency and spatial uniformity. We show that the simultaneous optimization of the output surface and the profile shape yields transmission efficiency within the étendue limit up to 90% and spatial uniformity higher than 95%, even for compact sizes. The optimization process involves a manual study of the trends for different shapes and the use of an optimization algorithm to further improve the performance of the freeform lightpipe.
NASA Astrophysics Data System (ADS)
Monicke, A.; Katajisto, H.; Leroy, M.; Petermann, N.; Kere, P.; Perillo, M.
2012-07-01
For many years, layered composites have proven essential for the successful design of high-performance space structures, such as launchers or satellites. A generic cylindrical composite structure for a launcher application was optimized with respect to objectives and constraints typical for space applications. The studies included the structural stability, laminate load response and failure analyses. Several types of cylinders (with and without stiffeners) were considered and optimized using different lay-up parameterizations. Results for the best designs are presented and discussed. The simulation tools, ESAComp [1] and modeFRONTIER [2], employed in the optimization loop are elucidated and their value for the optimization process is explained.
Smolensky, Paul; Goldrick, Matthew; Mathis, Donald
2014-08-01
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high-intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved its target functionality, supporting inverse planning in various proton therapy schemes, e.g. single-field uniform dose, 3D intensity-modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than that of a GPU-based MC package used in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
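The non-uniform sampling idea itself is compact and can be sketched independently of the GPU code (a toy allocation rule, not the library's interface; the floor fraction is an assumed safeguard to keep low-intensity spots from being starved of histories):

```python
import numpy as np

def allocate_histories(spot_intensities, total_histories, floor_frac=0.01, seed=0):
    # Allocate MC histories to spots in proportion to current intensity,
    # so high-intensity spots are simulated with more particles.
    rng = np.random.default_rng(seed)
    w = np.asarray(spot_intensities, dtype=float)
    w = np.maximum(w, floor_frac * w.max())  # assumed minimum share per spot
    return rng.multinomial(total_histories, w / w.sum())

print(allocate_histories([5.0, 1.0, 0.0, 10.0], 100000))
```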
NASA Astrophysics Data System (ADS)
Boughari, Yamina
New methodologies have been developed to optimize the integration, testing and certification of flight control systems, an expensive process in the aerospace industry. This thesis investigates the stability of the Cessna Citation X aircraft without control, and then optimizes two different flight controllers from design to validation. The aircraft model was obtained from data provided by the Research Aircraft Flight Simulator (RAFS) of the Cessna Citation business aircraft. To increase the stability and control of aircraft systems, optimizations of two different flight control designs were performed: 1) the Linear Quadratic Regulation and Proportional Integral controllers were optimized using the Differential Evolution algorithm with the level 1 handling qualities as the objective function; the results were validated for the linear and nonlinear aircraft models, and some of the clearance criteria were investigated; and 2) the H-infinity control method was applied to the stability and control augmentation systems. To minimize the time required for flight control design and validation, the controller designs were optimized using the Differential Evolution (DE) and Genetic (GA) algorithms; the DE algorithm proved to be more efficient than the GA. New tools for visualization of the linear validation process were also developed to reduce the time required for flight controller assessment. MATLAB software was used to validate the different optimization algorithms' results. Research platforms of the aircraft's linear and nonlinear models were developed and compared with the results of flight tests performed on the Research Aircraft Flight Simulator. Some of the clearance criteria of the optimized H-infinity flight controller were evaluated, including its linear stability, eigenvalues, and handling qualities criteria. Nonlinear simulations of the maneuver criteria were also investigated during this research to assess the Cessna Citation X flight controller clearance and, therefore, its anticipated certification.
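The evolutionary tuning loop can be illustrated with scipy's differential evolution driver on a toy plant (the second-order plant and the ITAE-style cost below are hypothetical stand-ins for the Cessna Citation X model and the level 1 handling-qualities objective):

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import lti, step

def closed_loop_cost(gains):
    # Unity-feedback PI controller C(s) = kp + ki/s around the toy plant
    # G(s) = 4 / (s^2 + 2s + 4); the closed loop is then
    # T(s) = (4 kp s + 4 ki) / (s^3 + 2 s^2 + (4 + 4 kp) s + 4 ki).
    kp, ki = gains
    t, y = step(lti([4 * kp, 4 * ki], [1, 2, 4 + 4 * kp, 4 * ki]),
                T=np.linspace(0.0, 10.0, 2000))
    dt = t[1] - t[0]
    return float(np.sum(t * np.abs(1.0 - y)) * dt)  # ITAE tracking cost

result = differential_evolution(closed_loop_cost,
                                bounds=[(0.1, 20.0), (0.1, 20.0)], seed=1)
print(result.x, result.fun)  # tuned (kp, ki) and the achieved cost
```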
Regression analysis as a design optimization tool
NASA Technical Reports Server (NTRS)
Perley, R.
1984-01-01
The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.
NASA Astrophysics Data System (ADS)
Sadeghimeresht, E.; Markocsan, N.; Nylén, P.
2016-12-01
Selection of the thermal spray process is the most important step toward a proper coating solution for a given application, as important coating characteristics such as adhesion and microstructure depend strongly on it. In the present work, a process-microstructure-properties-performance correlation study was performed to determine the main characteristics and corrosion performance of coatings produced by different thermal spray techniques, namely high-velocity air fuel (HVAF), high-velocity oxy fuel (HVOF), and atmospheric plasma spraying (APS). Previously optimized HVOF and APS process parameters were used to deposit Ni, NiCr, and NiAl coatings and compare them with HVAF-sprayed coatings with randomly selected process parameters. As the HVAF process presented the best coating characteristics and corrosion behavior, a few process parameters, such as feed rate and standoff distance (SoD), were investigated to systematically optimize the HVAF coatings in terms of low porosity and high corrosion resistance. Ni and NiAl coatings with lower porosity and better corrosion behavior were obtained at an average SoD of 300 mm and a feed rate of 150 g/min. The NiCr coating sprayed at a SoD of 250 mm and a feed rate of 75 g/min showed the highest corrosion resistance among all investigated samples.
Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G
2012-06-15
An integrated experimental and modeling approach for the design of high-productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high-performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5 mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Wei, Feng; Hussain, Arif; Ali, Wahid; Sehee, Oh; Lee, Moonyong
2017-11-01
This research work unfolds a simple, safe, environment-friendly and energy-efficient novel vortex tube-based natural gas liquefaction (LNG) process. A vortex tube was introduced into the popular N2-expander liquefaction process to enhance the liquefaction efficiency. The process structure and conditions were modified and optimized to take full advantage of the vortex tube in the natural gas liquefaction cycle. Two commercial simulators, ANSYS® and Aspen HYSYS®, were used to investigate the application of the vortex tube in the refrigeration cycle of the LNG process. A computational fluid dynamics (CFD) model was used to simulate the vortex tube with nitrogen (N2) as the working fluid. Subsequently, the results of the CFD model were embedded in Aspen HYSYS® to validate the proposed LNG liquefaction process. The proposed natural gas liquefaction process was optimized using a knowledge-based optimization (KBO) approach, with the overall energy consumption chosen as the objective function. The performance of the proposed liquefaction process was compared with the conventional N2-expander liquefaction process. The vortex tube-based LNG process showed a significant energy-efficiency improvement of 20% in comparison with the conventional N2-expander liquefaction process, mainly due to the isentropic expansion in the vortex tube. It turned out that the high energy efficiency of the vortex tube-based process depends strongly on the refrigerant cold fraction, the operating conditions, and the refrigerant cycle configuration.
Blended near-optimal tools for flexible water resources decision making
NASA Astrophysics Data System (ADS)
Rosenberg, David
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the static modelled issues, and managers often seek near-optimal alternatives that address un-modelled or changing objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally different alternatives that addressed select un-modelled issues. This paper presents new stratified Markov chain Monte Carlo sampling and parallel coordinate plotting tools that generate and communicate the structure and full extent of the near-optimal region of an optimization problem. Plot controls allow users to interactively explore the region features of most interest. Controls also streamline the process of eliciting un-modelled issues and updating the model formulation in response. Application to a single-objective water quality management problem at Echo Reservoir, Utah, identifies numerous and flexible practices to reduce the phosphorus load to the reservoir while maintaining close-to-optimal performance. Compared to MGA, the new blended tools generate more numerous alternatives faster, show the near-optimal region more fully, help elicit a larger set of un-modelled issues, and offer managers greater flexibility to cope in a changing world.
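The notion of a near-optimal region can be made concrete with a brute-force sampler (plain rejection sampling for illustration, not the paper's stratified Markov chain Monte Carlo scheme; the objective, constraint, and tolerance are placeholders):

```python
import numpy as np

def sample_near_optimal(f, feasible, bounds, f_star, tol=0.10, n=200000, seed=0):
    # Keep feasible alternatives whose objective lies within a tolerable
    # deviation (tol) of the optimal value f_star, the MGA definition.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = lo + (hi - lo) * rng.random((n, len(lo)))
    keep = np.array([feasible(x) and f(x) <= (1.0 + tol) * f_star for x in X])
    return X[keep]

# Toy problem: minimize x + y subject to x*y >= 1 on [0.1, 5]^2; f_star = 2.
alts = sample_near_optimal(lambda z: z[0] + z[1],
                           lambda z: z[0] * z[1] >= 1.0,
                           [(0.1, 5.0), (0.1, 5.0)], f_star=2.0)
print(len(alts))  # the retained points map out the near-optimal region
```

Plotting the retained points on parallel coordinates would reproduce, in miniature, the kind of region visualization the paper describes.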
Comparison of Low-Thrust Control Laws for Application in Planetocentric Space
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Sjauw, Waldy K.; Smith, David A.
2014-01-01
Recent interest at NASA for the application of solar electric propulsion for the transfer of significant payloads in cislunar space has led to the development of high-fidelity simulations of such missions. With such transfers involving transfer times on the order of months, simulation time can be significant. In the past, the examination of such missions typically began with the use of lower-fidelity trajectory optimization tools such as SEPSPOT to develop and tune guidance laws which delivered optimal or near- optimal trajectories, where optimal is generally defined as minimizing propellant expenditure or time of flight. The transfer of these solutions to a high-fidelity simulation is typically an iterative process whereby the initial solution may nearly, but not precisely, meet mission objectives. Further tuning of the guidance algorithm is typically necessary when accounting for high-fidelity perturbations such as those due to more detailed gravity models, secondary-body effects, solar radiation pressure, etc. While trajectory optimization is a useful method for determining optimal performance metrics, algorithms which deliver nearly optimal performance with minimal tuning are an attractive alternative.
NASA Astrophysics Data System (ADS)
Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee
2018-04-01
In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.
A new logistic dynamic particle swarm optimization algorithm based on random topology.
Ni, Qingjian; Deng, Jianming
2013-01-01
The population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies are usually used in PSO, such as the fully connected, ring, star, and square topologies. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and PSO performance is also explored from the perspective of the graph-theoretic characteristics of population topologies. Further, for a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies for population topology. Finally, the experimental data are analyzed and discussed, and useful conclusions about the design and use of population topology in PSO are proposed, which can provide a basis for further discussion and research.
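A minimal PSO with a random, per-iteration informant topology can be sketched as follows (a generic implementation for illustration, not the logistic dynamic variant studied in the paper; the swarm size, informant count, and constriction coefficients are conventional placeholder values):

```python
import numpy as np

def pso_random_topology(f, dim, n=30, iters=200, k=3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        informants = rng.integers(0, n, (n, k))  # random topology, redrawn
        best = informants[np.arange(n), pval[informants].argmin(axis=1)]
        r1, r2 = rng.random((2, n, dim))
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (pbest[best] - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[pval.argmin()], pval.min()

best_x, best_f = pso_random_topology(lambda z: (z ** 2).sum(), dim=5)
print(best_f)  # sphere function: should approach 0
```

Swapping the informants line for a fixed ring or star neighborhood recovers the classic static topologies the paper compares against.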
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The fields of nature-inspired computing and optimization have evolved to solve difficult optimization problems in diverse areas of engineering, science and technology. The Firefly Algorithm (FA) mimics the firefly attraction process to solve optimization problems and ranks the fireflies using a sorting algorithm. The original FA was proposed with bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used comprises unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst and mean values, standard deviation, number of comparisons, and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the dimension was varied the algorithm performed better at lower dimensions than at higher ones.
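The ranking step itself is easy to reproduce in isolation (a generic timing sketch that only illustrates the O(n²) versus O(n log n) cost of a plain ranking implementation; the brightness values are random placeholders, numpy's argsort stands in for the quicksort-class ranker, and no attempt is made to reproduce the paper's whole-algorithm timings):

```python
import time
import numpy as np

def bubble_rank(vals):
    # O(n^2) bubble sort returning the ranking indices
    idx, v = list(range(len(vals))), list(vals)
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
    return idx

brightness = np.random.rand(2000)  # stand-in firefly brightness values
t0 = time.perf_counter()
bubble_rank(brightness)
t1 = time.perf_counter()
order = np.argsort(brightness, kind="quicksort")
t2 = time.perf_counter()
print(f"bubble: {t1 - t0:.4f} s   quicksort: {t2 - t1:.6f} s")
```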
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.
This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. The initial, equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 min using the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
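The progressive sampling order is the heart of the approach and can be sketched on its own (ordering logic only; the fluence-map optimization and leaf sequencing performed at each step are reduced to a comment):

```python
def progressive_beam_order(total_beams):
    # Start with a coarse set of equally spaced beams, then repeatedly
    # double the angular resolution, inserting the midpoint beams.
    placed, step = [], total_beams
    while step >= 1:
        for b in range(0, total_beams, step):
            if b not in placed:
                placed.append(b)
                # ... fluence-map optimization + leaf sequencing here,
                # warm-started from the already-optimized neighbors ...
        step //= 2
    return placed

print(progressive_beam_order(16))
# [0, 8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15]
```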
Conceptual design and structural analysis for an 8.4-m telescope
NASA Astrophysics Data System (ADS)
Mendoza, Manuel; Farah, Alejandro; Ruiz Schneider, Elfego
2004-09-01
This paper describes the conceptual design of the optics support structures of a telescope with a primary mirror of 8.4 m, the same size as the Large Binocular Telescope (LBT) primary mirror. The design goal is to achieve a structure that supports the primary and secondary mirrors and keeps them joined as rigidly as possible. For this purpose an optimization with several models was performed. This iterative design process includes specifications development and concept generation and evaluation. The process included finite element analysis (FEA) as well as other analytical calculations. A Quality Function Deployment (QFD) matrix was used to obtain telescope tube and spider specifications. Eight spider and eleven tube geometric concepts were proposed. They were compared in decision matrices using performance indicators and parameters. Tubes and spiders underwent an iterative optimization process. The best tube and spider concepts were assembled together. All assemblies were compared and ranked according to their performance.
2013-06-01
Gunasekaran and Kobu (2007) also presented six observations relating to these key performance indicators (KPIs), including: 1. Internal business process (50% of the KPIs) and customers (50% of the KPIs) play a significant role in SC environments, implying that internal business process PMs have a significant impact on operational performance. 2. The most widely used PM is financial performance (38% of the KPIs).
Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle
NASA Astrophysics Data System (ADS)
Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun
2018-05-01
The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior with the γ-Ret shear stress transport model, which includes flow transition and surface roughness effects. The roughness effects are modeled to simulate sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour as part of an automatic design evaluation process is presented. A design of experiments (DoE) was performed and a Kriging surrogate model was built. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved and that the proposed method can be applied successfully to laminar flow nacelle design.
Influence of ion-implanted profiles on the performance of GaAs MESFET's and MMIC amplifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlidis, D.; Cazaux, J.L.; Graffeuil, J.
1988-04-01
The RF small-signal performance of GaAs MESFET's and MMIC amplifiers as a function of various ion-implanted profiles is investigated theoretically and experimentally. The influence of implantation energy, dose, and recess depth is analyzed theoretically with the help of a specially developed device simulator. The performance of MMIC amplifiers processed with various energies, doses, recess depths, and bias conditions is discussed and compared with experimental characteristics. Finally, some criteria are proposed for the choice of implantation conditions and process in order to optimize the characteristics of ion-implanted FET's and to realize process-tolerant MMIC amplifiers.
Design and Performance of the Astro-E/XRS Signal Processing System
NASA Technical Reports Server (NTRS)
Boyce, Kevin R.; Audley, M. D.; Baker, R. G.; Dumonthier, J. J.; Fujimoto, R.; Gendreau, K. C.; Ishisaki, Y.; Kelley, R. L.; Stahle, C. K.; Szymkowiak, A. E.
1999-01-01
We describe the signal processing system of the Astro-E XRS instrument. The Calorimeter Analog Processor (CAP) provides bias and power for the detectors and amplifies the detector signals by a factor of 20,000. The Calorimeter Digital Processor (CDP) performs the digital processing of the calorimeter signals, detecting X-ray pulses and analyzing them by optimal filtering. We describe the operation of pulse detection, pulse-height analysis, and risetime determination. We also discuss performance, including the three event grades (high-res, mid-res, and low-res), anticoincidence detection, counting-rate dependence, and noise rejection.
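Optimal filtering of calorimeter pulses reduces, in its simplest frequency-domain form, to a whitened matched filter (a textbook sketch under a white-noise assumption, not the flight CDP algorithm; the pulse template and record length are toy values):

```python
import numpy as np

def optimal_filter(template, noise_psd):
    # Whiten by the noise PSD, match to the template, and normalize so a
    # record equal to a*template returns pulse height a.
    S = np.fft.rfft(template)
    H = np.conj(S) / noise_psd
    return H / (np.abs(S) ** 2 / noise_psd).sum()

def pulse_height(record, filt):
    return (filt * np.fft.rfft(record)).real.sum()

t = np.arange(1024)
template = np.exp(-t / 100.0) - np.exp(-t / 10.0)  # toy pulse shape
noise_psd = np.ones(513)                           # white-noise assumption
filt = optimal_filter(template, noise_psd)
print(pulse_height(3.7 * template, filt))          # recovers ~3.7
```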
On the Efficacy of Source Code Optimizations for Cache-Based Systems
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.
1998-01-01
Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
High-productivity DRIE solutions for 3D-SiP and MEMS volume manufacturing
NASA Astrophysics Data System (ADS)
Puech, M.; Thevenoud, J. M.; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, J. M.
2006-12-01
Emerging 3D-SiP technologies and high-volume MEMS applications require high-productivity mass-production DRIE systems. The Alcatel DRIE product range has recently been optimized to reach the highest process and hardware production performance. A study based on the sub-micron high-aspect-ratio structures encountered in the most stringent 3D-SiP applications has been carried out. The optimization of the Bosch process parameters has shown ultra-high silicon etch rates, with unrivaled uniformity and repeatability, leading to excellent process yields. In parallel, the most recent hardware and proprietary design optimizations, including the vacuum pumping lines, process chamber, wafer chucks, pressure control system, and gas delivery, are discussed. A key factor in achieving the highest performance was the recognized expertise of Alcatel in vacuum and plasma science technologies. These improvements have been monitored in a mass-production environment for a mobile phone application. Field data analysis shows a significant reduction in cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high-volume MEMS applications. Typical etched patterns include tapered trenches for CMOS imagers, through-silicon via holes for die stacking, well-controlled profile angles for 3D high-precision inertial sensors, and large exposed-area features for inkjet printer heads and silicon microphones.
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 covers the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
NASA Astrophysics Data System (ADS)
Ferhati, H.; Djeffal, F.
2017-12-01
In this paper, a new junctionless optically controlled field effect transistor (JL-OCFET) and its comprehensive theoretical model are proposed to achieve high optical performance and a low-cost fabrication process. An exhaustive study of the device characteristics and a comparison between the proposed junctionless design and the conventional inversion-mode structure (IM-OCFET) for similar dimensions are performed. Our investigation reveals that the proposed design is a strong alternative to the IM-OCFET owing to its high performance and weak-signal detection capability. Moreover, the developed analytical expressions are exploited to formulate the objective functions for optimizing the device performance using a Genetic Algorithms (GAs) approach. The optimized JL-OCFET not only demonstrates good performance in terms of derived drain current and responsivity, but also exhibits superior signal-to-noise ratio, low power consumption, high sensitivity, high ION/IOFF ratio and high detectivity compared to the conventional IM-OCFET counterpart. These characteristics make the optimized JL-OCFET potentially suitable for developing low-cost and ultrasensitive photodetectors for high-performance, low-cost inter-chip data communication applications.
NASA Technical Reports Server (NTRS)
Nguyen, Howard; Willacy, Karen; Allen, Mark
2012-01-01
KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.
Optimal integration of gravity in trajectory planning of vertical pointing movements.
Crevecoeur, Frédéric; Thonnard, Jean-Louis; Lefèvre, Philippe
2009-08-01
The planning and control of motor actions requires knowledge of the dynamics of the controlled limb to generate the appropriate muscular commands and achieve the desired goal. Such planning and control imply that the CNS must be able to deal with forces and constraints acting on the limb, such as the omnipresent force of gravity. The present study investigates the effect of hypergravity induced by parabolic flights on the trajectory of vertical pointing movements to test the hypothesis that motor commands are optimized with respect to the effect of gravity on the limb. Subjects performed vertical pointing movements in normal gravity and hypergravity. We use a model based on optimal control to identify the role played by gravity in the optimal arm trajectory with minimal motor costs. First, the simulations in normal gravity reproduce the asymmetry in the velocity profiles (the velocity reaches its maximum before half of the movement duration), which typically characterizes the vertical pointing movements performed on Earth, whereas the horizontal movements present symmetrical velocity profiles. Second, according to the simulations, the optimal trajectory in hypergravity should present an increase in the peak acceleration and peak velocity despite the increase in the arm weight. In agreement with these predictions, the subjects performed faster movements in hypergravity with significant increases in the peak acceleration and peak velocity, which were accompanied by a significant decrease in the movement duration. This suggests that movement kinematics change in response to an increase in gravity, which is consistent with the hypothesis that motor commands are optimized and the action of gravity on the limb is taken into account. The results provide evidence for an internal representation of gravity in the central planning process and further suggest that an adaptation to altered dynamics can be understood as a reoptimization process.
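For contrast with the observed vertical-movement asymmetry, the gravity-free minimum-jerk baseline predicts a perfectly symmetric velocity profile peaking at half the movement duration (a standard kinematic formula used here for illustration, not the authors' optimal-control model):

```python
import numpy as np

def min_jerk_velocity(amplitude, duration, n=201):
    # Velocity of the minimum-jerk trajectory
    #   x(tau) = A (10 tau^3 - 15 tau^4 + 6 tau^5),  tau = t/T
    tau = np.linspace(0.0, 1.0, n)
    return amplitude / duration * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

v = min_jerk_velocity(amplitude=0.3, duration=0.5)
print(v.argmax() / (len(v) - 1))  # 0.5: the peak sits at mid-movement
```

The asymmetric profiles reported for vertical movements are what an effort-optimal model that explicitly includes gravity reproduces, whereas this baseline cannot.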
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications in refinery production planning and batch process scheduling problems are presented. PMID:21935263
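For the simplest (interval/box) uncertainty set, the robust counterpart of a linear constraint stays linear and can be sketched directly (a toy problem for illustration; the nominal data and the 10% half-widths are placeholders, and nonnegativity lets |x_j| = x_j):

```python
import numpy as np
from scipy.optimize import linprog

# Nominal problem: max 3 x1 + 2 x2  s.t.  a1 x1 + a2 x2 <= 10,  x >= 0,
# with a = (2, 1) only known to within +/- 10% (interval uncertainty).
a_bar = np.array([2.0, 1.0])
a_hat = 0.10 * a_bar  # interval half-widths

# Box-set robust counterpart: sum_j (a_bar_j + a_hat_j) x_j <= b,
# which protects against every realization of a in the intervals.
res = linprog(c=[-3.0, -2.0],
              A_ub=[a_bar + a_hat],
              b_ub=[10.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # robust solution and its guaranteed objective
```

The ellipsoidal and polyhedral sets discussed in the paper lead to conic or larger linear counterparts in the same spirit.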
Optimization of a thermal hydrolysis process for sludge pre-treatment.
Sapkaite, I; Barrado, E; Fdz-Polanco, F; Pérez-Elvira, S I
2017-05-01
At industrial scale, thermal hydrolysis (TH) is the most widely used process to enhance the biodegradability of the sludge produced in wastewater treatment plants. Through a statistically guided Box-Behnken experimental design, the present study analyses the effect of TH as a pre-treatment applied to activated sludge. The selected process variables were temperature (130-180 °C), time (5-50 min) and decompression mode (slow or steam-explosion effect), and the parameters evaluated were sludge solubilisation and methane production by anaerobic digestion. A quadratic polynomial model was generated to compare the process performance for the 15 different combinations of operating conditions obtained by modifying the process variables. The statistical analysis showed that methane production and solubility were significantly affected by pre-treatment time and temperature. During high-intensity pre-treatment (high temperature and long times), the solubility increased sharply while the methane production exhibited the opposite behaviour, indicating the formation of some soluble but non-biodegradable materials. Therefore, solubilisation is not a reliable parameter for quantifying the efficiency of a thermal hydrolysis pre-treatment, since it is not directly related to methane production. Based on the optimization of the operational parameters, the estimated optimal thermal hydrolysis conditions to enhance sewage sludge digestion were: 140-170 °C heating temperature, 5-35 min residence time, and one sudden decompression. Copyright © 2017 Elsevier Ltd. All rights reserved.
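The response-surface machinery behind a Box-Behnken design is straightforward to sketch (generic coded design and quadratic fit; the response vector y would hold measured methane yields and is not reproduced here):

```python
import numpy as np
from itertools import combinations

def box_behnken_3(center_points=3):
    # Standard 15-run coded Box-Behnken design for 3 factors
    # (e.g. temperature, time, decompression mode).
    runs = []
    for i, j in combinations(range(3), 2):
        for si in (-1, 1):
            for sj in (-1, 1):
                r = [0, 0, 0]
                r[i], r[j] = si, sj
                runs.append(r)
    runs += [[0, 0, 0]] * center_points
    return np.array(runs, dtype=float)

def fit_quadratic(X, y):
    # Least-squares fit of the full quadratic polynomial model
    # y = b0 + sum_i bi xi + sum_i bii xi^2 + sum_{i<j} bij xi xj
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta

X = box_behnken_3()  # 15 runs x 3 coded factors
```

The fitted coefficients then support exactly the kind of optimum-condition estimates quoted above.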
Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language
NASA Astrophysics Data System (ADS)
Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud
2017-08-01
This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, enabling the evolution from complicated quantum-based programming to high-level, quantum-independent programming. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. The architecture involves two layers, the programmer layer and the compilation layer, implemented in three main stages: pre-classification, classification, and post-classification. The basic building block of each stage is divided into subsequent phases, each implemented to perform the required transformations from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correspondence correlation coefficient of about R ≈ 1 between outputs and targets. A clear gain was also obtained in the time consumed by the optimization process compared with other techniques: in online optimization, the consumed time increases exponentially with the amount of accuracy needed, whereas in the proposed offline optimization process it increases only gradually.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
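The augmented Lagrangian penalty technique named above can be sketched generically (a textbook method-of-multipliers loop on a toy equality-constrained problem, not the report's converter design code):

```python
import numpy as np
from scipy.optimize import minimize

def auglag(f, g, x0, iters=20, mu=10.0):
    # Method of multipliers for  min f(x)  s.t.  g(x) = 0:
    # minimize f + lam*g + (mu/2) g^2, then update the multiplier lam.
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(iters):
        L = lambda z: f(z) + lam * g(z) + 0.5 * mu * g(z) ** 2
        x = minimize(L, x, method="BFGS").x
        lam += mu * g(x)  # multiplier update
    return x

# Toy stand-in for a converter sizing problem:
# minimize losses x1^2 + 2 x2^2 subject to the design equality x1 + x2 = 1.
x = auglag(lambda z: z[0] ** 2 + 2 * z[1] ** 2,
           lambda z: z[0] + z[1] - 1.0, [0.0, 0.0])
print(x)  # approaches [2/3, 1/3]
```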
Joint optimization: Merging a new culture with a new physical environment.
Stichler, Jaynelle F; Ecoff, Laurie
2009-04-01
Nearly $200 billion of healthcare construction is expected by the year 2015, and nurse leaders must expand their knowledge and capabilities in healthcare design. This bimonthly department prepares nurse leaders to use the evidence-based design process to ensure that new, expanded, and renovated hospitals facilitate optimal patient outcomes, enhance the work environment for healthcare providers, and improve organizational performance. In this article, the authors discuss the concept of joint optimization of merging organizational culture with a new hospital facility.
DOT National Transportation Integrated Search
2006-01-01
The implementation of an effective performance-based construction quality management requires a tool for determining impacts of construction quality on the life-cycle performance of pavements. This report presents an update on the efforts in the deve...
NASA Astrophysics Data System (ADS)
Vdovin, R. A.; Smelov, V. G.
2017-02-01
This work describes the experience of manufacturing the turbine rotor for a micro-engine. It demonstrates the design principles for a complex investment casting process that combines the ProCast software with rapid prototyping techniques. At the virtual modelling stage, in addition to optimizing the process parameters, the casting structure was improved to obtain a defect-free section. The real production stage demonstrated the performance and suitability of rapid prototyping techniques for the manufacture of geometrically complex engine-building parts.
Kennedy, Jacob J.; Whiteaker, Jeffrey R.; Schoenherr, Regine M.; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N.; Baird, Geoffrey Stuart; Paulovich, Amanda G.
2016-01-01
Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin embedded (FFPE) tissues. While the feasibility of using targeted, multiple reaction monitoring-mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e. 9 processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R2 = 0.94) and immuno-MRM (R2 = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens. PMID:27462933
Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner
2013-04-08
In this article we present several developed and improved combinatorial techniques for optimizing the processing conditions and material properties of organic thin films. The combinatorial approach allows investigation of multi-variable dependencies and is well suited to studying organic thin films for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore, we demonstrate the application of combined composition and processing gradients to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is then applied in very small areas arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow precise trends to be identified for the optimization of multi-variable-dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction, and thus the interdependency, of variables in the preparation and properties of complex organic thin-film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to transfer the reported combinatorial techniques to other multi-variable-dependent processes and to investigate and optimize thin-film layers and devices for optical, electro-optical, and electronic applications.
Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong
2014-01-01
The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
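The paper's DSIO model is not reproduced here, but the flavor of a two-stage deliverable-MU optimization can be sketched with a toy L1 dose-fitting linear program. In the sketch below, the dose-influence matrix, the prescription, and the minimum deliverable intensity are all synthetic placeholders; stage 1 solves an unconstrained-MU plan, and stage 2 switches off sub-threshold spots and re-solves with the minimum-MU bound enforced, which is one plausible reading of the two-stage LP idea rather than the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def solve_sio(A, d, lb):
    # Minimize sum_v |(A x - d)_v| over spot intensities x >= lb,
    # linearized with auxiliary per-voxel deviation variables t.
    n_vox, n_spot = A.shape
    c = np.r_[np.zeros(n_spot), np.ones(n_vox)]
    A_ub = np.block([[A, -np.eye(n_vox)], [-A, -np.eye(n_vox)]])
    b_ub = np.r_[d, -d]
    bounds = [(l, None) for l in lb] + [(0, None)] * n_vox
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n_spot]

rng = np.random.default_rng(0)
A = rng.random((60, 25))            # synthetic dose-influence matrix
d = np.ones(60)                     # prescription dose per voxel
mu_min = 0.05                       # minimum deliverable spot intensity

x1 = solve_sio(A, d, lb=np.zeros(25))          # stage 1: ignore MU limits
keep = x1 >= mu_min                            # stage 2: drop weak spots and
x2 = solve_sio(A[:, keep], d,                  # enforce the MU lower bound on
               lb=np.full(keep.sum(), mu_min)) # the surviving ones
```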
Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding an optimal barrier policy for an insurance risk model in which dividends are paid to shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with existing results in the literature, more general surplus processes are considered: precise models of the surplus need not be known, and only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed and the rate of convergence is provided. Numerical results are reported to demonstrate the performance of the algorithm.
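Since precise surplus models are not assumed and only noisy dividend observations are available, the natural sketch is a Kiefer-Wolfowitz stochastic approximation on the barrier level. Everything below is illustrative: `noisy_dividend_value` is a hypothetical stand-in for a noise-corrupted observation of expected discounted dividends, not the paper's surplus process.

```python
import numpy as np

def noisy_dividend_value(b, rng):
    # Hypothetical stand-in: a concave dividend-value curve in the
    # barrier level b, observed with additive noise.
    return -(b - 3.0) ** 2 + 9.0 + rng.normal(scale=0.5)

def kiefer_wolfowitz(b0, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    b = b0
    for n in range(1, iters + 1):
        a_n = 1.0 / n            # decreasing step size
        c_n = 1.0 / n ** 0.25    # decreasing finite-difference width
        g = (noisy_dividend_value(b + c_n, rng)
             - noisy_dividend_value(b - c_n, rng)) / (2 * c_n)
        b = max(0.0, b + a_n * g)  # stochastic ascent on dividend value
    return b

print(kiefer_wolfowitz(b0=1.0))    # converges near the optimal barrier (3.0)
```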
Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2012-03-01
In this paper a hybrid modeling and optimization process, integrating a support vector machine (SVM) and a genetic algorithm (GA), is introduced to reduce the high time cost of structural optimization of ships. The SVM, rooted in statistical learning theory and an approximate implementation of structural risk minimization, provides good generalization performance when metamodeling the input-output relationship of real problems, and consequently cuts the high time cost of analyses such as FEM. The GA, a powerful optimization technique, has remarkable advantages for problems that can hardly be optimized with common gradient-based methods, which makes it suitable for optimizing models built by the SVM. Based on the SVM-GA strategy, optimization of the structural scantlings in the midship of a very large crude carrier (VLCC) was carried out according to the direct strength assessment method in the common structural rules (CSR), demonstrating the high efficiency of SVM-GA in optimizing ship structural scantlings under heavy computational complexity. The time cost of the optimization was sharply reduced: many more loops were processed within a short time, and the design was improved remarkably.
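A minimal sketch of the SVM-GA loop, with an inexpensive stand-in (`expensive_eval`) in place of the FEM strength analysis and scikit-learn's SVR as the metamodel; the GA here is a bare-bones truncation-selection scheme, not the specific operators used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_eval(X):
    # Stand-in for an FEM analysis of each candidate scantling design.
    return np.sum((X - 0.3) ** 2, axis=1)

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, (80, 5))       # sampled designs (5 scantlings)
y_train = expensive_eval(X_train)
surrogate = SVR(kernel="rbf", C=100.0).fit(X_train, y_train)

# Crude GA running on the cheap surrogate instead of FEM.
pop = rng.uniform(0, 1, (40, 5))
for gen in range(100):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[:20]]            # truncation selection
    children = (parents[rng.integers(0, 20, 40)] +
                parents[rng.integers(0, 20, 40)]) / 2  # arithmetic crossover
    children += rng.normal(0, 0.05, children.shape)    # Gaussian mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmin(surrogate.predict(pop))]          # verify with FEM before use
```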
Simulated Annealing in the Variable Landscape
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Kim, Chang Ju
An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.
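A toy version of the idea on a random Euclidean TSP: the distance matrix is smoothed toward its mean, Metropolis sampling with 2-opt moves runs on the flattened landscape, and the landscape is gradually de-smoothed. The linear smoothing rule below is a simplification of the published SSS transformation, chosen for brevity.

```python
import numpy as np

def tour_len(d, tour):
    return d[tour, np.roll(tour, -1)].sum()

def metropolis_sss(d, T=1.0, alphas=(0.2, 0.5, 0.8, 1.0), steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(d)
    tour = rng.permutation(n)
    dbar = d.mean()
    for a in alphas:                       # gradually de-smooth the landscape
        ds = dbar + a * (d - dbar)         # a < 1 flattens the distance matrix
        for _ in range(steps):
            i, j = sorted(rng.integers(0, n, 2))
            if i == j:
                continue
            new = tour.copy()
            new[i:j + 1] = new[i:j + 1][::-1]          # 2-opt move
            dE = tour_len(ds, new) - tour_len(ds, tour)
            if dE < 0 or rng.random() < np.exp(-dE / T):
                tour = new
    return tour

pts = np.random.default_rng(1).random((50, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(tour_len(d, metropolis_sss(d)))
```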
NASA Astrophysics Data System (ADS)
Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu
2015-12-01
For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most research has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured by a single optimization objective; considering the multi-objective IPPS (MOIPPS) problem is therefore inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives such as makespan, total machine load, and total tardiness. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions (a sketch of this mechanism follows). To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
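The crowding-distance mechanism for maintaining a fixed-size archive is standard (it follows the NSGA-II recipe); a compact sketch, assuming all archive members are already non-dominated:

```python
import numpy as np

def crowding_distance(F):
    # F: (n, m) objective values of the non-dominated archive members.
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        span = F[order[-1], k] - F[order[0], k] or 1.0  # avoid divide-by-zero
        dist[order[0]] = dist[order[-1]] = np.inf       # always keep extremes
        dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return dist

# When the archive overflows, drop the member with the smallest crowding
# distance, preserving spread along the non-dominated front.
```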
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; when they are used, the high-fidelity tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
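A sketch of the non-dominated sorting fitness assignment described here, counting for each solution how many others dominate it (fewer dominators means better fitness); the two-objective array is illustrative:

```python
import numpy as np

def domination_counts(F):
    # F: (n, 2) objective values (flight time, propellant mass), both minimized.
    n = len(F)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        # Solution j dominates i if it is no worse in every objective
        # and strictly better in at least one.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        counts[i] = dominates_i.sum()
    return counts

F = np.array([[10.0, 5.0], [12.0, 4.0], [11.0, 6.0], [9.0, 7.0]])
print(domination_counts(F))   # zero-count members form the Pareto front
```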
Optimum Design of High Speed Prop-Rotors
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi
1992-01-01
The objective of this research is to develop optimization procedures to provide design trends in high speed prop-rotors. The necessary disciplinary couplings are all considered within a closed loop optimization process. The procedures involve the consideration of blade aeroelastic, aerodynamic performance, structural and dynamic design requirements. Further, since the design involves consideration of several different objectives, multiobjective function formulation techniques are developed.
Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy
NASA Astrophysics Data System (ADS)
Minh, David D. L.; Adib, Artur B.
2008-05-01
An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott; Chen, Yang
2013-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated; CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see large speedups (a factor of 10 or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speedups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed. Optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.
Pervez, Hifsa; Mozumder, Mohammad S; Mourad, Abdel-Hamid I
2016-08-22
The current study presents an investigation of the optimization of injection molding parameters for HDPE/TiO₂ nanocomposites using grey relational analysis with the Taguchi method. Four control factors, filler concentration (TiO₂), barrel temperature, residence time, and holding time, were chosen, each at three levels. Mechanical properties such as yield strength, Young's modulus and elongation were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L₉ orthogonal array, and the data were processed according to the grey relational steps (sketched below). The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO₂, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) was applied to identify the most significant factor, and the percentage of TiO₂ nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO₂ nanocomposites fabricated through the injection molding process.
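The grey relational steps referenced above follow the standard recipe: normalize each response, compute deviations from the ideal, convert to grey relational coefficients with a distinguishing coefficient ζ (0.5 below), and average into a grade per run. A compact sketch, where the larger-the-better choice per response is an assumption:

```python
import numpy as np

def grey_relational_grades(Y, larger_better=(True, True, True), zeta=0.5):
    # Y: (runs, responses), e.g. yield strength, modulus, elongation
    # for each L9 run. Assumes every response varies across runs.
    Y = np.asarray(Y, float)
    norm = np.empty_like(Y)
    for j, lb in enumerate(larger_better):
        lo, hi = Y[:, j].min(), Y[:, j].max()
        norm[:, j] = (Y[:, j] - lo) / (hi - lo) if lb else (hi - Y[:, j]) / (hi - lo)
    delta = 1.0 - norm                                    # deviation from ideal
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)                              # one grade per run

# The run with the highest grade indicates the best overall factor setting.
```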
NASA Astrophysics Data System (ADS)
Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj
2015-12-01
Electric discharge drill machining (EDDM) is a spark erosion process used to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. Evaluating the performance of an electric discharge drilling machine requires studying the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generated output responses such as tool wear rate (TWR). The parameters pulse on-time, pulse off-time and water pressure were studied for the best machining characteristics. This investigation presents the use of the Taguchi approach for improved TWR in drilling of Al-7075. A plan of experiments based on the L27 Taguchi design was selected for drilling of the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 by EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.
Ge/III-V fin field-effect transistor common gate process and numerical simulations
NASA Astrophysics Data System (ADS)
Chen, Bo-Yuan; Chen, Jiann-Lin; Chu, Chun-Lin; Luo, Guang-Li; Lee, Shyong; Chang, Edward Yi
2017-04-01
This study investigates the manufacturing process of thermal atomic layer deposition (ALD) and analyzes its thermal and physical mechanisms. Experimental observations and computational fluid dynamics (CFD) are both used to investigate the formation and deposition rate of a film, for precise control of the thickness and structure of the deposited material. First, the design of the thermal ALD system model is analyzed, and CFD is used to simulate the optimal parameters, such as gas flow and the thermal, pressure, and concentration fields, in the manufacturing process to assist the fabrication of oxide semiconductors and devices based on them, and to improve their characteristics. In addition, ALD is applied experimentally to grow films on Ge and GaAs substrates with three-dimensional (3-D) transistors having high electrical performance. Electrical analysis of the dielectric properties, leakage current density, and trapped charges of the transistors is conducted with high- and low-frequency measurement instruments to determine the optimal conditions for 3-D device fabrication. It is anticipated that the competitive strength of such devices in the semiconductor industry will be enhanced by the cost reduction and performance improvement achieved through these optimizations.
Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ampomah, William; Balch, Robert; Will, Robert
2017-07-01
This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% CO2 storage and, most importantly, about 28% incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues that dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification on how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reduces the complexity of the ensemble system. PMID:27835638
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.
In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. For a microemulsion electrokinetic chromatography (MEEKC) method, the performance of the electrophoretic run depends on the proportions of the mixture components (MCs) of the microemulsion and on the values of the process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were changed simultaneously according to an MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The obtained data were used to develop MPV models that express the performance of an electrophoretic run (measured as peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.
ACT Payload Shroud Structural Concept Analysis and Optimization
NASA Technical Reports Server (NTRS)
Zalewski, Bart B.; Bednarcyk, Brett A.
2010-01-01
Aerospace structural applications demand weight-efficient designs that perform in a cost-effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy-lift vehicle. Two concepts previously determined to be efficient for such a structure are evaluated: a hat-stiffened/corrugated panel and a fiber-reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs against multiple strength- and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near-optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis, and the analysis/optimization loop is iterated to ensure a converged design. Sizing results for the hat-stiffened panel concept and the fiber-reinforced foam sandwich concept are presented.
Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.
Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K
2016-08-01
The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
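A bare-bones particle swarm optimization sketch of the kind of stochastic optimizer used for the cable-routing search; the quadratic test objective is a stand-in for the actual workspace and cable-tension evaluation, which the paper computes from the robot model.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # f must accept an (n, dim) array of candidates and return (n,) costs.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), f(x)           # personal bests
    g = pbest[np.argmin(pval)]             # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = f(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g

# e.g. minimize a placeholder "total cable tension" over 6 routing parameters:
best = pso(lambda X: np.sum(X ** 2, axis=1), dim=6)
```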
Martens, Jürgen
2005-01-01
The hygienic performance of biowaste composting plants, which ensures the quality of compost, is of high importance. Existing compost quality assurance systems reflect this importance through intensive testing of hygienic parameters. In many countries, compost quality assurance systems are under construction, and it is necessary to check and optimize the methods used to assess the hygienic performance of composting plants. A set of indicator methods was developed to evaluate the hygienic performance of normally operating biowaste composting plants. The indicator methods were developed by investigating temperature measurements from indirect process tests at 23 composting plants belonging to 11 design types of the Hygiene Design Type Testing System of the German Compost Quality Association (BGK e.V.). The presented indicator methods are the grade of hygienization, the basic curve shape, and the hygienic risk area. The temperature courses of single plants are not normally distributed, but they were grouped by cluster analysis into normally distributed subgroups, a precondition for developing the indicator methods. For each plant, the grade of hygienization was calculated through transformation into the standard normal distribution; it gives the percentage of the entire data set that meets the legal temperature requirements. The hygienization grade differs widely within the design types and falls below 50% for about one fourth of the plants. The subgroups are divided visually into basic curve shapes, which stand for different process courses. For each plant, the composition of the entire data set from the various basic curve shapes can be used as an indicator of the basic process conditions. Some basic curve shapes indicate abnormal process courses, which can be corrected through process optimization. A hygienic risk area concept using the 90% range of variation of the normal temperature courses was introduced. Comparing the design-type range of variation with the legal temperature defaults revealed hygienic risk areas along the temperature courses that could be minimized through process optimization. The hygienic risk area of four design types shows suboptimal hygienic performance.
NASA Astrophysics Data System (ADS)
Rosenberg, David E.
2015-04-01
State-of-the-art systems analysis techniques focus on efficiently finding optimal solutions. Yet an optimal solution is optimal only for the modeled issues, and managers often seek near-optimal alternatives that address unmodeled objectives, preferences, limits, uncertainties, and other issues. Early on, Modeling to Generate Alternatives (MGA) formalized near-optimal as performance within a tolerable deviation from the optimal objective function value and identified a few maximally different alternatives that addressed some unmodeled issues. This paper presents new stratified Monte Carlo Markov chain sampling and parallel coordinate plotting tools that generate and communicate the structure and extent of the near-optimal region of an optimization problem. Interactive plot controls allow users to explore the region features of most interest; controls also streamline the process of eliciting unmodeled issues and updating the model formulation in response. Applied to an example single-objective linear water quality management problem at Echo Reservoir, Utah, the tools identify numerous and flexible practices that reduce the phosphorus load to the reservoir while maintaining close-to-optimal performance. Flexibility is upheld by further interactive alternative generation, transforming the formulation into a multiobjective problem, and relaxing the tolerance parameter to expand the near-optimal region. Compared to MGA, the new blended tools generate more numerous alternatives faster, show the near-optimal region more fully, and help elicit a larger set of unmodeled issues.
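One way to sketch the near-optimal region idea: random-walk sampling confined to solutions whose cost stays within a tolerance δ of the optimum, with the resulting samples ready for a parallel-coordinate plot. The `cost` and `feasible` callables are placeholders for the actual water quality model.

```python
import numpy as np

def near_optimal_samples(cost, feasible, x_opt, delta=0.1, n=5000,
                         step=0.05, seed=0):
    # Random walk kept inside the near-optimal region
    # {x : feasible(x) and cost(x) <= (1 + delta) * cost(x_opt)}.
    rng = np.random.default_rng(seed)
    bound = (1 + delta) * cost(x_opt)
    x, out = np.asarray(x_opt, float).copy(), []
    for _ in range(n):
        cand = x + rng.normal(0, step, x.shape)
        if feasible(cand) and cost(cand) <= bound:
            x = cand                      # accept moves that stay in region
        out.append(x.copy())
    return np.array(out)

# The returned array can be plotted, e.g., with
# pandas.plotting.parallel_coordinates, one axis per decision variable.
```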
"Does Degree of Asymmetry Relate to Performance?" A Critical Review
ERIC Educational Resources Information Center
Boles, David B.; Barth, Joan M.
2011-01-01
In a recent paper, Chiarello, Welcome, Halderman, and Leonard (2009) reported positive correlations between word-related visual field asymmetries and reading performance. They argued that strong word processing lateralization represents a more optimal brain organization for reading acquisition. Their empirical results contrasted sharply with those…
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
Selection of Sustainable Processes using Sustainability ...
Chemical products can be obtained by process pathways involving varying amounts and types of resources, utilities, and byproduct formation. When competing process options, such as the six processes for making methanol considered in this study, are available, it is necessary to identify the most sustainable option. Sustainability of a chemical process is generally evaluated with indicators that require process and chemical property data. These indicators individually reflect the impacts of the process on areas of sustainability, such as the environment or society. In order to choose among several alternative processes, an overall comparative analysis is essential. Generally, net profit will identify the most economic process; a mixed integer optimization problem can also be solved to identify the most economic among competing processes. This method uses economic optimization and leaves aside the environmental and societal impacts. To make a decision on the most sustainable process, the method presented here rationally aggregates the sustainability indicators into a single index called the sustainability footprint (De). Process flow and economic data were used to compute the indicator values. Results from the sustainability footprint (De) are compared with those from solving a mixed integer optimization problem. In order to identify the rank order of importance of the indicators, a multivariate analysis is performed using partial least squares variable importance in projection (PLS-VIP).
A novel dismantling process of waste printed circuit boards using water-soluble ionic liquid.
Zeng, Xianlai; Li, Jinhui; Xie, Henghua; Liu, Lili
2013-10-01
Recycling processes for waste printed circuit boards (WPCBs) have been well established in terms of scientific research and field pilots. However, current dismantling procedures for WPCBs have restricted the recycling process, due to their low efficiency and negative impacts on environmental and human health. This work aimed to develop an environmentally friendly dismantling process through heating with a water-soluble ionic liquid to separate electronic components and tin solder from two main types of WPCBs: cathode ray tubes and computer mainframes. The work systematically investigates the influencing factors, heating mechanism, and optimal parameters for opening solder connections on WPCBs during the dismantling process, and addresses its environmental performance and economic assessment. The results demonstrate that the optimal temperature, retention time, and turbulence resulting from impeller rotation during the dismantling process were 250 °C, 12 min, and 45 rpm, respectively. Nearly 90% of the electronic components were separated from the WPCBs under the optimal experimental conditions. This novel process offers the possibility of large industrial-scale operations for separating electronic components and recovering tin solder, and a more efficient and environmentally sound process for WPCBs recycling.
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
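RODEo itself is a commercial tool, so the sketch below substitutes a generic Bayesian sequential-design loop: a Gaussian process surrogate plus an expected-improvement score picks the next etch recipe to run. This is the general mechanism by which such physics-and-statistics tools can beat factorial DOEs on experiment count; the pool of candidate recipes and the `measure` callable are placeholders for real etch experiments.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X, y_best):
    # EI for maximizing the response (e.g., etch rate).
    mu, sd = gp.predict(X, return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (mu - y_best) / sd
    return sd * (z * norm.cdf(z) + norm.pdf(z))

def sequential_recipes(measure, X_pool, n_init=5, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(X_pool), n_init, replace=False))
    for _ in range(n_iter):
        X, y = X_pool[idx], measure(X_pool[idx])
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        ei = expected_improvement(gp, X_pool, y.max())
        ei[idx] = 0.0                      # do not rerun tried recipes
        idx.append(int(np.argmax(ei)))     # next experiment to perform
    return idx

# X_pool: candidate recipes as rows of (pressure, power, flow, ...) settings.
```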
Development of Chemical Process Design and Control for ...
This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy. The implemented control strategy combines a biologically inspired method with optimal control concepts for finding more sustainable operating trajectories. The sustainability assessment of process operating points is carried out by using the U.S. E.P.A.’s Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Objective Process Evaluator (GREENSCOPE) tool that provides scores for the selected indicators in the economic, material efficiency, environmental and energy areas. The indicator scores describe process performance on a sustainability measurement scale, effectively determining which operating point is more sustainable if there are more than several steady states for one specific product manufacturing. Through comparisons between a representative benchmark and the optimal steady-states obtained through implementation of the proposed controller, a systematic decision can be made in terms of whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous fermentation process for fuel production, whose materi
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
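The minimum-ratio cut subroutine is specific to the paper; as a stand-in, the fragment below recursively bisects the operator graph with NetworkX's Kernighan-Lin heuristic, splitting until each part's CPU load fits one PE, so that heavy streams tend to stay inside a single PE. Node attribute names are assumptions.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def fuse_operators(G, max_pe_load):
    # G: operator graph; node attribute "load" = CPU cost of the operator,
    # edge attribute "weight" = stream traffic between operators.
    load = lambda part: sum(G.nodes[v]["load"] for v in part)
    parts, done = [set(G)], []
    while parts:
        p = parts.pop()
        if load(p) <= max_pe_load or len(p) == 1:
            done.append(p)                 # this part becomes one PE
        else:
            a, b = kernighan_lin_bisection(G.subgraph(p), weight="weight")
            parts += [set(a), set(b)]      # cut as little traffic as possible
    return done
```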
Optimization of airport security process
NASA Astrophysics Data System (ADS)
Wei, Jianan
2017-05-01
To facilitate passenger travel while ensuring public safety, the airport security process and its scheduling are optimized. A stochastic Petri net is used to simulate the single-channel security process; its reachability graph is drawn and a homogeneous Markov chain is constructed to analyze the performance of the security process network and locate the bottleneck that limits passenger throughput. The simulation starts from an initial state with one open security channel and follows the curve of changing passenger flow. When passengers arrive at a rate that exceeds the processing capacity of the open channels, a queue forms; the moment the queuing time reaches the acceptable threshold is taken as the time to open (or close) the next channel. Dynamic scheduling of the number of open security channels is simulated to reduce passenger queuing time.
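A minimal discrete-time sketch of the channel-scheduling rule described above; the Poisson arrival and service rates are illustrative stand-ins for the Petri-net model, and the close-channel threshold is an arbitrary choice.

```python
import numpy as np

def simulate_channels(arrival_rate, service_rate, horizon, wait_threshold, seed=0):
    # Open another security channel when the estimated queuing time
    # exceeds the tolerable threshold; close one when it falls well below.
    rng = np.random.default_rng(seed)
    channels, queue, log = 1, 0, []
    for t in range(horizon):
        queue += rng.poisson(arrival_rate)
        queue = max(0, queue - rng.poisson(service_rate * channels))
        wait = queue / (service_rate * channels)  # rough Little's-law estimate
        if wait > wait_threshold:
            channels += 1
        elif wait < 0.3 * wait_threshold and channels > 1:
            channels -= 1
        log.append((t, channels, queue))
    return log

for t, c, q in simulate_channels(8.0, 5.0, horizon=10, wait_threshold=2.0):
    print(f"t={t} channels={c} queue={q}")
```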
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The paper describes the computational techniques employed in determining optimal propulsion systems for future aircraft applications and in identifying system tradeoffs and technology requirements. The computer programs used to perform calculations for all the factors that enter into the selection of optimum combinations of airplanes and engines are examined. Attention is given to the computer codes NNEP, WATE, LIFCYC, INSTAL, and POD DRG. A process is illustrated by which turbine engines can be evaluated with respect to fuel consumption, engine weight, cost, and installation effects. Examples show the benefits of variable geometry and the tradeoff between fuel burned and engine weight. Future plans for further improvements in the analytical modeling of engine systems are also described.
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is applying sensitivity analysis prior to global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations, by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on perturbing the most sensitive decision variables. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
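A sketch of how sensitivity weights can bias the DDS perturbation step: the shrinking inclusion probability and greedy acceptance follow the usual DDS template, while the sensitivity weighting of which DVs get perturbed is the enhancement. The exact weighting rule below is an illustrative choice, not the study's specific formula.

```python
import numpy as np

def dds_sensitivity(f, lo, hi, sens, maxiter=1000, r=0.2, seed=0):
    # f: objective to minimize; lo, hi: DV bounds; sens: sensitivity scores.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    w = sens / sens.sum()                       # normalized sensitivity weights
    for i in range(1, maxiter + 1):
        p = max(1.0 - np.log(i) / np.log(maxiter), 1.0 / len(x))
        mask = rng.random(len(x)) < p * w * len(x)   # bias toward sensitive DVs
        if not mask.any():
            mask[rng.choice(len(x), p=w)] = True     # perturb at least one DV
        cand = x.copy()
        cand[mask] += rng.normal(0, r * (hi - lo))[mask]
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fx:                                  # greedy acceptance, as in DDS
            x, fx = cand, fc
    return x, fx
```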
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam O.; Hull, Patrick V.
2014-01-01
This paper presents a design automation process using optimization via a genetic algorithm to design the conceptual structure of a Lunar Pallet Lander. The goal is to determine a design whose primary natural frequencies are at or above a target value while minimizing total mass. Several iterations of the process are presented. First, a concept optimization was performed to determine which class of structure would produce suitable candidate designs; from this, a stiffened sheet-metal approach was selected, leading to optimization of beam placement by generating a two-dimensional mesh and varying the physical locations of the reinforcing beams. Finally, the design space was reformulated as a binary problem using one-dimensional beam elements to truncate the design space, allow faster convergence, and include additional mechanical failure criteria in the optimization responses. Results are presented for each design space configuration. The final flight design was derived from these results.
NASA Astrophysics Data System (ADS)
Torabi, Amir; Kolahan, Farhad
2018-07-01
Pulsed laser welding is a powerful technique, especially suitable for joining thin sheet metals. In this study, based on experimental data, pulsed laser welding of thin AISI 316L austenitic stainless steel sheet has been modeled and optimized. The experimental data required for modeling were gathered according to a central composite design matrix in the response surface methodology (RSM), with full replication of 31 runs. Ultimate tensile strength (UTS) is considered the main quality measure in laser welding. The important process parameters, peak power, pulse duration, pulse frequency and welding speed, are selected as input process parameters. The relation between the input parameters and the output response is established via full quadratic response surface regression at a confidence level of 95%. The adequacy of the regression model was verified using analysis of variance. The main effects of each factor and its interaction effects with the other factors were analyzed graphically in contour and surface plots. Next, to maximize joint UTS, the best combination of parameter levels was identified using RSM. Moreover, the mathematical model was embedded in a simulated annealing (SA) optimization algorithm to determine the optimal values of the process parameters. The results obtained by the SA and RSM optimization techniques are in good agreement. The optimal settings of a peak power of 1800 W, a pulse duration of 4.5 ms, a frequency of 4.2 Hz and a welding speed of 0.5 mm/s yield a welded joint with 96% of the base metal UTS. Computational results clearly demonstrate that the proposed modeling and optimization procedures perform well for the pulsed laser welding process.
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of the various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry: old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions, rather than optimal ones, are often all that is achieved. During the conceptual phase of design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of system design variables that must be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a genetic algorithm, a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems into a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of design, the population-based search nature of the genetic algorithm is exploited to provide computationally inexpensive tools, compared to the state of the practice, for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, these tools are nearly on the same order of magnitude as a standard single-objective deterministic genetic algorithm. The multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessarily high redundancy levels. The focus on low computational cost stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
NASA Astrophysics Data System (ADS)
Wang, Zi Shuai; Sha, Wei E. I.; Choy, Wallace C. H.
2016-12-01
Modeling the charge-generation process is highly important to understand device physics and optimize power conversion efficiency of bulk-heterojunction organic solar cells (OSCs). Free carriers are generated by both ultrafast exciton delocalization and slow exciton diffusion and dissociation at the heterojunction interface. In this work, we developed a systematic numerical simulation to describe the charge-generation process by a modified drift-diffusion model. The transport, recombination, and collection of free carriers are incorporated to fully capture the device response. The theoretical results match well with the state-of-the-art high-performance organic solar cells. It is demonstrated that the increase of exciton delocalization ratio reduces the energy loss in the exciton diffusion-dissociation process, and thus, significantly improves the device efficiency, especially for the short-circuit current. By changing the exciton delocalization ratio, OSC performances are comprehensively investigated under the conditions of short-circuit and open-circuit. Particularly, bulk recombination dependent fill factor saturation is unveiled and understood. As a fundamental electrical analysis of the delocalization mechanism, our work is important to understand and optimize the high-performance OSCs.
Robust design of microchannel cooler
NASA Astrophysics Data System (ADS)
He, Ye; Yang, Tao; Hu, Li; Li, Leimin
2005-12-01
The microchannel cooler offers a new method for cooling high-power diode lasers, with the advantages of small volume, high thermal-dissipation efficiency, and low cost when mass-produced. To reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal model of the varying-section microchannel was solved with the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The resulting optimal design compromises between optimal cooling performance and robustness, demonstrating the effectiveness of this design method.
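As a minimal sketch of the Taguchi workflow described here, the snippet below evaluates a standard L9 orthogonal array (three factors at three levels) with a smaller-the-better signal-to-noise ratio; the response values stand in for simulated thermal resistance and are invented, not taken from the paper.

```python
import numpy as np

# Standard L9 orthogonal array: 3 factors at 3 levels (coded 0, 1, 2).
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Hypothetical thermal resistance (K/W) for each of the 9 runs;
# smaller is better for a cooler.
y = np.array([0.42, 0.39, 0.44, 0.37, 0.41, 0.40, 0.38, 0.36, 0.43])

# Smaller-the-better signal-to-noise ratio per run.
sn = -10 * np.log10(y ** 2)

# Mean S/N per factor level; the most robust level maximizes S/N.
for f in range(3):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"factor {f}: S/N means by level = {np.round(means, 2)}, "
          f"best level = {int(np.argmax(means))}")
```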
Fast Pixel Buffer For Processing With Lookup Tables
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1992-01-01
Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.
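The lookup-table idea carries over directly to software: once a block of pixels is buffered, a single vectorized table lookup replaces per-pixel reads from the main image memory. A minimal NumPy sketch, with gamma correction chosen arbitrarily as the table's contents:

```python
import numpy as np

# 256-entry lookup table implementing gamma correction (arbitrary choice).
lut = (255 * (np.arange(256) / 255.0) ** (1 / 2.2)).astype(np.uint8)

# A buffered block of 8-bit pixels (random stand-in for real image data).
image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Fancy indexing maps every pixel through the table in one vectorized pass.
corrected = lut[image]
```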
Joint-layer encoder optimization for HEVC scalable extensions
NASA Astrophysics Data System (ADS)
Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong
2014-09-01
Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard, based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction, including texture and motion information generated from the base layer, is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and the in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate resources more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep the total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
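A minimal sketch of what a viewing-probability-weighted joint-layer RD cost could look like is given below. The paper's exact cost function is not reproduced; in particular, weighting only distortion (not rate) by viewing probability is an assumption of this sketch, on the grounds that all layers are transmitted regardless of which one is viewed.

```python
# Minimal sketch of a viewing-probability-weighted joint-layer RD cost.
# D, R: per-layer distortion and rate (0 = base, 1 = enhancement, ...);
# p: viewing probability per layer (derived from the packet loss rate);
# lam: Lagrange multiplier.
def joint_rd_cost(D, R, p, lam):
    return sum(p[l] * D[l] + lam * R[l] for l in range(len(D)))

# Example: a lossy channel makes the base layer more likely to be the one
# actually viewed, so its distortion is weighted more heavily.
cost = joint_rd_cost(D=[4.0, 2.5], R=[1000, 1800], p=[0.3, 0.7], lam=0.002)
```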
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
Bare-Bones Teaching-Learning-Based Optimization
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented for solving global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. From the experiments conducted, it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844
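For orientation, the sketch below implements one iteration of the standard TLBO that BBTLBO extends (a teacher phase followed by a learner phase); the Gaussian-sampling and neighborhood-search ingredients of BBTLBO itself are not reproduced here.

```python
import numpy as np

def tlbo_step(pop, f):
    """One teacher + learner phase of standard TLBO; rows of pop are
    learners, and f is the objective to minimize."""
    n, d = pop.shape
    fit = np.apply_along_axis(f, 1, pop)
    teacher = pop[np.argmin(fit)].copy()
    mean = pop.mean(axis=0)
    # Teacher phase: move each learner toward the teacher, away from the mean.
    TF = np.random.randint(1, 3)                       # teaching factor, 1 or 2
    cand = pop + np.random.rand(n, d) * (teacher - TF * mean)
    improve = np.apply_along_axis(f, 1, cand) < fit
    pop[improve] = cand[improve]
    # Learner phase: each learner learns from a randomly chosen peer.
    fit = np.apply_along_axis(f, 1, pop)
    for i in range(n):
        j = np.random.choice([k for k in range(n) if k != i])
        step = (pop[i] - pop[j]) if fit[i] < fit[j] else (pop[j] - pop[i])
        cand_i = pop[i] + np.random.rand(d) * step
        fi_new = f(cand_i)
        if fi_new < fit[i]:                            # greedy acceptance
            pop[i], fit[i] = cand_i, fi_new
    return pop

pop = np.random.uniform(-5, 5, size=(20, 2))
for _ in range(50):
    pop = tlbo_step(pop, lambda x: (x ** 2).sum())     # sphere test function
```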
NASA Astrophysics Data System (ADS)
Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping
2012-05-01
In this article, we consider a single-machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the processing time of a job is a function of its starting time and of the total normal processing time of the jobs preceding it in the sequence. The objective is to determine an optimal schedule that minimizes the total completion time. The problem remains open for the case of -1 < a < 0, where a denotes the learning index; for this case we show that an optimal schedule is V-shaped with respect to the jobs' normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic performs effectively and efficiently in obtaining near-optimal solutions.
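A V-shaped sequence is one in which normal processing times first decrease and then increase. As a toy illustration (not one of the paper's three heuristics), the sketch below builds such a sequence by distributing jobs, longest first, alternately onto the descending and ascending branches:

```python
def v_shaped_sequence(times):
    """Return a job order whose normal processing times are V-shaped:
    non-increasing on the front branch, non-decreasing on the back."""
    jobs = sorted(range(len(times)), key=lambda j: times[j], reverse=True)
    front, back = [], []
    for k, j in enumerate(jobs):          # alternate the two branches
        (front if k % 2 == 0 else back).append(j)
    return front + back[::-1]

seq = v_shaped_sequence([5, 2, 9, 1, 7])
# Processing times along seq are 9, 5, 1, 2, 7: decreasing, then increasing.
```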
Kumar, M; Tamilarasan, R; Arthanareeswaran, G; Ismail, A F
2015-11-01
It has recently been noted that methylene blue causes severe central nervous system toxicity, so its removal from aqueous environments is essential. In this study, the removal of methylene blue using modified Ca(2+) and Zn(2+) bio-polymer hydrogel beads was investigated and optimized. A batch-mode study was conducted over various parameters: contact time, dye concentration, bio-polymer dose, pH, and process temperature. Isotherm, kinetic, diffusion, and thermodynamic studies were performed to assess the feasibility of the process. The Freundlich and Langmuir isotherm equations were used to predict the isotherm parameters, correlated with the dimensionless separation factor (RL). The pseudo-first-order and pseudo-second-order Lagergren kinetic equations were used to correlate the kinetic parameters, and an intraparticle diffusion model was employed to describe diffusion in the process. Fourier transform infrared spectroscopy (FTIR) shows distinct absorption peaks for the Ca(2+) and Zn(2+) beads, and the morphology of the bio-polymer material was analyzed with scanning electron microscopy (SEM). The TG and DTA studies show good thermal stability with low moisture content and no non-degraded by-products.
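As a small illustration of the isotherm analysis described here, the snippet below fits the Langmuir and Freundlich models to made-up equilibrium data and computes the dimensionless separation factor RL; all numbers are hypothetical stand-ins, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce (mg/L) and uptake qe (mg/g).
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([12.1, 19.8, 28.6, 36.9, 42.3])

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[50, 0.1])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[5, 2])

# Dimensionless separation factor at the highest concentration;
# 0 < RL < 1 indicates favorable sorption.
RL = 1 / (1 + KL * Ce.max())
```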
Theoretical model for design and analysis of protectional eyewear.
Zelzer, B; Speck, A; Langenbucher, A; Eppig, T
2013-05-01
Protectional eyewear has to pass both mechanical and optical stress tests. To pass the optical tests, the surfaces of safety spectacles have to be optimized to minimize optical aberrations. Starting with the surface data of three measured safety spectacles, a theoretical spectacle model (four spherical surfaces) is first recalculated and then optimized while keeping the front surface unchanged. In addition to spherical power, astigmatic power, and prism imbalance, we used the wavefront error (five different viewing directions) to simulate the optical performance and to optimize the safety spectacle geometries. All surfaces were spherical (maximum global peak-to-valley deviation between the measured surface and the best-fit sphere: 0.132 mm). Except for the spherical power of the Axcont model (-0.07 m(-1)), all simulated optical performance values before optimization were better than the limits defined by the standards. The optimization reduced the wavefront error by 1% to 0.150 λ (Windor/Infield), by 63% to 0.194 λ (Axcont/Bolle), and by 55% to 0.199 λ (2720/3M) without dropping below the measured thickness. The simulated optical performance of spectacle designs can thus be improved by smart optimization. A good optical design counteracts degradation by parameter variation throughout the manufacturing process.
NASA Astrophysics Data System (ADS)
El Ayoubi, Carole; Hassan, Ibrahim; Ghaly, Wahid
2012-11-01
This paper aims to optimize the film coolant flow parameters on the suction surface of a high-pressure gas turbine blade in order to obtain an optimum compromise between superior cooling performance and a minimum aerodynamic penalty. An optimization algorithm coupled with three-dimensional Reynolds-averaged Navier-Stokes analysis is used to determine the optimum film cooling configuration. The VKI blade with two staggered rows of axially oriented, conically flared film cooling holes on its suction surface is considered. Two design variables are selected: the coolant-to-mainstream temperature ratio and the total pressure ratio. The optimization objective consists of maximizing the spatially averaged film cooling effectiveness and minimizing the aerodynamic penalty produced by film cooling. The effect of varying the coolant flow parameters on the film cooling effectiveness and the aerodynamic loss is analyzed using an optimization method and three-dimensional steady CFD simulations. The optimization process consists of a genetic algorithm and a response surface approximation of the artificial neural network type that provides low-fidelity predictions of the objective function. The CFD simulations are performed using the commercial software CFX. The numerical predictions of the aero-thermal performance are validated against a well-established experimental database.
Optimization of the Alkaline Pretreatment of Rice Straw for Enhanced Methane Yield
Song, Zilin; Yang, Gaihe; Han, Xinhui; Feng, Yongzhong; Ren, Guangxin
2013-01-01
The lime pretreatment process for rice straw was optimized to enhance biodegradation performance and increase biogas yield. The optimization was implemented using response surface methodology (RSM) and a Box-Behnken experimental design. The effects on biodegradation, as well as the interactive effects of Ca(OH)2 concentration, pretreatment time, and inoculum amount on biogas improvement, were investigated. Rice straw compounds such as lignin, cellulose, and hemicellulose were significantly degraded with increasing Ca(OH)2 concentration. The optimal conditions for the use of pretreated rice straw in anaerobic digestion were 9.81% Ca(OH)2 (w/w TS), 5.89 d treatment time, and 45.12% inoculum content, which resulted in a methane yield of 225.3 mL/g VS. A determination coefficient (R²) of 0.96 was obtained, indicating that the model used to predict the anaerobic digestion process fits the experimental data well. PMID:23509824
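For concreteness, the sketch below fits the usual second-order RSM model to a hypothetical three-factor Box-Behnken design by least squares and computes R²; the coded runs and responses are invented, not the paper's data.

```python
import numpy as np

# Coded Box-Behnken design for 3 factors: 12 edge points + 3 center points.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
# Hypothetical responses (e.g., methane yield, mL/g VS).
y = np.array([180, 195, 190, 210, 175, 200, 185, 205,
              178, 198, 188, 208, 225, 223, 226], dtype=float)

def design_matrix(X):
    # Columns for the full quadratic model:
    # 1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
pred = design_matrix(X) @ beta
R2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```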
The optimization of total laboratory automation by simulation of a pull-strategy.
Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo
2015-01-01
Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. First, value stream mapping (VSM) is used to identify non-value-added time. Subsequently, batch-processing control and parallel scheduling rules are devised, and a pull mechanism based on constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to tune the design parameters so as to keep inventory small and shorten the average cycle time (CT). For empirical illustration, the approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to reduced patient waiting times and an increased service level.
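A toy rendering of the CONWIP pull mechanism: a fixed card count caps work-in-process, and a waiting specimen enters the line only when a finished specimen frees a card. All rates below are arbitrary illustrative choices.

```python
import collections
import random

CARDS = 5                            # fixed WIP cap (number of CONWIP cards)
queue = collections.deque()          # specimens waiting for a free card
line = []                            # specimens currently in process

def tick():
    # Specimens finish stochastically, releasing their cards.
    for s in [s for s in line if random.random() < 0.3]:
        line.remove(s)
    # Pull waiting specimens into the line while cards are free.
    while queue and len(line) < CARDS:
        line.append(queue.popleft())

for t in range(100):
    if random.random() < 0.5:        # a new specimen arrives
        queue.append(t)
    tick()
```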
Venkateswarulu, T C; Prabhakar, K Vidya; Kumar, R Bharath; Krupanidhi, S
2017-07-01
Modeling and optimization were performed to enhance the production of lactase through submerged fermentation by Bacillus subtilis VUVD001 using artificial neural networks (ANN) and response surface methodology (RSM). The effects of the process parameters, namely temperature (°C), pH, and incubation time (h), and their combined interactions on production were studied in shake-flask culture using a Box-Behnken design. The model was validated by conducting an experiment at the optimized process variables, which gave a maximum lactase activity of 91.32 U/ml, a 3.48-fold improvement in production over that obtained under traditional conditions. This study clearly shows that both the RSM and ANN models provided the desired predictions; however, the ANN model (R² = 0.99456) gave a better prediction for lactase production than RSM (R² = 0.9496).
Enabling Incremental Query Re-Optimization.
Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau
2016-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
NASA Astrophysics Data System (ADS)
Fox, Matthew D.
Advanced automotive technology assessment and powertrain design are increasingly performed through modeling, simulation, and optimization. But technology assessments usually target many competing criteria, making any individual optimization challenging and arbitrary. Further, independent design simulations and optimizations take considerable time to execute, and design constraints and objectives change throughout the design process; changes in design considerations usually require re-processing of simulations and more time. In this thesis, these challenges are confronted through CSU's participation in the EcoCAR2 hybrid vehicle design competition. The complexity of the competition's design objectives motivated the development of a decision support system tool to aid multi-criteria decision making across technologies and to perform powertrain optimization. To make the decision support system interactive, and to bypass the problem of long simulation times, a new approach was taken. The result of this research is CSU's architecture selection and component sizing, which optimizes a composite objective function representing the competition score. The selected architecture is an electric vehicle with an onboard range-extending hydrogen fuel cell system. The vehicle has a 145 kW traction motor, 18.9 kWh of lithium-ion battery, a 15 kW fuel cell system, and 5 kg of hydrogen storage capacity. Finally, a control strategy was developed that improves the vehicle's performance throughout the driving range under variable driving conditions. In conclusion, the design process used in this research is reviewed and evaluated against other common design methodologies. I conclude, through the highlighted case studies, that the approach is more comprehensive than other popular design methodologies and is likely to lead to a higher-quality product. The upfront modeling work and decision support system formulation pay off in superior and timely knowledge transfer and more informed design decisions. This hypothesis is supported by the three case studies examined in this thesis.
Application of genetic algorithm in integrated setup planning and operation sequencing
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen
2011-01-01
Process planning is an essential component linking design and manufacturing. Setup planning and operation sequencing are two main tasks in process planning, and many studies have solved these two problems separately. Given that the two functions are complementary, it is necessary to integrate them more tightly so that the performance of a manufacturing system can be improved economically and competitively. This paper presents a generative system and a genetic algorithm (GA) approach to process-plan a given part. The proposed approach and optimization methodology analyze the TADs (tool approach directions), the tolerance relations between features, and the feature precedence relations to generate all possible setups and operations using a workshop resource database. Based on these technological constraints, the GA, which adopts a feature-based representation, optimizes the setup plan and the sequence of operations using cost indices. A case study shows that the developed system can generate satisfactory results, optimizing setup planning and operation sequencing simultaneously under feasible conditions.
Genetic evolutionary taboo search for optimal marker placement in infrared patient setup
NASA Astrophysics Data System (ADS)
Riboldi, M.; Baroni, G.; Spadea, M. F.; Tagaste, B.; Garibaldi, C.; Cambria, R.; Orecchia, R.; Pedotti, A.
2007-09-01
In infrared patient setup, adequate selection of the external fiducial configuration is required to compensate for inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients and compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration when TRE is minimized for organs at risk (OARs) were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.
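The taboo-search half of such a scheme can be sketched as follows: swap moves exchange one selected marker for an unselected candidate, recently reversed swaps are forbidden for a fixed tenure, and a deliberately crude spread-based proxy stands in for the TRE estimate. Everything below is illustrative, not the GETS algorithm itself.

```python
import itertools
import random

def tabu_search(n_candidates, cost, n_markers=4, iters=200, tenure=10):
    """Minimal tabu search over marker subsets; cost maps a frozenset of
    candidate indices to a scalar to minimize."""
    current = frozenset(random.sample(range(n_candidates), n_markers))
    best, tabu = current, {}
    for it in range(iters):
        moves = []
        for out in current:                        # swap one marker out...
            for inc in set(range(n_candidates)) - current:
                if tabu.get((out, inc), -1) >= it:
                    continue                       # ...unless the move is tabu
                moves.append((cost(current - {out} | {inc}), out, inc))
        if not moves:
            break
        c, out, inc = min(moves)
        current = current - {out} | {inc}
        tabu[(inc, out)] = it + tenure             # forbid the reverse swap
        if c < cost(best):
            best = current
    return best

# Toy cost proxy: configurations that spread markers far apart score better.
candidates = [(random.random(), random.random(), random.random())
              for _ in range(20)]
def spread_cost(cfg):
    pts = [candidates[i] for i in cfg]
    return -sum(sum((a - b) ** 2 for a, b in zip(p, q))
                for p, q in itertools.combinations(pts, 2))

best = tabu_search(len(candidates), spread_cost)
```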
DOE Office of Scientific and Technical Information (OSTI.GOV)
LAH, J; Shin, D; Manger, R
Purpose: To show how Six Sigma DMAIC (Define-Measure-Analyze-Improve-Control) can be used to improve and optimize the efficiency of the patient-specific QA process by designing site-specific range tolerances. Methods: The Six Sigma tools (process flow diagram, cause and effect, capability analysis, Pareto chart, and control chart) were utilized to determine the steps that need focus for improving the patient-specific QA process. The patient-specific range QA plans were selected according to 7 treatment site groups, a total of 1437 cases. The process capability index Cpm was used to guide the tolerance design of the patient site-specific range. We also analyzed the financial impact of this project. Results: Our results suggested that the patient range measurements were not capable at the current tolerance level of ±1 mm in clinical proton plans. Optimized tolerances were calculated for the treatment sites. Control charts for the patient QA time were constructed to compare QA time before and after the new tolerances were implemented. Overall processing time decreased by 24.3% after establishing the new site-specific range tolerances. QA failures across the whole proton therapy process would lead to up to a 46% increase in total cost; this result can also predict how costs are affected by adopting the tolerance design. Conclusion: We often believe that the quality and performance of proton therapy can easily be improved by merely tightening some or all of its tolerance requirements. This can become costly, however, and it is not necessarily a guarantee of better performance. Tolerance design is not a task to be undertaken without careful thought. Six Sigma DMAIC can be used to improve the QA process by setting optimized tolerances. When tolerance design is optimized, quality is reasonably balanced with time and cost demands.
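For reference, the Taguchi-style capability index used here is conventionally defined as

```latex
C_{pm} = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sqrt{\sigma^{2} + (\mu - T)^{2}}}
```

where USL and LSL are the upper and lower tolerance limits, T is the target value, and μ and σ are the process mean and standard deviation. The squared off-target term in the denominator is what lets Cpm penalize a process that is precise but biased away from the target.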
DOE Office of Scientific and Technical Information (OSTI.GOV)
Law, J.D.; Tillotson, R.D.; Todd, T.A.
2002-09-19
The Caustic-Side Solvent Extraction (CSSX) process has been selected for the separation of cesium from Savannah River Site high-level waste. The solvent composition used in the CSSX process was recently optimized so that the solvent is no longer supersaturated with respect to the calixarene crown ether extractant. Hydraulic performance and mass transfer efficiency testing of a single stage of 5.5-cm ORNL-designed centrifugal contactor has been performed for the CSSX process with the optimized solvent. Maximum throughputs of the 5.5-cm centrifugal contactor, as a function of contactor rotor speed, have been measured for the extraction, scrub, strip, and wash sections of the CSSX flowsheet at the baseline organic/aqueous flow ratios (O/A) of the process, as well as at O/A's 20% higher and 20% lower than the baseline. Maximum throughputs are comparable to the design throughput of the contactor, as well as with throughputs obtained previously in a 5-cm centrifugal contactor with the non-optimized CSSX solvent formulation. The 20% variation in O/A had minimal effect on contactor throughput. Additionally, mass transfer efficiencies have been determined for the extraction and strip sections of the flowsheet. Efficiencies were lower than the process goal of greater than or equal to 80%, ranging from 72 to 75% for the extraction section and from 36 to 60% in the strip section. Increasing the mixing intensity and/or the solution level in the mixing zone of the centrifugal contactor (residence time) could potentially increase efficiencies. Several methods are available to accomplish this including (1) increasing the size of the opening in the bottom of the rotor, resulting in a contactor which is partially pumping instead of fully pumping, (2) decreasing the number of vanes in the contactor, (3) increasing the vane height, or (4) adding vanes on the rotor and baffles on the housing of the contactor. The low efficiency results obtained stress the importance of proper design of a centrifugal contactor for use in the CSSX process. A prototype of any centrifugal contactors designed for future pilot-scale or full-scale processing should be thoroughly tested prior to implementation.
A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)
2010-01-01
…processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum-variance track-before-detect [216].
Parameter learning for performance adaptation
NASA Technical Reports Server (NTRS)
Peek, Mark D.; Antsaklis, Panos J.
1990-01-01
A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
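For reference, a compact version of the Hooke and Jeeves pattern search is sketched below: coordinate exploration around the current base point, a pattern move along any improving direction, and mesh refinement on failure. Parameter values and the test function are illustrative.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Minimal Hooke-Jeeves pattern search minimizing f from x0."""
    def explore(base, s):
        # Try +/- s along each coordinate, keeping any improvement.
        x = list(base)
        for i in range(len(x)):
            for d in (+s, -s):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    while step > tol:
        x_new = explore(x, step)
        if f(x_new) < f(x):
            # Pattern move: jump along the improving direction, re-explore.
            pattern = [2 * a - b for a, b in zip(x_new, x)]
            x_try = explore(pattern, step)
            x = x_try if f(x_try) < f(x_new) else x_new
        else:
            step *= shrink          # no improvement: refine the mesh
    return x

x_opt = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```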
Conceptual design and multidisciplinary optimization of in-plane morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.
2006-03-01
In this paper, a topology optimization methodology for the synthesis of a distributed actuation system, with specific application to morphing air vehicles, is discussed. The main emphasis is placed on the topology optimization problem formulations and on the development of computational modeling concepts. For demonstration purposes, an in-plane morphing wing model is presented. The analysis model is developed to meet several important criteria: it must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members, while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that the proposed modeling concept meets these criteria and may be suitable for the purpose. Topology optimization is performed on a ground structure based on this modeling concept, with design variables that control the system configuration; that is, the state of each element in the model is a design variable to be determined through the optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of the distributed actuators, represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulations themselves. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.
An optimal open/closed-loop control method with application to a pre-stressed thin duralumin plate
NASA Astrophysics Data System (ADS)
Nadimpalli, Sruthi Raju
The suppression of the excessive vibrations of a pre-stressed duralumin plate by a combination of open-loop and closed-loop controls, also known as open/closed-loop control, is studied in this thesis. The two primary steps involved in this process are: Step (I), assuming the closed-loop control law is proportional, obtain the optimal open-loop control by direct minimization, via calculus of variations, of a performance measure consisting of the energy at terminal time and a penalty on the open-loop control force; if the performance measure also penalizes the closed-loop control effort, a Fourier-based method is utilized. Step (II), minimize the energy at terminal time numerically to obtain the optimal feedback gains. The optimal closed-loop control gains obtained are used to describe the displacement and velocity of the open-loop, closed-loop, and open/closed-loop controlled duralumin plate.
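A plausible generic form of such a performance measure (the thesis's exact functional is not reproduced here, so the following is an assumption-laden sketch) is

```latex
J = E(t_f) + \int_{0}^{t_f} \left( r_o\, u_o^{2}(t) + r_c\, u_c^{2}(t) \right) \mathrm{d}t,
\qquad u_c(t) = -G\,\dot{w}(t),
```

where E(t_f) is the plate energy at terminal time, u_o is the open-loop control force, u_c is the closed-loop effort under the proportional-feedback assumption with gain G acting on the velocity, and r_o, r_c are penalty weights; the r_c term is present only when closed-loop effort is penalized, the case handled by the Fourier-based method.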
Costa, Filippo; Monorchio, Agostino; Manara, Giuliano
2016-01-01
A methodology for obtaining wideband scattering diffusion from periodic artificial surfaces is presented. The proposed surfaces scatter incident energy towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry, arranged in a periodic lattice whose repetition period is larger than one wavelength, which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of the FSS geometry is performed with a genetic algorithm in conjunction with a periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-to-thickness ratio and removes the need for a high-resolution printing process. PMID:27181841
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
This paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
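The core surrogate step can be sketched with SciPy's RBF interpolator: fit an RBF model to the points evaluated so far, then cheaply search the model inside a trust region to propose the next expensive evaluation. This is a schematic of the general RBF trust-region idea, not CONORBIT itself, which additionally carries RBF models (with margins) for each constraint.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Design points evaluated so far with the "expensive" objective
# (a cheap quadratic stands in for it here).
X = rng.uniform(-2, 2, size=(15, 2))
y = np.array([(x[0] - 1) ** 2 + x[1] ** 2 for x in X])

# Fit an interpolating RBF surrogate to the evaluated points.
surrogate = RBFInterpolator(X, y, kernel="cubic")

# Cheap inner search of the surrogate within a trust region around the
# best point found so far; the minimizer becomes the next expensive eval.
center, radius = X[np.argmin(y)], 0.5
cand = center + radius * rng.uniform(-1, 1, size=(500, 2))
x_next = cand[np.argmin(surrogate(cand))]
```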