Data processing and optimization system to study prospective interstate power interconnections
NASA Astrophysics Data System (ADS)
Podkovalnikov, Sergei; Trofimov, Ivan; Trofimov, Leonid
2018-01-01
The paper presents a data processing and optimization system for studying and making rational decisions on the formation of interstate electric power interconnections, with the aim of increasing the effectiveness of their operation and expansion. The technologies for building and integrating the system, including an object-oriented database and the ORIRES predictive mathematical model for optimizing the expansion of electric power systems, are described. The technology for collecting and pre-processing unstructured data from various sources, loading it into the object-oriented database, and processing and presenting the information in a GIS is also described. One approach to graphical visualization of the optimization model results is illustrated with the example of calculating an expansion option for the South Korean electric power grid.
Instrumentation for optimizing an underground coal-gasification process
NASA Astrophysics Data System (ADS)
Seabaugh, W.; Zielinski, R. E.
1982-06-01
While the United States has a coal resource base of 6.4 trillion tons, only seven percent is presently recoverable by mining. The process of in-situ gasification can recover another twenty-eight percent of this vast resource; however, viable technology must be developed for effective in-situ recovery. The key to this technology is a system that can optimize and control the process in real time. An instrumentation system is described that optimizes the composition of the injection gas, controls the in-situ process, and conditions the product gas for maximum utilization. The key elements of this system are Monsanto PRISM Systems, a real-time analytical system, and a real-time data acquisition and control system. This system provides for complete automation of the process but can easily be overridden by manual control. The use of this cost-effective system can provide process optimization and is an effective element in developing a viable in-situ technology.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors decide on treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for treatment of femoral neck fracture oriented to clinical application. The system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, the fracture surface, and the fixation screw and its position, as well as input and transmission of model parameters. The finite element mechanical analysis module included mesh generation, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing operation. The post-processing module included extraction and display of batch processing results, image generation of batch processing operation, optimization program operation, and display of the optimization results. The system carried out the complete workflow from input of a specific patient's real fracture parameters to output of the optimal fixation plan according to the optimization rules, which demonstrated its effectiveness. The system also had a friendly interface and simple operation, and its functionality can be extended quickly by modifying individual modules.
NASA Technical Reports Server (NTRS)
Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.
1974-01-01
A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.
Quantum optimal control with automatic differentiation using graphics processors
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Chakram, Srivatsan; Naik, Ravi; Groszkowski, Peter; Koch, Jens; Schuster, David
We implement quantum optimal control based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them into the optimization process with ease. We will describe efficient techniques to optimally control weakly anharmonic systems that are commonly encountered in circuit QED, including coupled superconducting transmon qubits and multi-cavity circuit QED systems. These systems allow for a rich variety of control schemes that quantum optimal control is well suited to explore.
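For illustration, a minimal sketch of gradient-based pulse optimization for a single-qubit X gate. It uses SciPy's L-BFGS-B with numerical gradients as a stand-in for the GPU-accelerated automatic differentiation described above; the Hamiltonian, step count, and step size are assumed toy values, not the circuit-QED systems of the abstract.

```python
# Minimal gradient-based pulse optimization for a single-qubit X gate.
# Stand-in for the autodiff/GPU approach: SciPy's L-BFGS-B with numerical
# gradients replaces automatic differentiation.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U_target = sx                      # target gate: X
n_steps, dt = 40, 0.05             # piecewise-constant control grid (assumed)
H0 = 0.0 * sz                      # drift Hamiltonian (zero detuning, illustrative)

def propagator(amps):
    U = np.eye(2, dtype=complex)
    for a in amps:                 # time-ordered product of step propagators
        U = expm(-1j * dt * (H0 + a * sx)) @ U
    return U

def infidelity(amps):
    U = propagator(amps)
    overlap = np.trace(U_target.conj().T @ U) / 2.0
    return 1.0 - abs(overlap) ** 2  # gate infidelity in [0, 1]

x0 = 0.1 * np.random.randn(n_steps)
res = minimize(infidelity, x0, method="L-BFGS-B")
print("final infidelity:", res.fun)
```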
Optimal teaching strategy in periodic impulsive knowledge dissemination system.
Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang; Liu, Jian-Guo
2017-01-01
Accurately describing the knowledge dissemination process is significant for enhancing the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to reproduce the knowledge dissemination process. Meanwhile, we put forward learning effectiveness, a trade-off between the benefits and costs of knowledge dissemination, as the objective function. Further, we investigate the optimal teaching strategy that maximizes learning effectiveness, in order to obtain the optimal effect of knowledge dissemination under the influence of teaching activities. We solve this dynamic optimization problem using optimal control theory and obtain the optimization system. Finally, we solve this system numerically in several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be applied widely to optimization problems in personalized education and is beneficial for enhancing the effect of knowledge dissemination.
A Data-Driven Solution for Performance Improvement
NASA Technical Reports Server (NTRS)
2002-01-01
Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user-friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murav’ev, V. P., E-mail: murval@mail.ru; Kochetkov, A. V.; Glazova, E. G.
An algorithm and software for calculating the optimal operating regimes of the process water supply system at the Kalininskaya NPP are described. The parameters of the optimal regimes are determined for time varying meteorological conditions and condensation loads of the NPP. The optimal flow of the cooling water in the turbines is determined computationally; a regime map with the data on the optimal water consumption distribution between the coolers and displaying the regimes with an admissible heat load on the natural cooling lakes is composed. Optimizing the cooling system for a 4000-MW NPP will make it possible to conserve at least 155,000 MW · h of electricity per year. The procedure developed can be used to optimize the process water supply systems of nuclear and thermal power plants.
NASA Astrophysics Data System (ADS)
Nuh, M. Z.; Nasir, N. F.
2017-08-01
Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks such as vegetable oils and animal fats. Biodiesel production is a complex process that needs systematic design and optimization. However, no case study has applied process systems engineering (PSE) elements such as superstructure optimization of the batch process, which involves complex problems and requires mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend system complexity. This study aims to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for the plant components for cases A, B, and C using published kinetic data. Secondly, it performs an economic analysis of biodiesel production, focusing on heterogeneous catalysts. Finally, it develops the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer nonlinear model is solved and the economic analysis estimated using MATLAB software. With the objective function of minimizing annual production cost, the optimization of the batch process for case C yields 23.2587 million USD. Overall, the application of PSE has streamlined modelling, design, and cost estimation, and helps resolve the complexity of batch biodiesel production and processing.
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
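The two-phase scheme can be illustrated with a small backward dynamic-programming sketch over a discretized product state; the transformation and cost functions below are assumed toy stand-ins, not the authors' simplified sheet-metal transformation functions.

```python
# Backward dynamic programming over a discretized product-state grid,
# illustrating phase (1): tabulate, for each process in the chain, the optimal
# control for every possible incoming state. Phase (2) then looks up controls
# at runtime. State transition and cost functions are toy assumptions.
import numpy as np

states = np.linspace(0.0, 1.0, 51)       # discretized product property
controls = np.linspace(-0.2, 0.2, 21)    # admissible control values per process
n_processes = 3
target = 0.8                              # quality requirement on the final state

def transform(s, u, k):
    # toy process model: each stage shifts the property and adds the control
    return np.clip(s + 0.1 * (k + 1) + u, 0.0, 1.0)

V = (states - target) ** 2                # terminal cost: deviation from target
policy = []
for k in reversed(range(n_processes)):
    V_new, best_u = np.empty_like(V), np.empty_like(V)
    for i, s in enumerate(states):
        costs = [u**2 + np.interp(transform(s, u, k), states, V) for u in controls]
        j = int(np.argmin(costs))
        V_new[i], best_u[i] = costs[j], controls[j]
    V, policy = V_new, [best_u] + policy

# phase (2): at runtime, look up the control for the detected incoming state
s = 0.15
for k in range(n_processes):
    u = np.interp(s, states, policy[k])
    s = transform(s, u, k)
print("final product state:", s)
```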
Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems.
Scholze, Sebastian; Barata, Jose; Stokic, Dragan
2017-02-24
Highly flexible manufacturing systems require continuous run-time (self-) optimization of processes with respect to diverse parameters, e.g., efficiency, availability, and energy consumption. A promising approach for achieving (self-) optimization in manufacturing systems is the use of a context-sensitivity approach based on data streamed from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information to achieve context sensitivity. Cyber-physical systems can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-) optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated in manufacturing systems and processes. A generic approach for context sensitivity, based on self-learning algorithms, is proposed, aimed at various manufacturing systems. The new solution encompasses a run-time context extractor and an optimizer. Based on the self-learning module, both the context extractor and the optimizer continuously learn and improve their performance. The solution follows Service Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes.
Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation
Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk
2015-01-01
A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important for protecting against current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings to shorten the breaking time. To validate the optimization results, computational results were compared to physical tests with a high-speed camera. The opening time of the optimized mechanism decreased by 2.3 ms, as confirmed by experiment. The switch mechanism design process, including the contact-latch system, can be improved using this procedure. PMID:25918740
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalysts developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Accomplishments are reported for Tasks 2 through 5.
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
NASA Astrophysics Data System (ADS)
Goienetxea Uriarte, A.; Ruiz Zúñiga, E.; Urenda Moris, M.; Ng, A. H. C.
2015-05-01
Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation to improve stochastic logistic processes is not common among healthcare providers. Improving healthcare systems involves dealing with trade-offs among optimal solutions that take multiple variables and objectives into consideration. Complementing DES with Multi-Objective Optimization (SMO) creates a superior basis for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints, and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions which considerably reduce the length of stay and waiting times for ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor for the success of this project has been the involvement and engagement of the stakeholders during the whole process.
Microfiltration of thin stillage: Process simulation and economic analyses
USDA-ARS's Scientific Manuscript database
In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)
2000-01-01
A new MDO method, BLISS, and two variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization problem into several subtask optimizations, which may be executed concurrently, and a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited to exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including local sensitivity analysis, local optimization, and response surface construction and updating, are ideally suited for concurrent processing. Algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
Aircraft adaptive learning control
NASA Technical Reports Server (NTRS)
Lee, P. S. T.; Vanlandingham, H. F.
1979-01-01
The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems and the control of randomly sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly sampled linear process model with additive process and measurement noise is developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalyst developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for tasks 2 through 5. 232 figs., 19 tabs.
Intensity modulated neutron radiotherapy optimization by photon proxy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Michael; Hammoud, Ahmad; Bossenberger, Todd
2012-08-15
Purpose: Introducing intensity modulation into neutron radiotherapy (IMNRT) planning has the potential to mitigate some normal tissue complications seen in past neutron trials. While the hardware to deliver IMNRT plans has been in use for several years, until recently the IMNRT planning process has been cumbersome and of lower fidelity than conventional photon plans. Our in-house planning system used to calculate neutron therapy plans allows beam weight optimization of forward planned segments, but does not provide inverse optimization capabilities. Commercial treatment planning systems provide inverse optimization capabilities, but currently cannot model our neutron beam. Methods: We have developed a methodology and software suite to make use of the robust optimization in our commercial planning system while still using our in-house planning system to calculate final neutron dose distributions. Optimized multileaf collimator (MLC) leaf positions for segments designed in the commercial system using a 4 MV photon proxy beam are translated into static neutron ports that can be represented within our in-house treatment planning system. The true neutron dose distribution is calculated in the in-house system and then exported back through the MATLAB software into the commercial treatment planning system for evaluation. Results: The planning process produces optimized IMNRT plans that reduce dose to normal tissue structures as compared to 3D conformal plans using static MLC apertures. The process involves standard planning techniques using a commercially available treatment planning system, and is not significantly more complex than conventional IMRT planning. Using a photon proxy in a commercial optimization algorithm produces IMNRT plans that are more conformal than those previously designed at our center and take much less time to create. Conclusions: The planning process presented here allows for the optimization of IMNRT plans by a commercial treatment planning optimization algorithm, potentially allowing IMNRT to achieve similar conformality in treatment as photon IMRT. The only remaining requirements for the delivery of very highly modulated neutron treatments are incremental improvements upon already implemented hardware systems that should be readily achievable.
2014-04-30
cost to acquire systems as design maturity could be verified incrementally as the system was developed vice waiting for specific large “ big bang ...Framework (MBAF) be applied to simulate or optimize process variations on programs? LSI Roles and Responsibilities A review of the roles and...the model/process optimization process. It is the current intent that NAVAIR will use the model to run simulations on process changes in an attempt to
Ant colony system algorithm for the optimization of beer fermentation control.
Xiao, Jie; Zhou, Ze-Kui; Zhang, Guang-Xin
2004-12-01
Beer fermentation is a dynamic process that must be guided along a temperature profile to obtain the desired results. An ant colony system algorithm was applied to optimize the kinetic model of this process. For a fixed fermentation period, a series of different temperature profiles of the mixture were constructed, and an optimal one was finally chosen. The optimal temperature profile maximized the final ethanol production and minimized the byproduct concentration and spoilage risk. The satisfactory results were obtained without much computational effort.
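A minimal sketch of the idea, assuming a discretized temperature profile and a toy fitness in place of the fermentation kinetic model; the simplified pheromone update below stands in for the full ant colony system rules.

```python
# Simplified ant-colony construction of a fermentation temperature profile.
# The kinetic model is replaced by a toy fitness that rewards ethanol-like
# production and penalizes high temperatures (a spoilage proxy).
import numpy as np

rng = np.random.default_rng(0)
n_steps = 24                                           # time steps in the profile
temps = np.array([8.0, 10.0, 12.0, 14.0, 16.0])        # candidate temperatures, deg C
tau = np.ones((n_steps, len(temps)))                   # pheromone trails
n_ants, n_iter, rho, Q = 20, 50, 0.1, 1.0              # evaporation rate, deposit factor

def fitness(profile):
    # toy objective: production grows with temperature, spoilage risk grows faster
    production = np.sum(0.05 * profile)
    spoilage = np.sum(0.002 * (profile - 12.0) ** 2)
    return production - spoilage

best_profile, best_fit = None, -np.inf
for _ in range(n_iter):
    for _ in range(n_ants):
        p = tau / tau.sum(axis=1, keepdims=True)        # selection probabilities
        idx = [rng.choice(len(temps), p=p[t]) for t in range(n_steps)]
        profile = temps[idx]
        f = fitness(profile)
        if f > best_fit:
            best_profile, best_fit = profile.copy(), f
    tau *= (1.0 - rho)                                   # pheromone evaporation
    for t, j in enumerate(np.searchsorted(temps, best_profile)):
        tau[t, j] += Q * best_fit                        # reinforce the best profile

print("best profile:", best_profile, "fitness:", round(best_fit, 3))
```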
Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
Combining analysis with optimization at Langley Research Center. An evolutionary process
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1982-01-01
The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporation of a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program will facilitate this effort. The effort results in a software system that is controlled with a special-purpose language, communicates with a data management system, and is easily modified to add new programs and capabilities. A 337-degree-of-freedom finite element model is used to verify the accuracy of this system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.
This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.
NASA Astrophysics Data System (ADS)
Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.
2017-10-01
The article considers the important issue of designing distributed systems for the management of hydrolithosphere processes. Control actions on the hydrolithosphere processes are implemented by a set of extractive wells. The article shows how to determine the optimal number of extractive wells that provide a distributed control impact on the management object.
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues, weighting them in proportion to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
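As an illustration of what "optimal" denotes in this literature, a minimal sketch of the standard reliability-weighted (maximum-likelihood) combination rule for two Gaussian cues, here labelled f (form) and m (motion); the symbols are generic, not taken from the paper.

```latex
% Reliability-weighted (maximum-likelihood) cue combination: with Gaussian
% form and motion estimates \hat{s}_f, \hat{s}_m and reliabilities
% r_i = 1/\sigma_i^2, the optimal combined estimate and its variance are
\hat{s} = \frac{r_f\,\hat{s}_f + r_m\,\hat{s}_m}{r_f + r_m},
\qquad
\sigma^2_{\hat{s}} = \frac{1}{r_f + r_m} \le \min\bigl(\sigma_f^2, \sigma_m^2\bigr).
```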
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
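A minimal sketch of the idea under simple assumptions: service points are modeled as M/M/1 queues whose service rate scales with the allocated resource, and the decision-maker's utility is the negative expected time in system. The arrival rates, unit rate, and budget are illustrative, not from the paper.

```python
# Illustrative allocation of a fixed resource budget across independent
# service points modeled as M/M/1 queues; utility is a toy function of the
# expected time in system. Arrival rates, budget, and utility are assumptions.
from itertools import product

arrival = [4.0, 7.0, 2.0]          # requests per hour at three service points
unit_rate = 3.0                    # service rate contributed per resource unit
budget = 9                         # total resource units to allocate

def expected_wait(lam, units):
    mu = units * unit_rate
    if mu <= lam:
        return float("inf")        # unstable queue
    return 1.0 / (mu - lam)        # M/M/1 expected time in system

def utility(allocation):
    return -sum(expected_wait(l, n) for l, n in zip(arrival, allocation))

best = max(
    (a for a in product(range(1, budget + 1), repeat=3) if sum(a) == budget),
    key=utility,
)
print("optimal allocation:", best, "utility:", round(utility(best), 3))
```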
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integrated design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function used during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. Then a Wiener filter algorithm is adopted to process the simulated image, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figure of merit for the imaging system is not the best, the system provides image signals that are better suited for image processing. In conclusion, the integrated design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. In particular, when designing complex optical systems, this strategy offers clear advantages in simplifying the structure and reducing cost while obtaining high-resolution images, giving it promising prospects for industrial application.
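A minimal sketch of the image-processing half of such a design: frequency-domain Wiener deconvolution of a blurred, noisy observation, evaluated by MSE. The Gaussian blur and noise level are assumptions standing in for the designed low-resolution optical system, and the constant K is a hand-tuned noise-to-signal ratio rather than a fitted one.

```python
# Frequency-domain Wiener deconvolution with MSE as the evaluation criterion.
# A Gaussian blur plus additive noise stands in for the low-resolution optical
# system; the noise-to-signal ratio K is an assumed tuning constant.
import numpy as np

rng = np.random.default_rng(1)
n = 128
x, y = np.meshgrid(np.arange(n), np.arange(n))
scene = ((x // 16 + y // 16) % 2).astype(float)      # checkerboard test scene

# Gaussian point-spread function, centered and normalized
sigma = 2.0
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * sigma**2))
psf /= psf.sum()

H = np.fft.fft2(np.fft.ifftshift(psf))               # optical transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
observed = blurred + 0.02 * rng.standard_normal((n, n))

K = 1e-3                                              # assumed noise-to-signal ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + K)            # Wiener filter
restored = np.real(np.fft.ifft2(np.fft.fft2(observed) * wiener))

mse = lambda a, b: np.mean((a - b) ** 2)
print("MSE before:", round(mse(observed, scene), 5),
      "after:", round(mse(restored, scene), 5))
```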
An intelligent factory-wide optimal operation system for continuous production process
NASA Astrophysics Data System (ADS)
Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping
2016-03-01
In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.
Online POMDP Algorithms for Very Large Observation Spaces
2017-06-06
• Luo, Yuanfu, Haoyu Bai, ... and Wee Sun Lee. "Adaptive stochastic optimization: From sets to paths." In Advances in Neural Information Processing Systems, pp. 1585-1593, 2015.
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase shifting masks (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; however, they are designed only for nominal imaging parameters, without giving sufficient attention to process variations due to aberrations, defocus, and dose variation. The effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continues to shrink. On the other hand, lithography systems with larger NA (NA>0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. To tackle these problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. To improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) is exploited during the optimization procedure.
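A toy sketch of the steepest-descent mask update, using a scalar Gaussian aerial-image model and a sigmoid resist response in place of the paper's vector imaging model and process-variation terms; all constants are illustrative.

```python
# Toy steepest-descent mask optimization: a scalar Gaussian aerial-image model
# and a sigmoid resist response stand in for the vector imaging model.
import numpy as np
from scipy.ndimage import gaussian_filter

n = 64
target = np.zeros((n, n)); target[24:40, 16:48] = 1.0   # desired resist pattern
sigma, a, thr, step = 2.0, 25.0, 0.5, 2.0               # assumed model/optimizer constants

mask = target.copy()                                     # start from the target layout
for _ in range(200):
    aerial = gaussian_filter(mask, sigma)                # aerial image (toy model)
    resist = 1.0 / (1.0 + np.exp(-a * (aerial - thr)))   # sigmoid resist response
    err = resist - target                                # pattern error
    # chain rule; gaussian_filter is its own adjoint for a symmetric kernel
    grad = gaussian_filter(2.0 * err * a * resist * (1.0 - resist), sigma)
    mask = np.clip(mask - step * grad, 0.0, 1.0)         # keep transmittance in [0, 1]

print("final pattern error:", round(float(np.sum((resist - target) ** 2)), 3))
```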
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes
Dobos, László; Király, András; Abonyi, János
2012-01-01
Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
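A minimal sketch of the framework's core idea, assuming a single setpoint, Gaussian process noise, a linear profit term, and a fixed penalty per constraint violation; a plain grid search stands in for the direct search used in the paper.

```python
# Direct-search setpoint optimization on top of a Monte-Carlo simulation of a
# noisy controlled process: profit rises toward the quality constraint, while
# each constraint violation incurs a penalty. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
limit, penalty, noise_sd, n_mc = 1.0, 50.0, 0.05, 5000

def expected_performance(setpoint):
    # the controlled variable fluctuates around the setpoint
    realized = setpoint + noise_sd * rng.standard_normal(n_mc)
    profit = 10.0 * realized                      # profit grows with throughput
    violation = realized > limit                  # quality/safety constraint
    return np.mean(profit - penalty * violation)

candidates = np.linspace(0.80, 1.00, 41)
best = max(candidates, key=expected_performance)
print("optimal setpoint:", round(best, 3),
      "expected performance:", round(expected_performance(best), 3))
```

The resulting optimum sits below the constraint by a margin that balances lost profit against the expected penalty, which is the trade-off the framework formalizes.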
Optimality approaches to describe characteristic fluvial patterns on landscapes
Paik, Kyungrock; Kumar, Praveen
2010-01-01
Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257
Optimization in the systems engineering process
NASA Technical Reports Server (NTRS)
Lemmerman, L. A.
1984-01-01
The objective is to look at optimization as it applies to the design process at a large aircraft company. The design process at Lockheed-Georgia is described. Some examples of the impact that optimization has had on that process are given, and then some areas that must be considered if optimization is to be successful and supportive in the total design process are indicated. Optimization must continue to be sold and this selling is best done by consistent good performance. For this good performance to occur, the future approaches must be clearly thought out so that the optimization methods solve the problems that actually occur during design. The visibility of the design process must be maintained as further developments are proposed. Careful attention must be given to the management of data in the optimization process, both for technical reasons and for administrative purposes. Finally, to satisfy program needs, provisions must be included to supply data to support program decisions, and to communicate with design processes outside of the optimization process. If designers fail to adequately consider all of these needs, the future acceptance of optimization will be impeded.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, the industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple, easy-to-implement optimization technique that gives accurate results quickly.
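A minimal sketch of the two-stage system under stated assumptions: a small Extreme Learning Machine surrogate is fitted to hypothetical turning data (cutting speed, feed rate, surface roughness), and a basic Particle Swarm Optimization then searches the surrogate for the parameter pair minimizing the predicted roughness. The data, bounds, and swarm constants are illustrative, not from the paper.

```python
# ELM surrogate + PSO search over cutting speed and feed rate, mirroring the
# two-stage system described above. Training data, bounds, and the objective
# (surface roughness to be minimized) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

# assumed experimental data: [cutting speed m/min, feed mm/rev] -> roughness (um)
X = np.array([[100, 0.10], [100, 0.25], [150, 0.10], [150, 0.25],
              [200, 0.10], [200, 0.25], [250, 0.10], [250, 0.25]], float)
y = np.array([1.8, 3.4, 1.5, 3.0, 1.3, 2.8, 1.4, 3.1])

# Extreme Learning Machine: random hidden layer, least-squares output layer
lo, hi = X.min(axis=0), X.max(axis=0)
Xn = (X - lo) / (hi - lo)
n_hidden = 20
W = rng.standard_normal((2, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(Xn @ W + b)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

def predict(points):
    pn = (points - lo) / (hi - lo)
    return np.tanh(pn @ W + b) @ beta

# Particle Swarm Optimization over the surrogate
n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = lo + rng.random((n_particles, 2)) * (hi - lo)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), predict(pos)
gbest = pbest[np.argmin(pbest_val)].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = predict(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("optimal [speed, feed]:", np.round(gbest, 3),
      "predicted roughness:", round(float(predict(gbest[None, :])[0]), 3))
```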
A Hybrid Interval-Robust Optimization Model for Water Quality Management.
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-05-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.
NASA Astrophysics Data System (ADS)
Okazaki, Yuji; Uno, Takanori; Asai, Hideki
In this paper, we propose an optimization system with parallel processing for reducing electromagnetic interference (EMI) on an electronic control unit (ECU). We adopt simulated annealing (SA), a genetic algorithm (GA), and taboo search (TS) to seek optimal solutions, and a Spice-like circuit simulator to analyze the common-mode current. The proposed system can therefore determine adequate combinations of the parasitic inductance and capacitance values on the printed circuit board (PCB) efficiently and practically, to reduce EMI caused by the common-mode current. Finally, we apply the proposed system to an example circuit to verify its validity and efficiency.
Energy Center Structure Optimization by using Smart Technologies in Process Control System
NASA Astrophysics Data System (ADS)
Shilkina, Svetlana V.
2018-03-01
The article deals with the practical application of fuzzy logic methods in process control systems. The control object, an agroindustrial greenhouse complex that includes its own energy center, is considered. The paper analyzes the object's power supply options, taking into account connection to external power grids and/or installation of its own power generating equipment with various layouts. The main problem of the greenhouse facility's basic process is extremely uneven power consumption, which forces the purchase of redundant generating equipment that idles most of the time and negatively affects project profitability. Optimization of the energy center structure is largely based on how the object's process control system is constructed. To cut the investor's costs, it was proposed to optimize power consumption by building an energy-saving production control system based on a fuzzy logic controller. The developed algorithm for the automated process control system ensured more even electric and thermal energy consumption and allowed the energy center to be built with a smaller number of units owing to their more even utilization. As a result, it is shown how practical use of a fuzzy control system for microclimate parameters during operation leads to optimization of the agroindustrial complex's energy facility structure, which contributes to a significant reduction in construction and operating costs.
Zimmer, H D
1993-01-01
The paper discusses what underlies the assumption of modality-specific processing systems and representations. Starting from the information processing approach, relevant aspects of mental representations and their physiological realizations are discussed. Three different forms of modality-specific systems are then distinguished: as stimulus-specific processing, as specific informational formats, and as modular subsystems. In parallel, three kinds of analogue systems are differentiated: those holding an analogue relation, those having a specific informational format, and those forming a set of specific processing constraints. These different aspects of the assumption of modality-specific systems are demonstrated using the example of visual and spatial information processing. It is concluded that postulating information-specific systems is not a superfluous assumption but a necessary one, and more likely an inevitable consequence of optimizing stimulus processing.
NASA Astrophysics Data System (ADS)
Kryuchkov, D. I.; Zalazinsky, A. G.
2017-12-01
Mathematical models and a hybrid modeling system are developed to implement the experimental-calculation method for the engineering analysis and optimization of the plastic deformation of inhomogeneous materials, with the purpose of improving metal-forming processes and machines. The software solution integrates Abaqus/CAE with a subroutine for mathematical data processing that uses Python libraries and a knowledge base. Practical application of the software solution is exemplified by modeling the extrusion of a bimetallic billet. The results of the engineering analysis and optimization of the extrusion process are shown, with material damage being monitored.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
NASA Astrophysics Data System (ADS)
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; one of the machining processes that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to optimize the machining parameters so as to minimize processing time and environmental impact. This research developed a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, yielding optimal values of the decision variables cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
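A minimal weighted-sum sketch of the two objectives, assuming the standard turning-time relation and an environmental-impact term proportional to machining time (a stand-in for the Eco-indicator 99 conversion); workpiece dimensions, bounds, and weights are illustrative. In a fuller model, constraints such as surface roughness and tool life would keep the optimum away from the parameter bounds.

```python
# Weighted-sum sketch of the two objectives: machining time from the standard
# turning-time relation t = (pi * D * L) / (1000 * v * f), plus an environmental
# impact proxy (Eco-indicator points) assumed proportional to machining time.
# Workpiece dimensions, bounds, weights, and impact factor are assumptions.
import numpy as np
from scipy.optimize import minimize

D, L = 50.0, 120.0              # workpiece diameter and length, mm
eco_per_min = 0.8               # assumed Eco-indicator 99 points per machining minute
w_time, w_env = 0.5, 0.5        # weights of the two objectives

def machining_time(v, f):       # v: cutting speed (m/min), f: feed rate (mm/rev)
    return np.pi * D * L / (1000.0 * v * f)

def objective(x):
    t = machining_time(*x)
    return w_time * t + w_env * eco_per_min * t

bounds = [(100.0, 300.0), (0.05, 0.35)]             # assumed v and f ranges
res = minimize(objective, x0=[150.0, 0.1], bounds=bounds, method="L-BFGS-B")
v_opt, f_opt = res.x
print(f"v* = {v_opt:.1f} m/min, f* = {f_opt:.3f} mm/rev, "
      f"time = {machining_time(v_opt, f_opt):.2f} min")
```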
Kumar, Sameer
2011-01-01
It is increasingly recognized that hospital operation is an intricate system with limited resources and many interacting sources of both positive and negative feedback. The purpose of this study is to design a surgical delivery process for a county hospital in the U.S. in which patient flow through the surgical ward is optimized. System simulation modeling is used to address questions of capacity planning, throughput management, and interacting resources, which constitute the constantly changing complexity of designing a contemporary surgical delivery process in a hospital. The steps in building a system simulation model are demonstrated using the example of a county hospital in a small city in the US, which illustrates modular system simulation modeling of patient surgery process flows. The model development shows planners and designers how to build overall efficiencies into a healthcare facility through optimal bed capacity for peak flows of emergency and routine patients.
NASA Astrophysics Data System (ADS)
Welch, Kevin; Leonard, Jerry; Jones, Richard D.
2010-08-01
Increasingly stringent requirements on the performance of diffractive optical elements (DOEs) used in wafer scanner illumination systems are driving continuous improvements in their associated manufacturing processes. Specifically, these processes are designed to improve the output pattern uniformity of off-axis illumination systems to minimize degradation in the ultimate imaging performance of a lithographic tool. In this paper, we discuss performance improvements in both photolithographic patterning and RIE etching of fused silica diffractive optical structures. In summary, optimized photolithographic processes were developed to increase critical dimension uniformity and feature-size linearity across the substrate. The photoresist film thickness was also optimized for integration with an improved etch process. This etch process was itself optimized for pattern transfer fidelity, sidewall profile (wall angle, trench bottom flatness), and across-wafer etch depth uniformity. Improvements observed with these processes on idealized test structures (for ease of analysis) led to their implementation in product flows, with comparable increases in performance and yield on customer designs.
FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar
NASA Astrophysics Data System (ADS)
Azim, Noor ul; Jun, Wang
2016-11-01
Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as range, speed, and direction of a target in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms that are used to improve target detection and range resolution and to estimate the speed of a target. First, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into a hardware implementation on a Xilinx FPGA; the chosen FPGA is a Xilinx Virtex-6 (XC6LVX75T). For the hardware implementation, pipeline optimization is adopted, and other factors are also considered for resource optimization in the implementation process. The algorithms emphasized in this work for improving target detection, range resolution, and speed estimation are hardware-optimized pulse compression based on fast convolution processing, and pulse Doppler processing.
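As a rough, self-contained illustration of the pulse-compression step described above (not the authors' FPGA implementation), the Python sketch below applies a matched filter to a noisy LFM echo via FFT-based fast convolution; the sample rate, pulse width, bandwidth, delay, and noise level are arbitrary placeholder values.

```python
import numpy as np

# Illustrative LFM pulse compression via FFT-based fast convolution.
# All waveform parameters below are arbitrary assumptions, not taken from the paper.
fs = 10e6          # sample rate (Hz)
T = 100e-6         # pulse width (s)
B = 2e6            # chirp bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)        # baseband LFM pulse

# Received signal: a delayed, attenuated echo buried in complex noise.
rx = np.zeros(4096, dtype=complex)
delay = 1500
rx[delay:delay + len(chirp)] = 0.1 * chirp
rx += 0.05 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

# Matched filtering implemented as fast (frequency-domain) correlation with
# the transmitted pulse.
n = len(rx) + len(chirp) - 1
RX = np.fft.fft(rx, n)
S = np.fft.fft(chirp, n)
compressed = np.fft.ifft(RX * np.conj(S))

peak = int(np.argmax(np.abs(compressed)))
print("echo detected at sample", peak)   # the peak should appear near the true delay (1500)
```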
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.
2012-03-13
Enormous military and commercial interests exist in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. Design and development of a portable TE power system using a JP-8 combustor as a high temperature heat source, with optimal process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD™ with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL™ software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria. This paper discusses this simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for integrated use of high-performance combustor components, high performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also on scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-body missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small-body mission design process that previously required iteration among several different design processes.
An optimizing start-up strategy for a bio-methanator.
Sbarciog, Mihaela; Loccufier, Mia; Vande Wouwer, Alain
2012-05-01
This paper presents an optimizing start-up strategy for a bio-methanator. The goal of the control strategy is to maximize the outflow rate of methane in anaerobic digestion processes, which can be described by a two-population model. The methodology relies on a thorough analysis of the system dynamics and involves the solution of two optimization problems: steady-state optimization for determining the optimal operating point and transient optimization. The latter is a classical optimal control problem, which can be solved using the maximum principle of Pontryagin. The proposed control law is of the bang-bang type. The process is driven from an initial state to a small neighborhood of the optimal steady state by switching the manipulated variable (dilution rate) from the minimum to the maximum value at a certain time instant. Then the dilution rate is set to the optimal value and the system settles down in the optimal steady state. This control law ensures the convergence of the system to the optimal steady state and substantially increases its stability region. The region of attraction of the steady state corresponding to maximum production of methane is considerably enlarged. In some cases, which are related to the possibility of selecting the minimum dilution rate below a certain level, the stability region of the optimal steady state equals the interior of the state space. Aside from its efficiency, which is evaluated not only in terms of biogas production but also from the perspective of treatment of the organic load, the strategy is also characterized by simplicity, being thus appropriate for implementation in real-life systems. Another important advantage is its generality: this technique may be applied to any anaerobic digestion process for which the acidogenesis and methanogenesis are, respectively, characterized by Monod and Haldane kinetics.
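To make the bang-bang idea concrete, here is a minimal simulation sketch of a generic two-population anaerobic digestion model (Monod acidogenesis, Haldane methanogenesis) in which the dilution rate is held at its minimum, switched to its maximum at a chosen instant, and then set to a nominal "optimal" value. All parameter values, switching times, and the optimal dilution rate are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative two-population anaerobic digestion model (AM2-like structure):
# Monod kinetics for acidogenesis, Haldane kinetics for methanogenesis.
# All parameters, switching times and D_opt are placeholders for illustration.
m1, K1 = 1.2, 7.1              # acidogen max growth rate, half-saturation
m2, K2, Ki = 0.74, 9.28, 256.0 # methanogen Haldane parameters
k1, k2, k3, k6 = 42.1, 116.5, 268.0, 453.0
S1in, S2in = 30.0, 100.0       # inlet substrate concentrations

def mu1(S1): return m1 * S1 / (K1 + S1)                  # Monod
def mu2(S2): return m2 * S2 / (K2 + S2 + S2**2 / Ki)     # Haldane

def simulate(D_profile, x0, dt=0.01, T=60.0):
    X1, S1, X2, S2 = x0
    qch4 = []
    for k in range(int(T / dt)):
        D = D_profile(k * dt)
        dX1 = (mu1(S1) - D) * X1
        dS1 = D * (S1in - S1) - k1 * mu1(S1) * X1
        dX2 = (mu2(S2) - D) * X2
        dS2 = D * (S2in - S2) + k2 * mu1(S1) * X1 - k3 * mu2(S2) * X2
        X1 += dt * dX1; S1 += dt * dS1; X2 += dt * dX2; S2 += dt * dS2
        qch4.append(k6 * mu2(S2) * X2)   # methane outflow rate
    return np.array(qch4)

# Bang-bang start-up: D_min until t_switch, then D_max, then the optimal value.
D_min, D_max, D_opt, t_switch, t_settle = 0.05, 0.5, 0.35, 10.0, 25.0
def D_profile(t):
    if t < t_switch:
        return D_min
    if t < t_settle:
        return D_max
    return D_opt

qch4 = simulate(D_profile, x0=(0.5, 1.0, 0.8, 5.0))
print("final methane outflow rate:", round(float(qch4[-1]), 2))
```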
Electronic circuitry development in a micropyrotechnic system for micropropulsion applications
NASA Astrophysics Data System (ADS)
Puig-Vidal, Manuel; Lopez, Jaime; Miribel, Pere; Montane, Enric; Lopez-Villegas, Jose M.; Samitier, Josep; Rossi, Carole; Camps, Thierry; Dumonteuil, Maxime
2003-04-01
Electronic circuitry is proposed and implemented to optimize the ignition process and the robustness of a microthruster. The principle is based on the integration of propellant material within a micromachined system. The operational concept is simply based on the combustion of an energetic propellant stored in a micromachined chamber. Each thruster contains three parts (heater, chamber, nozzle). Due to their one-shot character, microthrusters are fabricated in a 2D array configuration. For the functioning of this kind of system, one critical point is the optimization of the ignition process as a function of the power schedule delivered by the electronic devices. Particular attention has been paid to the design and implementation of an electronic chip to control and optimize system ignition. The ignition process is triggered by electrical power delivered to a polysilicon resistor in contact with the propellant; the same resistor is used to sense the temperature of the propellant it contacts. The temperature of the microthruster node before ignition is monitored via the electronic circuitry. A pre-heating process before ignition appears to be a good methodology for optimizing the ignition process; pre-heating temperature and pre-heating time are critical parameters to be adjusted. Simulation and experimental results contribute substantially to improving the micropyrotechnic system. This paper discusses all these points.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
This presentation will examine process systems engineering R&D needs for application to advanced fossil energy (FE) systems and highlight ongoing research activities at the National Energy Technology Laboratory (NETL) under the auspices of a recently launched Collaboratory for Process & Dynamic Systems Research. The three current technology focus areas include: 1) High-fidelity systems with NETL's award-winning Advanced Process Engineering Co-Simulator (APECS) technology for integrating process simulation with computational fluid dynamics (CFD) and virtual engineering concepts, 2) Dynamic systems with R&D on plant-wide IGCC dynamic simulation, control, and real-time training applications, and 3) Systems optimization including large-scale process optimization, stochastic simulation for risk/uncertainty analysis, and cost estimation. Continued R&D aimed at these and other key process systems engineering models, methods, and tools will accelerate the development of advanced gasification-based FE systems and produce increasingly valuable outcomes for DOE and the Nation.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
Steadily increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, the utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD system CATIA is used, which is coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.
NASA Astrophysics Data System (ADS)
Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan
2016-12-01
In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data are utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a simulator which can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is based on a comparative analysis with the Artificial Neural Network (ANN) and Support Vector Machine (SVM), as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
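The coupling of a trained proxy simulator with PSO can be sketched roughly as below. The surrogate, cost model, and regulatory constraint are stand-ins (a simple placeholder function rather than a trained ELM or BIOPLUME III), intended only to show how the optimizer queries the surrogate inside the swarm loop.

```python
import numpy as np

# Rough sketch of surrogate-assisted PSO for a remediation-style design problem.
# surrogate() stands in for a trained proxy simulator (e.g., an ELM); the cost
# and the regulatory limit are illustrative placeholders, not the paper's model.
rng = np.random.default_rng(0)

def surrogate(q):
    # Placeholder: predicted residual contaminant level as a function of
    # pumping rates q (decreases as total pumping increases).
    return 50.0 * np.exp(-0.08 * q.sum())

def cost(q):
    penalty = 1e4 * max(0.0, surrogate(q) - 5.0)   # assumed regulatory limit: 5 units
    return q.sum() + penalty                        # pumping cost + violation penalty

dim, n_particles, iters = 4, 30, 200
lo, hi = 0.0, 20.0
x = rng.uniform(lo, hi, (n_particles, dim))         # pumping rates (decision variables)
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best pumping rates:", np.round(gbest, 2), "cost:", round(float(cost(gbest)), 2))
```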
Program Aids Analysis And Optimization Of Design
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1994-01-01
NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.
Model-Based PAT for Quality Management in Pharmaceuticals Freeze-Drying: State of the Art
Fissore, Davide
2017-01-01
Model-based process analytical technologies can be used for the in-line control and optimization of a pharmaceuticals freeze-drying process, as well as for the off-line design of the process, i.e., the identification of the optimal operating conditions. This paper aims at presenting the state of the art in this field, focusing, particularly, on three groups of systems, namely, those based on the temperature measurement (i.e., the soft sensor), on the chamber pressure measurement (i.e., the systems based on the test of pressure rise and of pressure decrease), and on the sublimation flux estimate (i.e., the tunable diode laser absorption spectroscopy and the valveless monitoring system). The application of these systems for in-line process optimization (e.g., using a model predictive control algorithm) and to get a true quality by design (e.g., through the off-line calculation of the design space of the process) is presented and discussed. PMID:28224123
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
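For reference, the expected quadratic cost minimized in such sampled-data LQ formulations typically takes the generic discrete-time form below; this is only the standard template, written here under the assumption of state/control weighting matrices, and does not reproduce the paper's exact weighting or information-update model.

```latex
\[
J = \mathbb{E}\!\left[\, x_N^{\top} Q_f\, x_N \;+\; \sum_{k=0}^{N-1}\left( x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k \right) \right],
\qquad
x_{k+1} = A(\Delta t_k)\, x_k + B(\Delta t_k)\, u_k ,
\]
```

where $\Delta t_k$ is the (stochastic) interval between information updates and $Q$, $R$, $Q_f$ are weighting matrices; the control law minimizing $J$ must account for the fact that $\Delta t_k$ is known only in a stochastic sense.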
Comprehensive optimization process of paranasal sinus radiography.
Saarakkala, S; Nironen, K; Hermunen, H; Aarnio, J; Heikkinen, J O
2009-04-01
The optimization of radiological examinations is important in order to reduce unnecessary patient radiation exposure. The aim of this study was to perform a comprehensive optimization process for paranasal sinus radiography at Mikkeli Central Hospital, Finland. Patients with suspicion of acute sinusitis were imaged with a Kodak computed radiography (CR) system (n=20) and with a Philips digital radiography (DR) system (n=30) using focus-detector distances (FDDs) of 110 cm, 150 cm, or 200 cm. Patients' radiation exposure was determined in terms of entrance surface dose and dose-area product. Furthermore, an anatomical phantom was used for the estimation of point doses inside the head. Clinical image quality was evaluated by an experienced radiologist, and physical image quality was evaluated from the digital radiography phantom. Patient doses were significantly lower and image quality better with the DR system compared to the CR system. The differences in patient dose and physical image quality were small with varying FDD. Clinical image quality of the DR system was lowest with an FDD of 200 cm. Further, imaging with an FDD of 150 cm was technically easier for the technologist to perform than with an FDD of 110 cm. After optimization, it was recommended that the DR system with an FDD of 150 cm should always be used at Mikkeli Central Hospital. We recommend this kind of comprehensive approach in all optimization processes of radiological examinations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) techno-economic studies that will supplement those that are presently being carried out by MITRE; (3) optimization of the most promising catalysts developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for Tasks 1, 3, 4, and 5.
A Hybrid Interval–Robust Optimization Model for Water Quality Management
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-01-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval–robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements. PMID:23922495
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Guerdal, Z.
1992-01-01
An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.
Optimization and Simulation of Plastic Injection Process using Genetic Algorithm and Moldflow
NASA Astrophysics Data System (ADS)
Martowibowo, Sigit Yoewono; Kaswadi, Agung
2017-03-01
The use of plastic-based products is continuously increasing. The increasing demand for thinner products, lower production costs, and higher product quality has triggered an increase in the number of research projects on plastic molding processes. An important branch of such research is focused on the mold cooling system. Conventional cooling systems are most widely used because they are easy to make using conventional machining processes; however, non-uniform cooling is considered one of their weaknesses. Apart from the conventional systems, there are also conformal cooling systems that are designed for faster and more uniform plastic mold cooling. In this study, a conformal cooling system is applied for the production of a bowl-shaped product made of PP AZ564. Optimization is conducted on the machine setup parameters, namely the melting temperature, injection pressure, holding pressure, and holding time. The genetic algorithm method and Moldflow were used to optimize the injection process parameters for a minimum cycle time. It is found that an optimum injection molding process could be obtained by setting the parameters to the following values: T_M = 180 °C, P_inj = 20 MPa, P_hold = 16 MPa, and t_hold = 8 s, with a cycle time of 14.11 s. Experiments using the conformal cooling system yielded an average cycle time of 14.19 s. The studied conformal cooling system yielded a volumetric shrinkage of 5.61% and a wall shear stress of 0.17 MPa. The difference between the cycle times obtained through simulations and experiments using the conformal cooling system was insignificant (below 1%). Thus, combining process parameter optimization and simulation using the genetic algorithm method with Moldflow can be considered valid.
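A bare-bones genetic algorithm over the four process parameters might look like the sketch below; the cycle-time model is a made-up placeholder (in the paper each candidate is evaluated through Moldflow simulation), and the parameter bounds are illustrative assumptions.

```python
import numpy as np

# Minimal real-coded GA over injection-molding parameters:
# [melt temperature (C), injection pressure (MPa), holding pressure (MPa), holding time (s)].
# cycle_time() and the bounds are illustrative placeholders; in the paper each
# candidate would be evaluated with a Moldflow simulation instead.
rng = np.random.default_rng(1)
lo = np.array([170.0, 10.0, 8.0, 4.0])
hi = np.array([220.0, 30.0, 20.0, 12.0])

def cycle_time(p):
    T, Pinj, Phold, thold = p
    cooling = 0.05 * (T - 150.0)          # hotter melt -> longer cooling (placeholder)
    filling = 40.0 / Pinj                 # higher pressure -> faster filling (placeholder)
    quality_penalty = 5.0 if Phold < 0.6 * Pinj else 0.0   # crude shrinkage guard
    return cooling + filling + thold + quality_penalty

def tournament(pop, fit, k=3):
    idx = rng.integers(0, len(pop), k)
    return pop[idx[np.argmin(fit[idx])]]

pop = rng.uniform(lo, hi, (40, 4))
for gen in range(100):
    fit = np.array([cycle_time(p) for p in pop])
    children = []
    while len(children) < len(pop):
        a, b = tournament(pop, fit), tournament(pop, fit)
        alpha = rng.random(4)
        child = alpha * a + (1 - alpha) * b            # blend crossover
        child += rng.normal(0, 0.02 * (hi - lo), 4)    # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.array(children)

best = pop[np.argmin([cycle_time(p) for p in pop])]
print("best parameters:", np.round(best, 2), "cycle time:", round(float(cycle_time(best)), 2))
```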
Microscopic heat engine and control of work fluctuations
NASA Astrophysics Data System (ADS)
Xiao, Gaoyang
In this thesis, we study novel behaviors of microscopic work and heat in systems involving few degrees of freedom. We first report that a quantum Carnot cycle should consist of two isothermal processes and two mechanical adiabatic processes if we want to maximize its heat-to-work conversion efficiency. We then find that the efficiency can be further optimized, and that it is generally system specific, lower than the Carnot efficiency, and dependent upon both temperatures of the cold and hot reservoirs. We then move on to the study of the fluctuations of microscopic work. We find a principle of minimal work fluctuations related to the Jarzynski equality. In brief, an adiabatic process without energy level crossing yields the minimal fluctuations in exponential work, given a thermally isolated system initially prepared at thermal equilibrium. Finally, we investigate an optimal control approach to suppress the work fluctuations and accelerate the adiabatic processes. This optimal control approach can apply to a wide variety of systems even when we do not have full knowledge of the systems.
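For context, the Jarzynski equality referenced in the abstract relates nonequilibrium work fluctuations to the equilibrium free-energy difference; this is the standard form, not a result specific to the thesis:

```latex
\[
\left\langle e^{-\beta W} \right\rangle = e^{-\beta\,\Delta F}, \qquad \beta = \frac{1}{k_B T},
\]
```

where the average is taken over realizations of the driving protocol starting from thermal equilibrium, $W$ is the work performed on the system, and $\Delta F$ is the free-energy difference between the initial and final equilibrium states.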
Optimal protocols for slowly driven quantum systems.
Zulkowski, Patrick R; DeWeese, Michael R
2015-09-01
The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.
Fuzzy logic control and optimization system
Lou, Xinsheng [West Hartford, CT]
2012-04-17
A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Car painting process scheduling with harmony search algorithm
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Maiyasya, A.; Purnamawati, S.; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.
2018-02-01
Automotive painting programs paint the car body using robotic power, making the production system more efficient. The production system becomes even more efficient if attention is paid to the scheduling of car orders, which is done here by considering the body shape of the cars to be painted. Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path. Scheduling problems arise when there are n jobs to be processed on a machine: it must be specified which job is done first and how jobs are allocated to the machine to obtain a scheduled production process. The Harmony Search algorithm is a music-based metaheuristic optimization algorithm, inspired by the way musicians search for a perfect harmony; this search for musical harmony is analogous to the search for the optimum in an optimization process. Based on the tests that have been performed, an optimal car sequence with minimum makespan value was obtained.
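A compact illustration of harmony search applied to permutation flow shop scheduling is sketched below. The processing-time matrix is random dummy data, and the permutation-based pitch adjustment (a swap move) is one common adaptation of the originally continuous algorithm, not necessarily the authors' exact variant.

```python
import numpy as np

# Harmony search for permutation flow shop scheduling (minimize makespan).
# Processing times are random dummy data; HMCR/PAR values are typical defaults.
rng = np.random.default_rng(7)
n_jobs, n_machines = 8, 3
proc = rng.integers(1, 10, (n_jobs, n_machines))   # processing-time matrix

def makespan(seq):
    # Rolling completion times on each machine for jobs processed in order 'seq'.
    c = np.zeros(n_machines)
    for j in seq:
        for m in range(n_machines):
            start = c[m] if m == 0 else max(c[m], c[m - 1])
            c[m] = start + proc[j, m]
    return c[-1]

HMS, HMCR, PAR, iters = 10, 0.9, 0.3, 2000
memory = [rng.permutation(n_jobs) for _ in range(HMS)]
fitness = [makespan(s) for s in memory]

for _ in range(iters):
    if rng.random() < HMCR:
        new = memory[rng.integers(HMS)].copy()      # memory consideration
        if rng.random() < PAR:                      # pitch adjustment: swap two jobs
            i, k = rng.choice(n_jobs, 2, replace=False)
            new[i], new[k] = new[k], new[i]
    else:
        new = rng.permutation(n_jobs)               # random consideration
    f = makespan(new)
    worst = int(np.argmax(fitness))
    if f < fitness[worst]:                          # replace the worst harmony
        memory[worst], fitness[worst] = new, f

best = memory[int(np.argmin(fitness))]
print("best sequence:", best.tolist(), "makespan:", float(min(fitness)))
```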
Image processing occupancy sensor
Brackney, Larry J.
2016-09-27
A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.
NASA Astrophysics Data System (ADS)
Göll, S.; Samsun, R. C.; Peters, R.
Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.
Hayashibe, Mitsuhiro; Shimoda, Shingo
2014-01-01
A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which incurs an ill-posed problem. The multijoint system involves complex interaction torques between joints. To produce optimal motion in terms of energy consumption, so-called cost-function-based optimization has been commonly used in previous works. Even if it is a fact that an optimal motor pattern is employed phenomenologically, there is no evidence for the existence of a physiological process similar to such a mathematical optimization in our central nervous system. In this study, we aim to find a more primitive computational mechanism with a modular configuration that realizes adaptability and optimality without prior knowledge of the system dynamics. We propose a novel motor control paradigm based on tacit learning with task-space feedback. The accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. The paradigm is applied to a vertical cyclic reaching task that involves complex interaction torques. We evaluated whether the proposed paradigm can learn how to optimize solutions with a 3-joint, planar biomechanical model. The results demonstrate that the proposed method was valid for acquiring motor synergy and resulted in energy-efficient solutions for different load conditions. The case with feedback control alone is largely affected by the interaction torques. In contrast, with tacit learning the trajectory is corrected over time toward optimal solutions. Energy-efficient solutions were obtained through the emergence of motor synergy. During learning, the contribution from the feedforward controller is augmented and the contribution from the feedback controller is significantly reduced, down to 12% for no load at the hand and 16% for a 0.5 kg load condition. The proposed paradigm can provide an optimization process in a redundant system with a dynamic-model-free and cost-function-free approach. PMID:24616695
Intelligent system of coordination and control for manufacturing
NASA Astrophysics Data System (ADS)
Ciortea, E. M.
2016-08-01
This paper aims to shape an intelligent monitoring and control system that optimizes the material and information flows of the company. The paper presents a model for an intelligent real-time tracking and control system. The production system proposed for simulation analysis provides the ability to track and control the process in real time. Using simulation models, the following can be understood: the influence of changes in the system structure, the influence of commands on the general condition of the manufacturing process, and the influence of conditions on the behavior of some system parameters. The practical contribution consists of tracking and real-time control of the technological process. It is based on modular systems analyzed using mathematical models, graphic-analytical sizing, configuration, optimization, and simulation.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performances. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using the approximation model (response surface). The Response Surface Method (RSM) is generally used to predict the system performance in engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of response surface, and optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN) which is considered as Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface by non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of marine system and ship structure (substructure of floating offshore wind turbine considering hydrodynamics performances and bulk carrier bottom stiffened panels considering structure performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
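As an illustration of the kind of ordering problem a genetic algorithm can attack here (not DeMAID's actual implementation), the sketch below evolves an execution order for coupled processes represented by a dependency matrix, minimizing the number of feedback couplings (a process depending on one scheduled after it); the dependency matrix is dummy data and the fitness is a simplified stand-in for cost, time, and iteration requirements.

```python
import numpy as np

# Toy GA for ordering coupled design processes to minimize feedback couplings.
# dep[i, j] = 1 means process i needs output from process j. The matrix is dummy data.
rng = np.random.default_rng(3)
n = 10
dep = (rng.random((n, n)) < 0.25).astype(int)
np.fill_diagonal(dep, 0)

def feedbacks(order):
    # Count couplings where a process depends on one scheduled after it.
    pos = {int(p): k for k, p in enumerate(order)}
    return sum(dep[i, j] for i in range(n) for j in range(n)
               if dep[i, j] and pos[j] > pos[i])

def crossover(a, b):
    # Order crossover (OX): copy a slice from parent a, fill the rest in b's order.
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = [-1] * n
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    k = 0
    for idx in range(n):
        if child[idx] == -1:
            child[idx] = fill[k]; k += 1
    return np.array(child)

pop = [rng.permutation(n) for _ in range(40)]
for gen in range(200):
    fit = np.array([feedbacks(p) for p in pop])
    elite = [pop[i] for i in np.argsort(fit)[:10]]          # keep the best orderings
    children = list(elite)
    while len(children) < len(pop):
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        c = crossover(a, b)
        if rng.random() < 0.3:                               # mutation: swap two processes
            i, j = rng.choice(n, 2, replace=False)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = children

best = min(pop, key=feedbacks)
print("best ordering:", best.tolist(), "feedback couplings:", int(feedbacks(best)))
```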
Hierarchical optimal control of large-scale nonlinear chemical processes.
Ramezani, Mohammad Hossein; Sadati, Nasser
2009-01-01
In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear continuous stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.
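To give a flavor of two-level coordination, the example below is a classical price/gradient coordination sketch (not the authors' specific update rule): a separable quadratic problem with one coupling constraint is split into two sub-problems solved independently at the first level, while a second-level coordinator updates a multiplier from the coupling error. The objective and constraint are invented for illustration.

```python
import numpy as np  # imported only for consistency with the other sketches

# Two-level hierarchical optimization sketch (dual/price coordination).
# Illustrative problem: minimize (x1 - 3)^2 + 2*(x2 + 1)^2  subject to  x1 + x2 = 1.
# Level 1: each subsystem minimizes its own Lagrangian term for a given price lam.
# Level 2: the coordinator updates lam with a gradient step on the coupling error.

def sub1(lam):
    # argmin_x1 (x1 - 3)^2 + lam * x1  ->  2*(x1 - 3) + lam = 0
    return 3.0 - lam / 2.0

def sub2(lam):
    # argmin_x2 2*(x2 + 1)^2 + lam * x2  ->  4*(x2 + 1) + lam = 0
    return -1.0 - lam / 4.0

lam, step = 0.0, 0.5
for it in range(200):
    x1, x2 = sub1(lam), sub2(lam)        # first-level sub-problems (independent)
    error = x1 + x2 - 1.0                # coordination (coupling constraint) error
    lam += step * error                  # second-level gradient-type update
    if abs(error) < 1e-8:
        break

print(f"x1={x1:.4f}, x2={x2:.4f}, lambda={lam:.4f}, coupling error={error:.2e}")
# Converges to x1 = 7/3, x2 = -4/3, lambda = 4/3 for this toy problem.
```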
System, apparatus and methods to implement high-speed network analyzers
Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E
2015-11-10
Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
Optimization of MLS receivers for multipath environments
NASA Technical Reports Server (NTRS)
Mcalpine, G. A.; Highfill, J. H., III
1976-01-01
The design of a microwave landing system (MLS) aircraft receiver, capable of optimal performance in multipath environments found in air terminal areas, is reported. Special attention was given to the angle tracking problem of the receiver, including tracking system design considerations, the study and application of locally optimum estimation involving multipath adaptive reception and post-envelope processing, and microcomputer system design. Results show that this processing is competitive, performance-wise, with i-f signal processing in this application, and is much simpler and cheaper. A summary of the signal model is given.
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
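As a much-simplified toy of the pixelated gradient-descent loop described above (using a crude scalar blur-plus-sigmoid imaging model; the paper's vector imaging model and process-variation terms are not reproduced here), the sketch below iteratively adjusts a continuous mask so that the simulated resist pattern approaches the target layout. All model parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy pixel-based mask optimization: Gaussian-blur "aerial image" + sigmoid resist.
# This only illustrates the gradient-descent loop over a pixelated mask; it is NOT
# the vector imaging model or the process-window optimization used in the paper.
N, sigma, a, thr, lr = 64, 2.0, 25.0, 0.5, 0.2

target = np.zeros((N, N))
target[24:40, 16:48] = 1.0            # desired printed pattern (a bar)
mask = target.copy()                  # start from the nominal layout

for it in range(300):
    aerial = gaussian_filter(mask, sigma)                  # simplified aerial image
    resist = 1.0 / (1.0 + np.exp(-a * (aerial - thr)))     # sigmoid resist model
    err = resist - target
    # Chain rule; the Gaussian kernel is symmetric, so its adjoint is itself.
    grad = gaussian_filter(2.0 * err * a * resist * (1.0 - resist), sigma)
    mask = np.clip(mask - lr * grad, 0.0, 1.0)             # projected gradient step

aerial = gaussian_filter(mask, sigma)
resist = 1.0 / (1.0 + np.exp(-a * (aerial - thr)))
print("final pattern error:", float(np.sum((resist - target) ** 2)))
```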
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
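The core of the objective functions mentioned above is the ITAE (integral of time multiplied by absolute error) criterion; a generic form is given below, with the sensitivity-weighting terms indicated only schematically (shown squared here to keep them nonnegative; the paper's exact definition and notation may differ):

```latex
\[
J = \int_{0}^{\infty} t\,\lvert e(t)\rvert \, dt
\;+\; \sum_{i=1}^{2} w_i \int_{0}^{\infty} \sigma_i^{2}(t)\, dt ,
\]
```

where $e(t)$ is the control error, $\sigma_i(t)$ are the output sensitivity functions with respect to the two process parametric variations, and $w_i$ are weighting factors.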
A system approach to aircraft optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.
Improving processes through evolutionary optimization.
Clancy, Thomas R
2011-09-01
As systems evolve over time, their natural tendency is to become increasingly more complex. Studies on complex systems have generated new perspectives on management in social organizations such as hospitals. Much of this research appears as a natural extension of the cross-disciplinary field of systems theory. This is the 18th in a series of articles applying complex systems science to the traditional management concepts of planning, organizing, directing, coordinating, and controlling. In this article, I discuss methods to optimize complex healthcare processes through learning, adaptation, and evolutionary planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kornelakis, Aris
2010-12-15
Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology intends to suggest the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the lifetime system's total net profit, which is calculated according to the method of the Net Present Value (NPV). The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the optimal number of the PV modules, the PV modules' optimal tilt angle, the optimal placement of the PV modules within the available installation area, and the optimal distribution of the PV modules among the DC/AC converters.
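For reference, the Net Present Value criterion used as the economic objective has the standard form below (generic; the paper's specific cash-flow terms for the PVGCS are not detailed here):

```latex
\[
\mathrm{NPV} = \sum_{t=1}^{N} \frac{CF_t}{(1+d)^{t}} \;-\; C_0 ,
\]
```

where $CF_t$ is the net cash flow in year $t$ (e.g., energy revenues minus operation and maintenance costs), $d$ is the discount rate, $N$ is the system lifetime in years, and $C_0$ is the initial capital cost.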
Gas flow parameters in laser cutting of wood- nozzle design
Kali Mukherjee; Tom Grendzwell; Parwaiz A.A. Khan; Charles McMillin
1990-01-01
The Automated Lumber Processing System (ALPS) is an ongoing team research effort to optimize the yield of parts in a furniture rough mill. The process is designed to couple aspects of computer vision, computer optimization of yield, and laser cutting. This research is focused on optimizing laser wood cutting. Laser machining of lumber has the advantage over...
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.
NASA Astrophysics Data System (ADS)
Baik, Ki-Ho; Dean, Robert L.; Mueller, Mark; Lu, Maiying; Lem, Homer Y.; Osborne, Stephen; Abboud, Frank E.
2002-07-01
A chemically amplified resist (CAR) process has been recognized as an approach to meet the demanding critical dimension (CD) specifications of 100nm node technology and beyond. Recently, significant effort has been devoted to optimizing CAR materials, which offer the characteristics required for next generation photomask fabrication. In this paper, a process established with a positive-tone CAR from TOK and 50kV MEBES eXara system is discussed. This resist is developed for raster scan 50 kV e-beam systems. It has high contrast, good coating characteristics, good dry etch selectivity, and high environmental stability. The coating process is conducted in an environment with amine concentration less than 2 ppb. A nitrogen environment is provided during plate transfer steps. Resolution using a 60nm writing grid is 90nm line and space patterns. CD linearity is maintained down to 240nm for isolated lines or spaces by applying embedded proximity effect correction (emPEC). Optimizations of post-apply bake (PAB) and post-expose bake (PEB) time, temperature, and uniformity are completed to improve adhesion, coating uniformity, and resolution. A puddle develop process is optimized to improve line edge roughness, edge slope, and resolution. Dry etch process is optimized on a TetraT system to transfer the resist image into the chrome layer with minimum etch bias.
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés
2015-09-28
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
Regional process redesign of lung cancer care: a learning health system pilot project.
Fung-Kee-Fung, M; Maziak, D E; Pantarotto, J R; Smylie, J; Taylor, L; Timlin, T; Cacciotti, T; Villeneuve, P J; Dennie, C; Bornais, C; Madore, S; Aquino, J; Wheatley-Price, P; Ozer, R S; Stewart, D J
2018-02-01
The Ottawa Hospital (toh) defined delay to timely lung cancer care as a system design problem. Recognizing the patient need for an integrated journey and the need for dynamic alignment of providers, toh used a learning health system (lhs) vision to redesign regional diagnostic processes. A lhs is driven by feedback utilizing operational and clinical information to drive system optimization and innovation. An essential component of a lhs is a collaborative platform that provides connectivity across silos, organizations, and professions. To operationalize a lhs, we developed the Ottawa Health Transformation Model (ohtm) as a consensus approach that addresses process barriers, resistance to change, and conflicting priorities. A regional Community of Practice (cop) was established to engage stakeholders, and a dedicated transformation team supported process improvements and implementation. The project operationalized the lung cancer diagnostic pathway and optimized patient flow from referral to initiation of treatment. Twelve major processes in referral, review, diagnostics, assessment, triage, and consult were redesigned. The Ottawa Hospital now provides a diagnosis to 80% of referrals within the provincial target of 28 days. The median patient journey from referral to initial treatment decreased by 48% from 92 to 47 days. The initiative optimized regional integration from referral to initial treatment. Use of a lhs lens enabled the creation of a system that is standardized to best practice and open to ongoing innovation. Continued transformation initiatives across the continuum of care are needed to incorporate best practice and optimize delivery systems for regional populations.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, the economic objective, and the specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times is employed to construct the subprocess model of the state process model for the HC refining system, and then the Wiener-type model is obtained by combining the mechanism model of Canadian Standard Freeness and the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy is proposed that simultaneously optimizes the set-point tracking objective of pulp quality and SE consumption, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. In conclusion, the simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, they significantly reduce energy consumption.
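The Pareto-set step mentioned in the abstract can be pictured with the minimal non-dominated filter below. This is a generic sketch, not the paper's NSGA-II implementation; the two objectives (pulp-quality tracking error and SE consumption) and the candidate operating points are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate operating points of the refining system with two
# objective values to be minimized: [tracking_error, se_consumption].
candidates = rng.random((200, 2))

def pareto_front(points):
    """Indices of points not dominated by any other point (minimize all columns)."""
    idx = []
    for i, p in enumerate(points):
        # q dominates p if q <= p in every objective and q < p in at least one.
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

front = pareto_front(candidates)
print(f"{len(front)} non-dominated operating points out of {len(candidates)}")
```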
Modeling and Advanced Control for Sustainable Process Systems
This book chapter introduces a novel process systems engineering framework that integrates process control with sustainability assessment tools for the simultaneous evaluation and optimization of process operations. The implemented control strategy consists of a biologically-insp...
Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.
Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan
2018-02-06
This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
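The top-level fusion rule the abstract refers to, the linear-minimum-variance combination of N local estimates, can be written compactly as below. This is a generic sketch that assumes uncorrelated local estimation errors and invented numbers; it is not the paper's full adaptive fading unscented Kalman filter.

```python
import numpy as np

def lmv_fuse(estimates, covariances):
    """Fuse N local state estimates by the linear-minimum-variance rule,
    assuming uncorrelated local errors:
        P = (sum_i P_i^{-1})^{-1},   x = P * sum_i P_i^{-1} x_i
    """
    info = sum(np.linalg.inv(P) for P in covariances)                 # information matrices
    info_state = sum(np.linalg.inv(P) @ x for x, P in zip(estimates, covariances))
    P_fused = np.linalg.inv(info)
    return P_fused @ info_state, P_fused

# Toy example: three local filters estimating a 2-state vector.
x_locals = [np.array([1.02, 0.48]), np.array([0.97, 0.52]), np.array([1.05, 0.50])]
P_locals = [np.diag([0.04, 0.09]), np.diag([0.02, 0.05]), np.diag([0.08, 0.03])]
x_f, P_f = lmv_fuse(x_locals, P_locals)
print("fused state:", np.round(x_f, 3))
print("fused covariance diagonal:", np.round(np.diag(P_f), 4))
```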
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
The industrial applications of the Coupled Tank System (CTS) are widespread, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank and pumped again to another tank. The level of liquid in each tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two methods of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady state error (SSE) and overshoot (OS). It is demonstrated that implementation of PSO via the Priority-based Fitness Scheme (PFPSO) is a potential technique to control the desired liquid level and improve system performance compared with standard PSO.
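The general idea of PSO-based PID tuning can be sketched as follows. This is standard PSO applied to an assumed first-order tank model with an ITAE-style cost; it is not the paper's CTS model, its cost function, or its priority-based fitness scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.1, 60.0
steps = int(T / dt)

def itae_cost(kp, ki, kd):
    """Simulate an assumed first-order tank (dh/dt = -0.2*h + 0.1*u) under PID
    level control of a unit step set-point and return an ITAE-style index."""
    h, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(steps):
        err = 1.0 - h
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = np.clip(kp * err + ki * integ + kd * deriv, 0.0, 5.0)   # pump command
        h += dt * (-0.2 * h + 0.1 * u)
        cost += (k * dt) * abs(err) * dt
        prev_err = err
    return cost

def pso(n_particles=20, n_iter=50):
    """Standard PSO over the gain vector [Kp, Ki, Kd]."""
    lo, hi = np.array([0.0, 0.0, 0.0]), np.array([50.0, 5.0, 5.0])
    x = rng.uniform(lo, hi, size=(n_particles, 3))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([itae_cost(*p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([itae_cost(*p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

gains, cost = pso()
print("PID gains [Kp, Ki, Kd]:", np.round(gains, 2), " ITAE:", round(cost, 3))
```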
Closed-Loop Multitarget Optimization for Discovery of New Emulsion Polymerization Recipes
2015-01-01
Self-optimization of chemical reactions enables faster optimization of reaction conditions or discovery of molecules with required target properties. The technology of self-optimization has been expanded to the discovery of new process recipes for the manufacture of complex functional products. A new machine-learning algorithm, specifically designed for multiobjective target optimization with an explicit aim to minimize the number of "expensive" experiments, guides the discovery process. This "black-box" approach assumes no a priori knowledge of the chemical system and hence is particularly suited to rapid development of processes to manufacture specialist low-volume, high-value products. The approach was demonstrated in the discovery of process recipes for a semibatch emulsion copolymerization, targeting a specific particle size and full conversion. PMID:26435638
Multidisciplinary optimization for engineering systems - Achievements and potential
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.
Real-time parameter optimization based on neural network for smart injection molding
NASA Astrophysics Data System (ADS)
Lee, H.; Liau, Y.; Ryu, K.
2018-03-01
The manufacturing industry has been facing several challenges, including sustainability, performance and quality of production. Manufacturers attempt to enhance the competitiveness of companies by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity. These features make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics and medical devices. The injection molding process involves a mixture of discrete and continuous variables. In order to optimize quality, the variables generated in the injection molding process must be considered. Furthermore, optimal parameter setting for predicting the optimum quality of the product is time-consuming, since process parameters cannot be easily corrected during process execution. In this research, we propose a neural network based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, compared with the pre-production parameter optimization of typical studies.
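A minimal sketch of the surrogate-then-optimize pattern behind such neural-network parameter optimization is shown below. The data, parameter names and bounds are invented, and a simple grid search stands in for whatever optimizer the authors actually use.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Synthetic historical process data: (melt temp [C], injection pressure [MPa],
# cooling time [s]) -> dimensional deviation of the part (purely illustrative).
X = rng.uniform([200, 60, 10], [260, 120, 30], size=(500, 3))
true_dev = (((X[:, 0] - 235) / 20) ** 2 + ((X[:, 1] - 90) / 25) ** 2
            + ((X[:, 2] - 18) / 8) ** 2)
y = true_dev + rng.normal(scale=0.05, size=len(X))

# Train a small neural-network surrogate of the quality response.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X, y)

# Search a candidate grid for the setting with the smallest predicted deviation.
grid = np.array(np.meshgrid(np.linspace(200, 260, 25),
                            np.linspace(60, 120, 25),
                            np.linspace(10, 30, 25))).reshape(3, -1).T
best = grid[np.argmin(model.predict(grid))]
print("suggested setting (temp, pressure, cooling):", np.round(best, 1))
```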
NASA Astrophysics Data System (ADS)
Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik
2017-08-01
Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and Shainin Systems (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a perfume bottle cap manufacturing company (caps made of acrylic material) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% to 8.57% during trial runs, which is quite low, representing successful implementation of these DoE methods. The comparison showed that both methodologies gave the same set of variables as critical for defect reduction, but with a change in their order of significance. Also, Taguchi methods require a larger number of experiments and consume more time compared to the Shainin System. The Shainin System is less complicated and easy to implement, whereas Taguchi methods are statistically more reliable for optimization of process parameters. Finally, the experiments showed that DoE methods are robust and reliable in implementation as organizations attempt to improve quality through optimization.
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
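A simplified illustration of the level-crossing idea in this abstract is given below. It uses the marginal Gaussian predictive distributions that a (hypothetical) Kalman filter would supply over the prediction window and treats the steps as independent, which is an approximation of the joint level-crossing event used by a true optimal alarm; the numbers are invented.

```python
import numpy as np
from scipy.stats import norm

def alarm_probability(pred_means, pred_vars, threshold):
    """Approximate probability that at least one predicted process value in the
    window exceeds the critical threshold, treating per-step Gaussian predictive
    marginals as independent."""
    p_exceed = 1.0 - norm.cdf(threshold, loc=np.asarray(pred_means),
                              scale=np.sqrt(pred_vars))
    return 1.0 - np.prod(1.0 - p_exceed)

# Hypothetical d-step-ahead Kalman predictions and their variances.
means = np.array([0.8, 0.9, 1.1, 1.3, 1.4])
variances = np.array([0.05, 0.08, 0.12, 0.18, 0.25])
p = alarm_probability(means, variances, threshold=1.5)
print(f"P(level crossing in window) = {p:.3f}",
      "-> ALARM" if p > 0.5 else "-> no alarm")
```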
Polyhedral Interpolation for Optimal Reaction Control System Jet Selection
NASA Technical Reports Server (NTRS)
Gefert, Leon P.; Wright, Theodore
2014-01-01
An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.
Technology-design-manufacturing co-optimization for advanced mobile SoCs
NASA Astrophysics Data System (ADS)
Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey
2014-03-01
How to maintain Moore's Law scaling beyond the 193nm immersion resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase for the 14nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues that new process changes bring. In recent years smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system design and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.
Discrete-time Markovian-jump linear quadratic optimal control
NASA Technical Reports Server (NTRS)
Chizeck, H. J.; Willsky, A. S.; Castanon, D.
1986-01-01
This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
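The coupled Riccati-like recursion described in this abstract has a compact textbook form, sketched below for a discrete-time Markov jump linear system with quadratic cost; the two-mode scalar data at the bottom are invented for illustration and are not from the paper.

```python
import numpy as np

def jlq_riccati(A, B, Q, R, Pi, horizon):
    """Backward recursion of the coupled Riccati-like difference equations for
    x_{k+1} = A_i x_k + B_i u_k in mode i, stage cost x'Q_i x + u'R_i u, and mode
    transition matrix Pi.  Returns per-stage, per-mode feedback gains K_i."""
    n_modes = len(A)
    P = [Q[i].copy() for i in range(n_modes)]                  # terminal cost
    gains = []
    for _ in range(horizon):
        # Expected next-stage cost matrix for each current mode.
        E = [sum(Pi[i, j] * P[j] for j in range(n_modes)) for i in range(n_modes)]
        K, P_new = [], []
        for i in range(n_modes):
            S = R[i] + B[i].T @ E[i] @ B[i]
            Ki = np.linalg.solve(S, B[i].T @ E[i] @ A[i])
            K.append(Ki)
            P_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ Ki))
        P = P_new
        gains.append(K)
    return gains[::-1]                                          # gains[k][i]: stage k, mode i

# Two-mode scalar example with invented parameters.
A = [np.array([[1.1]]), np.array([[0.7]])]
B = [np.array([[1.0]]), np.array([[0.5]])]
Q = [np.eye(1), np.eye(1)]
R = [np.array([[1.0]]), np.array([[2.0]])]
Pi = np.array([[0.9, 0.1], [0.3, 0.7]])
K = jlq_riccati(A, B, Q, R, Pi, horizon=20)
print("stage-0 gains per mode:", [np.round(k, 3) for k in K[0]])
```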
Hussein, Husnah; Williams, David J; Liu, Yang
2015-07-01
A systematic design of experiments (DOE) approach was used to optimize the perfusion process of a tri-axial bioreactor designed for translational tissue engineering exploiting mechanical stimuli and mechanotransduction. Four controllable design parameters affecting the perfusion process were identified in a cause-effect diagram as potential improvement opportunities. A screening process was used to separate out the factors that have the largest impact from the insignificant ones. DOE was employed to find the settings of the platen design, return tubing configuration and the elevation difference that minimise the load on the pump and variation in the perfusion process and improve the controllability of the perfusion pressures within the prescribed limits. DOE was very effective for gaining increased knowledge of the perfusion process and optimizing the process for improved functionality. It is hypothesized that the optimized perfusion system will result in improved biological performance and consistency.
Status of the ITER Cryodistribution
NASA Astrophysics Data System (ADS)
Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.
2017-12-01
Since the conceptual design of the ITER Cryodistribution, many modifications have been applied due to both system optimization and improved knowledge of the clients' requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; the removal of one cold box was also made possible by component arrangement standardization. Another cold box, planned for redundancy, has been removed due to the Tokamak in-Cryostat piping layout modification. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aiming at process optimization and simplification as well as operational reliability, stability and flexibility.
Research on crude oil storage and transportation based on optimization algorithm
NASA Astrophysics Data System (ADS)
Yuan, Xuhua
2018-04-01
At present, optimization theory and methods have been widely used in the optimization scheduling and optimal operation schemes of complex production systems. Based on the C++Builder 6 program development platform, the theoretical research results are implemented by computer. A simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes. It can realize the optimization of the scheduling scheme of the crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating roof oil storage tank is developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling costs of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of the crude oil storage and transportation system.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design is becoming more multifaceted, integrated, and complex, the traditional single-objective optimization approach to optimal design is becoming less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set independent of the problem instance through a multiobjective scheme. Another objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics to increase the certainty of reaching the global optimum solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in accomplishment of the pre-defined goals set in the proposed scheme.
Optimization of the production process using virtual model of a workspace
NASA Astrophysics Data System (ADS)
Monica, Z.
2015-11-01
Optimization of the production process is an element of the design cycle consisting of: problem definition, modelling, simulation, optimization and implementation. Without the use of simulation techniques, the only thing that can be achieved is a larger or smaller improvement of the process, not optimization (i.e., the best result it is possible to get for the conditions under which the process works). Optimization generally comprises management actions that ultimately bring savings in time, resources, and raw materials and improve the performance of a specific process, regardless of whether it is a service or manufacturing process. Optimization generates savings by improving and increasing the efficiency of processes. It consists primarily of organizational activities that require very little investment, or rely solely on changing the organization of work. Modern companies operating in a market economy show a significant increase in interest in modern methods of production management and services. This trend is due to high competitiveness: companies that want to succeed are forced to continually modify the way they manage and to respond flexibly to changing demand. Modern methods of production management not only imply a stable position of the company in the sector, but also influence the improvement of health and safety within the company and contribute to the implementation of more efficient rules for work standardization in the company. This is why the paper presents the application of an environment such as Siemens NX to create a virtual model of a production system and to simulate as well as optimize its work. The analyzed system is a robotized workcell consisting of: machine tools, industrial robots, conveyors, auxiliary equipment and buffers. The control program realizing the main task in the virtual workcell can be defined in the software. Using this tool, it is possible to optimize both the object trajectory and the cooperation process.
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design. The allocation process often begins at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers. Applying reliability allocation techniques without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
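One common weighting-factor method can be sketched as below: an ARINC-style apportionment for a series system with exponential failure laws, in which each component keeps its current share of the total failure rate, rescaled to meet the system target. The numbers are invented, and the report itself covers several methods beyond this one.

```python
import numpy as np

def arinc_allocation(current_failure_rates, system_reliability_target, mission_time):
    """ARINC-style weighting-factor allocation for a series system of components
    with exponential failure laws."""
    rates = np.asarray(current_failure_rates, dtype=float)
    weights = rates / rates.sum()                                   # weighting factors
    lam_sys = -np.log(system_reliability_target) / mission_time     # required system rate
    allocated_rates = weights * lam_sys
    allocated_reliability = np.exp(-allocated_rates * mission_time)
    return allocated_rates, allocated_reliability

# Hypothetical subsystem failure-rate estimates (per hour) and a 1000 h mission.
rates, rel = arinc_allocation([2e-5, 5e-5, 1e-5],
                              system_reliability_target=0.95, mission_time=1000.0)
print("allocated failure rates:", rates)
print("allocated reliabilities:", np.round(rel, 4))
print("check system reliability:", round(float(np.prod(rel)), 4))
```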
Procedure for minimizing the cost per watt of photovoltaic systems
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
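The proportionality stated in the abstract can be made explicit with a short derivation, sketched here in notation consistent with the abstract rather than the report's own: write the cost per watt as total fabrication cost over delivered power and perturb one step's performance factor.

```latex
% Cost per watt with per-step performance factors \eta_j, ideal power P_0, step costs c_j:
\frac{C}{W} \;=\; \frac{\sum_j c_j}{P_0 \prod_j \eta_j}.
% A fractional performance loss \delta at any single step, \eta_k \to \eta_k(1-\delta),
% leaves the numerator unchanged, so
\left.\frac{C}{W}\right|_{\text{new}} \;=\; \frac{1}{1-\delta}\,\frac{C}{W}
\;\approx\; (1+\delta)\,\frac{C}{W} \qquad (\delta \ll 1),
% i.e. the cost per watt rises by the same fraction that the step's performance fell.
```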
Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems
NASA Astrophysics Data System (ADS)
Kreuder, John J.
Thermoelectrics are solid state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, there is a need for industries to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing only on module design and not an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB®. A modular, shell-based architecture is developed to carry out concept generation, system simulation and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's Method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third-party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and the fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton Cycle. A basic package of components and interconnections is verified and provided as well.
NASA Astrophysics Data System (ADS)
Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah
2017-04-01
A Radio Frequency Identification (RFID) system has multiple benefits which can improve the operational efficiency of an organization. The advantages are the ability to record data systematically and quickly, to reduce human errors and system errors, and to update the database automatically and efficiently. Often, multiple readers are needed for installation in an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works perfectly. The planning process is also considered an optimization and power adjustment process, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has a small number of parameters, fast simulation time, and is easy to use and very practical. However, PSO parameters must be adjusted correctly for robust and efficient usage of PSO. Failure to do so may degrade performance, and the results of PSO optimization of the system will be poorer. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization. The parameters studied are the swarm size and the number of iterations. In addition, the study recommends the most optimal adjustment for both parameters, that is, 200 for the number of iterations and 800 for the swarm size. Finally, the results of this study will enable PSO to operate more efficiently in order to optimize RFID network planning.
Global optimization for quantum dynamics of few-fermion systems
NASA Astrophysics Data System (ADS)
Li, Xikun; Pecak, Daniel; Sowiński, Tomasz; Sherson, Jacob; Nielsen, Anne E. B.
2018-03-01
Quantum state preparation is vital to quantum computation and quantum information processing tasks. In adiabatic state preparation, the target state is theoretically obtained with nearly perfect fidelity if the control parameter is tuned slowly enough. As this, however, leads to slow dynamics, it is often desirable to be able to carry out processes more rapidly. In this work, we employ two global optimization methods to estimate the quantum speed limit for few-fermion systems confined in a one-dimensional harmonic trap. Such systems can be produced experimentally in a well-controlled manner. We determine the optimized control fields and achieve a reduction in the ramping time of more than a factor of four compared to linear ramping. We also investigate how robust the fidelity is to small variations of the control fields away from the optimized shapes.
An expert system for integrated structural analysis and design optimization for aerospace structures
NASA Technical Reports Server (NTRS)
1992-01-01
The results of a research study on the development of an expert system for integrated structural analysis and design optimization is presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software was then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide ability to build very large and practical expert systems, and provide an efficient way for storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained by such users, conclusions were developed and are provided.
Energy Optimization for a Weak Hybrid Power System of an Automobile Exhaust Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Fang, Wei; Quan, Shuhai; Xie, Changjun; Tang, Xinfeng; Ran, Bin; Jiao, Yatian
2017-11-01
An integrated starter generator (ISG)-type hybrid electric vehicle (HEV) scheme is proposed based on the automobile exhaust thermoelectric generator (AETEG). An eddy current dynamometer is used to simulate the vehicle's dynamic cycle. A weak ISG hybrid bench test system is constructed to test the 48 V output from the power supply system, which is based on engine exhaust-based heat power generation. The thermoelectric power generation-based system must ultimately be tested when integrated into the ISG weak hybrid mixed power system. The test process is divided into two steps: comprehensive simulation and vehicle-based testing. The system's dynamic process is simulated for both conventional and thermoelectric powers, and the dynamic running process comprises four stages: starting, acceleration, cruising and braking. The quantity of fuel available and battery pack energy, which are used as target vehicle energy functions for comparison with conventional systems, are simplified into a single energy target function, and the battery pack's output current is used as the control variable in the thermoelectric hybrid energy optimization model. The system's optimal battery pack output current function is resolved when its dynamic operating process is considered as part of the hybrid thermoelectric power generation system. In the experiments, the system bench is tested using conventional power and hybrid thermoelectric power for the four dynamic operation stages. The optimal battery pack curve is calculated by functional analysis. In the vehicle, a power control unit is used to control the battery pack's output current and minimize energy consumption. Data analysis shows that the fuel economy of the hybrid power system under European Driving Cycle conditions is improved by 14.7% when compared with conventional systems.
SynGenics Optimization System (SynOptSys)
NASA Technical Reports Server (NTRS)
Ventresca, Carol; McMilan, Michelle L.; Globus, Stephanie
2013-01-01
The SynGenics Optimization System (SynOptSys) software application optimizes a product with respect to multiple, competing criteria using statistical Design of Experiments, Response-Surface Methodology, and the Desirability Optimization Methodology. The user is not required to be skilled in the underlying math; thus, SynOptSys can help designers and product developers overcome the barriers that prevent them from using powerful techniques to develop better products in a less costly manner. SynOptSys is applicable to the design of any product or process with multiple criteria to meet, and at least two factors that influence achievement of those criteria. The user begins with a selected solution principle or system concept and a set of criteria that needs to be satisfied. The criteria may be expressed in terms of documented desirements or defined responses that the future system needs to achieve. Documented desirements can be imported into SynOptSys or created and documented directly within SynOptSys. Subsequent steps include identifying factors, specifying model order for each response, designing the experiment, running the experiment and gathering the data, analyzing the results, and determining the specifications for the optimized system. The user may also enter textual information as the project progresses. Data is easily edited within SynOptSys, and the software design enables full traceability within any step in the process, and facilitates reporting as needed. SynOptSys is unique in the way responses are defined and the nuances of the goodness associated with changes in response values for each of the responses of interest. The Desirability Optimization Methodology provides the basis of this novel feature. Moreover, this is a complete, guided design and optimization process tool with embedded math that can remain invisible to the user. It is not a standalone statistical program; it is a design and optimization system.
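The desirability idea underlying the methodology named above can be sketched in a few lines. This is a generic Derringer-Suich-style formulation with invented responses and limits, not SynOptSys code or its exact desirability definitions.

```python
import numpy as np

def d_smaller_is_better(y, target, upper, s=1.0):
    """Desirability for a response to be minimized: 1 at/below target, 0 at/above upper."""
    return float(np.clip((upper - y) / (upper - target), 0.0, 1.0) ** s)

def d_larger_is_better(y, lower, target, s=1.0):
    """Desirability for a response to be maximized: 0 at/below lower, 1 at/above target."""
    return float(np.clip((y - lower) / (target - lower), 0.0, 1.0) ** s)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical predicted responses at one candidate factor setting.
cost = 42.0        # minimize: target 30, unacceptable above 60
strength = 88.0    # maximize: unacceptable below 70, target 95
D = overall_desirability([d_smaller_is_better(cost, 30, 60),
                          d_larger_is_better(strength, 70, 95)])
print("overall desirability:", round(D, 3))
```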
Optimization of A(2)O BNR processes using ASM and EAWAG Bio-P models: model performance.
El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L
2013-12-01
This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A(2)O) process. The formulated model simulates removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) model and a Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effect of influent conditions, effluent limits, and selected parameters of various optimal solutions with the following results: an increase of influent temperature from 10 degrees C to 25 degrees C decreases the annual cost by about 8.5%, an increase of influent flow from 500 to 2500 m(3)/h triples the annual cost, the A(2)O BNR system is more sensitive to variations in influent ammonia than phosphorus concentration and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.
Hu, Rui; Liu, Shutian; Li, Quhao
2017-05-20
For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are the key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as the dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. The pattern repetition constraint is added, which can ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes that are linked to the mounting flexures, to reduce the computation effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method deduces new mounting structures that significantly enhance the optical performance of the mirror system compared to the traditional methods, which only focus on the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.
Heuristic decomposition for non-hierarchic systems
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.; Hajela, P.
1991-01-01
Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.
Firmware Development Improves System Efficiency
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Most manufacturing processes require physical pointwise positioning of the components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate procession to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains the throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this point wise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as a trigger signal to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased throughput of the system, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.
Habib, Basant A; AbouGhaly, Mohamed H H
2016-06-01
This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were glycerol amount in the hydration mixture (D) and sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R(2) values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn for responses representation. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values with maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for optimization of transfersomal formulations as an example of nanovesicular systems.
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
Approximation of the Newton Step by a Defect Correction Process
NASA Technical Reports Server (NTRS)
Arian, E.; Batterman, A.; Sachs, E. W.
1999-01-01
In this paper, an optimal control problem governed by a partial differential equation is considered. The Newton step for this system can be computed by solving a coupled system of equations. To do this efficiently with an iterative defect correction process, a modifying operator is introduced into the system. This operator is motivated by local mode analysis. The operator can be used also for preconditioning in Generalized Minimum Residual (GMRES). We give a detailed convergence analysis for the defect correction process and show the derivation of the modifying operator. Numerical tests are done on the small disturbance shape optimization problem in two dimensions for the defect correction process and for GMRES.
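The general shape of a defect-correction iteration can be sketched as follows; this is a generic illustration with a diagonal (Jacobi-like) modifying operator and random data, not the paper's operator or its optimal control system.

```python
import numpy as np

def defect_correction(A, b, M, x0=None, tol=1e-10, max_iter=200):
    """Defect-correction iteration x_{k+1} = x_k + M^{-1}(b - A x_k), where M is an
    easier-to-invert approximation of A; it converges when the spectral radius of
    I - M^{-1} A is below one."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(max_iter):
        defect = b - A @ x
        if np.linalg.norm(defect) < tol * np.linalg.norm(b):
            return x, k
        x = x + np.linalg.solve(M, defect)
    return x, max_iter

# Toy example: A is diagonally dominant, M is its diagonal part.
rng = np.random.default_rng(4)
n = 50
A = rng.standard_normal((n, n)) * 0.1 + np.diag(np.full(n, 5.0))
b = rng.standard_normal(n)
M = np.diag(np.diag(A))
x, iters = defect_correction(A, b, M)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```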
NASA Astrophysics Data System (ADS)
Krawczyk, Rafał D.; Czarski, Tomasz; Kolasiński, Piotr; Linczuk, Paweł; Poźniak, Krzysztof T.; Chernyshova, Maryna; Kasprowicz, Grzegorz; Wojeński, Andrzej; Zabolotny, Wojciech; Zienkiewicz, Paweł
2016-09-01
This article is an overview of what has been implemented in the process of developing and testing the GEM-detector-based acquisition system in terms of post-processing algorithms. Information is given on mex functions for extended statistics collection, unified hex topology and an optimized S-DAQ algorithm for splitting overlapped signals. An additional discussion of bottlenecks and the major factors concerning optimization is presented.
NASA Astrophysics Data System (ADS)
Jolanta Walery, Maria
2017-12-01
The article describes optimization studies aimed at analysing the impact of changes in the capital and current costs of medical waste incineration on the cost of system management and its structure. The study was conducted on the example of an analysis of the medical waste management system in the Podlaskie Province, in north-eastern Poland. The scope of operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of functioning of the analysed system was generated, whereas in the second the influence of the input parameters of the system, i.e. the capital and current costs of medical waste incineration, on the economic efficiency index (E) and the spatial structure of the system was determined. Optimization studies were conducted for increases of 25%, 50%, 75% and 100% in the capital and current costs of the incineration process. As a result of the calculations, the highest cost of system operation, 3143.70 PLN/t, was obtained under the assumption of a 100% increase in the capital and current costs of the incineration process. This represents an increase in the economic efficiency index (E) of about 97% in relation to run 1.
Optimal design of zero-water discharge rinsing systems.
Thöming, Jorg
2002-03-01
This paper is about zero liquid discharge in processes that use water for rinsing. Emphasis was given to those systems that contaminate process water with valuable process liquor and compounds. The approach involved the synthesis of optimal rinsing and recycling networks (RRN) that had a priori excluded water discharge. The total annualized costs of the RRN were minimized by the use of a mixed-integer nonlinear program (MINLP). This MINLP was based on a hyperstructure of the RRN and contained eight counterflow rinsing stages and three regenerator units: electrodialysis, reverse osmosis, and ion exchange columns. A "large-scale nickel plating process" case study showed that by means of zero-water discharge and optimized rinsing the total waste could be reduced by 90.4% at a revenue of $448,000/yr. Furthermore, with the optimized RRN, the rinsing performance can be improved significantly at a low-cost increase. In all the cases, the amount of valuable compounds reclaimed was above 99%.
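The basic counterflow rinsing relation that underlies such rinsing-and-recycling networks can be illustrated with the textbook approximation below (final concentration roughly the process concentration divided by r^n for n ideal counter-current stages with rinse ratio r = rinse flow / drag-out). This is a back-of-the-envelope sketch with invented nickel-plating numbers, not the paper's MINLP or hyperstructure model.

```python
def required_rinse_ratio(c_process, c_final, n_stages):
    """Approximate rinse ratio r = F/D needed so that n ideal counter-current
    rinsing stages dilute drag-out from c_process down to c_final, using the
    standard c_final ~= c_process / r**n approximation."""
    return (c_process / c_final) ** (1.0 / n_stages)

# Hypothetical drag-out at 80 g/L to be rinsed down to 0.01 g/L.
for n in (1, 2, 3, 4):
    r = required_rinse_ratio(80.0, 0.01, n)
    print(f"{n} stage(s): rinse ratio ~ {r:,.1f}")
```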
Performance Optimization Control of ECH using Fuzzy Inference Application
NASA Astrophysics Data System (ADS)
Dubey, Abhay Kumar
Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining the physico-chemical actions of electro-chemical machining and conventional honing, provides controlled functional-surface generation and fast material removal in a single operation. Multi-performance optimization has become vital for exploiting the full potential of manufacturing processes to meet the challenging requirements placed on the surface quality, size, tolerances and production rate of engineering components in today's globally competitive scenario. This paper presents a strategy that integrates Taguchi matrix experimental design, analysis of variance and a fuzzy inference system (FIS) to formulate a robust, practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies, one using genetic-algorithm tuning of the FIS (GA-tuned FIS) and another using an adaptive network-based fuzzy inference system (ANFIS), have been evaluated on a multi-performance optimization case study of ECH. The experimental results confirm their potential for the wide range of machining conditions employed in ECH.
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).
NASA Technical Reports Server (NTRS)
Rasmussen, John
1990-01-01
Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long line of computer integrated manufacturing (CIM) systems. The systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information equipped with a number of tools whose purpose is to help the user in the design process. Among these tools are facilities for structural analysis and optimization, as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large number of mathematical and mechanical techniques are available for the solution of single problems. By implementing collections of the available techniques in general software systems, operational environments for structural optimization have been created. The forthcoming years must bring solutions to the problem of integrating such systems into more general design environments. The result of this work should be CAD systems for rational design in which structural optimization is one important design tool among many others.
Phase transitions in Pareto optimal complex networks
NASA Astrophysics Data System (ADS)
Seoane, Luís F.; Solé, Ricard
2015-09-01
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need of drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
Ludwig, T; Kern, P; Bongards, M; Wolf, C
2011-01-01
The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
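A bare-bones sketch of the genetic-algorithm part, with an invented surrogate cost in place of the calibrated GPS-X/ASM1 simulation model described in the abstract, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(times):
    """Hypothetical stand-in for the calibrated simulation model:
    times = (filtration_min, relaxation_min); lower is better."""
    filt, relax = times
    energy = 0.02 * filt**2 + 5.0 / max(relax, 0.1)
    fouling = 40.0 / max(filt, 0.1) + 0.05 * relax**2
    return energy + fouling

def genetic_algorithm(pop_size=40, generations=60, bounds=((1.0, 30.0), (0.2, 5.0))):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]                       # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0.0, 0.3, children.shape)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    return pop[np.argmin([cost(ind) for ind in pop])]

print("suggested (filtration, relaxation) in minutes:", genetic_algorithm())
```

In the actual study the fitness evaluation is a full plant simulation, so each generation is far more expensive than this toy cost function suggests.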
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2015-09-01
An optimal trade-off design for a fractional-order (FO) PID controller is proposed with a Linear Quadratic Regulator (LQR) based technique using two conflicting time-domain objectives. A class of delayed FO systems with a single non-integer-order element, exhibiting both sluggish and oscillatory open-loop responses, is controlled here. The FO time-delay processes are handled within a multi-objective optimization (MOO) formalism of LQR-based FOPID design. A comparison is made between two contemporary approaches for stabilizing time-delay systems within LQR. The MOO control design methodology yields the Pareto optimal trade-off solutions between the tracking performance and the total variation (TV) of the control signal. Tuning rules are formed for the optimal LQR-FOPID controller parameters, using the median of the non-dominated Pareto solutions, to handle delayed FO processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Barekati-Goudarzi, Mohamad; Boldor, Dorin; Nde, Divine B
2016-02-01
In-situ transesterification (simultaneous extraction and transesterification) of Chinese tallow tree seeds into methyl esters using a batch microwave system was investigated in this study. A high degree of oil extraction and efficient conversion of oil to biodiesel were found in the proposed range. The process was further optimized in terms of product yields and conversion rates using the Doehlert optimization methodology. Based on the experimental results and statistical analysis, the optimal production yield conditions for this process were determined as: catalyst concentration of 1.74 wt.%, solvent ratio of about 3 (v/w), reaction time of 20 min and temperature of 58.1 °C. 1H NMR was used to calculate reaction conversion. All methyl esters produced using this method met ASTM biodiesel quality specifications. Copyright © 2015 Elsevier Ltd. All rights reserved.
Combined Optimal Control System for excavator electric drive
NASA Astrophysics Data System (ADS)
Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.
2018-03-01
The article presents a synthesis of combined optimal control algorithms for the AC drive of the rotation mechanism of an excavator. The synthesis consists of regulating the external coordinates on the basis of the theory of optimal systems and correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of combined optimal control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality-criterion coefficients so as to find the intersection of the ranges of permissible values of the coordinates of the control object. The system can be tuned by choosing the optimality-criterion coefficients, which allows one to select the required characteristics of the drive: the dynamic moment (M) and the transient-process time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and at the same time reduce the transient time (tpp).
Development of a standardized, citywide process for managing smart-pump drug libraries.
Walroth, Todd A; Smallwood, Shannon; Arthur, Karen; Vance, Betsy; Washington, Alana; Staublin, Therese; Haslar, Tammy; Reddan, Jennifer G; Fuller, James
2018-06-15
Development and implementation of an interprofessional consensus-driven process for review and optimization of smart-pump drug libraries and dosing limits are described. The Indianapolis Coalition for Patient Safety (ICPS), which represents 6 Indianapolis-area health systems, identified an opportunity to reduce clinically insignificant alerts that smart infusion pumps present to end users. Through a consensus-driven process, ICPS aimed to identify best practices to implement at individual hospitals in order to establish specific action items for smart-pump drug library optimization. A work group of pharmacists, nurses, and industrial engineers met to evaluate variability within and lack of scrutiny of smart-pump drug libraries. The work group used Lean Six Sigma methodologies to generate a list of key needs and barriers to be addressed in process standardization. The group reviewed targets for smart-pump drug library optimization, including dosing limits, types of alerts reviewed, policies, and safety best practices. The work group also analyzed existing processes at each site to develop a final consensus statement outlining a model process for reviewing alerts and managing smart-pump data. Analysis of the total number of alerts per device across ICPS-affiliated health systems over a 4-year period indicated a 50% decrease (from 7.2 to 3.6 alerts per device per month) after implementation of the model by ICPS member organizations. Through implementation of a standardized, consensus-driven process for smart-pump drug library optimization, ICPS member health systems reduced clinically insignificant smart-pump alerts. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of determining the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and of the aeroelastic stability constraints. For this, the derivatives of the steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, the design sensitivity analysis and the constrained optimization code CONMIN.
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. The analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
Adaptive hybrid optimal quantum control for imprecisely characterized systems.
Egger, D J; Wilhelm, F K
2014-06-20
Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Links, Jonathan M.; Frey, Eric
2015-03-01
In this work, we used the ideal observer (IO) and IO with model mismatch (IO-MM) applied in the projection domain and an anthropomorphic Channelized Hotelling Observer (CHO) applied to reconstructed images to optimize the acquisition energy window width and evaluate various scatter compensation methods in the context of a myocardial perfusion SPECT defect detection task. The IO has perfect knowledge of the image formation process and thus reflects performance with perfect compensation for image-degrading factors. Thus, using the IO to optimize imaging systems could lead to suboptimal parameters compared to those optimized for humans interpreting SPECT images reconstructed with imperfect or no compensation. The IO-MM allows incorporating imperfect system models into the IO optimization process. We found that with near-perfect scatter compensation, the optimal energy window for the IO and CHO were similar; in its absence the IO-MM gave a better prediction of the optimal energy window for the CHO using different scatter compensation methods. These data suggest that the IO-MM may be useful for projection-domain optimization when model mismatch is significant, and that the IO is useful when followed by reconstruction with good models of the image formation process.
A system level model for preliminary design of a space propulsion solid rocket motor
NASA Astrophysics Data System (ADS)
Schumacher, Daniel M.
Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near optimal performance of subsystems and components. Conversely, there is no system level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of the design parameters that are traded off unreasonable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system level model utilizes the Genetic Algorithm to perform the necessary population searches to efficiently replace the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near optimal design, is achievable. The process of developing the motor performance estimate and the system level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints towards the pursuit of the best possible design.
Gaussian process regression for geometry optimization
NASA Astrophysics Data System (ADS)
Denzel, Alexander; Kästner, Johannes
2018-03-01
We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
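As a rough illustration of the surrogate idea (a toy one-dimensional "potential energy surface", scikit-learn's Matérn kernel with ν = 5/2, and a crude overshoot factor, none of which come from the paper), a GPR-driven minimization loop could look like:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def energy(x):
    # Toy potential energy surface standing in for an electronic-structure call.
    return np.sin(3 * x) + 0.5 * (x - 1.0) ** 2

x = np.array([-1.0])               # starting "geometry" (1-D for illustration)
X, E = [x.copy()], [energy(x[0])]
overshoot = 1.3                    # push the GPR step a bit beyond its minimum

for step in range(15):
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gpr.fit(np.array(X), np.array(E))
    # Minimize the surrogate mean on a dense grid around the current point.
    grid = np.linspace(x[0] - 1.0, x[0] + 1.0, 400).reshape(-1, 1)
    mean = gpr.predict(grid)
    x_new = grid[np.argmin(mean)]
    x = x + overshoot * (x_new - x)   # overshooting heuristic
    X.append(x.copy())
    E.append(energy(x[0]))
    if abs(E[-1] - E[-2]) < 1e-6:
        break

print("approximate minimum at x =", float(x[0]), "E =", float(E[-1]))
```

A real implementation works in full Cartesian or internal coordinates, uses gradients of the surface, and minimizes the surrogate with a proper optimizer rather than a grid scan.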
Application of genetic algorithm in integrated setup planning and operation sequencing
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen
2011-01-01
Process planning is an essential component for linking design and the manufacturing process. Setup planning and operation sequencing are two main tasks in process planning. Much previous research has solved these two problems separately. Considering the fact that the two functions are complementary, it is necessary to integrate them more tightly so that the performance of a manufacturing system can be improved economically and competitively. This paper presents a generative system and a genetic algorithm (GA) approach to process-plan a given part. The proposed approach and optimization methodology analyse the TAD (tool approach direction), the tolerance relations between features and the feature precedence relations to generate all possible setups and operations using a workshop resource database. Based on these technological constraints, the GA approach, which adopts a feature-based representation, optimizes the setup plan and the sequence of operations using cost indices. A case study shows that the developed system can generate satisfactory results, optimizing setup planning and operation sequencing simultaneously under feasible conditions.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date CO strategies have been developed for use in application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
Vaisali, C; Belur, Prasanna D; Regupathi, Iyyaswami
2017-10-01
Lipophilization of antioxidants is recognized as an effective strategy to enhance solubility and thus effectiveness in lipid based food. In this study, an effort was made to optimize rutin fatty ester synthesis in two different solvent systems to understand the influence of reaction system hydrophobicity on the optimum conditions using immobilised Candida antartica lipase. Under unoptimized conditions, 52.14% and 13.02% conversion was achieved in acetone and tert-butanol solvent systems, respectively. Among all the process parameters, water activity of the system was found to show highest influence on the conversion in each reaction system. In the presence of molecular sieves, the ester production increased to 62.9% in tert-butanol system, unlike acetone system. Under optimal conditions, conversion increased to 60.74% and 65.73% in acetone and tert-butanol system, respectively. This study shows, maintaining optimal water activity is crucial in reaction systems having polar solvents compared to more non-polar solvents. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fast machine-learning online optimization of ultra-cold-atom experiments.
Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R
2016-05-16
We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high-quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show that the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
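A generic, stripped-down version of such an online Gaussian-process learner can be sketched as below; the "experiment", the ramp parameterization and the acquisition rule are placeholders, not the authors' apparatus, and scikit-learn stands in for whatever GP implementation was actually used:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def run_experiment(params):
    """Placeholder for one experimental shot: returns a cost (e.g. negative BEC quality)."""
    return float(np.sum((params - 0.6) ** 2) + 0.01 * rng.normal())

dim, n_init, n_iter = 3, 5, 20          # e.g. 3 evaporation-ramp parameters
X = rng.uniform(0, 1, (n_init, dim))    # random initial shots
y = np.array([run_experiment(x) for x in X])

for _ in range(n_iter):
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X, y)                                    # statistical model of cost vs. parameters
    cand = rng.uniform(0, 1, (2000, dim))           # random candidate ramps
    mean, std = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mean - 2.0 * std)]      # lower-confidence-bound acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

print("best parameters found:", X[np.argmin(y)], "cost:", y.min())
```

The fitted model itself (its length scales and predicted sensitivities) is what lets the learner report which parameters matter and which do not.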
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff" elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues of external-load dependency and design reversibility, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed, and the cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through a multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
Short-Term Planning of Hybrid Power System
NASA Astrophysics Data System (ADS)
Knežević, Goran; Baus, Zoran; Nikolovski, Srete
2016-07-01
In this paper, a short-term planning algorithm is presented for a hybrid power system consisting of different types of cascaded hydropower plants (run-of-the-river, pumped-storage, conventional), thermal power plants (coal-fired power plants, combined-cycle gas-fired power plants) and wind farms. The optimization process provides a joint bid of the hybrid system and thus produces the operating schedule of the hydro and thermal power plants and the operating condition of the pumped-storage hydropower plants, with the aim of maximizing profit on the day-ahead market according to the expected hourly electricity prices, the expected local water inflows at certain hydropower plants, and the expected production of electrical energy from the wind farm, taking into account previously contracted bilateral agreements for electricity generation. The optimization is formulated as an hourly discretized mixed-integer linear optimization problem. The model is applied to a case study in order to show its general features.
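To make the hourly-discretized MILP idea concrete, here is a deliberately tiny, hypothetical sketch (one thermal unit, one pumped-storage reservoir, a three-hour horizon, invented prices and limits), written with the PuLP modeling library rather than the authors' full formulation:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

prices = [40.0, 25.0, 60.0]                 # expected day-ahead prices [EUR/MWh], invented
hours = range(len(prices))

prob = LpProblem("hybrid_day_ahead", LpMaximize)
g = [LpVariable(f"thermal_{t}", 0, 100) for t in hours]        # thermal output [MW]
u = [LpVariable(f"on_{t}", cat=LpBinary) for t in hours]       # unit commitment
p = [LpVariable(f"pump_{t}", 0, 50) for t in hours]            # pumping load [MW]
h = [LpVariable(f"hydro_{t}", 0, 50) for t in hours]           # hydro generation [MW]

# Profit = energy sold minus thermal fuel cost (pumping buys energy at market price).
prob += lpSum(prices[t] * (g[t] + h[t] - p[t]) - 30.0 * g[t] for t in hours)

for t in hours:
    prob += g[t] <= 100 * u[t]              # thermal output only when committed
    prob += g[t] >= 20 * u[t]               # minimum stable generation

# Simple reservoir energy balance over the horizon (80% pumping efficiency, 30 MWh stored).
prob += lpSum(h[t] for t in hours) <= 30 + 0.8 * lpSum(p[t] for t in hours)

prob.solve()
print([v.value() for v in g], [v.value() for v in h], [v.value() for v in p])
```

A realistic model adds cascade hydraulic coupling, ramping limits, wind forecasts and bilateral contract obligations as further constraints, but the structure of the hourly decision variables stays the same.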
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper finds optimal PID controller parameters using particle swarm optimization (PSO), a Genetic Algorithm (GA) and a Simulated Annealing (SA) algorithm. The algorithms are applied, through simulation, to a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the different algorithms has been done based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
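As a hedged sketch of just the PSO/ITAE ingredient (a toy first-order plant and a hand-rolled particle swarm, not the paper's coupled-tank or DC-motor benchmarks):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 10.0

def itae(gains, tau=1.0, k=2.0):
    """ITAE for a unit-step response of a PID loop around a first-order plant k/(tau*s + 1)."""
    kp, ki, kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + k * u) / tau        # explicit Euler step of the plant
        prev_err = err
        cost += t * abs(err) * dt           # time-weighted absolute error
        if abs(y) > 1e6:                    # penalize diverging closed loops
            return 1e9
    return cost

# Minimal particle swarm over (kp, ki, kd).
n_particles, n_iter, w, c1, c2 = 20, 40, 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 3.0, (n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([itae(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 3.0)
    vals = np.array([itae(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)]

print("tuned (kp, ki, kd):", gbest, "ITAE:", pbest_val.min())
```

Swapping in GA or SA only changes how candidate gain vectors are proposed; the closed-loop simulation and the ITAE (or time-domain specification) evaluation stay identical.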
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-11-11
Aiming at the problem of the high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods, based on the numerical characteristics of the system, is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, the method, as a numerical approach, needs no precision-losing transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theory is needed, the algorithm can easily be transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
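The detailed derivation is not given in the abstract, but the general trick of exploiting a sparse, selection-type measurement matrix can be sketched as follows (a generic numpy illustration, not the authors' block derivation):

```python
import numpy as np

def kf_update_full(x, P, z, H, R):
    """Textbook measurement update with full-size matrix products."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = P - K @ H @ P
    return x, P

def kf_update_sparse(x, P, z, idx, R):
    """Equivalent update when H merely selects the states in `idx`,
    so only the corresponding sub-blocks of P are ever touched."""
    S = P[np.ix_(idx, idx)] + R          # H P H^T for a selection matrix
    K = P[:, idx] @ np.linalg.inv(S)     # P H^T S^{-1}
    x = x + K @ (z - x[idx])
    P = P - K @ P[idx, :]
    return x, P

# Quick consistency check on a random SINS/GPS-sized state (15 states, 3 measured).
rng = np.random.default_rng(3)
n, idx = 15, [0, 1, 2]
A = rng.normal(size=(n, n))
P = A @ A.T + n * np.eye(n)
x = rng.normal(size=n)
z = rng.normal(size=3)
R = np.eye(3)
H = np.zeros((3, n))
H[range(3), idx] = 1.0

x1, P1 = kf_update_full(x.copy(), P.copy(), z, H, R)
x2, P2 = kf_update_sparse(x.copy(), P.copy(), z, idx, R)
print(np.allclose(x1, x2), np.allclose(P1, P2))   # both True
```

The two updates are algebraically identical; the second simply skips the multiplications by zero that the first performs, which is the flavor of saving the offline block-matrix derivation formalizes.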
Optimal control of raw timber production processes
Ivan Kolenka
1978-01-01
This paper demonstrates the possibility of optimal planning and control of timber harvesting activities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
NASA Astrophysics Data System (ADS)
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
Energy-efficient hierarchical processing in the network of wireless intelligent sensors (WISE)
NASA Astrophysics Data System (ADS)
Raskovic, Dejan
Sensor network nodes have benefited from technological advances in the field of wireless communication, processing, and power sources. However, the processing power of microcontrollers is often not sufficient to perform sophisticated processing, while the power requirements of digital signal processing boards or handheld computers are usually too demanding for prolonged system use. We are matching the intrinsic hierarchical nature of many digital signal-processing applications with the natural hierarchy in distributed wireless networks, and building the hierarchical system of wireless intelligent sensors. Our goal is to build a system that will exploit the hierarchical organization to optimize the power consumption and extend battery life for the given time and memory constraints, while providing real-time processing of sensor signals. In addition, we are designing our system to be able to adapt to the current state of the environment, by dynamically changing the algorithm through procedure replacement. This dissertation presents the analysis of hierarchical environment and methods for energy profiling used to evaluate different system design strategies, and to optimize time-effective and energy-efficient processing.
An optimal system design process for a Mars roving vehicle
NASA Technical Reports Server (NTRS)
Pavarini, C.; Baker, J.; Goldberg, A.
1971-01-01
The problem of determining the optimal design for a Mars roving vehicle is considered. A system model is generated by consideration of the physical constraints on the design parameters and the requirement that the system be deliverable to the Mars surface. An expression which evaluates system performance relative to mission goals as a function of the design parameters only is developed. The use of nonlinear programming techniques to optimize the design is proposed and an example considering only two of the vehicle subsystems is formulated and solved.
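A present-day analogue of that formulation (with entirely hypothetical design variables, performance metric and mass model, not the original 1971 rover subsystems) can be written directly with scipy's SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical design variables: wheel diameter d [m] and solar array area a [m^2].
def neg_performance(v):
    d, a = v
    # Toy "science return" metric rewarding mobility and available power.
    return -(np.log1p(5 * d) + 0.8 * np.log1p(a))

def mass(v):
    d, a = v
    return 20 * d**2 + 6 * a          # toy mass model [kg]

landed_mass_limit = 120.0             # deliverability constraint [kg]

res = minimize(
    neg_performance,
    x0=[0.5, 5.0],
    method="SLSQP",
    bounds=[(0.2, 2.0), (1.0, 20.0)],
    constraints=[{"type": "ineq", "fun": lambda v: landed_mass_limit - mass(v)}],
)
print("optimal (d, a):", res.x, "performance:", -res.fun)
```

The structure mirrors the paper's setup: physical bounds on the design parameters, a delivery-mass constraint, and a mission-goal objective expressed purely in terms of the design variables.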
Henshall, Chris; Schuller, Tara; Mardhani-Bayne, Logan
2012-07-01
Health systems face rising patient expectations and economic pressures; decision makers seek to enhance efficiency to improve access to appropriate care. There is international interest in the role of HTA to support decisions to optimize use of established technologies, particularly in "disinvesting" from low-benefit uses. This study summarizes main points from an HTAi Policy Forum meeting on this topic, drawing on presentations, discussions among attendees, and an advance background paper. Optimization involves assessment or re-assessment of a technology, a decision on optimal use, and decision implementation. This may occur within a routine process to improve safety and quality and create "headroom" for new technologies, or ad hoc in response to financial constraints. The term "disinvestment" is not always helpful in describing these processes. HTA contributes to optimization, but there is scope to increase its role in many systems. Stakeholders may have strong views on access to technology, and stakeholder involvement is essential. Optimization faces challenges including loss aversion and entitlement, stakeholder inertia and entrenchment, heterogeneity in patient outcomes, and the need to demonstrate convincingly absence of benefit. While basic HTA principles remain applicable, methodological developments are needed better to support optimization. These include mechanisms for candidate technology identification and prioritization, enhanced collection and analysis of routine data, and clinician engagement. To maximize value to decision makers, HTA should consider implementation strategies and barriers. Improving optimization processes calls for a coordinated approach, and actions are identified for system leaders, HTA and other health organizations, and industry.
Overview and Software Architecture of the Copernicus Trajectory Design and Optimization System
NASA Technical Reports Server (NTRS)
Williams, Jacob; Senent, Juan S.; Ocampo, Cesar; Mathur, Ravi; Davis, Elizabeth C.
2010-01-01
The Copernicus Trajectory Design and Optimization System represents an innovative and comprehensive approach to on-orbit mission design, trajectory analysis and optimization. Copernicus integrates state of the art algorithms in optimization, interactive visualization, spacecraft state propagation, and data input-output interfaces, allowing the analyst to design spacecraft missions to all possible Solar System destinations. All of these features are incorporated within a single architecture that can be used interactively via a comprehensive GUI interface, or passively via external interfaces that execute batch processes. This paper describes the Copernicus software architecture together with the challenges associated with its implementation. Additionally, future development and planned new capabilities are discussed. Key words: Copernicus, Spacecraft Trajectory Optimization Software.
Design optimization of a prescribed vibration system using conjoint value analysis
NASA Astrophysics Data System (ADS)
Malinga, Bongani; Buckner, Gregory D.
2016-12-01
This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.
Designing a Digital Instructional Management System To Optimize Early Education.
ERIC Educational Resources Information Center
Mooij, Ton
2002-01-01
Discusses digital instructional management systems (DIMSs) and describes a pilot study conducted in two Dutch kindergartens with a prototype DIMS that included individualization and optimization, that is matching curriculum with learner characteristics. Topics include learning processes for children at risk; and future plans. (LRW)
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-to-price ratio (TMS320C6712) and a large FLASH memory, the system permits loading and executing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially for the input channel, and allows a convenient interface between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is a suitable platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper shows how this hardware is adapted for a biometric identification system with high identification precision. The result shows that this hardware easily interfaces with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
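Whatever multi-objective engine sits behind such a framework, the basic bookkeeping step of isolating non-dominated portfolios is simple to sketch; the snippet below (with invented objectives, both to be minimized) illustrates only that step, not CSU's DSS:

```python
import numpy as np

def pareto_front(objs):
    """Return a boolean mask of non-dominated rows; all objectives are minimized."""
    objs = np.asarray(objs)
    keep = np.ones(len(objs), dtype=bool)
    for i in range(len(objs)):
        if not keep[i]:
            continue
        # Row j dominates row i if it is no worse everywhere and strictly better somewhere.
        dominated = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Hypothetical portfolio evaluations: (cost in $M, shortage frequency in %).
portfolios = np.array([[120, 8.0], [150, 3.5], [135, 5.0], [160, 3.6], [125, 7.9]])
print(portfolios[pareto_front(portfolios)])
```

In the planning setting described above, each row would come from a full MODSIM simulation under a hydrologic scenario, and the surviving rows are the tradeoff set presented to decision makers.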
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants, including the coal-fired, gasification-based FutureGen power and hydrogen production plant.
NASA Technical Reports Server (NTRS)
Connolly, Janis H.; Arch, M.; Elfezouaty, Eileen Schultz; Novak, Jennifer Blume; Bond, Robert L. (Technical Monitor)
1999-01-01
Design and Human Engineering (HE) processes strive to ensure that the human-machine interface is designed for optimal performance throughout the system life cycle. Each component can be tested and assessed independently to assure optimal performance, but it is not until full integration that the system and the inherent interactions between the system components can be assessed as a whole. HE processes (which define and apply requirements for human interaction with missions/systems) are included in space flight activities, but also need to be included in ground activities and, specifically, in ground facility testbeds such as Bio-Plex. A unique aspect of the Bio-Plex Facility is the integral issue of Habitability, which includes qualities of the environment that allow humans to work and live. HE is a process by which Habitability and system performance can be assessed.
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy with a variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of the virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Thirdly, the optimization problem (finding the optimum ITD) could be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD could provide a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and help to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
Bioregenerative food system cost based on optimized menus for advanced life support
NASA Technical Reports Server (NTRS)
Waters, Geoffrey C R.; Olabi, Ammar; Hunter, Jean B.; Dixon, Mike A.; Lasseur, Christophe
2002-01-01
Optimized menus for a bioregenerative life support system have been developed based on measures of crop productivity, food item acceptability, menu diversity, and the nutritional requirements of the crew. Crop-specific biomass requirements were calculated from menu recipe demands while accounting for food processing and preparation losses. Under the assumption of staggered planting, the optimized menu demanded a total crop production area of 453 m2 for six crew. The cost of the bioregenerative food system is estimated at 439 kg per menu cycle, or 7.3 kg ESM per crew member per day, including agricultural waste processing costs. On average, about 60% (263.6 kg ESM) of the food system cost is tied up in equipment, 26% (114.2 kg ESM) in labor, and 14% (61.5 kg ESM) in power and cooling. This cost is high compared to the STS and ISS (nonregenerative) systems, but reductions in ESM may be achieved through intensive crop productivity improvements, reductions in the equipment masses associated with crop production, and planning of production, processing, and preparation to minimize the requirement for crew labor.
Modeling Business Processes of the Social Insurance Fund in Information System Runa WFE
NASA Astrophysics Data System (ADS)
Kataev, M. Yu; Bulysheva, L. A.; Xu, Li D.; Loseva, N. V.
2016-08-01
Business processes are gradually becoming a tool that allows organizations to engage employees at a new level and to make document management systems more efficient. Most existing work, and the largest number of publications, is concentrated in these directions. However, business processes are still poorly implemented in public institutions, where it is very difficult to formalize the main existing processes. We attempt to build a system of business processes for state agencies such as the Russian Social Insurance Fund (SIF), where virtually all processes, given different inputs, have the same output: a public service. The parameters of the state services (as a rule, time limits) are set by state laws and regulations. The article provides a brief overview of the SIF, formulates requirements for its business processes, justifies the choice of software for business process modeling, creates working models in the Runa WFE system, and optimizes a model of one of the main business processes of the SIF. The result of the work in Runa WFE is an optimized model of this SIF business process.
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
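A bare-bones version of the idea (a hypothetical scalar AR(1) state rather than the article's general linear Gaussian system, and an ad hoc alarm threshold) is to propagate the Kalman predictive distribution a few steps ahead and raise an alarm when the predicted probability of exceeding a level L becomes large:

```python
import numpy as np
from scipy.stats import norm

a, q, r = 0.95, 0.2, 0.5      # AR(1) dynamics, process and measurement noise variances
L, horizon, p_alarm = 2.5, 5, 0.2

def kalman_step(x, P, z):
    # Predict, then update with measurement z.
    x, P = a * x, a * a * P + q
    K = P / (P + r)
    return x + K * (z - x), (1 - K) * P

def alarm(x, P):
    """Alarm if the d-step-ahead predictive probability of exceeding L is large."""
    for _ in range(horizon):
        x, P = a * x, a * a * P + q          # open-loop prediction
    return 1.0 - norm.cdf(L, loc=x, scale=np.sqrt(P)) > p_alarm

rng = np.random.default_rng(4)
x_hat, P, x_true = 0.0, 1.0, 0.0
for t in range(200):
    x_true = a * x_true + rng.normal(scale=np.sqrt(q))
    z = x_true + rng.normal(scale=np.sqrt(r))
    x_hat, P = kalman_step(x_hat, P, z)
    if alarm(x_hat, P):
        print(f"t={t}: predicted crossing of L={L} within {horizon} steps")
```

The optimal alarm region in the article is chosen to fix the detection probability and minimize false alarms; the fixed probability threshold above is only a crude stand-in for that design step.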
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage, over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
Synchronic interval Gaussian mixed-integer programming for air quality management.
Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong
2015-12-15
To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approaching the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region in Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. Interests of many stakeholders are reasonably coordinated. The harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. This can help decision makers mitigate potential risks, e.g. insufficiency of pollutant treatment capabilities, exceedance of air quality standards, deficiency of pollution control fund, or imbalance of economic or environmental stress, in the process of guiding AQM. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hirsch, Piotr; Duzinkiewicz, Kazimierz; Grochowski, Michał
2017-11-01
District Heating (DH) systems are commonly supplied using local heat sources. Nowadays, modern insulation materials allow for effective and economically viable heat transportation over long distances (over 20 km). In the paper a method for optimized selection of design and operating parameters of a long-distance Heat Transportation System (HTS) is proposed. The method allows for evaluation of the feasibility and effectiveness of heat transportation from the considered heat sources. The optimized selection is formulated as a multicriteria decision-making problem. The constraints for this problem include a static HTS model, allowing consideration of the system life cycle, time variability and spatial topology. Thereby, variation of heat demand and ground temperature within the DH area, insulation and pipe aging and/or the terrain elevation profile are taken into account in the decision-making process. The HTS construction costs, pumping power, and heat losses are considered as objective functions. Inner pipe diameter, insulation thickness, temperatures and pumping station locations are optimized during the decision-making process. Moreover, variants of pipe-laying, e.g. one pipeline with a larger diameter or two with a smaller one, might be considered during the optimization. The analyzed optimization problem is multicriteria, hybrid and nonlinear. Because of these problem properties, a genetic solver was applied.
Research Based on AMESim of Electro-hydraulic Servo Loading System
NASA Astrophysics Data System (ADS)
Li, Jinlong; Hu, Zhiyong
2017-09-01
The electro-hydraulic servo loading system is a subject studied by many researchers in the field of simulation and control worldwide. It is a loading device that simulates the aerodynamic moment and other forces acting on an object during motion, so that the stress on the object under dynamic load can be analyzed under laboratory conditions. The purpose of this paper is the design of the electro-hydraulic servo loading system in AMESim; PID control technology is used to configure the parameters of the control system, complete the loading process under different conditions, optimize the design parameters, and improve the dynamic performance of the loading system.
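The AMESim model itself cannot be reproduced here, but the PID loop structure being tuned can be sketched in a few lines. The plant below is a hypothetical first-order system standing in for the electro-hydraulic loading dynamics, and the gains and time constants are assumed values, not results from the paper.

```python
# Minimal discrete PID loop on a hypothetical first-order plant, standing in
# for the AMESim electro-hydraulic model described above. Plant and gain
# values are illustrative assumptions.

Kp, Ki, Kd = 2.0, 1.5, 0.05        # assumed PID gains
dt, tau, gain = 0.01, 0.2, 1.0     # time step; assumed plant time constant and gain

setpoint = 1.0
y, integral, prev_err = 0.0, 0.0, 0.0
for k in range(int(2.0 / dt)):
    err = setpoint - y
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = Kp * err + Ki * integral + Kd * derivative
    prev_err = err
    # first-order plant: tau*dy/dt + y = gain*u, forward-Euler discretized
    y += dt * (gain * u - y) / tau

print(f"final output = {y:.3f} (setpoint {setpoint})")
```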
NASA Technical Reports Server (NTRS)
Spurlock, Paul; Spurlock, Jack M.; Evanich, Peggy L.
1991-01-01
An overview of recent developments in process-control technology which might have applications in future advanced life support systems for long-duration space operations is presented. Consideration is given to design criteria related to control system selection and optimization, and process-control interfacing methodology. Attention is also given to current life support system process control strategies, innovative sensors, instrumentation and control, and innovations in process supervision.
Aircraft Flight Modeling During the Optimization of Gas Turbine Engine Working Process
NASA Astrophysics Data System (ADS)
Tkachenko, A. Yu; Kuz'michev, V. S.; Krupenich, I. N.
2018-01-01
The article describes a method for simulating the flight of the aircraft along a predetermined path, establishing a functional connection between the parameters of the working process of gas turbine engine and the efficiency criteria of the aircraft. This connection is necessary for solving the optimization tasks of the conceptual design stage of the engine according to the systems approach. Engine thrust level, in turn, influences the operation of aircraft, thus making accurate simulation of the aircraft behavior during flight necessary for obtaining the correct solution. The described mathematical model of aircraft flight provides the functional connection between the airframe characteristics, working process of gas turbine engines (propulsion system), ambient and flight conditions and flight profile features. This model provides accurate results of flight simulation and the resulting aircraft efficiency criteria, required for optimization of working process and control function of a gas turbine engine.
Optimization of cell seeding in a 2D bio-scaffold system using computational models.
Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong
2017-05-01
The cell expansion process is a crucial part of generating cells at a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models; the first model represents cell growth patterns whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo methods and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are optimal, and they illustrate the effectiveness of the proposed optimization method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimization of 15 parameters influencing the long-term survival of bacteria in aquatic systems
NASA Technical Reports Server (NTRS)
Obenhuber, D. C.
1993-01-01
NASA is presently engaged in the design and development of a water reclamation system for the future space station. A major concern in processing water is the control of microbial contamination. As a means of developing an optimal microbial control strategy, studies were undertaken to determine the type and amount of contamination which could be expected in these systems under a variety of changing environmental conditions. A laboratory-based Taguchi optimization experiment was conducted to determine the ideal settings for 15 parameters which influence the survival of six bacterial species in aquatic systems. The experiment demonstrated that the bacterial survival period could be decreased significantly by optimizing environmental conditions.
The effect of model uncertainty on some optimal routing problems
NASA Technical Reports Server (NTRS)
Mohanty, Bibhu; Cassandras, Christos G.
1991-01-01
The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
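The variance dependence of the optimal Bernoulli split can be illustrated with the Pollaczek-Khinchine mean-delay formula for two parallel M/G/1 queues. The sketch below is not the paper's analysis; the arrival rate, service moments, and scalar search over the routing probability are illustrative assumptions.

```python
# Sketch: optimal Bernoulli routing of a Poisson stream to two M/G/1 queues,
# using the Pollaczek-Khinchine mean-delay formula. Arrival and service
# parameters are illustrative assumptions.
from scipy.optimize import minimize_scalar

lam = 1.5                          # total Poisson arrival rate
ES = [0.4, 0.5]                    # mean service times of the two servers
ES2 = [0.4, 0.8]                   # second moments E[S^2] (encode the variance)

def mean_delay(p):
    """Mean system delay when a fraction p of arrivals goes to queue 1."""
    total = 0.0
    for i, pi in enumerate((p, 1.0 - p)):
        li = pi * lam
        rho = li * ES[i]
        if rho >= 1.0:
            return 1e9                           # unstable split -> penalize
        wq = li * ES2[i] / (2.0 * (1.0 - rho))   # P-K mean waiting time
        total += pi * (wq + ES[i])
    return total

res = minimize_scalar(mean_delay, bounds=(0.0, 1.0), method="bounded")
print(f"optimal fraction to queue 1: {res.x:.3f}, mean delay: {res.fun:.3f}")
```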
Continuous Firefly Algorithm for Optimal Tuning of PID Controller in AVR System
NASA Astrophysics Data System (ADS)
Bendjeghaba, Omar
2014-01-01
This paper presents a tuning approach based on the Continuous Firefly Algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters in an Automatic Voltage Regulator (AVR) system. In the tuning process the CFA is iterated to reach the optimal or near-optimal PID controller parameters, with the main goal of improving the AVR step response characteristics. The conducted simulations show the effectiveness and efficiency of the proposed approach. Furthermore, the proposed approach can improve the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method gives better control system performance in terms of time-domain specifications and set-point tracking.
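A generic continuous firefly algorithm can be sketched as follows. The cost function here is a simple stand-in rather than the AVR step-response criterion used in the paper, and the population size, attractiveness, and randomization parameters are assumed values.

```python
# Generic continuous firefly algorithm (CFA) sketch. The cost function is a
# placeholder; in the paper it would be a time-domain criterion computed from
# the AVR step response for candidate (Kp, Ki, Kd). All parameters assumed.
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    # hypothetical placeholder for the AVR performance index
    return np.sum((x - np.array([1.0, 0.6, 0.3])) ** 2)

n, dim, iters = 20, 3, 100
alpha, beta0, gamma = 0.2, 1.0, 1.0
lo, hi = 0.0, 2.0

X = rng.uniform(lo, hi, size=(n, dim))
F = np.array([cost(x) for x in X])

for _ in range(iters):
    for i in range(n):
        for j in range(n):
            if F[j] < F[i]:                      # firefly j is brighter
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                X[i] = np.clip(X[i], lo, hi)
                F[i] = cost(X[i])
    alpha *= 0.97                                # gradually reduce randomness

best = np.argmin(F)
print("best parameters:", X[best], "cost:", F[best])
```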
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
The Design of Large Geothermally Powered Air-Conditioning Systems Using an Optimal Control Approach
NASA Astrophysics Data System (ADS)
Horowitz, F. G.; O'Bryan, L.
2010-12-01
The direct use of geothermal energy from Hot Sedimentary Aquifer (HSA) systems for large scale air-conditioning projects involves many tradeoffs. Aspects contributing towards making design decisions for such systems include: the inadequately known permeability and thermal distributions underground; the combinatorial complexity of selecting pumping and chiller systems to match the underground conditions to the air-conditioning requirements; the future price variations of the electricity market; any uncertainties in future Carbon pricing; and the applicable discount rate for evaluating the financial worth of the project. Expanding upon the previous work of Horowitz and Hornby (2007), we take an optimal control approach to the design of such systems. By building a model of the HSA system, the drilling process, the pumping process, and the chilling operations, along with a specified objective function, we can write a Hamiltonian for the system. Using the standard techniques of optimal control, we use gradients of the Hamiltonian to find the optimal design for any given set of permeabilities, thermal distributions, and the other engineering and financial parameters. By using this approach, optimal system designs could potentially evolve in response to the actual conditions encountered during drilling. Because the granularity of some current models is so coarse, we will be able to compare our optimal control approach to an exhaustive search of parameter space. We will present examples from the conditions appropriate for the Perth Basin of Western Australia, where the WA Geothermal Centre of Excellence is involved with two large air-conditioning projects using geothermal water from deep aquifers at 75 to 95 degrees C.
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
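The linear programming step can be sketched with a toy problem of the same shape: maximize a linearized performance measure over small control trims subject to linearized operating limits. The sensitivity coefficients, limits, and trim bounds below are entirely hypothetical and are not the PSC onboard model values.

```python
# Toy linear-programming step of the kind used in PSC: maximize a linearized
# performance measure over small control trims subject to linearized operating
# limits. Sensitivity coefficients and bounds are entirely hypothetical.
from scipy.optimize import linprog

# decision variables: trims [d_nozzle_area, d_fuel_flow]
c = [-0.8, -1.2]            # negative thrust sensitivities (linprog minimizes)

# linearized constraints: temperature rise and stall-margin loss stay in limits
A_ub = [[0.5, 0.9],         # temperature increase per unit trim
        [0.3, -0.2]]        # stall-margin loss per unit trim
b_ub = [1.0, 0.6]           # allowable increases before hitting a limit

bounds = [(-0.5, 0.5), (-0.5, 0.5)]   # keep trims within the linear model's validity

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal trims:", res.x, "predicted performance gain:", -res.fun)
```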
Design and Optimization Method of a Two-Disk Rotor System
NASA Astrophysics Data System (ADS)
Huang, Jingjing; Zheng, Longxi; Mei, Qing
2016-04-01
An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS is proposed in this paper. Firstly, a two-disk rotor system was established, and its modes, harmonic response and transient response under acceleration conditions were analyzed with ANSYS, yielding the dynamic characteristics of the two-disk rotor system. On this basis, the two-disk rotor model was integrated into the multidisciplinary design optimization software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, optimization objectives and constraints were defined. After that, multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: the Evolutionary Optimization Algorithm, the Multi-Island Genetic Algorithm and the Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints. Meanwhile, the accuracy and number of calculations of the different optimization algorithms were compared. The optimization results indicated that the rotor vibration reached its minimum value and that design efficiency and quality were improved by the multidisciplinary design optimization while meeting the design requirements, providing a reference for improving the design efficiency and reliability of aero-engine rotors.
Optimal design of reverse osmosis module networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, capital cost for the process units, and operating costs associated with energy consumption and maintenance. Optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
Optimal physiological structure of small neurons to guarantee stable information processing
NASA Astrophysics Data System (ADS)
Zeng, S. Y.; Zhang, Z. Z.; Wei, D. Q.; Luo, X. S.; Tang, W. Y.; Zeng, S. W.; Wang, R. F.
2013-02-01
Spike is the basic element for neuronal information processing and the spontaneous spiking frequency should be less than 1 Hz for stable information processing. If the neuronal membrane area is small, the frequency of neuronal spontaneous spiking caused by ion channel noise may be high. Therefore, it is important to suppress the deleterious spontaneous spiking of the small neurons. We find by simulation of stochastic neurons with Hodgkin-Huxley-type channels that the leakage system is critical and extremely efficient to suppress the spontaneous spiking and guarantee stable information processing of the small neurons. However, within the physiological limit the potassium system cannot do so. The suppression effect of the leakage system is super-exponential, but that of the potassium system is quasi-linear. With the minor physiological cost and the minimal consumption of metabolic energy, a slightly lower reversal potential and a relatively larger conductance of the leakage system give the optimal physiological structure to suppress the deleterious spontaneous spiking and guarantee stable information processing of small neurons, dendrites and axons.
Optimal allocation model of construction land based on two-level system optimization theory
NASA Astrophysics Data System (ADS)
Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong
2007-06-01
The allocation of construction land is an important task in land-use planning. Whether the implementation of planning decisions succeeds or not usually depends on a reasonable and scientific distribution method. Considering the constitution of the land-use planning system and planning process in China, the allocation problem is in essence one of multi-level, multi-objective decision making. Planning quantity decomposition, in particular, is a two-level system optimization problem: an optimal resource allocation decision between a decision-maker at the upper level and a number of parallel decision-makers at the lower level. According to the characteristics of the decision-making process in a two-level decision-making system, this paper develops an optimal allocation model of construction land based on two-level linear programming. In order to verify the rationality and validity of our model, Baoan district of Shenzhen City has been taken as a test case. With the assistance of the allocation model, construction land is allocated to ten townships of Baoan district. The result obtained from our model is compared to that of the traditional method, and the comparison shows that our model is reasonable and usable. In the end, the paper points out the shortcomings of the model and further research directions.
A CPS Based Optimal Operational Control System for Fused Magnesium Furnace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Tian-you; Wu, Zhi-wei; Wang, Hong
Fused magnesia smelting in a fused magnesium furnace (FMF) is an energy-intensive process with high temperatures and comprehensive complexities. Its operational index, the energy consumption per ton (ECPT), is defined as the electrical energy consumed per ton of acceptable-quality product and is difficult to measure online. Moreover, the dynamics of ECPT cannot be precisely modelled mathematically. The model parameters of the three-phase currents of the electrodes, such as the molten pool level, its variation rate and the resistance, are uncertain and nonlinear functions of changes in both the smelting process and the raw material composition. In this paper, the proposed integrated optimal operational control algorithm is composed of a current set-point control, a current switching control and a self-optimized tuning mechanism. The tight conjoining of and coordination between the computational resources, including the integrated optimal operational control, embedded software, industrial cloud and wireless communication, and the physical resources of the FMF constitutes a cyber-physical system (CPS) based embedded optimal operational control system. This system has been successfully applied to a production line with ten fused magnesium furnaces in a factory in China, leading to a significantly reduced ECPT.
A new strategy of glucose supply in a microbial fermentation model
NASA Astrophysics Data System (ADS)
Kasbawati, Gunawan, A. Y.; Sidarto, K. A.; Hertadi, R.
2015-09-01
The glucose supply strategy used to achieve optimal productivity of ethanol production by a yeast cell is one of the main features of a microbial fermentation process. Besides the known continuous glucose supply, in this study we consider a new supply strategy, the so-called on-off supply. Optimal control theory is applied to the fermentation system to find the optimal rate of glucose supply and the time of supply. The optimization problem is solved numerically using a Differential Evolution algorithm. We find two alternative solutions that yield similar results: either a long process with a low supply rate or a short process with a high glucose supply rate.
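The on-off supply search can be sketched with SciPy's differential evolution applied to a toy Monod-type fermentation model. The kinetics, yields, and the two decision variables (supply rate and switch-off time) are illustrative assumptions, not the model used in the paper.

```python
# Sketch: Differential Evolution search for an "on-off" glucose supply
# schedule in a toy Monod-type fermentation model. The model, constants, and
# decision variables are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

mu_max, Ks, Yxs, Yps = 0.4, 0.5, 0.5, 0.3   # assumed kinetic and yield constants
T = 24.0                                     # total fermentation time [h]

def negative_ethanol(params):
    rate, t_off = params                     # glucose supply rate and switch-off time
    def rhs(t, z):
        X, S, P = z                          # biomass, glucose, ethanol
        S = max(S, 0.0)                      # guard against tiny negative overshoot
        mu = mu_max * S / (Ks + S)           # Monod growth rate
        feed = rate if t < t_off else 0.0    # on-off glucose supply
        return [mu * X, -mu * X / Yxs + feed, Yps * mu * X]
    sol = solve_ivp(rhs, (0.0, T), [0.1, 5.0, 0.0], rtol=1e-6)
    return -sol.y[2, -1]                     # maximize final ethanol

bounds = [(0.0, 2.0), (0.0, T)]              # supply rate [g/(L h)], switch-off time [h]
res = differential_evolution(negative_ethanol, bounds, seed=0, maxiter=40)
print("optimal (rate, t_off):", res.x, "final ethanol:", -res.fun)
```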
Maximizing the efficiency of multienzyme process by stoichiometry optimization.
Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-09-05
Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56 %. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Haapasalo, Erkka; Pellonpää, Juha-Pekka
2017-12-01
Various forms of optimality for quantum observables described as normalized positive-operator-valued measures (POVMs) are studied in this paper. We give characterizations for observables that determine the values of the measured quantity with probabilistic certainty or a state of the system before or after the measurement. We investigate observables that are free from noise caused by classical post-processing, mixing, or pre-processing of quantum nature. Especially, a complete characterization of pre-processing and post-processing clean observables is given, and necessary and sufficient conditions are imposed on informationally complete POVMs within the set of pure states. We also discuss joint and sequential measurements of optimal quantum observables.
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
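The "value of collecting additional information" can be illustrated with a tiny two-scenario example comparing the here-and-now design choice with the wait-and-see solution. The designs, costs, and probabilities below are hypothetical and unrelated to the IGCC case-study figures.

```python
# Tiny numeric illustration of the value of perfect information: compare a
# design chosen before uncertainty resolves (here-and-now) with the average
# of the best per-scenario choices (wait-and-see). Costs and probabilities
# are hypothetical, not the case-study values.
scenarios = {"low_NOx": 0.5, "high_NOx": 0.5}          # scenario probabilities
# annualized cost [k$] of each candidate design under each scenario
cost = {"design_A": {"low_NOx": 900, "high_NOx": 1500},
        "design_B": {"low_NOx": 1100, "high_NOx": 1200}}

# here-and-now: pick one design before knowing the scenario
expected = {d: sum(p * cost[d][s] for s, p in scenarios.items()) for d in cost}
here_and_now = min(expected.values())

# wait-and-see: pick the best design for each scenario, then average
wait_and_see = sum(p * min(cost[d][s] for d in cost) for s, p in scenarios.items())

evpi = here_and_now - wait_and_see
print(f"here-and-now cost: {here_and_now:.0f} k$, wait-and-see: {wait_and_see:.0f} k$")
print(f"expected value of perfect information: {evpi:.0f} k$/yr")
```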
Regression analysis as a design optimization tool
NASA Technical Reports Server (NTRS)
Perley, R.
1984-01-01
The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.
Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun
2018-01-01
One of the major benefits of carbon ion therapy is enhanced biological effectiveness at the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and 3 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hour for the prostate case (8282 spots), 0.2 hour for the pancreas case (3795 spots), and 0.3 hour for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
Smolensky, Paul; Goldrick, Matthew; Mathis, Donald
2014-08-01
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.
NASA Astrophysics Data System (ADS)
Mangaud, E.; Puthumpally-Joseph, R.; Sugny, D.; Meier, C.; Atabek, O.; Desouter-Lecomte, M.
2018-04-01
Optimal control theory is implemented with fully converged hierarchical equations of motion (HEOM) describing the time evolution of an open system density matrix strongly coupled to the bath in a spin-boson model. The populations of the two-level sub-system are taken as control objectives; namely, their revivals or exchange when switching off the field. We, in parallel, analyze how the optimal electric field consequently modifies the information back flow from the environment through different non-Markovian witnesses. Although the control field has a dipole interaction with the central sub-system only, its indirect influence on the bath collective mode dynamics is probed through HEOM auxiliary matrices, revealing a strong correlation between control and dissipation during a non-Markovian process. A heterojunction is taken as an illustrative example for modeling in a realistic way the two-level sub-system parameters and its spectral density function leading to a non-perturbative strong coupling regime with the bath. Although, due to strong system-bath couplings, control performances remain rather modest, the most important result is a noticeable increase of the non-Markovian bath response induced by the optimally driven processes.
Evolutionary design optimization of traffic signals applied to Quito city.
Armas, Rolando; Aguirre, Hernán; Daolio, Fabio; Tanaka, Kiyoshi
2017-01-01
This work applies evolutionary computation and machine learning methods to study the transportation system of Quito from a design optimization perspective. It couples an evolutionary algorithm with a microscopic transport simulator and uses the outcome of the optimization process to deepen our understanding of the problem and gain knowledge about the system. The work focuses on the optimization of a large number of traffic lights deployed on a wide area of the city and studies their impact on travel time, emissions and fuel consumption. An evolutionary algorithm with specialized mutation operators is proposed to search effectively in large decision spaces, evolving small populations for a short number of generations. The effects of the operators combined with a varying mutation schedule are studied, and an analysis of the parameters of the algorithm is also included. In addition, hierarchical clustering is performed on the best solutions found in several runs of the algorithm. An analysis of signal clusters and their geolocation, estimation of fuel consumption, spatial analysis of emissions, and an analysis of signal coordination provide an overall picture of the systemic effects of the optimization process.
Panorama parking assistant system with improved particle swarm optimization method
NASA Astrophysics Data System (ADS)
Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong
2013-10-01
A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
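A generic particle swarm optimization loop of the kind the IPSO builds on can be sketched as follows. The objective is a simple stand-in for the calibration (reprojection) error, and the swarm size, inertia, and acceleration coefficients are assumed values rather than the paper's improved settings.

```python
# Generic particle swarm optimization sketch. In the paper's setting the
# objective would be an image-alignment/reprojection error over bounded
# camera parameters; here a stand-in objective and assumed PSO parameters.
import numpy as np

rng = np.random.default_rng(2)

def objective(x):
    return np.sum(x ** 2)            # stand-in for calibration error

n, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
lo, hi = -5.0, 5.0

X = rng.uniform(lo, hi, (n, dim))
V = np.zeros((n, dim))
pbest, pbest_f = X.copy(), np.array([objective(x) for x in X])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = np.clip(X + V, lo, hi)
    f = np.array([objective(x) for x in X])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best error:", pbest_f.min(), "at", gbest)
```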
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
NASA Astrophysics Data System (ADS)
Migawa, Klaudiusz
2012-12-01
The issues presented in this paper concern the control of the exploitation process implemented in complex exploitation systems for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, using semi-Markov decision processes. The presented method focuses on preparing a decision model of the exploitation process of technical objects (a semi-Markov model) and then specifying the best (optimal) control strategy from among the possible decision variants, in accordance with the adopted criterion (or criteria) for evaluating the operation of the exploitation system. In the presented method, specifying the optimal strategy for controlling the availability of technical objects means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. In order to choose the optimal control strategy, a genetic algorithm was applied. The considerations are illustrated with the example of the exploitation process of means of transport implemented in a real municipal bus transport system. The model of the exploitation process for the means of transport was prepared on the basis of results obtained in that real transport system. The mathematical model of the exploitation process was built taking into account the fact that the model of the process constitutes a homogeneous semi-Markov process.
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays a key issue is reducing the energy consumption of road vehicles, and different energy optimization strategies can be found in particular solutions. The most popular, though not sophisticated, is so-called eco-driving, a strategy that emphasizes particular driver behavior. In a more sophisticated solution, the driver is supported by a control system that measures driving parameters and suggests proper operation. Another strategy is concerned with the application of various engineering solutions that aid optimization of energy consumption. Such systems take into consideration different parameters measured in real time and then take proper action according to procedures loaded into the vehicle's control computer. The third strategy is based on optimization of the vehicle design, taking into account especially the main sub-systems of the vehicle. In this approach the optimal level of energy consumption is obtained through the synergetic effect of individually optimizing particular constructional sub-systems of the vehicle. Three main sub-systems can be distinguished: the structural one, the drive one and the control one. In the case of the structural sub-system, optimization of the energy consumption level is related to optimization of the weight parameter and the aerodynamic parameter, the result being an optimized vehicle body. Regarding the drive sub-system, optimization of the energy consumption level is related to fuel or power consumption, using previously elaborated physical models. Finally, optimization of the control sub-system consists in determining optimal control parameters.
Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong
2014-01-01
The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
[Design of medical devices management system supporting full life-cycle process management].
Su, Peng; Zhong, Jianping
2014-03-01
Based on an analysis of the present status of medical device management, this paper optimizes the management process and develops a medical device management system with Web technologies. Information technology is used to dynamically track the state of use of medical devices over their entire life cycle. Through closed-loop management with pre-event budgeting, mid-event control and after-event analysis, the system improves the fine-grained management of medical devices, optimizes asset allocation and promotes effective operation of the devices.
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen
2015-09-18
This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of the features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of the nonlinear system and provides a feasible way to monitor industrial processes.
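An LS-SVM regressor reduces to solving one linear system in its dual form, which can be sketched directly. The synthetic data below stand in for the selected ultrasonic features, and the regularization and kernel-width values are assumptions rather than the tuned values from the paper.

```python
# Minimal LS-SVM regression sketch: solve the standard dual linear system
#   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
# with an RBF kernel. Synthetic data stand in for the ultrasonic features;
# gamma and the kernel width are assumed, not tuned as in the paper.
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# synthetic "features -> concentration" data
X = rng.uniform(-2, 2, size=(60, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=60)

gamma, sigma = 10.0, 1.0
K = rbf(X, X, sigma)
n = len(y)

A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def predict(Xnew):
    return rbf(Xnew, X, sigma) @ alpha + b

print("training RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```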
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on the instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, the boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show that the proposed optimal robust control method can maintain safe operation of the SOFC system with maximum efficiency under load and uncertainty variations.
Larson, David B; Malarik, Remo J; Hall, Seth M; Podberesky, Daniel J
2013-10-01
To evaluate the effect of an automated computed tomography (CT) radiation dose optimization and process control system on the consistency of estimated image noise and size-specific dose estimates (SSDEs) of radiation in CT examinations of the chest, abdomen, and pelvis. This quality improvement project was determined not to constitute human subject research. An automated system was developed to analyze each examination immediately after completion, and to report individual axial-image-level and study-level summary data for patient size, image noise, and SSDE. The system acquired data for 4 months beginning October 1, 2011. Protocol changes were made by using parameters recommended by the prediction application, and 3 months of additional data were acquired. Preimplementation and postimplementation mean image noise and SSDE were compared by using unpaired t tests and F tests. Common-cause variation was differentiated from special-cause variation by using a statistical process control individual chart. A total of 817 CT examinations, 490 acquired before and 327 acquired after the initial protocol changes, were included in the study. Mean patient age and water-equivalent diameter were 12.0 years and 23.0 cm, respectively. The difference between actual and target noise increased from -1.4 to 0.3 HU (P < .01) and the standard deviation decreased from 3.9 to 1.6 HU (P < .01). Mean SSDE decreased from 11.9 to 7.5 mGy, a 37% reduction (P < .01). The process control chart identified several special causes of variation. Implementation of an automated CT radiation dose optimization system led to verifiable simultaneous decrease in image noise variation and SSDE. The automated nature of the system provides the opportunity for consistent CT radiation dose optimization on a broad scale. © RSNA, 2013.
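The special-cause detection described above is typically done with an individuals (I) control chart; a minimal sketch with the standard moving-range limits is shown below. The data series is synthetic, with an artificial shift injected to play the role of a special cause.

```python
# Minimal individuals (I) control chart of the kind used to separate
# common-cause from special-cause variation in per-exam dose or noise data.
# 2.66 * mean moving range gives the standard I-chart limits; data synthetic.
import numpy as np

rng = np.random.default_rng(4)
noise_minus_target = rng.normal(0.0, 1.6, size=100)   # e.g. actual - target noise [HU]
noise_minus_target[70:] += 4.0                        # injected special cause

x = noise_minus_target
mr = np.abs(np.diff(x))                               # moving ranges
center, mr_bar = x.mean(), mr.mean()
ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar

out = np.where((x > ucl) | (x < lcl))[0]
print(f"center {center:.2f}, limits [{lcl:.2f}, {ucl:.2f}], out-of-control points: {out}")
```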
Elevated depressive symptoms enhance reflexive but not reflective auditory category learning.
Maddox, W Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G
2014-09-01
In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially-dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization and even less is known about the effects of individual difference in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. The implications of this work for dual-process approach to auditory learning and depression are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
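As an illustration of the harmony search loop described above, the following minimal Python sketch tunes two hypothetical isolation parameters (period and damping ratio) against a placeholder objective; in the actual study the objective would be the peak acceleration from a Matlab/Simulink time-history analysis subject to a displacement limit, and all parameter names, bounds and coefficients here are illustrative assumptions.

    # Minimal harmony search (HS) sketch for tuning two isolation parameters.
    # The objective is a placeholder standing in for a time-history simulation.
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):
        # Hypothetical stand-in for "peak superstructure acceleration".
        period, damping = x
        return (period - 3.0) ** 2 + 10.0 * (damping - 0.2) ** 2

    bounds = np.array([[2.0, 5.0],    # isolation period [s] (assumed range)
                       [0.05, 0.5]])  # damping ratio [-] (assumed range)
    hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.05, 2000

    # Initialize harmony memory with random vectors inside the bounds.
    hm = rng.uniform(bounds[:, 0], bounds[:, 1], size=(hms, 2))
    cost = np.array([objective(x) for x in hm])

    for _ in range(iters):
        new = np.empty(2)
        for j in range(2):
            if rng.random() < hmcr:                      # pick from memory
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                   # pitch adjustment
                    new[j] += bw * (bounds[j, 1] - bounds[j, 0]) * rng.uniform(-1, 1)
            else:                                        # random selection
                new[j] = rng.uniform(bounds[j, 0], bounds[j, 1])
            new[j] = np.clip(new[j], bounds[j, 0], bounds[j, 1])
        f = objective(new)
        worst = np.argmax(cost)
        if f < cost[worst]:                              # replace worst harmony
            hm[worst], cost[worst] = new, f

    print("best parameters:", hm[np.argmin(cost)], "cost:", cost.min())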
NASA Astrophysics Data System (ADS)
Yang, Chunhui; Su, Zhixiong; Wang, Xin; Liu, Yang; Qi, Yongwei
2017-03-01
The new normal of the economic situation and the implementation of a new round of electric power system reform place higher demands on the daily operation of power grid companies. As a core day-to-day activity, investment management directly determines a company's operating efficiency and management level. In this context, establishing an investment management optimization system will help power grid companies strengthen investment management and control, adapt quickly to a changing market environment, and meet policy requirements. Therefore, this paper constructs an investment management optimization system for power grid companies comprising an investment management system, an investment process control system, an investment structure optimization system, an investment project evaluation system, and an investment management information platform support system.
NASA Astrophysics Data System (ADS)
Ranjitha, P. Raj; Ratheesh, R.; Jayakumar, J. S.; Balakrishnan, Shankar
2018-02-01
The availability and utilization of energy and water are among the foremost global challenges of the new millennium. Water scarcity has become both a global and a regional challenge, with about 40% of the world population facing water shortage. It can be tackled only by increasing the water supply beyond what the hydrological cycle provides, either by desalinating seawater or by reusing waste water; the high energy requirement of either process must be overcome. Among the many desalination technologies, humidification-dehumidification (HDH) powered by solar energy is widely accepted for small-scale production. Detailed optimization studies of the system have the potential to utilize solar energy effectively for brackish water desalination. Dehumidification in particular requires further study because the dehumidifier effectiveness controls the energetic performance of the entire HDH system, owing to the high resistance to diffusion of dilute vapor through air in the dehumidifier. The present work optimizes the design of a bubble column dehumidifier for a solar-driven desalination process. The optimization is carried out through Matlab simulation, and the design process identifies the unique requirements of a bubble column dehumidifier in an HDH system.
Optimization analysis of thermal management system for electric vehicle battery pack
NASA Astrophysics Data System (ADS)
Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong
2018-04-01
Temperature rise in an electric vehicle battery pack affects the cycle life, charge acceptance, power, energy, safety and reliability of the power battery system. Computational Fluid Dynamics (CFD) simulations and experiments of the charging and discharging process were carried out for the battery pack thermal management system under continuous charging, and the experimental data were used to verify the CFD calculation model. In view of the large temperature difference across the battery module in a high-temperature environment, three optimizations of the existing thermal management system were proposed: adjusting the installation position of the fan, optimizing the arrangement of the battery pack, and reducing the fan opening temperature threshold. The feasibility of these optimizations is demonstrated by simulation and experiment on the optimized thermal management system.
[Development of a medical equipment support information system based on PDF portable document].
Cheng, Jiangbo; Wang, Weidong
2010-07-01
Based on the organizational structure and management system of hospital medical engineering support, the medical engineering support workflow was integrated to ensure that medical engineering data are collected effectively, accurately and comprehensively and kept in electronic archives. The workflow of medical equipment support work was analysed and all work processes were recorded in portable electronic documents. Using XML middleware technology and an SQL Server database, process management, data calculation, submission, storage and other functions were implemented. Practical application shows that the system optimizes the existing work process, making it standardized, digital, automatic, efficient, orderly and controllable. A medical equipment support information system based on portable electronic documents can effectively optimize and improve hospital medical engineering support work, improve performance, reduce costs, and provide complete and accurate digital data.
A trust-region algorithm for the optimization of PSA processes using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
The last few decades have seen a considerable increase in the applications of adsorptive gas separation technologies, such as pressure swing adsorption (PSA); the applications range from bulk separations to trace contaminant removal. PSA processes are based on solid-gas equilibrium and operate under periodic transient conditions [1]. Bed models for these processes are therefore defined by coupled nonlinear partial differential and algebraic equations (PDAEs) distributed in space and time, with periodic boundary conditions that connect the processing steps together and high nonlinearities arising from non-isothermal effects and nonlinear adsorption isotherms. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current nonlinear programming algorithms. Model reduction is a powerful methodology that permits systematic generation of cost-efficient low-order representations of large-scale systems that result from discretization of such PDAEs. In particular, low-dimensional approximations can be obtained from reduced order modeling (ROM) techniques based on proper orthogonal decomposition (POD) and can be used as surrogate models in the optimization problems. In this approach, a representative ensemble of solutions of the dynamic PDAE system is constructed by solving a higher-order discretization of the model using the method of lines, followed by the application of the Karhunen-Loeve expansion to derive a small set of empirical eigenfunctions (POD modes). These modes are used as basis functions within a Galerkin projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDAE system. This approach leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient [2]. The ROM methodology has been successfully applied to a 2-bed 4-step PSA process used for separating a hydrogen-methane mixture in [3]. The reduced order model developed was successfully used to optimize this process to maximize hydrogen recovery within a trust region. We extend this approach in this work to develop a rigorous trust-region algorithm for ROM-based optimization of PSA processes. The trust-region update rules and a sufficient decrease condition for the objective are used to determine the size of the trust region. Based on the decrease in the objective function and the error in the ROM, a ROM update strategy is designed [4, 5]. Inequalities and bounds are handled in the algorithm using an exact penalty formulation, and the non-smooth trust-region algorithm by Conn et al. [6] is used to handle non-differentiability. To ensure that the first-order consistency condition is met and that the optimum obtained from ROM-based optimization corresponds to the optimum of the original problem, a scaling function, such as the one proposed by Alexandrov et al. [7], is incorporated in the objective function. Such an error control mechanism is also capable of handling numerical inconsistencies such as unphysical oscillations in the state variable profiles. The proposed methodology is applied to optimize a PSA process to concentrate CO2 from a nitrogen-carbon dioxide mixture. As in [3], separate ROMs are developed for each operating step with different POD modes for each state variable. Numerical results will be presented for optimization case studies which involve maximizing CO2 recovery, feed throughput or minimizing overall power consumption.
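The POD step at the heart of this ROM approach can be illustrated with a short, generic Python sketch: snapshots of a toy moving-front profile are decomposed by SVD into Karhunen-Loeve modes and the trajectory is projected onto the dominant ones. This is not the authors' PSA model; the snapshot data, energy threshold and dimensions are illustrative assumptions.

    # Minimal proper orthogonal decomposition (POD) sketch: extract dominant
    # modes from a snapshot matrix and project the trajectory onto them.
    import numpy as np

    # Hypothetical snapshot ensemble: each column is a spatial profile at one time.
    n_space, n_time = 200, 80
    x = np.linspace(0.0, 1.0, n_space)
    snapshots = np.array([np.tanh(20 * (x - 0.2 - 0.6 * t / n_time))
                          for t in range(n_time)]).T   # moving-front toy data

    # Karhunen-Loeve / POD modes from the thin SVD of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999) + 1)        # modes capturing 99.9% energy
    modes = U[:, :r]

    # Galerkin-style projection: reduced coordinates and reconstruction error.
    a = modes.T @ snapshots                            # r x n_time reduced states
    recon = modes @ a
    err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
    print(f"kept {r} of {n_time} modes, relative reconstruction error = {err:.2e}")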
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: the fermentor, the cell-retention system (tangential microfiltration), and the vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems found with the deterministic model, such as lack of convergence and high computational time, make the optimization strategy based on the statistical model, which proved to be robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
Markert, Sven; Joeris, Klaus
2017-01-01
We developed an automated microtiter plate (MTP)-based system for suspension cell culture to meet the increased demands for miniaturized high throughput applications in biopharmaceutical process development. The generic system is based on off-the-shelf commercial laboratory automation equipment and is able to utilize MTPs of different configurations (6-24 wells per plate) in orbital shaken mode. The shaking conditions were optimized by Computational Fluid Dynamics simulations. The fully automated system handles plate transport, seeding and feeding of cells, daily sampling, and preparation of analytical assays. The integration of all required analytical instrumentation into the system enables a hands-off operation which prevents bottlenecks in sample processing. The modular set-up makes the system flexible and adaptable for a continuous extension of analytical parameters and add-on components. The system proved suitable as screening tool for process development by verifying the comparability of results for the MTP-based system and bioreactors regarding profiles of viable cell density, lactate, and product concentration of CHO cell lines. These studies confirmed that 6 well MTPs as well as 24 deepwell MTPs were predictive for a scale up to a 1000 L stirred tank reactor (scale factor 1:200,000). Applying the established cell culture system for automated media blend screening in late stage development, a 22% increase in product yield was achieved in comparison to the reference process. The predicted product increase was subsequently confirmed in 2 L bioreactors. Thus, we demonstrated the feasibility of the automated MTP-based cell culture system for enhanced screening and optimization applications in process development and identified further application areas such as process robustness. The system offers a great potential to accelerate time-to-market for new biopharmaceuticals. Biotechnol. Bioeng. 2017;114: 113-121. © 2016 Wiley Periodicals, Inc.
Potential use of advanced process control for safety purposes during attack of a process plant.
Whiteley, James R
2006-03-17
Many refineries and commodity chemical plants employ advanced process control (APC) systems to improve throughputs and yields. These APC systems utilize empirical process models for control purposes and enable operation closer to constraints than can be achieved with traditional PID regulatory feedback control. Substantial economic benefits are typically realized from the addition of APC systems. This paper considers leveraging the control capabilities of existing APC systems to minimize the potential impact of a terrorist attack on a process plant (e.g., petroleum refinery). Two potential uses of APC are described. The first is a conventional application of APC and involves automatically moving the process to a reduced operating rate when an attack first begins. The second is a non-conventional application and involves reconfiguring the APC system to optimize safety rather than economics. The underlying intent in both cases is to reduce the demands on the operator to allow focus on situation assessment and optimal response planning. An overview of APC is provided along with a brief description of the modifications required for the proposed new applications of the technology.
Optimal diabatic dynamics of Majorana-based quantum gates
NASA Astrophysics Data System (ADS)
Rahmani, Armin; Seradjeh, Babak; Franz, Marcel
2017-08-01
In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. In the adiabatic regime, with timescales set by the inverse gap of the system, the errors can be made arbitrarily small by performing the process more slowly. To enhance the performance of quantum information processing with Majorana zero modes, we apply the theory of optimal control to the diabatic dynamics of Majorana-based qubits. While we sacrifice complete topological protection, we impose constraints on the optimal protocol to take advantage of the nonlocal nature of topological information and increase the robustness of our gates. By using Pontryagin's maximum principle, we show that robust equivalent gates to perfect adiabatic braiding can be implemented in finite times through optimal pulses. In our implementation, modifications to the device Hamiltonian are avoided. Focusing on thermally isolated systems, we study the effects of calibration errors and external white and 1/f (pink) noise on Majorana-based gates. While a noise-induced antiadiabatic behavior, where a slower process creates more diabatic excitations, prohibits indefinite enhancement of the robustness of the adiabatic scheme, our fast optimal protocols exhibit remarkable stability to noise and have the potential to significantly enhance the practical performance of Majorana-based information processing.
Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick
2014-12-01
As the demand for new drugs is rising, the pharmaceutical industry faces the quest of shortening development time, and thus, reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T
2016-01-01
The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller.
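A small sketch of the aggregated objective used in this kind of study, the summed integral of time-weighted squared error (ITSE) over all decoupled loops, approximated on a uniform time grid; the error signals below are hypothetical stand-ins for closed-loop simulation output.

    import numpy as np

    def total_itse(t, errors):
        # Sum of integral time-weighted squared errors over all decoupled loops,
        # approximated by a rectangle rule on a uniform time grid.
        dt = t[1] - t[0]
        return float(sum(np.sum(t * e**2) * dt for e in errors))

    t = np.linspace(0.0, 50.0, 5001)
    e1 = np.exp(-0.3 * t) * np.sin(t)     # hypothetical error of loop 1
    e2 = 0.5 * np.exp(-0.2 * t)           # hypothetical error of loop 2
    print("summed ITSE:", round(total_itse(t, [e1, e2]), 4))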
Model-based thermal system design optimization for the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-10-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Mohamed, Amr E.; Dorrah, Hassen T.
2016-01-01
The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is to be decoupled into several input/output pairings (loops), so that a single controller can be assigned for each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for decoupling and control schemes, which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized for the minimization of summation of the integral time-weighted squared errors (ITSEs) for all control loops. This optimization technique constitutes a hybrid between two techniques, which are the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures low mathematical burdens and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in the time domain behavior and robustness over the conventional PID controller. PMID:27807444
Application of Particle Swarm Optimization in Computer Aided Setup Planning
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen; Abedini, Vahid
2011-01-01
Recent research aims to integrate computer aided design (CAD) and computer aided manufacturing (CAM) environments. The role of process planning is to convert the design specification into manufacturing instructions. Setup planning plays a basic role in computer aided process planning (CAPP) and significantly affects the overall cost and quality of the machined part. This research focuses on the automatic generation of setups and on finding the best feasible setup plan. To computerize the setup planning process, the proposed system performs three major steps: a) extraction of the machining data of the part; b) analysis and generation of all possible setups; c) optimization to reach the best setup plan based on cost functions. Considering workshop resources such as machine tool, cutter and fixture, all feasible setups are generated. The problem is then subjected to technological constraints such as tool approach direction (TAD), tolerance relationships and feature precedence relationships to keep the approach realistic and practical. The optimal setup plan is obtained by applying the particle swarm optimization (PSO) algorithm with these cost functions. A real sample part is used to demonstrate the performance and productivity of the system.
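A minimal Python sketch of the PSO velocity/position update rule that drives such a search; the real setup-planning problem is discrete and constrained by TADs, tolerances and precedence relations, so the continuous placeholder cost function and all parameter values below are illustrative assumptions only.

    # Minimal particle swarm optimization (PSO) sketch showing the standard
    # velocity/position update with personal and global bests.
    import numpy as np

    rng = np.random.default_rng(2)

    def cost(x):                       # hypothetical setup-cost surrogate
        return np.sum((x - 0.7) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

    dim, n_particles, iters = 6, 30, 300
    w, c1, c2 = 0.72, 1.49, 1.49       # commonly used PSO coefficients

    pos = rng.uniform(0, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("best cost:", pbest_val.min())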
Noise tolerant illumination optimization applied to display devices
NASA Astrophysics Data System (ADS)
Cassarly, William J.; Irving, Bruce
2005-02-01
Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance but those predictions have statistical uncertainty, the use of noise tolerant optimization algorithms is important. The use of noise tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.
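One simple way to make an optimizer tolerant of Monte Carlo noise, in the spirit described above, is to average several noisy evaluations per candidate and accept a move only when it beats the incumbent by more than the estimated noise. The sketch below is a generic illustration with a hypothetical merit function, not the hybrid mesh-feedback/classical scheme of the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    def noisy_merit(x, n_rays=2000):
        # Stand-in for a Monte Carlo uniformity merit: true value plus ray noise.
        true = np.sum((x - 0.4) ** 2)
        return true + rng.normal(0.0, 1.0 / np.sqrt(n_rays))

    def averaged(x, repeats=5):
        samples = [noisy_merit(x) for _ in range(repeats)]
        return np.mean(samples), np.std(samples) / np.sqrt(repeats)

    x = rng.uniform(0, 1, 4)                       # e.g. extractor-pattern densities
    fx, sx = averaged(x)
    for _ in range(400):
        cand = np.clip(x + rng.normal(0, 0.05, x.size), 0, 1)
        fc, sc = averaged(cand)
        if fc < fx - 2.0 * np.hypot(sx, sc):       # accept only clear improvements
            x, fx, sx = cand, fc, sc
    print("final merit estimate:", fx)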
General purpose graphic processing unit implementation of adaptive pulse compression algorithms
NASA Astrophysics Data System (ADS)
Cai, Jingxiao; Zhang, Yan
2017-07-01
This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
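For readers unfamiliar with the underlying operation, the following CPU/NumPy sketch shows conventional pulse compression as frequency-domain matched filtering of a linear FM chirp; on a GPU the same FFT-multiply-IFFT chain maps onto cuFFT/cuBLAS-style library calls as discussed above. The signal parameters and the injected echo are illustrative assumptions.

    import numpy as np

    fs, T, B = 10e6, 20e-6, 5e6                 # sample rate, pulse width, bandwidth
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t**2) # LFM reference pulse

    # Hypothetical received signal: delayed, scaled echo buried in noise.
    rng = np.random.default_rng(4)
    n = 4096
    rx = rng.normal(0, 0.1, n) + 1j * rng.normal(0, 0.1, n)
    delay = 1000
    rx[delay:delay + chirp.size] += 0.5 * chirp

    # Matched filter via FFT: multiply by the conjugate reference spectrum.
    nfft = int(2 ** np.ceil(np.log2(n + chirp.size - 1)))
    compressed = np.fft.ifft(np.fft.fft(rx, nfft) * np.conj(np.fft.fft(chirp, nfft)))
    print("peak at sample:", int(np.argmax(np.abs(compressed))))  # ~ delay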
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the order of the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate the proposed ROM methodology as a promising surrogate modeling technique for cost-effective optimization purposes.
Optimization technique of wavefront coding system based on ZEMAX externally compiled programs
NASA Astrophysics Data System (ADS)
Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2016-10-01
When the wavefront coding technique is used as a means of athermalization in an infrared imaging system, the design of the phase plate is the key to system performance. This paper applies ZEMAX externally compiled programs to the optimization of the phase mask within the normal optical design process: an evaluation function of the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the speed of optimization is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computing power of the mathematical software to find the optimal parameters of the phase mask and accelerating convergence with a genetic algorithm (GA); a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software realizes high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed with this method: the depth of focus increases by nearly 3 times with the rotationally symmetric phase mask and by up to 10 times with the cubic phase mask, the variation in the MTF decreases noticeably, and the optimized system operates over a temperature range of about -40° to 60°. The results show that, thanks to the externally compiled function and DDE, this optimization method makes it more convenient to define unconventional optimization goals and to optimize optical systems with special properties quickly, which is of particular significance for the optimization of unconventional optical systems.
NASA Astrophysics Data System (ADS)
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
2016-06-01
With the increased trend in automation of modern manufacturing industry, the human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in the selection of optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician/cutting tool expert based on his knowledge base or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using Mathworks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of an appropriate cutting tool and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.
Pixel-based OPC optimization based on conjugate gradients.
Ma, Xu; Arce, Gonzalo R
2011-01-31
Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
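A generic illustration of why conjugate gradients converge faster than steepest descent on this kind of quadratic image-fidelity cost: the linear CG iteration below solves a least-squares problem ||A m - z||^2 for a vectorized mask m under a hypothetical forward operator A. It omits the lithography imaging model, the MRC penalty and the projection step described in the paper.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 400
    A = np.eye(n) + 0.1 * rng.normal(size=(n, n))     # hypothetical forward operator
    z = rng.random(n)                                  # target (desired) pattern

    # Minimize f(m) = 0.5 ||A m - z||^2  <=>  solve (A^T A) m = A^T z by linear CG.
    H, b = A.T @ A, A.T @ z
    m = np.zeros(n)
    r = b - H @ m                                      # residual = negative gradient
    p = r.copy()
    for k in range(200):
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        m += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < 1e-8:
            break
        beta = (r_new @ r_new) / (r @ r)               # Fletcher-Reeves update
        p = r_new + beta * p
        r = r_new
    print("iterations:", k + 1, "residual:", np.linalg.norm(H @ m - b))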
Reduced-order model for dynamic optimization of pressure swing adsorption processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2007-01-01
Over the past decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas and liquid separation techniques, especially for high purity hydrogen purification from refinery gases. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep concentration and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. The study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. Initially, a representative ensemble of solutions of the dynamic PDE system is constructed by solving a higher-order discretization of the model using the method of lines, a two-stage approach that discretizes the PDEs in space and then integrates the resulting DAEs over time. Next, the ROM method applies the Karhunen-Loeve expansion to derive a small set of empirical eigenfunctions (POD modes) which are used as basis functions within a Galerkin projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDE system. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step with different POD modes for each of them. A significant reduction in the order of the number of states has been achieved. The gas-phase mole fraction, solid-state loading and temperature profiles from the low-order ROM and from the high-order simulations have been compared. Moreover, the profiles for a different set of inputs and parameter values fed to the same ROM were compared with the accurate profiles from the high-order simulations. Current results indicate the proposed ROM methodology as a promising surrogate modeling technique for cost-effective optimization purposes. Moreover, deviations from the ROM for a different set of inputs and parameters suggest that a recalibration of the model is required for the optimization studies. Results for these will also be presented with the aforementioned results.
Big Data Analysis of Manufacturing Processes
NASA Astrophysics Data System (ADS)
Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert
2015-11-01
The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.
Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong
2014-12-01
In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced for helping to verify the stability, which reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.
Optimization and resilience of complex supply-demand networks
NASA Astrophysics Data System (ADS)
Zhang, Si-Ping; Huang, Zi-Gang; Dong, Jia-Qi; Eisenberg, Daniel; Seager, Thomas P.; Lai, Ying-Cheng
2015-06-01
Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to constant increase in load requirement for resources and consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from the supplier sites to users through various links. Here by optimization we mean minimization of the maximum load on links, and system resilience can be characterized using the cascading failure size of users who fail to connect with suppliers. We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute the traffic but may undesirably transmit failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of cascading failure, but has little effect on the final cascading size; (6) system expansion typically reduces the efficiency; and (7) when the locations of the suppliers are optimized over a long expanding period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
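A toy Python sketch of the load-accumulation step underlying points (1)-(3): each user's unit demand is routed to its nearest supplier along shortest paths and the maximum link load, the quantity the optimization seeks to minimize, is recorded together with the number of unserved users. The network, supplier placement and demand model are illustrative assumptions.

    import networkx as nx
    import random

    random.seed(7)
    G = nx.erdos_renyi_graph(200, 0.03, seed=7)
    nodes = list(G.nodes())
    suppliers = set(random.sample(nodes, 10))
    users = [n for n in nodes if n not in suppliers]

    load = {frozenset(e): 0.0 for e in G.edges()}
    unserved = 0
    for u in users:
        # Nearest supplier by hop count.
        lengths = nx.single_source_shortest_path_length(G, u)
        reachable = [s for s in suppliers if s in lengths]
        if not reachable:
            unserved += 1                      # contributes to the cascading-failure size
            continue
        s = min(reachable, key=lambda v: lengths[v])
        path = nx.shortest_path(G, u, s)
        for a, b in zip(path[:-1], path[1:]):
            load[frozenset((a, b))] += 1.0     # one unit of demand per user

    print("max link load:", max(load.values()), "unserved users:", unserved)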
Conceptual design and multidisciplinary optimization of in-plane morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku; Sanders, Brian P.; Joo, James J.
2006-03-01
In this paper, the topology optimization methodology for the synthesis of a distributed actuation system, with specific applications to the morphing air vehicle, is discussed. The main emphasis is placed on the topology optimization problem formulations and the development of computational modeling concepts. For demonstration purposes, the in-plane morphing wing model is presented. The analysis model is developed to meet several important criteria: it must allow large rigid-body displacements, as well as variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Preliminary work has indicated that the proposed modeling concept meets these criteria and may be suitable for the purpose. Topology optimization is performed on the ground structure based on this modeling concept with design variables that control the system configuration. In other words, the state of each element in the model is a design variable to be determined through the optimization process. In effect, the optimization process assigns morphing members as 'soft' elements, non-morphing load-bearing members as 'stiff' elements, and non-existent members as 'voids.' In addition, the optimization process determines the location and relative force intensities of distributed actuators, which are represented computationally as equal and opposite nodal forces with soft axial stiffness. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulation itself. Sample in-plane morphing problems are solved to demonstrate the potential capability of the methodology introduced in this paper.
NASA Astrophysics Data System (ADS)
Fedorov, Yu V.
1995-10-01
A description is given of a novel optical system for optical information processing. An analysis is given of ways of increasing optoenergetic characteristics of optical information processing systems in which use is made of spatial light modulators with phase-relief (in thermoplastic materials) and polarisation (in crystalline structures of the DKDP type) information storage.
Tuning of PID controller using optimization techniques for a MIMO process
NASA Astrophysics Data System (ADS)
Thulasi dharan, S.; Kavyarasan, K.; Bagyaveereswaran, V.
2017-11-01
In this paper, two processes are considered: the quadruple tank process and the continuous stirred tank reactor (CSTR) process. Both are widely used in industrial applications across various domains, the CSTR especially in chemical plants. First, a mathematical model of each process is developed and then linearized, since both are MIMO processes. Because the controller is the key element that drives the whole process to the desired operating point, controller tuning plays a major role. For tuning the parameters we use two optimization techniques, Particle Swarm Optimization and the Genetic Algorithm, and select the best tuned values they produce. Finally, we compare the performance of each process under both techniques.
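A minimal sketch of metaheuristic PID tuning on a single loop: a discretized first-order toy plant replaces the quadruple-tank/CSTR models, the integral of time-weighted squared error is the objective, and SciPy's differential evolution stands in here for the PSO/GA optimizers compared in the paper; gains, bounds and plant constants are illustrative assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution

    dt, T = 0.05, 20.0
    t = np.arange(0.0, T, dt)

    def itse(gains):
        # Simulate a unit set-point response of a toy first-order plant
        # (2 dy/dt = -y + u) under a discrete PID and accumulate t * e^2 * dt.
        kp, ki, kd = gains
        y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
        for tk in t:
            err = 1.0 - y
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            y += dt * (-y + u) / 2.0
            cost += tk * err**2 * dt
            prev_err = err
        return cost

    res = differential_evolution(itse, bounds=[(0, 10), (0, 5), (0, 2)],
                                 seed=8, maxiter=60, tol=1e-6)
    print("tuned (Kp, Ki, Kd):", np.round(res.x, 3), "ITSE:", round(res.fun, 4))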
Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen
2015-10-01
Upstream processes are rather complex to design and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimized system must not necessarily be the one with highest product titers, but the one resulting in an overall superior process performance in up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium) which can be fused to any target protein allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capturing were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.
Enzymatic process optimization for the in vitro production of isoprene from mevalonate.
Cheng, Tao; Liu, Hui; Zou, Huibin; Chen, Ningning; Shi, Mengxun; Xie, Congxia; Zhao, Guang; Xian, Mo
2017-01-09
As an important bulk chemical for synthetic rubber, isoprene can be biosynthesized by robust microbes. However, rational engineering and optimization are often needed to make the in vivo process feasible because of the complexities of cellular metabolism. Alternative synthetic biochemistry strategies are developing rapidly to produce isoprene or isoprenoids in vitro. This study set up an in vitro enzymatic synthetic chemistry process using 5 enzymes of the lower mevalonate pathway to produce isoprene from mevalonate. We found that the level and ratio of the individual enzymes significantly affect the efficiency of the whole system. The optimized process using 10 balanced enzyme units (5.0 µM of MVK, PMK, MVD; 10.0 µM of IDI, 80.0 µM of ISPS) could produce 6323.5 µmol/L/h (430 mg/L/h) isoprene in a 2 ml in vitro system. In a scaled-up process (50 ml) using only 1 balanced enzyme unit (0.5 µM of MVK, PMK, MVD; 1.0 µM of IDI, 8.0 µM of ISPS), the system could produce 302 mg/L isoprene in 40 h, showing a higher production rate and a longer reaction phase in comparison with the in vivo control. By optimizing the enzyme levels of the lower MVA pathway, synthetic biochemistry methods can be established for the enzymatic production of isoprene or isoprenoids from mevalonate.
Truss systems and shape optimization
NASA Astrophysics Data System (ADS)
Pricop, Mihai Victor; Bunea, Marian; Nedelcu, Roxana
2017-07-01
Structure optimization is an important topic because of its benefits and wide applicability range, from civil engineering to the aerospace and automotive industries, contributing to a greener industry and life. Truss finite elements are still in use in many research/industrial codes for their simple stiffness matrix, and they naturally match the requirements of cellular materials, especially considering various 3D printing technologies. Optimality Criteria combined with Solid Isotropic Material with Penalization is the optimization method of choice, particularized for truss systems. Globally locked structures are obtained using a locally locked lattice organization, corresponding to structured or unstructured meshes. Post-processing is important for downstream application of the method, to make a faster link to CAD systems. To export the optimal structure to CATIA, a CATScript file is automatically generated. Results, findings and conclusions are given for two- and three-dimensional cases.
Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan
2016-08-22
Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The information that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods is evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers which can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods which are currently employed in systems and computational biology.
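The "continuous analogue" idea can be illustrated in a few lines: integrate the gradient-flow ODE dθ/dt = -∇J(θ), whose equilibria are the optima, with an adaptive ODE solver. The sketch below fits a toy exponential-decay model to synthetic data and omits the steady-state-manifold retraction that is central to the paper; the model, data and solver settings are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    data_t = np.linspace(0.0, 4.0, 20)
    data_y = 2.0 * np.exp(-0.8 * data_t)             # synthetic decay data

    def grad_flow(_, theta):
        # Negative gradient of J(a, k) = sum (a*exp(-k*t) - y)^2.
        a, k = theta
        resid = a * np.exp(-k * data_t) - data_y
        da = np.sum(2 * resid * np.exp(-k * data_t))
        dk = np.sum(2 * resid * (-a * data_t * np.exp(-k * data_t)))
        return [-da, -dk]                            # flow toward the optimum

    sol = solve_ivp(grad_flow, (0.0, 200.0), [1.0, 0.1], method="LSODA", rtol=1e-8)
    print("equilibrium (a, k):", np.round(sol.y[:, -1], 4))   # expected near (2, 0.8)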
Combined optimization of image-gathering and image-processing systems for scene feature detection
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.
1987-01-01
The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
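The Laplacian-of-Gaussian characteristic mentioned above can be written down directly from its closed form, as in the short sketch below; the paper's optimal detector additionally compensates for acquisition blur and the sampling lattice, which this unnormalized kernel does not. Kernel size and sigma are illustrative assumptions.

    import numpy as np

    def log_kernel(size=9, sigma=1.2):
        # Laplacian of an (unnormalized) Gaussian: ((r^2 - 2*sigma^2)/sigma^4) * exp(-r^2/(2*sigma^2)).
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx**2 + yy**2
        g = np.exp(-r2 / (2 * sigma**2))
        k = (r2 - 2 * sigma**2) / sigma**4 * g
        return k - k.mean()                   # zero mean so flat regions respond with 0

    kernel = log_kernel()
    print(kernel.shape, round(kernel.sum(), 6))   # (9, 9), approximately 0.0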
Optimization of dynamic envelope measurement system for high speed train based on monocular vision
NASA Astrophysics Data System (ADS)
Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong
2018-01-01
The dynamic envelope curve is defined as the maximum limit outline caused by various adverse effects during the running of the train and is an important basis for setting railway boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, but the present measuring system suffers from poor portability, a complicated process and high cost. In this paper, a new measurement system is designed based on monocular vision measurement theory and an analysis of the test environment; the measurement system parameters, the calibration of the wide-field-of-view camera and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, validating the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data processing workflow, and more reliable data, and it requires no matching algorithm.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
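As a concrete illustration of fitting a cost estimating relationship, the sketch below fits a simple power-law CER, cost = a * mass^b, by log-linear least squares; the data values and the single-metric form are assumptions for illustration only and are not taken from the report.

```python
import numpy as np

# Hypothetical analogous-product data: mass metric (kg) and observed cost (M$).
mass = np.array([120.0, 250.0, 480.0, 900.0, 1500.0])
cost = np.array([14.0, 26.0, 45.0, 78.0, 120.0])

# Fit cost = a * mass**b by linear least squares in log-log space:
# log(cost) = log(a) + b * log(mass)
b, log_a = np.polyfit(np.log(mass), np.log(cost), deg=1)
a = np.exp(log_a)
print(f"CER: cost ~= {a:.3f} * mass^{b:.3f}")

# Use the fitted CER to estimate the cost of a new system from its mass metric.
new_mass = 700.0
print(f"Estimated cost for mass {new_mass} kg: {a * new_mass**b:.1f} M$")
```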
NASA Space Engineering Research Center for utilization of local planetary resources
NASA Technical Reports Server (NTRS)
1992-01-01
Reports covering the period from 1 Nov. 1991 to 31 Oct. 1992 and documenting progress at the NASA Space Engineering Research Center are included. Topics covered include: (1) processing of propellants, volatiles, and metals; (2) production of structural and refractory materials; (3) system optimization discovery and characterization; (4) system automation and optimization; and (5) database development.
Development of a Groundwater Transport Simulation Tool for Remedial Process Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivarson, Kristine A.; Hanson, James P.; Tonkin, M.
2015-01-14
The groundwater remedy for hexavalent chromium at the Hanford Site includes operation of five large pump-and-treat systems along the Columbia River. The systems at the 100-HR-3 and 100-KR-4 groundwater operable units treat a total of about 9,840 liters per minute (2,600 gallons per minute) of groundwater to remove hexavalent chromium, and cover an area of nearly 26 square kilometers (10 square miles). The pump-and-treat systems result in large scale manipulation of groundwater flow direction, velocities, and most importantly, the contaminant plumes. Tracking of the plumes and predicting needed system modifications is part of the remedial process optimization, and is a continual process with the goal of reducing costs and shortening the timeframe to achieve the cleanup goals. While most of the initial system evaluations are conducted by assessing performance (e.g., reduction in contaminant concentration in groundwater and changes in inferred plume size), changes to the well field are often recommended. To determine the placement for new wells, well realignments, and modifications to pumping rates, it is important to be able to predict resultant plume changes. In smaller systems, it may be effective to make small scale changes periodically and adjust modifications based on groundwater monitoring results. Due to the expansive nature of the remediation systems at Hanford, however, additional tools were needed to predict the plume reactions to system changes. A computer simulation tool was developed to support pumping rate recommendations for optimization of large pump-and-treat groundwater remedy systems. This tool, called the Pumping Optimization Model, or POM, is based on a 1-layer derivation of a multi-layer contaminant transport model using MODFLOW and MT3D.
Optimization and resilience in natural resources management
Williams, Byron K.; Johnson, Fred A.
2015-01-01
We consider the putative tradeoff between optimization and resilience in the management of natural resources, using a framework that incorporates different sources of uncertainty that are common in natural resources management. We address one-time decisions, and then expand the decision context to the more complex problem of iterative decision making. For both cases we focus on two key sources of uncertainty: partial observability of system state and uncertainty as to system dynamics. Optimal management strategies will vary considerably depending on the timeframe being considered and the amount and quality of information that is available to characterize system features and project the consequences of potential decisions. But in all cases an optimal decision making framework, if properly identified and focused, can be useful in recognizing sound decisions. We argue that under the conditions of deep uncertainty that characterize many resource systems, an optimal decision process that focuses on robustness does not automatically induce a loss of resilience.
Zawada, James F; Yin, Gang; Steiner, Alexander R; Yang, Junhao; Naresh, Alpana; Roy, Sushmita M; Gold, Daniel S; Heinsohn, Henry G; Murray, Christopher J
2011-01-01
Engineering robust protein production and purification of correctly folded biotherapeutic proteins in cell-based systems is often challenging due to the requirements for maintaining complex cellular networks for cell viability and the need to develop associated downstream processes that reproducibly yield biopharmaceutical products with high product quality. Here, we present an alternative Escherichia coli-based open cell-free synthesis (OCFS) system that is optimized for predictable high-yield protein synthesis and folding at any scale with straightforward downstream purification processes. We describe how the linear scalability of OCFS allows rapid process optimization of parameters affecting extract activation, gene sequence optimization, and redox folding conditions for disulfide bond formation at microliter scales. Efficient and predictable high-level protein production can then be achieved using batch processes in standard bioreactors. We show how a fully bioactive protein produced by OCFS from optimized frozen extract can be purified directly using a streamlined purification process that yields a biologically active cytokine, human granulocyte-macrophage colony-stimulating factor, produced at titers of 700 mg/L in 10 h. These results represent a milestone for in vitro protein synthesis, with potential for the cGMP production of disulfide-bonded biotherapeutic proteins. Biotechnol. Bioeng. 2011; 108:1570–1578. © 2011 Wiley Periodicals, Inc. PMID:21337337
Optimization of scheduling system for plant watering using electric cars in agro techno park
NASA Astrophysics Data System (ADS)
Oktavia Adiwijaya, Nelly; Herlambang, Yudha; Slamin
2018-04-01
The Agro Techno Park at the University of Jember is a special area used for the development of agriculture, livestock and fishery. In this plantation, the plants are watered according to the frequency each plant needs. This research develops an optimized plant watering scheduling system using graph edge coloring. The research was conducted in three stages: data collection, analysis, and system development. The collected data were analyzed and then converted into a graph using a bipartite adjacency matrix representation. In the development stage, a web-based watering schedule optimization system was built. The results showed that the scheduling system is optimal because it maximizes the use of all electric cars for watering the plants and minimizes the number of idle cars.
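The scheduling idea rests on a proper edge coloring of a bipartite car-plot graph, with each color interpreted as a watering time slot. A minimal sketch using networkx is given below; the cars, plots, and watering requirements are hypothetical, and greedy coloring of the line graph is just one simple way to obtain a proper edge coloring, not necessarily the method used in the paper.

```python
import networkx as nx

# Hypothetical bipartite graph: electric cars on one side, plant plots on the
# other; an edge means "this car must water this plot".
tasks = [("car1", "plotA"), ("car1", "plotB"), ("car2", "plotA"),
         ("car2", "plotC"), ("car3", "plotB"), ("car3", "plotC")]
G = nx.Graph(tasks)

# A proper edge coloring of G is a proper vertex coloring of its line graph.
# Greedy coloring yields a valid (not necessarily minimum) coloring.
edge_colors = nx.greedy_color(nx.line_graph(G), strategy="largest_first")

# Interpret each color as a time slot: edges sharing a car or a plot never
# share a slot, so no car is double-booked and no plot is watered twice at once.
schedule = {}
for (u, v), slot in edge_colors.items():
    schedule.setdefault(slot, []).append((u, v))
for slot in sorted(schedule):
    print(f"slot {slot}: {schedule[slot]}")
```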
Multi-Objective Mission Route Planning Using Particle Swarm Optimization
2002-03-01
...solutions to complex problems using particles that interact with each other. Both Particle Swarm Optimization (PSO) and the Ant System (AS) have been...
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity.
Tomasik, Martin J; Knecht, Michaela; Freund, Alexandra M
2017-12-01
Based on optimal foraging theory, we propose a metric that allows evaluating the goodness of goal systems, that is, systems comprising multiple goals with facilitative and conflicting interrelations. This optimal foraging approach takes into account expectancy and value, as well as opportunity costs, of foraging. Applying this approach to goal systems provides a single index of the goodness of a goal system for goal striving. Three quasi-experimental studies (N = 277, N = 145, and N = 210) provide evidence for the usefulness of this approach for goal systems comprising 3 to 10 goals. Results indicate that persons with a more optimized goal system are more conscientious and open to new experience, are more likely to represent their goals in terms of means (i.e., adopt a process focus), and are more satisfied and engaged with their goals. Persons with a suboptimal goal system tend to switch their goals more often and thereby optimize their goal system. We discuss limitations as well as possible future directions of this approach.
Modeling joint restoration strategies for interdependent infrastructure systems
Simonovic, Slobodan P.
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction processes are presented. Both models consider failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300
Performance Optimization of Irreversible Air Heat Pumps Considering Size Effect
NASA Astrophysics Data System (ADS)
Bi, Yuehong; Chen, Lingen; Ding, Zemin; Sun, Fengrui
2018-06-01
Considering the size of an irreversible air heat pump (AHP), heating load density (HLD) is taken as the thermodynamic optimization objective by using finite-time thermodynamics. Based on an irreversible AHP model with infinite reservoir thermal-capacitance rate, the expression for the HLD of the AHP is put forward. The HLD optimization processes are studied analytically and numerically, and consist of two aspects: (1) choosing the pressure ratio; (2) distributing the heat-exchanger inventory. Heat reservoir temperatures, the heat transfer performance of the heat exchangers and the irreversibility during the compression and expansion processes are important factors influencing the performance of an irreversible AHP; they are characterized by the temperature ratio, the heat-exchanger inventory and the isentropic efficiencies, respectively. The impacts of these parameters on the maximum HLD are thoroughly studied. The research results show that HLD optimization can make the size of the AHP system smaller and improve the compactness of the system.
Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Qiheng; Zhang, Jianlin
2011-11-01
An efficient algorithm for blind image deconvolution and its high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations cover modularization of the algorithm structure for implementation feasibility, reduction of the data computation and dependency of the 2D-FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The Fast SeDDaRA is then proposed and specialized for low complexity. As the final implementation, a hardware image restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system is over 7.8 Msps. The optimization is proved efficient and feasible, and the Fast SeDDaRA is able to support real-time applications.
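One of the numerical accelerations mentioned, replacing a power operation with a segmented look-up table, can be sketched as follows; the exponent, segment count, and input range are illustrative assumptions chosen only to show the idea of trading a transcendental evaluation for an index, a multiply, and an add.

```python
import numpy as np

# Approximate f(x) = x**P on [0, 1) with a segmented look-up table: the domain
# is split into equal segments and each segment stores a precomputed linear fit.
P, N_SEG = 0.65, 64
edges = np.linspace(0.0, 1.0, N_SEG + 1)
x0, x1 = edges[:-1], edges[1:]
slope = (x1**P - x0**P) / (x1 - x0)
offset = x0**P - slope * x0

def lut_pow(x):
    """Segmented-LUT approximation of x**P (vectorized)."""
    idx = np.clip((x * N_SEG).astype(int), 0, N_SEG - 1)
    return slope[idx] * x + offset[idx]

x = np.random.default_rng(1).random(10000)
err = np.abs(lut_pow(x) - x**P)
print("max abs error:", err.max())
```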
System and method for bullet tracking and shooter localization
Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares problem for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
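To make the trajectory-estimation step concrete, below is a minimal constant-velocity Kalman filter tracking a streak centroid in image coordinates; the state model, noise levels, frame rate, and measurements are illustrative assumptions, not values from the patented system.

```python
import numpy as np

dt = 1.0 / 60.0                       # assumed frame interval (s)
F = np.array([[1, 0, dt, 0],          # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # only pixel position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 1e-3 * np.eye(4)                  # process noise covariance (assumed)
R = 4.0 * np.eye(2)                   # measurement noise covariance (assumed)

x = np.array([0.0, 0.0, 300.0, 120.0])  # [px, py, vx, vy] initial guess
P = 100.0 * np.eye(4)

def kf_step(x, P, z):
    """One predict/update cycle for a new streak measurement z = (px, py)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                                  # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([5.0, 2.1]), np.array([10.2, 4.0]), np.array([14.9, 6.2])]:
    x, P = kf_step(x, P, z)
print("estimated state [px, py, vx, vy]:", np.round(x, 2))
```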
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for the environmental control system (ECS) of a manned spacecraft to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS under an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when the power supply suddenly becomes insufficient.
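The Pareto-optimal frontier referred to here is simply the set of non-dominated solutions; a minimal sketch of extracting that set from sampled designs is shown below, with randomly generated objective values standing in for ECS evaluations (two minimized objectives, e.g. negative lifetime and power consumption).

```python
import numpy as np

def pareto_front(objs):
    """Return indices of non-dominated rows of objs (all objectives minimized)."""
    n = objs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Some j dominates i if j is <= i in every objective and < in at least one.
        dominated = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

rng = np.random.default_rng(42)
objs = rng.random((200, 2))          # stand-in objective values
front = pareto_front(objs)
print(f"{front.size} non-dominated designs out of {objs.shape[0]}")
```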
Mitchell, Peter D; Ratcliffe, Elizabeth; Hourd, Paul; Williams, David J; Thomas, Robert J
2014-12-01
It is well documented that cryopreservation and resuscitation of human embryonic stem cells (hESCs) is complex and ill-defined, and often suffers poor cell recovery and increased levels of undesirable cell differentiation. In this study we have applied Quality-by-Design (QbD) concepts to the critical processes of slow-freeze cryopreservation and resuscitation of hESC colony cultures. Optimized subprocesses were linked together to deliver a controlled complete process. We have demonstrated a rapid, high-throughput, and stable system for measurement of cell adherence and viability as robust markers of in-process and postrecovery cell state. We observed that measurement of adherence and viability of adhered cells at 1 h postseeding was predictive of cell proliferative ability up to 96 h in this system. Application of factorial design defined the operating spaces for cryopreservation and resuscitation, critically linking the performance of these two processes. Optimization of both processes resulted in enhanced reattachment and post-thaw viability, resulting in substantially greater recovery of cryopreserved, pluripotent cell colonies. This study demonstrates the importance of QbD concepts and tools for rapid, robust, and low-risk process design that can inform manufacturing controls and logistics.
Dunnett, Alex J; Adjiman, Claire S; Shah, Nilay
2008-01-01
Background: Lignocellulosic bioethanol technologies exhibit significant capacity for performance improvement across the supply chain through the development of high-yielding energy crops, integrated pretreatment, hydrolysis and fermentation technologies and the application of dedicated ethanol pipelines. The impact of such developments on cost-optimal plant location, scale and process composition within multiple plant infrastructures is poorly understood. A combined production and logistics model has been developed to investigate cost-optimal system configurations for a range of technological, system scale, biomass supply and ethanol demand distribution scenarios specific to European agricultural land and population densities. Results: Ethanol production costs for current technologies decrease significantly from $0.71 to $0.58 per litre with increasing economies of scale, up to a maximum single-plant capacity of 550 × 10⁶ l year⁻¹. The development of high-yielding energy crops and consolidated bio-processing realises significant cost reductions, with production costs ranging from $0.33 to $0.36 per litre. Increased feedstock yields result in systems of eight fully integrated plants operating within a 500 × 500 km² region, each producing between 1.24 and 2.38 × 10⁹ l year⁻¹ of pure ethanol. A limited potential for distributed processing and centralised purification systems is identified, requiring developments in modular, ambient pretreatment and fermentation technologies and the pipeline transport of pure ethanol. Conclusion: The conceptual and mathematical modelling framework developed provides a valuable tool for the assessment and optimisation of the lignocellulosic bioethanol supply chain. In particular, it can provide insight into the optimal configuration of multiple plant systems. This information is invaluable in ensuring (near-)cost-optimal strategic development within the sector at the regional and national scale. The framework is flexible and can thus accommodate a range of processing tasks, logistical modes, by-product markets and impacting policy constraints. Significant scope for application to real-world case studies through dynamic extensions of the formulation has been identified. PMID:18662392
NASA Technical Reports Server (NTRS)
Miller, David W.; Uebelhart, Scott A.; Blaurock, Carl
2004-01-01
This report summarizes work performed by the Space Systems Laboratory (SSL) for NASA Langley Research Center in the field of performance optimization for systems subject to uncertainty. The objective of the research is to develop design methods and tools to the aerospace vehicle design process which take into account lifecycle uncertainties. It recognizes that uncertainty between the predictions of integrated models and data collected from the system in its operational environment is unavoidable. Given the presence of uncertainty, the goal of this work is to develop means of identifying critical sources of uncertainty, and to combine these with the analytical tools used with integrated modeling. In this manner, system uncertainty analysis becomes part of the design process, and can motivate redesign. The specific program objectives were: 1. To incorporate uncertainty modeling, propagation and analysis into the integrated (controls, structures, payloads, disturbances, etc.) design process to derive the error bars associated with performance predictions. 2. To apply modern optimization tools to guide in the expenditure of funds in a way that most cost-effectively improves the lifecycle productivity of the system by enhancing the subsystem reliability and redundancy. The results from the second program objective are described. This report describes the work and results for the first objective: uncertainty modeling, propagation, and synthesis with integrated modeling.
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results.
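To illustrate the orthonormal parameterization idea, the sketch below represents a batch input trajectory with a truncated Fourier basis, so that an optimizer can work on a handful of coefficients instead of the full trajectory; the basis size and the example trajectory are assumptions, not quantities from the paper.

```python
import numpy as np

T, n_pts, n_terms = 1.0, 200, 5
t = np.linspace(0.0, T, n_pts)

def basis(t, n_terms, T):
    """Orthonormal-style Fourier basis (constant plus sines/cosines) on [0, T]."""
    cols = [np.ones_like(t) / np.sqrt(T)]
    for k in range(1, n_terms + 1):
        cols.append(np.sqrt(2.0 / T) * np.sin(2 * np.pi * k * t / T))
        cols.append(np.sqrt(2.0 / T) * np.cos(2 * np.pi * k * t / T))
    return np.column_stack(cols)

B = basis(t, n_terms, T)

# Example feed-rate trajectory (assumed) and its low-dimensional representation.
u = 0.5 + 0.3 * np.tanh(8 * (t - 0.4))
coeffs, *_ = np.linalg.lstsq(B, u, rcond=None)
u_hat = B @ coeffs
print("number of coefficients:", coeffs.size)
print("max reconstruction error:", np.abs(u_hat - u).max().round(4))
```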
Development of a nanosatellite de-orbiting system by reliability based design optimization
NASA Astrophysics Data System (ADS)
Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem
2015-12-01
This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.
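Of the probabilistic approaches listed, Monte Carlo simulation is the simplest to sketch: sample the uncertain inputs, evaluate a limit-state function, and estimate the failure probability together with a corresponding reliability index. The limit state and distributions below are purely illustrative and are not those of the 3USAT de-orbiting system.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 200_000

# Illustrative uncertain inputs: deployment force capacity vs. required force.
capacity = rng.normal(loc=12.0, scale=1.5, size=n)   # N, assumed distribution
demand = rng.normal(loc=8.0, scale=1.2, size=n)      # N, assumed distribution

# Limit-state function g = capacity - demand; failure occurs when g < 0.
g = capacity - demand
pf = np.mean(g < 0.0)
beta = -stats.norm.ppf(pf) if pf > 0 else np.inf     # generalized reliability index
print(f"failure probability ~ {pf:.2e}, reliability index beta ~ {beta:.2f}")
```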
Biometric Attendance and Big Data Analysis for Optimizing Work Processes.
Verma, Neetu; Xavier, Teenu; Agrawal, Deepak
2016-01-01
Although biometric attendance management is available, large healthcare organizations have difficulty in big data analysis for optimization of work processes. The aim of this project was to assess the implementation of a biometric attendance system and its utility following big data analysis. In this prospective study the implementation of biometric system was evaluated over 3 month period at our institution. Software integration with other existing systems for data analysis was also evaluated. Implementation of the biometric system could be successfully done over a two month period with enrollment of 10,000 employees into the system. However generating reports and taking action this large number of staff was a challenge. For this purpose software was made for capturing the duty roster of each employee and integrating it with the biometric system and adding an SMS gateway. This helped in automating the process of sending SMSs to each employee who had not signed in. Standalone biometric systems have limited functionality in large organizations unless it is meshed with employee duty roster.
Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process
NASA Astrophysics Data System (ADS)
Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.
2015-08-01
An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. This methodology utilizes (1) a casting process model developed within the commercial finite element package ABAQUS™ (ABAQUS is a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to process productivity and product quality improvements.
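The optimization layer of such a methodology couples a process model to a SciPy optimizer under solidification constraints. A stripped-down sketch of that pattern follows, with a toy cooling-rate function standing in for the ABAQUS casting model plus results extraction, and purely hypothetical constraint and objective definitions.

```python
import numpy as np
from scipy.optimize import minimize

TARGET_RATE = 3.0   # K/s, hypothetical target cooling rate

def cooling_rate(t):
    """Toy surrogate for the casting model: cooling rate as a function of two
    cooling-channel start times t = (t1, t2). Stands in for an ABAQUS run."""
    t1, t2 = t
    return 5.0 - 0.08 * t1 - 0.05 * t2

def objective(t):
    # Target a specific cooling rate (squared deviation from the target).
    return (cooling_rate(t) - TARGET_RATE) ** 2

constraints = [
    # Hypothetical directional-solidification constraint: lower-die cooling
    # must start at least 5 s before upper-die cooling.
    {"type": "ineq", "fun": lambda t: t[1] - t[0] - 5.0},
]
bounds = [(0.0, 60.0), (0.0, 60.0)]   # allowable timing window (s), assumed

res = minimize(objective, x0=[10.0, 20.0], method="SLSQP",
               bounds=bounds, constraints=constraints)
print("optimal cooling start times (s):", np.round(res.x, 2),
      "achieved rate:", round(cooling_rate(res.x), 3))
```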
Multi-objective optimization of GENIE Earth system models.
Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J
2009-07-13
The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.
Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner
2013-06-01
The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win and the DoE software MODDE® (Umetrics AB, Sweden) and therefore enables the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H was transformed for the expression of potential malaria vaccines. Compared with the initial cultivation results, the DoE optimization procedure allowed a doubling of intact protein secretion productivity. In a next step, robustness with regard to process parameter variability was proven around the determined optimum. Thereby, a significantly improved pharmaceutical production process was established within seven 24-hour cultivation cycles. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production.
Reliability of system for precise cold forging
NASA Astrophysics Data System (ADS)
Krušič, Vid; Rodič, Tomaž
2017-07-01
The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on the tool life for the closed-die forging process is presented in this paper. The scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created that contains the maximal scatter of the principal input parameters for the closed-die upsetting process which still ensures the desired dimensional accuracy of the product and the optimal tool life. Application of the adjustment of the process input parameters is shown on the example of making an inner race of a homokinetic joint in mass production. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequences of the forming operations in the multistage forming process of a starter barrel during the working stroke, the course of the resultant force is optimized.
FRANOPP: Framework for analysis and optimization problems user's guide
NASA Technical Reports Server (NTRS)
Riley, K. M.
1981-01-01
Framework for analysis and optimization problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems, providing the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.
Optimization of valve opening process for the suppression of impulse exhaust noise
NASA Astrophysics Data System (ADS)
Li, Jingxiang; Zhao, Shengdun
2017-02-01
Impulse exhaust noise generated by the sudden impact of the discharging flow of pneumatic systems has significant temporal characteristics, including high sound pressure and rapid sound transients. Impulse noise exposures are more hazardous to hearing than energy-equivalent uniform noise exposures. This paper presents a novel approach to suppress the peak sound pressure, a major indicator of the impulsiveness of impulse exhaust noise, by optimizing the opening process of the valve. Relationships between exhaust flow and impulse noise are described by thermodynamics and the noise generating mechanism. An optimized approach that controls the valve opening process is then derived under a constraint on the pre-set exhaust time. A modified servo-direct-driven valve was designed and assembled in a typical pneumatic system for verification experiments, in comparison with the original solenoid valve. Experimental results for groups of initial cylinder pressures and pre-set exhaust times are shown to verify the effects of the proposed optimization. Some indicators of energy equivalence and impulsiveness are introduced to discuss the effects of the noise suppression. The relationship between noise reduction and exhaust time delay is also discussed.
Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
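A finite-horizon dynamic programming solution over a joint Markov state can be sketched compactly by backward induction; the toy transition matrices, cost structure, and horizon below are illustrative assumptions rather than the fault-growth and loading models of the paper.

```python
import numpy as np

# Toy joint Markov model: 3 fault states x 2 control actions.
# P[a][s, s'] = transition probability under action a (assumed numbers).
P = [np.array([[0.80, 0.15, 0.05],
               [0.00, 0.85, 0.15],
               [0.00, 0.00, 1.00]]),      # action 0: aggressive use
     np.array([[0.95, 0.04, 0.01],
               [0.00, 0.95, 0.05],
               [0.00, 0.00, 1.00]])]      # action 1: derated use

# Immediate cost: derating loses performance; being failed (state 2) is costly.
cost = np.array([[0.0, 1.0],
                 [0.0, 1.0],
                 [10.0, 10.0]])
N = 20                                    # finite horizon length

V = np.zeros(3)                           # terminal cost (assumed zero)
policy = np.zeros((N, 3), dtype=int)
for k in range(N - 1, -1, -1):            # backward induction
    Q = np.stack([cost[:, a] + P[a] @ V for a in range(2)], axis=1)
    policy[k] = np.argmin(Q, axis=1)
    V = Q[np.arange(3), policy[k]]

print("optimal first-stage action per state:", policy[0])
print("expected cost-to-go from each state:", np.round(V, 2))
```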
Optimal control of multiphoton ionization dynamics of small alkali aggregates
NASA Astrophysics Data System (ADS)
Lindinger, A.; Bartelt, A.; Lupulescu, C.; Vajda, S.; Woste, Ludger
2003-11-01
We have performed transient multi-photon ionization experiments on small alkali clusters of different sizes in order to probe their wave packet dynamics, structural reorientations, charge transfers and dissociative events in different vibrationally excited electronic states, including their ground state. The observed processes were highly dependent on the irradiated pulse parameters, such as the wavelength range or the phase and amplitude, which emphasizes the need to employ a feedback control system for generating the optimum pulse shapes. Their spectral and temporal behavior reflects interesting properties of the investigated system and the irradiated photo-chemical process. First, we present the vibrational dynamics of bound electronically excited states of alkali dimers and trimers. The scheme for observing the wave packet dynamics in the electronic ground state using stimulated Raman pumping is shown. Since the employed pulse parameters significantly influence the efficiency of the irradiated dynamic pathways, photo-induced ionization experiments were carried out. The controllability of 3-photon ionization pathways is investigated on the model-like systems NaK and K2. A closed learning loop for adaptive feedback control is used to find the optimal fs pulse shape. Sinusoidal parameterizations of the spectral phase modulation are investigated in regard to the obtained optimal field. By reducing the number of parameters and thereby the complexity of the phase modulation, optimal pulse shapes can be generated that carry fingerprints of the molecule's dynamical properties. This enables finding "understandable" optimal pulse forms and offers the possibility to gain insight into the photo-induced control process. Characteristic motions of the involved wave packets are proposed to explain the optimized dynamic dissociation pathways.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim; Boukadoum, Mounir
2015-08-01
We present a new ensemble system for stock market returns prediction where continuous wavelet transform (CWT) is used to analyze return series and backpropagation neural networks (BPNNs) are used for processing CWT-based coefficients, determining the optimal ensemble weights, and providing final forecasts. Particle swarm optimization (PSO) is used for finding optimal weights and biases for each BPNN. To capture symmetry/asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on data from three Asian stock markets: the Hang Seng, KOSPI, and Taiwan stock markets. Three statistical metrics were used to evaluate the forecasting accuracy: the mean of absolute errors (MAE), the root mean of squared errors (RMSE), and the mean of absolute deviations (MAD). Experimental results showed that our proposed ensemble system outperformed the individual CWT-ANN models, each with a different wavelet function. In addition, the proposed ensemble system outperformed the conventional autoregressive moving average process. As a result, the proposed ensemble system is suitable for capturing symmetry/asymmetry in financial data fluctuations for better prediction accuracy.
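A bare-bones particle swarm optimizer for the ensemble-weighting step can be sketched as follows; the three "model forecasts" are synthetic stand-ins for the CWT-BPNN outputs, and all PSO hyperparameters are assumptions rather than values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: true returns and three imperfect model forecasts.
y = rng.standard_normal(300)
forecasts = np.stack([y + 0.3 * rng.standard_normal(300) for _ in range(3)])

def mae(w):
    """Mean absolute error of the weighted ensemble (weights normalized)."""
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)
    return np.mean(np.abs(w @ forecasts - y))

# Minimal PSO: 20 particles in 3-D weight space, standard velocity update.
n_p, dim, iters = 20, 3, 100
x = rng.random((n_p, dim)); v = np.zeros((n_p, dim))
pbest, pbest_f = x.copy(), np.array([mae(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_p, dim)), rng.random((n_p, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([mae(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best ensemble MAE:", round(mae(gbest), 4))
```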
Quintero, Catherine; Kariv, Ilona
2009-06-01
To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.
NASA Technical Reports Server (NTRS)
Gern, Frank; Vicroy, Dan D.; Mulani, Sameer B.; Chhabra, Rupanshi; Kapania, Rakesh K.; Schetz, Joseph A.; Brown, Derrell; Princen, Norman H.
2014-01-01
Traditional methods of control allocation optimization have shown difficulties in exploiting the full potential of controlling large arrays of control devices on innovative air vehicles. Artificial neural networks are inspired by biological nervous systems, and neurocomputing has successfully been applied to a variety of complex optimization problems. This project investigates the potential of applying neurocomputing to the control allocation optimization problem of Hybrid Wing Body (HWB) aircraft concepts to minimize control power, hinge moments, and actuator forces, while keeping system weights within acceptable limits. The main objective of this project is to develop a proof-of-concept process suitable to demonstrate the potential of using neurocomputing for optimizing actuation power for aircraft featuring multiple independently actuated control surfaces. A Nastran aeroservoelastic finite element model is used to generate a learning database of hinge moment and actuation power characteristics for an array of flight conditions and control surface deflections. An artificial neural network incorporating a genetic algorithm then uses this training data to perform control allocation optimization for the investigated aircraft configuration. The Phase I project showed that optimization results for the sum of required hinge moments are improved by more than 12% over the best Nastran solution by using the neural network optimization process.
State transformations and Hamiltonian structures for optimal control in discrete systems
NASA Astrophysics Data System (ADS)
Sieniutycz, S.
2006-04-01
Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by the difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is if and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ, two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at: discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed, with particular attention paid to models nonlinear in the time interval θ.
A new implementation of the programming system for structural synthesis (PROSSS-2)
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.
1984-01-01
This new implementation of the PROgramming System for Structural Synthesis (PROSSS-2) combines a general-purpose finite element computer program for structural analysis, a state-of-the-art optimization program, and several user-supplied, problem-dependent computer programs. The result is flexibility of the optimization procedure and versatility in the organization and formulation of constraints and design variables. The analysis-optimization process results in a minimized objective function, typically the mass. The analysis and optimization programs are executed repeatedly by looping through the system until the process is stopped by a user-defined termination criterion. However, some of the analysis, such as model definition, needs to be performed only once, and the results are saved for future use. The user must write some small, simple FORTRAN programs to interface between the analysis and optimization programs. One of these programs, the front processor, converts the design variables output from the optimizer into a format suitable for input into the analyzer. Another, the end processor, retrieves the behavior variables and, optionally, their gradients from the analysis program and evaluates the objective function and constraints and, optionally, their gradients. These quantities are output in a format suitable for input into the optimizer. These user-supplied programs are problem-dependent because they depend primarily upon which finite elements are being used in the model. PROSSS-2 differs from the original PROSSS in that the optimizer and the front and end processors have been integrated into the finite element computer program. This was done to reduce the complexity and increase the portability of the system, and to take advantage of the data handling features found in the finite element program.
Performance Management and Optimization of Semiconductor Design Projects
NASA Astrophysics Data System (ADS)
Hinrichs, Neele; Olbrich, Markus; Barke, Erich
2010-06-01
The semiconductor industry is characterized by fast technological changes and small time-to-market windows. Improving productivity is the key factor to stand up to the competitors and thus successfully persist in the market. In this paper a Performance Management System for analyzing, optimizing and evaluating chip design projects is presented. A task graph representation is used to optimize the design process regarding time, cost and workload of resources. Key Performance Indicators are defined in the main areas cost, profit, resources, process and technical output to appraise the project.
Evolutionary and biological metaphors for engineering design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakiela, M.
1994-12-31
Since computing became generally available, there has been strong interest in using computers to assist and automate engineering design processes. Specifically, for design optimization and automation, nonlinear programming and artificial intelligence techniques have been extensively studied. New computational techniques, based upon the natural processes of evolution, adaptation, and learning, are showing promise because of their generality and robustness. This presentation will describe the use of two such techniques, genetic algorithms and classifier systems, for a variety of engineering design problems. Structural topology optimization, meshing, and general engineering optimization are shown as example applications.
NASA Astrophysics Data System (ADS)
Ciminelli, Caterina; Dell'Olio, Francesco; Armenise, Mario N.; Iacomacci, Francesco; Pasquali, Franca; Formaro, Roberto
2017-11-01
A fiber optic digital link for on-board data handling is modeled, designed and optimized in this paper. Design requirements and constraints relevant to the link, which is set in the framework of novel on-board processing architectures, are discussed. Two possible link configurations are investigated, showing their advantages and disadvantages. An accurate mathematical model of each link component and of the entire system is reported, and the results of link simulations based on those models are presented. Finally, some details on the optimized design are provided.
Analysis and optimization of hybrid electric vehicle thermal management systems
NASA Astrophysics Data System (ADS)
Hamut, H. S.; Dincer, I.; Naterer, G. F.
2014-02-01
In this study, the thermal management system of a hybrid electric vehicle is optimized using single- and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, ¢28 h⁻¹ and 77.3 mPts h⁻¹, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to the baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
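The selection step, picking the Pareto point closest to an ideal point after normalization, can be sketched in a simplified LINMAP-style form as below; the Pareto points are hypothetical numbers in the spirit of (exergy efficiency, cost rate, environmental impact rate) and are not results from the paper, and this distance-to-ideal rule is only a simplified stand-in for the full LINMAP procedure.

```python
import numpy as np

# Hypothetical Pareto-front points: (exergy efficiency, cost rate in ¢/h,
# environmental impact rate in mPts/h). Efficiency is to be maximized,
# the other two objectives minimized.
front = np.array([[0.31, 27.5, 80.0],
                  [0.33, 29.0, 75.0],
                  [0.29, 26.0, 85.0],
                  [0.34, 31.0, 78.0]])
sense = np.array([-1.0, 1.0, 1.0])       # flip sign of maximized objectives

# Normalize each objective to [0, 1] and measure distance to the ideal point.
z = front * sense
z = (z - z.min(axis=0)) / (z.max(axis=0) - z.min(axis=0))
ideal = z.min(axis=0)                    # best value of each (minimized) objective
dist = np.linalg.norm(z - ideal, axis=1)
best = int(np.argmin(dist))
print("selected design:", front[best], "distance to ideal:", round(dist[best], 3))
```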
Large Aircraft Robotic Paint Stripping (LARPS) system and the high pressure water process
NASA Astrophysics Data System (ADS)
See, David W.; Hofacker, Scott A.; Stone, M. Anthony; Harbaugh, Darcy
1993-03-01
The aircraft maintenance industry is beset by new Environmental Protection Agency (EPA) guidelines on air emissions, Occupational Safety and Health Administration (OSHA) standards, dwindling labor markets, Federal Aviation Administration (FAA) safety guidelines, and increased operating costs. In light of these factors, the USAF's Wright Laboratory Manufacturing Technology Directorate and the Aircraft Division of the Oklahoma City Air Logistics Center initiated a MANTECH/REPTECH effort to automate an alternate paint removal method and eliminate the current manual methylene chloride chemical stripping methods. This paper presents some of the background and history of the LARPS program, describes the LARPS system, documents the projected operational flow, quantifies some of the projected system benefits and describes the High Pressure Water Stripping Process. Certification of an alternative paint removal method to replace the current chemical process is being performed in two phases: Process Optimization and Process Validation. This paper also presents the results of the Process Optimization for metal substrates. Data on the coating removal rate, residual stresses, surface roughness, preliminary process envelopes, and technical plans for process Validation Testing will be discussed.
Evaluation of laser cutting process with auxiliary gas pressure by soft computing approach
NASA Astrophysics Data System (ADS)
Lazov, Lyubomir; Nikolić, Vlastimir; Jovic, Srdjan; Milovančević, Miloš; Deneva, Heristina; Teirumenieka, Erika; Arsic, Nebojsa
2018-06-01
Evaluation of the optimal laser cutting parameters is very important for high cut quality. Laser cutting is a highly nonlinear process with many parameters, which is the main challenge in the optimization process. Data mining is one of the most versatile methodologies that can be used for laser cutting process optimization. A support vector regression (SVR) procedure is implemented since it is a versatile and robust technique for very nonlinear data regression. The goal of this study was to determine the optimal laser cutting parameters that ensure robust conditions for minimization of the average surface roughness. Three cutting parameters, the cutting speed, the laser power, and the assist gas pressure, were used in the investigation. A TruLaser 1030 technological system was used as the laser source, with nitrogen as the assist gas in the laser cutting process. As the data mining method, the support vector regression procedure was used. The data mining prediction accuracy was very high according to the coefficient of determination (R2) and the root mean square error (RMSE): R2 = 0.9975 and RMSE = 0.0337. Therefore the data mining approach can be used effectively for determination of the optimal conditions of the laser cutting process.
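The support vector regression step maps the three cutting parameters to average surface roughness; a minimal scikit-learn sketch of that regression is given below, using synthetic data in place of the experimental measurements (the roughness relationship inside is purely an assumed stand-in, as are the SVR hyperparameters).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)

# Synthetic stand-in for the experiments: cutting speed (m/min),
# laser power (kW), assist gas pressure (bar) -> average roughness Ra (um).
n = 120
X = np.column_stack([rng.uniform(1.0, 6.0, n),     # speed
                     rng.uniform(2.0, 4.0, n),     # power
                     rng.uniform(8.0, 16.0, n)])   # N2 pressure
Ra = (1.2 + 0.35 * X[:, 0] - 0.25 * X[:, 1] - 0.04 * X[:, 2]
      + 0.05 * rng.standard_normal(n))             # assumed relationship + noise

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:100], Ra[:100])                       # train on the first 100 samples
pred = model.predict(X[100:])                      # evaluate on the held-out 20
print("R2  :", round(r2_score(Ra[100:], pred), 4))
print("RMSE:", round(mean_squared_error(Ra[100:], pred) ** 0.5, 4))
```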
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
Specialty Task Force: A Strategic Component to Electronic Health Record (EHR) Optimization.
Romero, Mary Rachel; Staub, Allison
2016-01-01
The post-implementation stage comes after an electronic health record (EHR) deployment. Analysts and end users deal with the reality that some of the concepts and designs initially planned and created may not complement the workflow, creating anxiety, dissatisfaction, and failure of early system adoption. Problems encountered during deployment are numerous and can vary from simple to complex. Redundant ticket submission creates a backlog for Information Technology personnel, resulting in delays in resolving concerns with the EHR system. The process of optimization allows for evaluation of the system and reassessment of users' needs. A solid and well-executed optimization infrastructure can help minimize unexpected end-user disruptions and help tailor the system to meet regulatory agency goals and practice standards. A well-devised plan to resolve problems during post-implementation is necessary for cost containment and to streamline communication efforts. Creating a specialty-specific collaborative task force is efficacious and expedites resolution of users' concerns through a more structured process.
No Cost – Low Cost Compressed Air System Optimization in Industry
NASA Astrophysics Data System (ADS)
Dharma, A.; Budiarsa, N.; Watiniasih, N.; Antara, N. G.
2018-04-01
Energy conservation is a systematic, integrated effort to preserve energy sources and improve the efficiency of energy utilization; energy should be used efficiently without reducing the level of service it must provide. Energy conservation efforts are applied at all stages of utilization, from energy resources to final use, by employing efficient technology and cultivating an energy-efficient lifestyle. The most common approach is to promote energy efficiency in industry at the end-use level and to overcome barriers to achieving such efficiency by using system energy optimization programs. Experience shows that energy-saving efforts in practice usually focus only on replacing equipment rather than on improving the system as a whole. In this research, a framework for sustained energy reduction in companies that have or have not implemented an energy management system (EnMS) is developed: a systematic technical approach that accurately evaluates a compressed-air system and its optimization potential through observation, measurement, and verification of environmental conditions and processes. The physical quantities of the system, such as air flow, pressure, and electrical power, are measured over time and analyzed using comparative methods for this industry. This system-level approach can deliver energy savings greater than a component-by-component approach, at no cost to low cost (no cost - low cost). The process of evaluating energy utilization and energy-saving opportunities provides recommendations for increasing efficiency in industry, reducing CO2 emissions, and improving environmental quality.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, the Do-all and Do-across techniques cannot be applied to parallel processing of the simulation, since data dependencies exist from the end of one iteration to the beginning of the next, and data input and output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, that is, a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each consisting of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors by optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
Fuel consumption optimization for smart hybrid electric vehicle during a car-following process
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Xiangyu; Song, Jian
2017-03-01
Hybrid electric vehicles (HEVs) offer large potential to save energy and reduce emissions, and smart vehicles bring great convenience and safety to drivers. By combining these two technologies, vehicles may achieve excellent performance in terms of dynamics, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as the platform in this paper to study a car-following process while optimizing fuel consumption. The whole process is a multi-objective optimal control problem, whose solution is not simply an energy management strategy (EMS) added to an adaptive cruise control (ACC), but a deep fusion of the two methods. The problem has more constraints, objectives, and system states, which may result in a larger computing burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed, and search heuristics are adopted in the receding horizon optimization to reduce the computing burden. Simulations are carried out, and the results indicate that the fuel consumption of the proposed method is lower than that of the ACC+EMS method while car-following performance is maintained.
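A heavily simplified sketch of the receding-horizon idea is given below: at each step a small set of acceleration candidates is enumerated, a cost combining gap tracking and a crude fuel proxy is evaluated over the horizon, and only the first move of the best sequence is applied. The fuel model, weights, horizon, and constant-speed lead-vehicle assumption are all illustrative, not the paper's formulation.

```python
# Minimal sketch (assumptions throughout): receding-horizon car following by
# enumerating short acceleration sequences and applying only the first move.
import itertools
import numpy as np

DT, HORIZON = 0.5, 5                      # step [s], prediction steps
A_SET = np.array([-1.0, 0.0, 1.0])        # candidate accelerations [m/s^2]
W_GAP, W_FUEL, GAP_REF = 1.0, 0.5, 20.0   # weights and desired gap [m]

def fuel_proxy(v, a):
    # crude surrogate: fuel grows with speed and positive tractive power
    return 0.1 * v + max(a, 0.0) * v * 0.05

def rollout_cost(x, v, lead_x, lead_v, seq):
    cost = 0.0
    for a in seq:
        v = max(v + a * DT, 0.0)
        x += v * DT
        lead_x += lead_v * DT             # constant-speed lead prediction
        gap = lead_x - x
        cost += W_GAP * (gap - GAP_REF) ** 2 + W_FUEL * fuel_proxy(v, a)
    return cost

def mpc_step(x, v, lead_x, lead_v):
    best = min(itertools.product(A_SET, repeat=HORIZON),
               key=lambda seq: rollout_cost(x, v, lead_x, lead_v, seq))
    return best[0]                        # apply only the first move

# one simulated decision
print(mpc_step(x=0.0, v=15.0, lead_x=25.0, lead_v=14.0))
```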
Finite-size effect on optimal efficiency of heat engines.
Tajima, Hiroyasu; Hayashi, Masahito
2017-07-01
The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.
NASA Astrophysics Data System (ADS)
Serrano, Rafael; González, Luis Carlos; Martín, Francisco Jesús
2009-11-01
Under the SENSOR-IA project, which received financial support from the Order of Incentives to the Regional Technology Centers of the Council of Innovation, Science and Enterprise of Andalusia, an architecture for the real-time optimization of a machining process through a rule-based expert system has been developed. The architecture consists of a sensor data acquisition and processing engine (SATD) and a rule-based expert system (SE) that communicates with the SATD. The SE has been designed as an inference engine with an algorithm for effective action, using a modus ponens model of goal-oriented rules. The pilot test demonstrated that it is possible to govern the machining process in real time based on the rules contained in the SE. The tests have been done with approximate rules. Future work includes an exhaustive collection of data with different tool materials and geometries in a database, in order to extract more precise rules.
Design optimization of space launch vehicles using a genetic algorithm
NASA Astrophysics Data System (ADS)
Bayley, Douglas James
The United States Air Force (USAF) continues to have a need for assured access to space. In addition to flexible and responsive spacelift, a reduction in the cost per launch of space launch vehicles is also desirable. For this purpose, an investigation of the design optimization of space launch vehicles has been conducted. Using a suite of custom codes, the performance aspects of an entire space launch vehicle were analyzed. A genetic algorithm (GA) was employed to optimize the design of the space launch vehicle. A cost model was incorporated into the optimization process with the goal of minimizing the overall vehicle cost. The other goals of the design optimization included obtaining the proper altitude and velocity to achieve a low-Earth orbit. Specific mission parameters particular to USAF space endeavors were specified at the start of the design optimization process. Solid propellant motors, liquid fueled rockets, and air-launched systems in various configurations provided the propulsion systems for two-, three- and four-stage launch vehicles. Mass properties models, an aerodynamics model, and a six-degree-of-freedom (6DOF) flight dynamics simulator were all used to model the system. The results show the feasibility of this method in designing launch vehicles that meet mission requirements. Comparisons to existing real-world systems provide the validation for the physical system models. However, the ability to obtain a truly minimized cost was elusive. The cost model uses an industry-standard approach; however, validation of this portion of the model was challenging due to the proprietary nature of cost figures and the dependence of many existing systems on surplus hardware.
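For readers unfamiliar with the optimizer used here, the sketch below shows a bare-bones real-coded genetic algorithm minimizing a placeholder cost surrogate with a penalized orbit-insertion constraint. The design variables, bounds, cost model, and delta-v proxy are invented for illustration and bear no relation to the dissertation's vehicle codes.

```python
# Minimal sketch (not the dissertation's codes): real-coded GA minimizing a
# toy vehicle-cost surrogate with a penalty for missing a delta-v target.
import numpy as np

rng = np.random.default_rng(1)
LO = np.array([10.0, 5.0, 1.0])     # assumed: stage-1 mass [t], stage-2 mass [t], T/W
HI = np.array([100.0, 50.0, 2.0])

def cost(x):
    m1, m2, tw = x
    dv = 3.0 * np.log1p(m1) + 2.5 * np.log1p(m2) + 0.5 * tw   # crude delta-v proxy [km/s]
    penalty = 1e3 * max(0.0, 9.3 - dv) ** 2                   # must reach ~LEO delta-v
    return m1 * 1.0 + m2 * 1.5 + 20.0 * tw + penalty          # cost surrogate

def ga(pop=40, gens=200, pm=0.2):
    P = rng.uniform(LO, HI, size=(pop, 3))
    for _ in range(gens):
        f = np.array([cost(p) for p in P])
        elite = P[np.argsort(f)[: pop // 2]]                  # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        w = rng.random((pop, 1))
        C = w * parents[:, 0] + (1 - w) * parents[:, 1]       # arithmetic crossover
        mut = rng.random((pop, 3)) < pm                       # uniform mutation
        C = np.where(mut, rng.uniform(LO, HI, size=(pop, 3)), C)
        P = np.clip(C, LO, HI)
    f = np.array([cost(p) for p in P])
    return P[np.argmin(f)], f.min()

print(ga())
```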
Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah
2011-03-01
The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) resolution IV fractional factorial design. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined by using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidences of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating robustness in the Supercell coating process.
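The response-surface step can be illustrated with a small numerical sketch: fit a full second-order model to coded factors by least squares and scan the fitted surface for an apparent optimum. The design points, the synthetic response, and the factor coding below are assumptions, not the study's data.

```python
# Minimal sketch (not the study's data): fit a second-order response-surface
# model in coded factors and locate the apparent optimum on a grid.
import itertools
import numpy as np

rng = np.random.default_rng(2)
# Box-Behnken-like coded design for 3 factors: keep points with at least one 0 level
X = np.array([p for p in itertools.product([-1, 0, 1], repeat=3) if p.count(0) >= 1],
             dtype=float)
# synthetic response, e.g. coat-thickness variation (lower is better)
y = 5 - 1.2 * X[:, 0] + 0.8 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2] + rng.normal(0, 0.1, len(X))

def quad_terms(x):
    a, b, c = x
    return [1, a, b, c, a * b, a * c, b * c, a * a, b * b, c * c]

A = np.array([quad_terms(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # quadratic model coefficients

grid = np.array(list(itertools.product(np.linspace(-1, 1, 21), repeat=3)))
pred = np.array([quad_terms(g) for g in grid]) @ coef
print("apparent optimum (coded factors):", grid[np.argmin(pred)], "predicted:", pred.min())
```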
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
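For reference, the dynamic-programming side of this comparison rests on the Hamilton-Jacobi-Bellman equation; its standard infinite-horizon, continuous-time form (textbook notation, not notation taken from the dissertation) is:

```latex
% Standard infinite-horizon HJB equation: dynamics \dot{x} = f(x,u),
% running cost l(x,u), optimal cost-to-go V(x).
\[
  0 = \min_{u}\big[\, l(x,u) + \nabla V(x)^{\top} f(x,u) \,\big],
  \qquad
  u^{*}(x) = \arg\min_{u}\big[\, l(x,u) + \nabla V(x)^{\top} f(x,u) \,\big].
\]
```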
NASA Astrophysics Data System (ADS)
Zhou, J.; Zeng, X.; Mo, L.; Chen, L.; Jiang, Z.; Feng, Z.; Yuan, L.; He, Z.
2017-12-01
Generally, the adaptive utilization and regulation of runoff in the source region of China's southwest rivers is a typical multi-objective collaborative optimization problem. There is intense competition and interdependence among the water supply, electricity generation, and environment subsystems, which leads to a series of complex problems represented by hydrological process variation, blocked electricity output, and water environment risk. Mathematically, the difficulties of multi-objective collaborative optimization lie in the description of the reciprocal relationships and the establishment of an evolution model of the adaptive system. Thus, based on the theory of complex systems science, this project carries out research on the following aspects: the changing trend of coupled water resources, the covariant factors and driving mechanism, the dynamic evolution law of the mutual-feedback processes in the supply-generation-environment coupled system, the environmental response and influence mechanism of the coupled mutual-feedback water resource system, the relationship between the leading risk factor and multiple risks based on evolutionary stability and dynamic balance, the transfer mechanism of multiple risk responses with variation of the leading risk factor, and a multidimensional coupled-feedback multiple-risk assessment index system and optimized decision theory. Based on these research results, a dynamic method balancing the efficiency of multiple objectives in the coupled feedback system and an optimized regulation model of water resources are proposed, and an adaptive scheduling mode considering the internal characteristics and external response of the coupled mutual-feedback water resource system is established. In this way, the project can contribute to the optimal scheduling theory and methodology of water resource management under uncertainty in the source region of the southwest rivers.
NASA Astrophysics Data System (ADS)
Shu, Hui; Zhou, Xideng
2014-05-01
The single-vendor single-buyer integrated production inventory system has been an object of study for a long time, but little is known about the effect of investing in setup cost reduction and process-quality improvement for an integrated inventory system in which the products are sold with a free minimal repair warranty. The purpose of this article is to minimise the integrated cost by simultaneously optimising the number of shipments, the shipment quantity, the setup cost, and the process quality. An efficient algorithmic procedure is proposed for determining the optimal decision variables. A numerical example is presented to illustrate the results of the proposed models graphically. Sensitivity analysis of the model with respect to key parameters of the system is carried out. The paper shows that the proposed integrated model can result in significant savings in the integrated cost.
Improved system integration for integrated gasification combined cycle (IGCC) systems.
Frey, H Christopher; Zhu, Yunhua
2006-03-01
Integrated gasification combined cycle (IGCC) systems are a promising technology for power generation. They include an air separation unit (ASU), a gasification system, and a gas turbine combined cycle power block, and feature competitive efficiency and lower emissions compared to conventional power generation technology. IGCC systems are not yet in widespread commercial use and opportunities remain to improve system feasibility via improved process integration. A process simulation model was developed for IGCC systems with alternative types of ASU and gas turbine integration. The model is applied to evaluate integration schemes involving nitrogen injection, air extraction, and combinations of both, as well as different ASU pressure levels. The optimal nitrogen injection only case in combination with an elevated pressure ASU had the highest efficiency and power output and approximately the lowest emissions per unit output of all cases considered, and thus is a recommended design option. The optimal combination of air extraction coupled with nitrogen injection had slightly worse efficiency, power output, and emissions than the optimal nitrogen injection only case. Air extraction alone typically produced lower efficiency, lower power output, and higher emissions than all other cases. The recommended nitrogen injection only case is estimated to provide annualized cost savings compared to a nonintegrated design. Process simulation modeling is shown to be a useful tool for evaluation and screening of technology options.
Innovation in managing the referral process at a Canadian pediatric hospital.
MacGregor, Daune; Parker, Sandra; MacMillan, Sharon; Blais, Irene; Wong, Eugene; Robertson, Chris J; Bruce-Barrett, Cindy
2009-01-01
The provision of timely and optimal patient care is a priority in pediatric academic health science centres. Timely access to care is optimized when there is an efficient and consistent referral system in place. In order to improve the patient referral process and, therefore, access to care, an innovative web-based system was developed and implemented. The Ambulatory Referral Management System enables the electronic routing for submission, review, triage and management of all outpatient referrals. The implementation of this system has provided significant metrics that have informed how processes can be improved to increase access to care. Use of the system has improved efficiency in the referral process and has reduced the work associated with the previous paper-based referral system. It has also enhanced communication between the healthcare provider and the patient and family and has improved the security and confidentiality of patient information management. Referral guidelines embedded within the system have helped to ensure that referrals are more complete and that the patient being referred meets the criteria for assessment and treatment in an ambulatory setting. The system calculates and reports on wait times, as well as other measures.
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.
1998-01-01
BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.
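The alternating two-step cycle can be sketched on a toy separable problem: freeze the shared system variable, optimize each subsystem's local variable independently, then re-optimize the system variable with the locals held fixed. The quadratic objective below is a stand-in chosen only to make the alternation visible, not one of the BLISS test cases.

```python
# Minimal sketch (toy problem, not the business-jet or device benchmarks):
# BLISS-style alternation of concurrent subsystem optimizations and a
# system-level step on a separable quadratic.
import numpy as np
from scipy.optimize import minimize_scalar

def local_obj(x_local, z, shift):
    # each subsystem's contribution couples its local variable to the shared z
    return (x_local - shift) ** 2 + 0.5 * (x_local - z) ** 2

def system_obj(z, x_locals, shifts):
    return sum(local_obj(x, z, s) for x, s in zip(x_locals, shifts)) + 0.1 * z ** 2

shifts = [1.0, -2.0, 3.0]                 # assumed subsystem targets
z, x_locals = 0.0, [0.0, 0.0, 0.0]
for cycle in range(10):
    # step 1: autonomous subsystem optimizations with the system variable frozen
    x_locals = [minimize_scalar(lambda x, s=s: local_obj(x, z, s)).x for s in shifts]
    # step 2: improvement in the system-level variable with the locals frozen
    z = minimize_scalar(lambda zz: system_obj(zz, x_locals, shifts)).x
print("z =", z, "locals =", x_locals)
```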
NASA Astrophysics Data System (ADS)
Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.
2016-05-01
One of the major challenges in civil, mechanical, and aerospace engineering is to develop vibration suppression systems with high efficiency and low cost. Recent studies have shown that high damping performance at broadband frequencies can be achieved by incorporating periodic inserts with tunable dynamic properties as internal resonators in structural systems. Structures featuring these kinds of inserts are referred to as metamaterials inspired structures or metastructures. Chiral lattice inserts exhibit unique characteristics such as frequency bandgaps which can be tuned by varying the parameters that define the lattice topology. Recent analytical and experimental investigations have shown that broadband vibration attenuation can be achieved by including chiral lattices as internal resonators in beam-like structures. However, these studies have suggested that the performance of chiral lattice inserts can be maximized by utilizing an efficient optimization technique to obtain the optimal topology of the inserted lattice. In this study, an automated optimization procedure based on a genetic algorithm is applied to obtain the optimal set of parameters that will result in chiral lattice inserts tuned properly to reduce the global vibration levels of a finite-sized beam. Genetic algorithms are considered in this study due to their capability of dealing with complex and insufficiently understood optimization problems. In the optimization process, the basic parameters that govern the geometry of periodic chiral lattices including the number of circular nodes, the thickness of the ligaments, and the characteristic angle are considered. Additionally, a new set of parameters is introduced to enable the optimization process to explore non-periodic chiral designs. Numerical simulations are carried out to demonstrate the efficiency of the optimization process.
Development of Chemical Process Design and Control for Sustainability
This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy....
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Practical colloidal processing of multication ceramics
Bell, Nelson S.; Monson, Todd C.; Diantonio, Christopher; ...
2015-09-07
The use of colloidal processing principles in the formation of ceramic materials is well appreciated for developing homogeneous material properties in sintered products, enabling novel forming techniques for porous ceramics or 3D printing, and controlling microstructure to enable optimized material properties. The solution processing of electronic ceramic materials often involves multiple cationic elements or dopants to affect microstructure and properties. Material stability must be considered through the steps of colloidal processing to optimize desired component properties. This review provides strategies for preventing material degradation in particle synthesis, milling processes, and dispersion, with case studies of consolidation using spark plasma sintering of these systems. The prevention of multication corrosion in colloidal dispersions can be achieved by utilizing conditions similar to the synthesis environment or by the development of surface passivation layers. The choice of dispersing surfactants can be related to these surface states, which are of special importance for nanoparticle systems. A survey of dispersant chemistries related to some common synthesis conditions is provided for perovskite systems as an example. Furthermore, these principles can be applied to many colloidal systems related to electronic and optical applications.
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to perform interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability, and weight before committing to building hardware.
[Optimization of end-tool parameters based on robot hand-eye calibration].
Zhang, Lilong; Cao, Tong; Liu, Da
2017-04-01
A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce the preparation time. In addition, a new and practical method is introduced to optimize the end-tool parameters of the surgical robot based on an analysis of the error sources in this registration method. In the one-time registration method, a marker on the end-tool of the robot was first recognized by a fixed binocular camera, and the orientation and position of the marker were then calculated based on the joint parameters of the robot. Second, the relationship between the camera coordinate system and the robot base coordinate system could be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable. Numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method could significantly improve the efficiency of the robot hand-eye calibration compared with existing methods, and that the parameter optimization method could significantly improve the absolute positioning accuracy of the one-time registration method. The absolute positioning accuracy of the one-time registration method meets the requirements of clinical surgery.
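A stripped-down version of the end-tool parameter estimation is sketched below: if the flange poses are known in the base frame and the marker positions are already expressed in that same frame, the tool offset follows from a linear least-squares fit. The camera-to-base registration step and the full transformation-matrix error equation are omitted; the poses, noise level, and "true" offset are simulated assumptions.

```python
# Minimal sketch (a simplification of the paper's approach): estimate the
# end-tool offset by least squares from flange poses and marker observations
# expressed in the same base frame.
import numpy as np

rng = np.random.default_rng(3)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

true_offset = np.array([0.01, -0.02, 0.15])            # assumed tool offset [m]
R = [rot_z(a) for a in rng.uniform(0, 2 * np.pi, 20)]  # flange orientations
t = [rng.uniform(-0.5, 0.5, 3) for _ in range(20)]     # flange positions
# simulated marker observations with measurement noise
p = [Ri @ true_offset + ti + rng.normal(0, 1e-4, 3) for Ri, ti in zip(R, t)]

# stack R_i d = p_i - t_i into one linear least-squares problem for the offset d
A = np.vstack(R)
b = np.concatenate([pi - ti for pi, ti in zip(p, t)])
d_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated offset:", d_hat)
```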
An intelligent agent for optimal river-reservoir system management
NASA Astrophysics Data System (ADS)
Rieker, Jeffrey D.; Labadie, John W.
2012-09-01
A generalized software package is presented for developing an intelligent agent for stochastic optimization of complex river-reservoir system management and operations. Reinforcement learning is an approach to artificial intelligence for developing a decision-making agent that learns the best operational policies without the need for explicit probabilistic models of hydrologic system behavior. The agent learns these strategies experientially in a Markov decision process through observational interaction with the environment and simulation of the river-reservoir system using well-calibrated models. The graphical user interface for the reinforcement learning process controller includes numerous learning method options and dynamic displays for visualizing the adaptive behavior of the agent. As a case study, the generalized reinforcement learning software is applied to developing an intelligent agent for optimal management of water stored in the Truckee river-reservoir system of California and Nevada for the purpose of streamflow augmentation for water quality enhancement. The intelligent agent successfully learns long-term reservoir operational policies that specifically focus on mitigating water temperature extremes during persistent drought periods that jeopardize the survival of threatened and endangered fish species.
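A minimal tabular version of the reinforcement-learning idea is sketched below: Q-learning over a discretized storage level with a handful of release actions. The inflow distribution, reward shaping, and discretization are invented for illustration and are far simpler than the calibrated Truckee river-reservoir models.

```python
# Minimal sketch (not the Truckee system model): tabular Q-learning for a toy
# reservoir whose state is a storage bin and whose action is a release fraction.
import numpy as np

rng = np.random.default_rng(4)
N_STATES, ACTIONS = 11, np.array([0.0, 0.25, 0.5])     # storage bins, release fractions
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a_idx):
    storage = s / (N_STATES - 1)                       # normalized storage
    release = ACTIONS[a_idx] * storage
    inflow = rng.uniform(0.0, 0.15)
    new_storage = np.clip(storage - release + inflow, 0.0, 1.0)
    # reward favors releases near a target flow and penalizes very low storage
    reward = -abs(release - 0.08) - (2.0 if new_storage < 0.1 else 0.0)
    return int(round(new_storage * (N_STATES - 1))), reward

s = N_STATES // 2
for _ in range(50000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Q-learning update
    s = s2
print("greedy release fraction per storage bin:", ACTIONS[np.argmax(Q, axis=1)])
```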
Flight-Test Validation and Flying Qualities Evaluation of a Rotorcraft UAV Flight Control System
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Tuschler, Mark B.; Kanade, Takeo
2000-01-01
This paper presents a process of design, flight-test validation, and flying qualities evaluation of a flight control system for a rotorcraft-based unmanned aerial vehicle (RUAV). The keystone of this process is an accurate flight-dynamic model of the aircraft, derived by using system identification modeling. The model captures the most relevant dynamic features of our unmanned rotorcraft, and explicitly accounts for the presence of a stabilizer bar. Using the identified model we were able to determine the performance margins of our original control system and identify limiting factors. The performance limitations were addressed and the attitude control system was optimized for three different performance levels: slow, medium, and fast. The optimized control laws will be implemented in our RUAV. We will first determine the validity of our control design approach by flight-test validation of the optimized controllers. Subsequently, we will fly a series of maneuvers with the three optimized controllers to determine the level of flying qualities that can be attained. The outcome will enable us to draw important conclusions on the flying qualities requirements for small-scale RUAVs.
Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers
NASA Technical Reports Server (NTRS)
Tumer, K.; Lawson, J.
2003-01-01
Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents that make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
Utilization-Based Modeling and Optimization for Cognitive Radio Networks
NASA Astrophysics Data System (ADS)
Liu, Yanbing; Huang, Jun; Liu, Zhangxiong
The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues, one for the primary (licensed) users and one for the cognitive (unlicensed) users. Modeling the system as a Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated by calculations which show the rationality of the system model. Furthermore, discussions of different system parameters are presented based on the experimental results.
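The kind of state-equation calculation mentioned here can be illustrated on a much smaller example: build the generator matrix of a finite Markov chain and solve pi Q = 0 together with the normalization constraint. The M/M/1/3 queue below is a placeholder, not the paper's two-queue primary/cognitive model.

```python
# Minimal sketch (illustrative M/M/1/3 queue, not the paper's model): compute a
# stationary distribution from the state equations pi Q = 0 with sum(pi) = 1.
import numpy as np

lam, mu, K = 0.8, 1.0, 3                      # arrival rate, service rate, capacity
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam                     # arrival
    if n > 0:
        Q[n, n - 1] = mu                      # service completion
    Q[n, n] = -Q[n].sum()                     # diagonal makes rows sum to zero

# stack the balance equations (dropping one redundant row) with normalization
A = np.vstack([Q.T[:-1], np.ones(K + 1)])
b = np.zeros(K + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("stationary distribution:", pi, "mean number in system:", pi @ np.arange(K + 1))
```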
A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
1998-01-01
This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis, and (3) optimization. The product and process definitions are part of input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depend largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.
Physical Modeling of Contact Processes on the Cutting Tools Surfaces of STM When Turning
NASA Astrophysics Data System (ADS)
Belozerov, V. A.; Uteshev, M. H.
2016-08-01
This article describes how to create an optimization model of the fine turning of superalloys and steels with cutting tools made of STM on CNC machines, flexible manufacturing units (GPM), and machining centers. The optimization model makes it possible to link (unite) the contact processes occurring simultaneously on the front and back surfaces of the STM tool, in order to manage these contact processes and the dynamic strength of the cutting tool at the STM tip. The established optimization model for managing the dynamic strength of STM cutting tools during fine turning is based on a previously developed thermomechanical (physical, heat) model, which allows a systematic thermomechanical approach to choosing STM grades (domestic and foreign) for cutting tools intended for fine turning of heat-resistant alloys and steels.
Optimality study of a gust alleviation system for light wing-loading STOL aircraft
NASA Technical Reports Server (NTRS)
Komoda, M.
1976-01-01
An analytical study was made of an optimal gust alleviation system that employs a vertical gust sensor mounted forward of an aircraft's center of gravity. Frequency domain optimization techniques were employed to synthesize the optimal filters that process the corrective signals to the flaps and elevator actuators. Special attention was given to evaluating the effectiveness of lead time, that is, the time by which relative wind sensor information should lead the actual encounter of the gust. The resulting filter is expressed as an implicit function of the prescribed control cost. A numerical example for a light wing loading STOL aircraft is included in which the optimal trade-off between performance and control cost is systematically studied.
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. Those simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one dimensional and can be used for optimization processes in order to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two and three-dimensional analyses are performed in order to validate those simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.
Dynamic Systems Analysis for Turbine Based Aero Propulsion Systems
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.
2016-01-01
The aircraft engine design process seeks to optimize the overall system-level performance, weight, and cost for a given concept. Steady-state simulations and data are used to identify trade-offs that should be balanced to optimize the system in a process known as systems analysis. These systems analysis simulations and data may not adequately capture the true performance trade-offs that exist during transient operation. Dynamic systems analysis provides the capability for assessing the dynamic trade-offs at an earlier stage of the engine design process. The dynamic systems analysis concept, the developed tools, and the potential benefit are presented in this paper. To provide this capability, the Tool for Turbine Engine Closed-loop Transient Analysis (TTECTrA) was developed to provide the user with an estimate of the closed-loop performance (response time) and operability (high pressure compressor surge margin) for a given engine design and set of control design requirements. TTECTrA, along with engine deterioration information, can be used to develop a more generic relationship between performance and operability that can influence the engine design constraints and potentially lead to a more efficient engine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.; McCorkle, D.; Yang, C.
Process modeling and simulation tools are widely used for the design and operation of advanced power generation systems. These tools enable engineers to solve the critical process systems engineering problems that arise throughout the lifecycle of a power plant, such as designing a new process, troubleshooting a process unit or optimizing operations of the full process. To analyze the impact of complex thermal and fluid flow phenomena on overall power plant performance, the Department of Energy’s (DOE) National Energy Technology Laboratory (NETL) has developed the Advanced Process Engineering Co-Simulator (APECS). The APECS system is an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulations such as those based on computational fluid dynamics (CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper we discuss the initial phases of the integration of the APECS system with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite uses the ActiveX (OLE Automation) controls in the Aspen Plus process simulator wrapped by the CASI library developed by Reaction Engineering International to run process/CFD co-simulations and query for results. This integration represents a necessary step in the development of virtual power plant co-simulations that will ultimately reduce the time, cost, and technical risk of developing advanced power generation systems.
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
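As a baseline for comparison, the sketch below implements a plain ant colony system on a small random TSP instance, without the Physarum-based pheromone-matrix update proposed in the paper. Instance size, parameters, and coordinates are arbitrary illustrative choices.

```python
# Minimal sketch (plain ACS, not the paper's Physarum-optimized variant):
# solve a small random TSP instance with pheromone-guided tour construction.
import numpy as np

rng = np.random.default_rng(5)
n, ants, iters = 12, 20, 200
alpha, beta, rho, q0 = 1.0, 2.0, 0.1, 0.9

pts = rng.random((n, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(n)  # avoid /0 on diagonal
eta = 1.0 / D                                  # heuristic desirability
tau = np.ones((n, n))                          # pheromone matrix
best_len, best_tour = np.inf, None

def tour_length(t):
    return sum(D[t[i], t[(i + 1) % n]] for i in range(n))

for _ in range(iters):
    for _ in range(ants):
        tour = [rng.integers(n)]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
            if rng.random() < q0:              # ACS exploitation rule
                j = cand[np.argmax(w)]
            else:                              # biased exploration
                j = rng.choice(cand, p=w / w.sum())
            tour.append(int(j)); unvisited.remove(int(j))
        L = tour_length(tour)
        if L < best_len:
            best_len, best_tour = L, tour
    # global pheromone update: evaporate, then reinforce the best-so-far tour
    tau *= (1.0 - rho)
    for i in range(n):
        a, b = best_tour[i], best_tour[(i + 1) % n]
        tau[a, b] += rho / best_len; tau[b, a] += rho / best_len
print("best length:", round(best_len, 3))
```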
Sensitivity Analysis and Optimization of Enclosure Radiation with Applications to Crystal Growth
NASA Technical Reports Server (NTRS)
Tiller, Michael M.
1995-01-01
In engineering, simulation software is often used as a convenient means for carrying out experiments to evaluate physical systems. The benefit of using simulations as 'numerical' experiments is that the experimental conditions can be easily modified and repeated at much lower cost than the comparable physical experiment. The goal of these experiments is to 'improve' the process or result of the experiment. In most cases, the computational experiments employ the same trial-and-error approach as their physical counterparts. When using this approach for complex systems, the cause-and-effect relationship of the system may never be fully understood and efficient strategies for improvement are never utilized. However, when running simulations it is possible to accurately and efficiently determine the sensitivity of the system results with respect to simulation parameters (e.g., initial conditions, boundary conditions, and material properties) by manipulating the underlying computations. This results in a better understanding of the system dynamics and gives us efficient means to improve processing conditions. We begin by discussing the steps involved in performing simulations. Then we consider how sensitivity information about simulation results can be obtained and ways this information may be used to improve the process or result of the experiment. Next, we discuss optimization and the efficient algorithms which use sensitivity information. We draw on all this information to propose a generalized approach for integrating simulation and optimization, with an emphasis on software programming issues. After discussing our approach to simulation and optimization we consider an application involving crystal growth. This application is interesting because it includes radiative heat transfer. We discuss the computation of radiative view factors and the impact this mode of heat transfer has on our approach. Finally, we demonstrate the results of our optimization.
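One simple way to obtain the sensitivity information discussed here is central finite differencing of the simulation output with respect to each parameter, followed by a scaled gradient step; adjoint or direct differentiation methods manipulate the underlying computations more efficiently but are beyond a short sketch. The simulate() function, parameter names, and step sizes below are placeholders, not the crystal-growth code.

```python
# Minimal sketch (generic pattern, not the crystal-growth solver): estimate
# sensitivities by central finite differences, then take scaled gradient steps.
import numpy as np

def simulate(params):
    # placeholder for an expensive simulation returning a scalar quality metric
    boundary_temp, growth_rate = params
    return (boundary_temp - 1500.0) ** 2 / 1e4 + 5.0 * (growth_rate - 2.0) ** 2

def sensitivities(f, params, rel_step=1e-3):
    params = np.asarray(params, dtype=float)
    grads = np.zeros_like(params)
    for i, p in enumerate(params):
        h = rel_step * max(abs(p), 1.0)
        up, dn = params.copy(), params.copy()
        up[i] += h; dn[i] -= h
        grads[i] = (f(up) - f(dn)) / (2.0 * h)   # central difference
    return grads

x = np.array([1400.0, 3.0])                      # initial boundary temp, growth rate
for _ in range(50):
    x -= np.array([50.0, 0.05]) * sensitivities(simulate, x)  # scaled gradient step
print("improved parameters:", x, "metric:", simulate(x))
```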
Numerical research of the optimal control problem in the semi-Markov inventory model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorshenin, Andrey K.; Belousov, Vasily V.; Shnourkoff, Peter V.
2015-03-10
This paper is devoted to the numerical simulation of a stochastic system for inventory management of products using a controlled semi-Markov process. Results obtained with special-purpose software for studying the system and finding the optimal control are presented.
Aircraft optimization by a system approach: Achievements and trends
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1992-01-01
Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.
NASA Astrophysics Data System (ADS)
Şoimoşan, Teodora M.; Danku, Gelu; Felseghi, Raluca A.
2017-12-01
Within the thermo-energy optimization process of an existing heating system, an increase of the system's energy efficiency and an accelerated transition to green energy use are pursued. The concept of a multi-energy district heating system, with high levels of renewable energy source (RES) utilization for heat production, is expected to be the key element of the future urban energy infrastructure, due to the important role it can play in strategies for optimizing and decarbonizing existing district heating systems. The issues that arise are related to the efficient integration of different technologies for harnessing renewable energy sources into the energy mix and to increasing the participation levels of RES, respectively. For the holistic modeling of the district heating system, the concept of the energy hub was used, in which the synergy of different primary forms of input energy provides the system with a high degree of energy security and flexibility in operation. The optimization of energy flows within the energy hub allows the optimization of the thermo-energy district system in line with the dual concept of smart city and smart energy.
Phantom evaluation of the effect of film processing on mammographic screen-film combinations.
McLean, D; Rickard, M T
1994-08-01
Mammographic image quality should be optimal for diagnosis, and the film contrast can be manipulated by altering development parameters. In this study phantom test objects were radiographed and processed for a given range of developer temperatures and times for four film-screen systems. Radiologists scored the phantom test objects on the resultant films to evaluate the effect on diagnosis of varying image contrast. While for three film-screen systems processing led to appreciable contrast differences, for only one film system did maximum contrast correspond with optimal phantom test object scoring. The inability to show an effect on diagnosis in all cases is possibly due to the variation in radiologist responses found in this study and in normal clinical circumstances. Other technical factors such as changes in film fog, grain and mottle may contribute to the study findings.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" solutions of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing the modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt that differs from previous modeling efforts, which focused on addressing uncertainty in physical parameters (i.e. soil porosity), whereas this work aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (i.e. where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing the computational cost of the optimization process.
NASA Astrophysics Data System (ADS)
Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith
2005-11-01
Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components of a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited by current imaging tools when standard illumination sources are used. Illumination optimization is a process in which the source shape is varied, in both profile and intensity distribution, to achieve enhancement in the final image contrast compared to non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form" like; they can be too complex to be directly applicable for production and lack the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet the production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all the necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include the minimum distance between poles, minimum opening pole angles, minimum ring width, and minimum source filling factor in sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades are used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by NILS-based metrics, we show that the optimal and production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other recent imaging systems. Mask-model-based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time. Because of the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm can solve the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem.
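A compact sketch of the underlying optimizer is given below: standard (not the improved IPSO) particle swarm optimization applied to a placeholder life-cycle-cost function of two planning variables. The cost model, bounds, and PSO coefficients are assumptions for illustration only.

```python
# Minimal sketch (standard PSO, not the paper's IPSO): minimize a toy
# life-cycle-cost function of pipe diameter and supply temperature.
import numpy as np

rng = np.random.default_rng(6)
LO, HI = np.array([0.05, 60.0]), np.array([0.5, 120.0])   # diameter [m], temperature [C]

def lcc(x):
    d, t = x
    capital = 4000.0 * d + 0.5 * t                 # investment grows with size/temperature
    pumping = 2.0 / d ** 2                         # pumping cost falls with diameter
    heat_loss = 0.05 * t ** 1.5                    # losses grow with temperature
    return capital + 20.0 * (pumping + heat_loss)  # assumed 20-year operating horizon

n, iters, w, c1, c2 = 30, 300, 0.7, 1.5, 1.5
X = rng.uniform(LO, HI, size=(n, 2))
V = np.zeros_like(X)
P, Pf = X.copy(), np.array([lcc(x) for x in X])    # personal bests
g = P[np.argmin(Pf)]                               # global best

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
    X = np.clip(X + V, LO, HI)
    f = np.array([lcc(x) for x in X])
    better = f < Pf
    P[better], Pf[better] = X[better], f[better]
    g = P[np.argmin(Pf)]
print("optimal (diameter, supply temperature):", g, "LCC:", Pf.min())
```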
On the optimal use of a slow server in two-stage queueing systems
NASA Astrophysics Data System (ADS)
Papachristos, Ioannis; Pandelis, Dimitrios G.
2017-07-01
We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs, assuming that two servers cannot collaborate on the same job and that preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation of the dedicated (fast) servers. Specifically, we show that the downstream dedicated server should not idle, and the same is true for the upstream one when holding costs are larger upstream. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
NASA Astrophysics Data System (ADS)
Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard
2015-01-01
In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
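For readers unfamiliar with the firefly algorithm, here is a bare-bones version of its attraction-and-move step on a toy cost function; it is only a sketch and does not include the finite element groundwater simulation that the FA-FEM model couples to.

```python
import numpy as np

def cost(x):
    # Toy stand-in for the remediation objective (e.g., a pumping-cost surrogate).
    return np.sum(x ** 2)

def firefly(f, dim=2, n=25, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    intensity = np.array([f(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if intensity[j] < intensity[i]:      # firefly j is brighter (lower cost)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    intensity[i] = f(x[i])
    best = intensity.argmin()
    return x[best], intensity[best]

print(firefly(cost))
```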
Affordable Development and Optimization of CERMET Fuels for NTP Ground Testing
NASA Technical Reports Server (NTRS)
Hickman, Robert R.; Broadway, Jeramie W.; Mireles, Omar R.
2014-01-01
CERMET fuel materials for Nuclear Thermal Propulsion (NTP) are currently being developed at NASA's Marshall Space Flight Center. The work is part of NASA's Advanced Space Exploration Systems Nuclear Cryogenic Propulsion Stage (NCPS) Project. The goal of the FY12-14 project is to address critical NTP technology challenges and programmatic issues to establish confidence in the affordability and viability of an NTP system. A key enabling technology for an NCPS system is the fabrication of a stable high-temperature nuclear fuel form. Although much of the technology was demonstrated during previous programs, there are currently no qualified fuel materials or processes. The work at MSFC is focused on developing critical materials and process technologies for manufacturing robust, full-scale CERMET fuels. Prototypical samples are being fabricated and tested in flowing hot hydrogen to understand processing and performance relationships. As part of this initial demonstration task, a final full-scale element test will be performed to validate robust designs. The next phase of the project will focus on continued development and optimization of the fuel materials to enable future ground testing. The purpose of this paper is to provide a detailed overview of the CERMET fuel materials development plan. The overall CERMET fuel development path is shown in Figure 2. The activities begin prior to ATP for a ground reactor or engine system test and include materials and process optimization, hot hydrogen screening, material property testing, and irradiation testing. The goal of the development is to increase the maturity of the fuel form and reduce risk. One of the main accomplishments of the current AES FY12-14 project was the development of dedicated laboratories at MSFC for the fabrication and testing of full-length fuel elements. This capability will enable affordable, near-term development and optimization of CERMET fuels for future ground testing. Figure 2 provides a timeline of the development and optimization tasks for the AES FY15-17 follow-on program.
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-01-01
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855
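To give a flavour of the image-processing steps involved (without claiming to reproduce the paper's five algorithms), the toy fragment below thresholds two synthetic LED blobs, computes their centroids, and converts the pixel offset between them into a tilt angle; the frame, threshold and geometry are made up.

```python
import numpy as np

def led_centroids(gray, thresh=200):
    """Return centroids (row, col) of bright LED pixels, crudely split by column."""
    ys, xs = np.nonzero(gray > thresh)
    left = xs < gray.shape[1] // 2           # assumes one LED per image half
    c1 = np.array([ys[left].mean(), xs[left].mean()])
    c2 = np.array([ys[~left].mean(), xs[~left].mean()])
    return c1, c2

# Synthetic 120x160 frame with two bright spots standing in for the LED target.
frame = np.zeros((120, 160))
frame[58:62, 38:42] = 255
frame[62:66, 118:122] = 255

c1, c2 = led_centroids(frame)
roll_angle = np.degrees(np.arctan2(c2[0] - c1[0], c2[1] - c1[1]))  # tilt of the LED pair
print("roll angle (deg):", roll_angle)
```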
Influence of processing parameters on morphology of polymethoxyflavone in emulsions.
Ting, Yuwen; Li, Colin C; Wang, Yin; Ho, Chi-Tang; Huang, Qingrong
2015-01-21
Polymethoxyflavones (PMFs) are a group of compounds isolated from citrus peels that have been documented to possess a wide array of health-promoting bioactivities. Because of their hydrophobic structure and high melting point, crystallized PMFs usually have poor systemic bioavailability when consumed orally. To improve the oral efficiency of PMFs, a viscoelastic emulsion system was formulated. Because of their crystalline nature, the inclusion of PMFs in the emulsion system poses great challenges in achieving sufficient loading capacity and stability. In this study, the process of optimizing the quality of an emulsion-based formulation intended for oral delivery of PMFs was systematically studied. By altering the PMF loading concentration, processing temperature, and pressure, emulsions with the desired droplet and crystal size can be effectively fabricated. Moreover, storage temperature significantly influenced the stability of the crystal-containing emulsion system. The results from this study are a good illustration of system optimization and serve as a useful reference for future formulation design of other hydrophobic crystalline compounds.
Manipulation and handling processes off-line programming and optimization with use of K-Roset
NASA Astrophysics Data System (ADS)
Gołda, G.; Kampa, A.
2017-08-01
Contemporary trends in the development of efficient, flexible manufacturing systems require the practical implementation of modern “Lean production” concepts, maximizing customer value by minimizing waste in manufacturing and logistics processes. Every FMS is built from automated and robotized production cells. Apart from flexible CNC machine tools and other equipment, industrial robots are the primary elements of such a system. In this study, the authors look for wastes of time and cost in real robot tasks during manipulation processes. For the optimization of handling and manipulation processes performed by robots, the application of modern off-line programming methods and computer simulation is the best solution and the only way to minimize unnecessary movements and other instructions. The modelling process of a robotized production cell and the off-line programming of Kawasaki robots in AS-Language are described. The simulation of the robotized workstation is realized with the virtual reality software K-Roset. The authors show the process of improving and optimizing industrial robot programs in terms of minimizing the number of useless manipulator movements and unnecessary instructions, in order to shorten production cycle times. This also reduces the costs of handling, manipulation and the technological process.
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin
Many combinatorial optimization problems arising in industrial engineering and operations research are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. To demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
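A minimal genetic algorithm of the kind surveyed, shown here on a toy bit-string objective; tournament selection, one-point crossover and bit-flip mutation are generic textbook operators, not the problem-specific encodings used for the scheduling and logistics models discussed above.

```python
import random

def fitness(chromosome):
    # Toy objective: maximize the number of ones (stands in for a scheduling score).
    return sum(chromosome)

def evolve(n_bits=40, pop_size=60, generations=100, p_cx=0.9, p_mut=0.02, seed=2):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            c1, c2 = p1[:], p2[:]
            if random.random() < p_cx:                 # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                         # bit-flip mutation
                for k in range(n_bits):
                    if random.random() < p_mut:
                        c[k] ^= 1
            new_pop.extend([c1, c2])
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

print(fitness(evolve()))
```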
Toward Optimal Transport Networks
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Kincaid, Rex K.; Vargo, Erik P.
2008-01-01
Strictly evolutionary approaches to improving the air transport system, a highly complex network of interacting systems, no longer suffice in the face of demand that is projected to double or triple in the near future. Thus evolutionary approaches should be augmented with active design methods. The ability to actively design, optimize and control a system presupposes the existence of predictive modeling and reasonably well-defined functional dependences between the controllable variables of the system and the objective and constraint functions for optimization. Following recent advances in the study of the effects of network topology on dynamics, we investigate the performance of dynamic processes on transport networks as a function of the first nontrivial eigenvalue of the network's Laplacian, which, in turn, is a function of the network's connectivity and modularity. The last two characteristics can be controlled and tuned via optimization. We consider design optimization problem formulations. We have developed a flexible simulation of network topology coupled with flows on the network for use as a platform for computational experiments.
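The "first nontrivial eigenvalue of the network's Laplacian" is the algebraic connectivity, lambda_2; for a small graph it can be computed directly, as in the sketch below (the adjacency matrix is a made-up five-node example, not an air-transport network).

```python
import numpy as np

# Adjacency matrix of a small illustrative undirected network.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
eigvals = np.sort(np.linalg.eigvalsh(L))
algebraic_connectivity = eigvals[1]      # first nontrivial eigenvalue (lambda_2)
print(algebraic_connectivity)
```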
Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...
2015-11-23
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades
NASA Astrophysics Data System (ADS)
Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang
2017-12-01
This article introduces a method of mistuned parameter identification consisting of static frequency testing of blades, dichotomy and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, namely the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, thereby introducing both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
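In the fixed-interval special case, a quadratic-cost control law of this kind reduces to the standard discrete-time Riccati recursion; the sketch below computes such a gain for made-up system matrices (not the F8-DFBW longitudinal dynamics, and without the variable-interval Markov update treated in the paper).

```python
import numpy as np

# Illustrative discrete-time linear model x[k+1] = A x[k] + B u[k] (hypothetical).
A = np.array([[1.0, 0.1], [0.0, 0.98]])
B = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.1])     # state weighting in the quadratic cost
R = np.array([[0.01]])      # control weighting

# Backward Riccati iteration to a (near) steady-state feedback gain.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

print("feedback gain K =", K)   # u[k] = -K x[k] minimizes the quadratic cost
```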
NASA Astrophysics Data System (ADS)
Svirina, Anna; Shindor, Olga; Tatmyshevsky, Konstantin
2014-12-01
The paper deals with the main problems of Russian energy system development, which make it necessary to provide educational programs in the field of renewable and alternative energy. The process of curricula development and the definition of teaching techniques on the basis of expert opinion evaluation are described, and a competence model for master's students in renewable and alternative energy processing is suggested. Data for statistical analysis were obtained from a distributed questionnaire and in-depth interviews. On the basis of these data, the curricula structure was optimized, and three models for structuring and optimizing teaching techniques were developed. The suggested educational program structure, which was adopted by employers, is presented in the paper. The findings include a quantitative estimate of the importance of systemic thinking and professional skills and knowledge as basic competences of a master's program graduate; a statistical estimate of the necessity of a practice-based learning approach; and optimization models for structuring curricula in renewable and alternative energy processing. These findings allow the establishment of a platform for the development of educational programs.
NASA Astrophysics Data System (ADS)
Gleason, J. L.; Hillyer, T. N.
2011-12-01
Clouds and the Earth's Radiant Energy System (CERES) is one of NASA's highest priority Earth Observing System (EOS) scientific instruments. The CERES science team will integrate data from the CERES Flight Model 5 (FM5) on the NPOESS Preparatory Project (NPP) in addition to the four CERES scanning instruments on Terra and Aqua. The CERES production system consists of over 75 Product Generation Executives (PGEs) maintained by twelve subsystem groups. The processing chain fuses CERES instrument observations with data from 19 other unique sources. The addition of FM5 to the over 22 instrument-years of data to be reprocessed from flight models 1-4 creates a need for an optimized production processing approach. This poster discusses a new approach, using JBoss and Perl to manage job scheduling and interdependencies between PGEs and external data sources. The new optimized approach uses JBoss to serve handler servlets which regulate PGE-level job interdependencies and job completion notifications. Additional servlets are used to regulate all job submissions from the handlers and to interact with the operator. Perl submission scripts are used to build Process Control Files and to interact directly with the operating system and cluster scheduler. The result is a reduced burden on the operator, achieved by algorithmically enforcing a set of rules that determine the optimal time to produce data products with the highest integrity. These rules are designed on a per-PGE basis and change periodically. This design provides the means to dynamically update PGE rules at run time and increases processing throughput by using an event-driven controller. The immediate notification of a PGE's completion (an event) allows successor PGEs to launch at the proper time with minimal start-up latency, thereby increasing computer system utilization.
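A toy Python analogue of the event-driven idea: each job completion immediately triggers a check of which successor PGEs have all prerequisites satisfied. The PGE names, dependency graph and threading model are invented for illustration and are unrelated to the actual JBoss/Perl implementation.

```python
import threading
import time

# Hypothetical PGE dependency graph: job -> set of prerequisite jobs.
DEPS = {"PGE_A": set(), "PGE_B": {"PGE_A"}, "PGE_C": {"PGE_A"}, "PGE_D": {"PGE_B", "PGE_C"}}
done = set()
running = set()
lock = threading.Lock()

def run_pge(name):
    time.sleep(0.1)                      # stand-in for the real processing work
    print(f"{name} finished")
    with lock:
        done.add(name)
    launch_ready()                       # completion event triggers successor checks

def launch_ready():
    with lock:
        ready = [job for job, reqs in DEPS.items()
                 if job not in done and job not in running and reqs <= done]
        running.update(ready)
    for job in ready:
        threading.Thread(target=run_pge, args=(job,)).start()

launch_ready()                           # kick off jobs with no prerequisites
```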
NASA Astrophysics Data System (ADS)
Aittokoski, Timo; Miettinen, Kaisa
2008-07-01
Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N
2005-06-20
The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). To optimize the process and to evaluate the influence of different factors on it, the uncontrollable (noise) factors have been considered. The Taguchi methodology adopting a dynamic approach is the first of its kind for studying anaerobic process evaluation and process optimization. The designed experimental methodology consisted of four phases--planning, conducting, analysis, and validation--connected in sequence to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, at two levels were considered for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effects of individual factors, the interactions between the individual factors, and the signal-to-noise (S/N) ratio analysis. Attempts were also made to determine the optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In multiple-factor interaction studies, biodegradability in combination with other factors, such as temperature, pH, and sulfate, showed the maximum influence on process performance. The optimum conditions for the efficient performance of the anaerobic system in treating complex wastewater, obtained by considering the dynamic (noise) factors, are a higher organic loading rate of 3.5 kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), a mesophilic temperature range (40 degrees C), and a low sulfate concentration (700 mg/L). The optimization resulted in enhanced anaerobic performance (56.7%), from a substrate degradation rate (SDR) of 1.99 to 3.13 kg COD/m3 day. Considering the obtained optimum factors, further validation experiments were carried out, which showed enhanced process performance (3.04 kg COD/m3 day from 1.99 kg COD/m3 day), accounting for a 52.13% improvement with the optimized process conditions. The proposed method facilitated a systematic mathematical approach to understanding the complex multi-species anaerobic process treating complex chemical wastewater by considering the uncontrollable factors. Copyright (c) 2005 Wiley Periodicals, Inc.
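The signal-to-noise analysis referred to above uses, for a larger-the-better response such as substrate degradation rate, the standard Taguchi formula S/N = -10 log10(mean(1/y^2)). A small illustration with made-up replicate data:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio (dB) for a larger-the-better response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicate SDR values (kg COD/m3 day) at two factor settings.
print(sn_larger_is_better([1.9, 2.1, 2.0]))   # lower mean response -> lower S/N
print(sn_larger_is_better([3.0, 3.2, 3.1]))   # higher mean response -> higher S/N
```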
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Chamis, C. C.; Morel, M.
1991-01-01
A methodology is presented to reduce the residual matrix stresses in continuous fiber metal matrix composites (MMC) by optimizing the fabrication process and interphase layer characteristics. The response of the fabricated MMC was simulated based on nonlinear micromechanics. Application cases include fabrication tailoring, interphase tailoring, and concurrent fabrication-interphase optimization. Two composite systems, silicon carbide/titanium and graphite/copper, are considered. Results illustrate the merits of each approach, indicate that concurrent fabrication/interphase optimization produces significant reductions in the matrix residual stresses and demonstrate the strong coupling between fabrication and interphase tailoring.
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
Cloud workflow systems are a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Compared with their counterparts, the market-oriented business model is one of the most prominent distinguishing factors of cloud workflow systems. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling is an NP-hard problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, both suffer from premature convergence in the optimization process and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost and allows the scheduling to avoid premature convergence by properly balancing global and local exploration. The experimental simulation shows that the cost obtained by our scheduling is always lower than that of the other two representative counterparts. PMID:26357510
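The two ingredients emphasized here can be sketched compactly: a logistic-map chaotic sequence supplying the stochastic coefficients, and an inertia weight adapted from each particle's relative cost. The task-cost function below is a placeholder, not the cloud workflow cost model, and all parameter choices are illustrative.

```python
import numpy as np

def task_cost(x):
    # Placeholder for the workflow execution-cost model.
    return np.sum((x - 3.0) ** 2) + 1.0

def cpso(f, dim=5, n=30, iters=200, c1=2.0, c2=2.0, w_min=0.4, w_max=0.9, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 10, (n, dim))
    v = np.zeros_like(x)
    z = rng.uniform(0.01, 0.99, (n, dim))      # chaotic state of the logistic map
    pbest, pbest_f = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        fx = np.array([f(xi) for xi in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
        # Adaptive inertia: particles already near the best cost move more conservatively.
        w = w_min + (w_max - w_min) * (fx - fx.min()) / (fx.max() - fx.min() + 1e-12)
        z = 4.0 * z * (1.0 - z)                # logistic-map chaotic sequence
        v = w[:, None] * v + c1 * z * (pbest - x) + c2 * (1 - z) * (g - x)
        x = x + v
    return g, f(g)

print(cpso(task_cost))
```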
Jasniewski, Jordane; Cailliez-Grimal, Catherine; Gelhaye, Eric; Revol-Junelles, Anne-Marie
2008-04-01
An optimization of the production and purification processes for the carnobacteriocins Cbn BM1 and Cbn B2 from Carnobacterium maltaromaticum CP5, by heterologous expression in Escherichia coli, is described. The genes encoding the mature bacteriocins were cloned into an E. coli expression system and expressed as fusion proteins with a thermostable thioredoxin. Recombinant E. coli were cultivated in a fed-batch fermentation process with pH, temperature and oxygenation regulation. The overexpression of the fusion proteins was improved by replacing IPTG with lactose. The fusion proteins were purified by thermal coagulation followed by affinity chromatography. The thioredoxin fusion partner was removed by using CNBr instead of enterokinase, and the carnobacteriocins were recovered by reverse-phase chromatography. These optimizations allowed us to produce up to 320 mg of pure protein per liter of culture, which is four to ten times higher than what has been described for other heterologous expression systems.
Box-Behnken statistical design to optimize thermal performance of energy storage systems
NASA Astrophysics Data System (ADS)
Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid
2018-05-01
Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where cold is stored in phase change materials (PCMs). In the present study a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the PCM. Optimization of the experimental conditions (inlet air temperature, inlet air velocity and number of slabs) was carried out by means of a Box-Behnken design (BBD) within response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen as the responses. Both responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. The maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e., an inlet temperature of 32.5, an air velocity of 1.98 and a slab number of 7.
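Simultaneous optimization on the basis of the desirability function typically combines Derringer-type individual desirabilities through a geometric mean. The sketch below shows that combination for two larger-is-better responses with invented ranges and predicted values; it is not derived from the fitted BBD models of the study.

```python
import numpy as np

def desirability_larger(y, y_min, y_max):
    """Derringer larger-is-better desirability in [0, 1]."""
    d = (y - y_min) / (y_max - y_min)
    return float(np.clip(d, 0.0, 1.0))

# Hypothetical predictions at one candidate setting of (temperature, velocity, slabs).
cooling_time = 170.0      # minutes, want large within an assumed range [120, 190]
cop = 5.5                 # want large within an assumed range [4.0, 6.2]

d1 = desirability_larger(cooling_time, 120.0, 190.0)
d2 = desirability_larger(cop, 4.0, 6.2)
overall = (d1 * d2) ** 0.5        # geometric mean of the individual desirabilities
print(d1, d2, overall)
```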
NASA Astrophysics Data System (ADS)
Che, L.; Halvorsen, E.; Chen, X.
2011-10-01
The existence of insoluble residues as intermediate products of the wet etching process is the main quality-reducing and structure-patterning issue for lead zirconate titanate (PZT) thin films. A one-step wet etching process using solutions of buffered HF (BHF) and HNO3 acid was developed for patterning PZT thin films for microelectromechanical system (MEMS) applications. PZT thin films with 1 µm thickness were prepared on a Pt/Ti/SiO2/Si substrate by the sol-gel process for compatibility with Si micromachining. Various compositions of the etchant were investigated and the patterns were examined to optimize the etching process. The optimal result is demonstrated by a high etch rate (3.3 µm min-1) and low undercutting (1.1:1). The patterned PZT thin film exhibits a remnant polarization of 24 µC cm-2, a coercive field of 53 kV cm-1, a leakage current density of 4.7 × 10-8 A cm-2 at 320 kV cm-1 and a dielectric constant of 1100 at 1 kHz.
NASA Astrophysics Data System (ADS)
Bania, Piotr; Baranowski, Jerzy
2018-02-01
Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation for stochastic systems with quantised and discrete observations that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived from a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results to the variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for a high order of approximation, the state estimate is very close to the true process value. The results open possibilities for further analysis, especially for more complex processes.
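As a reference point for the comparison described above, here is the classical route: a discrete Kalman filter applied to a sampled Ornstein-Uhlenbeck process whose measurements are quantised, with the quantisation error crudely approximated as additive uniform noise of variance delta^2/12. This is the baseline the authors improve on, not their Galerkin-projection filter, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.05, 400
a = np.exp(-theta * dt)                        # exact OU transition coefficient
q = sigma ** 2 * (1 - a ** 2) / (2 * theta)    # process noise variance per step
delta = 0.5                                    # quantisation step
r = delta ** 2 / 12.0                          # uniform-quantisation noise approximation

# Simulate the OU process and its quantised observations.
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q))
y = delta * np.round(x / delta)

# Discrete Kalman filter treating quantisation as additive measurement noise.
xhat, p = 0.0, 1.0
est = np.zeros(n)
for k in range(n):
    xhat, p = a * xhat, a * a * p + q          # predict
    kgain = p / (p + r)                        # update with the quantised measurement
    xhat = xhat + kgain * (y[k] - xhat)
    p = (1 - kgain) * p
    est[k] = xhat

print("MSE:", np.mean((est - x) ** 2))
```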
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The paper describes the computational techniques employed in determining the optimal propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. The computer programs used to perform calculations for all the factors that enter into the selection process of determining the optimum combinations of airplanes and engines are examined. Attention is given to the description of the computer codes including NNEP, WATE, LIFCYC, INSTAL, and POD DRG. A process is illustrated by which turbine engines can be evaluated as to fuel consumption, engine weight, cost and installation effects. Examples are shown as to the benefits of variable geometry and of the tradeoff between fuel burned and engine weights. Future plans for further improvements in the analytical modeling of engine systems are also described.
Wu, Changzheng; Zhang, Feng; Li, Lijun; Jiang, Zhedong; Ni, Hui; Xiao, Anfeng
2018-01-01
High amounts of insoluble substrates exist in the traditional solid-state fermentation (SSF) system. The presence of these substrates complicates the determination of microbial biomass. Thus, enzyme activity is used as the sole index for the optimization of the traditional SSF system, and the relationship between microbial growth and enzyme synthesis is always ignored. This study was conducted to address this deficiency. All soluble nutrients from tea stalk were extracted using water. The aqueous extract was then mixed with polyurethane sponge to establish a modified SSF system, which was then used to conduct tannase production. With this system, biomass, enzyme activity, and enzyme productivity could be measured rationally and accurately. Thus, the association between biomass and enzyme activity could be easily identified, and the shortcomings of traditional SSF could be addressed. Different carbon and nitrogen sources exerted different effects on microbial growth and enzyme production. Single-factor experiments showed that glucose and yeast extract greatly improved microbial biomass accumulation and that tannin and (NH4)2SO4 efficiently promoted enzyme productivity. Then, these four factors were optimized through response surface methodology. Tannase activity reached 19.22 U/gds when the added amounts of tannin, glucose, (NH4)2SO4, and yeast extract were 7.49, 8.11, 9.26, and 2.25%, respectively. Tannase activity under the optimized process conditions was 6.36 times higher than that under the initial process conditions. The optimized parameters were directly applied to the traditional tea stalk SSF system. Tannase activity reached 245 U/gds, which is 2.9 times higher than our previously reported value. In this study, a modified SSF system was established to address the shortcomings of the traditional SSF system. Analysis revealed that enzymatic activity and microbial biomass are closely related, and different carbon and nitrogen sources have different effects on microbial growth and enzyme production. The maximal tannase activity was obtained under the optimal combination of nutrient sources that enhances cell growth and tannase accumulation. Moreover, tannase production through the traditional tea stalk SSF was markedly improved when the optimized parameters were applied. This work provides an innovative approach to bioproduction research through SSF.
The solution of private problems for optimization heat exchangers parameters
NASA Astrophysics Data System (ADS)
Melekhin, A.
2017-11-01
The relevance of the topic stems from the need to solve problems of resource economy in building heating systems. To address this problem we have developed an integrated research method that allows optimization of heat exchanger parameters. The method solves a multicriteria optimization problem using nonlinear programming software, taking as input an array of temperatures obtained by thermography. The author has developed a mathematical model of the heat exchange process on the heat exchange surfaces of the apparatus, solved the multicriteria optimization problem and checked its adequacy against an experimental stand with visualization of thermal fields; determined an optimal range of controlled parameters influencing the heat exchange process, giving minimal metal consumption and maximum heat output of the finned heat exchanger; established the regularities of the heat exchange process, obtaining generalized dependences for the temperature distribution on the heat-release surface of the heat exchanger; and verified the convergence of the results calculated from the theoretical dependences with the solution of the mathematical model.
Optimal cure cycle design of a resin-fiber composite laminate
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Sheen, Jeenson
1987-01-01
A unified computer-aided design method was studied for cure cycle design that incorporates an optimal design technique with an analytical model of the composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester, which is described by a diffusion-reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first-order differential equations, which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy, called a linearization method, is used to optimally design the cure cycle subject to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address these aspects. The optimal solutions of these formulations are compared and discussed.
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool positioning techniques, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique for measuring the motion of a lathe tool from two-dimensional sequential images, captured using a charge coupled device camera with a resolution of 250 microns, is described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc. was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
NASA Astrophysics Data System (ADS)
Asplund, Erik; Klüner, Thorsten
2012-03-01
In this paper, control of open quantum systems with emphasis on the control of surface photochemical reactions is presented. A quantum system in a condensed phase undergoes strong dissipative processes. From a theoretical viewpoint, it is important to model such processes in a rigorous way. In this work, the description of open quantum systems is realized within the surrogate Hamiltonian approach [R. Baer and R. Kosloff, J. Chem. Phys. 106, 8862 (1997)], 10.1063/1.473950. An efficient and accurate method to find control fields is optimal control theory (OCT) [W. Zhu, J. Botina, and H. Rabitz, J. Chem. Phys. 108, 1953 (1998), 10.1063/1.475576; Y. Ohtsuki, G. Turinici, and H. Rabitz, J. Chem. Phys. 120, 5509 (2004)], 10.1063/1.1650297. To gain control of open quantum systems, the surrogate Hamiltonian approach and OCT, with time-dependent targets, are combined. Three open quantum systems are investigated by the combined method, a harmonic oscillator immersed in an ohmic bath, CO adsorbed on a platinum surface, and NO adsorbed on a nickel oxide surface. Throughout this paper, atomic units, i.e., ℏ = me = e = a0 = 1, have been used unless otherwise stated.
NASA Astrophysics Data System (ADS)
Mohammadi, Hadi
Use of the Patch Vulnerability Management (PVM) process should be seriously considered for any networked computing system. The PVM process prevents the operating system (OS) and software applications from being attacked due to security vulnerabilities, which lead to system failures and critical data leakage. The purpose of this research is to create and design a Security and Critical Patch Management Process (SCPMP) framework based on Systems Engineering (SE) principles. This framework will assist Information Technology Department Staff (ITDS) to reduce IT operating time and costs and mitigate the risk of security and vulnerability attacks. Further, this study evaluates implementation of the SCPMP in the networked computing systems of an academic environment in order to: 1. Meet patch management requirements by applying SE principles. 2. Reduce the cost of IT operations and PVM cycles. 3. Improve the current PVM methodologies to prevent networked computing systems from becoming the targets of security vulnerability attacks. 4. Embed a Maintenance Optimization Tool (MOT) in the proposed framework. The MOT allows IT managers to make the most practicable choice of methods for deploying and installing released patches and vulnerability remediation. In recent years, there has been a variety of frameworks for security practices in every networked computing system to protect computer workstations from becoming compromised or vulnerable to security attacks, which can expose important information and critical data. I have developed a new mechanism for implementing PVM for maximizing security-vulnerability maintenance, protecting OS and software packages, and minimizing SCPMP cost. To increase computing system security in any diverse environment, particularly in academia, one must apply SCPMP. I propose an optimal maintenance policy that will allow ITDS to measure and estimate the variation of PVM cycles based on their department's requirements. My results demonstrate that MOT optimizes the process of implementing SCPMP in academic workstations.
Cryptosporidium-contaminated water disinfection by a novel Fenton process.
Matavos-Aramyan, Sina; Moussavi, Mohsen; Matavos-Aramyan, Hedieh; Roozkhosh, Sara
2017-05-01
Three novel modified advanced oxidation process systems, including ascorbic acid-, pro-oxidant- and ascorbic acid-pro-oxidant-modified Fenton systems, were utilized to study the disinfection efficiency on Cryptosporidium-contaminated drinking water samples. Different concentrations of divalent and trivalent iron ions, hydrogen peroxide, ascorbic acid and pro-oxidants at different exposure times were investigated. These novel systems were also compared to the classic Fenton system and to the control system, which comprised only hydrogen peroxide. The complete in vitro mechanism of the mentioned modified Fenton systems is also provided. The results indicate that, within the optimal parameter limits, the ascorbic acid-modified Fenton system decreased the viability of Cryptosporidium oocysts to 3.91%, while the pro-oxidant-modified and ascorbic acid-pro-oxidant-modified Fenton systems achieved oocyst viabilities of 1.66% and 0%, respectively. The efficiency of the classic Fenton system at optimal conditions was 20.12% oocyst viability. The control system achieved 86.14% oocyst viability. The optimum values of the operational parameters found in this study are 80 mg L-1 for divalent iron, 30 mg L-1 for ascorbic acid, 30 mmol for hydrogen peroxide, 25 mg L-1 for pro-oxidants and an exposure time of 5 min. The ascorbic acid-pro-oxidant-modified Fenton system achieved a promising complete water disinfection (0% viability) at the optimal conditions, making this method a feasible process for water disinfection or decontamination, even at industrial scales. Copyright © 2017 Elsevier Inc. All rights reserved.
Experimental test of an online ion-optics optimizer
NASA Astrophysics Data System (ADS)
Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.
2018-07-01
A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well defined set of initial multipole settings.
NASA Astrophysics Data System (ADS)
Mole, Tracey Lawrence
In this work, an effective and systematic model is devised to synthesize the optimal formulation for a specific engineering application in the nuclear industry, i.e. radioactive decontamination and waste reduction. Identification of an optimal formulation that is suitable for the desired system requires integration of all the interlacing behaviors of the product constituents. This work is unique not only in the product designed, but also in the design techniques. The common practice in new product development is to design the optimized product for a particular industrial niche and then to develop and optimize the production process separately from the product formulation. In the proposed optimization design technique, the development process, disposal technique and product formulation are optimized simultaneously to improve production profit, product behavior and disposal emissions. This "cradle to grave" optimization approach allowed a complex product formulation development process to be drastically simplified. The use of these modeling techniques took an industrial idea to full-scale testing and production in under 18 months by reducing the number of laboratory trials required to optimize the formula, production and waste treatment aspects of the product simultaneously. The development material involves a polymer matrix that is applied to surfaces as part of a decontamination system. The polymer coating serves to initially "fix" the contaminants in place for detection and ultimate elimination. Upon mechanical entrapment and removal, the polymer coating containing the radioactive isotopes can be dissolved in a solvent processor, where separation of the radioactive metallic particles can take place. Ultimately, only the collection of divided solids should be disposed of as nuclear waste. This creates an attractive alternative to direct landfilling or incineration. This philosophy also provides waste generators a way to significantly reduce waste and associated costs, and helps meet regulatory, safety and environmental requirements. In order for the polymeric film to exhibit the desired performance, a combination of discrete constraints must be fulfilled. These interacting characteristics include the choice of polymer used for construction, drying time, storage constraints, decontamination ability, removal behavior, application process, coating strength and dissolvability. Identification of an optimized formulation that is suitable for this entire decontamination system requires integration of all the interlacing characteristics of the coating composition that affect the film behavior. A novel systematic method for developing quantitative values for these qualitative characteristics is developed in order to simultaneously optimize the design formulation subject to the discrete product specifications. This synthesis procedure encompasses intrinsic characteristics vital to successful product development, which allows the derived model optimizations to operate independently of the polymer film application. This contribution illustrates the optimized synthesis example, by which the development of a large range of polymeric compounds and mixtures can be completed. (Abstract shortened by UMI.)
Defect design of insulation systems for photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.
1981-01-01
A defect-design approach to sizing electrical insulation systems for terrestrial photovoltaic modules is presented. It consists of gathering voltage-breakdown statistics on various thicknesses of candidate insulation films where, for a designated voltage, module failure probabilities for enumerated thickness and number-of-layer film combinations are calculated. Cost analysis then selects the most economical insulation system. A manufacturing yield problem is solved to exemplify the technique. Results for unaged Mylar suggest using fewer layers of thicker films. Defect design incorporates effects of flaws in optimal insulation system selection, and obviates choosing a tolerable failure rate, since the optimization process accomplishes that. Exposure to weathering and voltage stress reduces the voltage-withstanding capability of module insulation films. Defect design, applied to aged polyester films, promises to yield reliable, cost-optimal insulation systems.
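The flavour of the defect-design calculation can be captured in a few lines: given a per-site breakdown probability for a single film at the design voltage, a stack of n layers fails only where defects coincide in every layer, and cost analysis then picks the cheapest qualifying stack. The probabilities, costs, site count and target below are invented, not the Mylar data from the paper.

```python
import numpy as np

# Hypothetical per-site breakdown probabilities at the design voltage, by film thickness.
p_single = {25: 0.020, 50: 0.008, 75: 0.003}      # thickness (um) -> defect probability
cost_per_layer = {25: 1.0, 50: 1.6, 75: 2.1}      # relative film cost (assumed)
n_sites = 500                                      # independent defect sites per module
target = 1e-3                                      # acceptable module failure probability

best = None
for thickness, p in p_single.items():
    for layers in range(1, 5):
        # The module fails only where defects line up in every layer of the stack.
        p_module = 1.0 - (1.0 - p ** layers) ** n_sites
        if p_module <= target:
            cost = layers * cost_per_layer[thickness]
            if best is None or cost < best[0]:
                best = (cost, thickness, layers, p_module)

print("cheapest qualifying stack (cost, thickness_um, layers, p_fail):", best)
```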
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
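A toy version of the ISLOCE idea, assuming scikit-learn and SciPy are available: sample an "expensive" subsystem model offline, fit a small neural-network surrogate, and let a system-level optimizer search only the surrogate. The subsystem function, network size and optimizer choice are stand-ins, not the Space Shuttle external tank models used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def expensive_subsystem(x):
    # Stand-in for a costly subsystem analysis (e.g., a structural mass estimate).
    return (x[0] - 1.2) ** 2 + 0.5 * np.sin(3 * x[1]) + x[1] ** 2

# Sample the subsystem offline to train the parametric surrogate.
rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, (300, 2))
y = np.array([expensive_subsystem(xi) for xi in X])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, y)

# The system-level optimizer works on the cheap surrogate only.
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=[0.0, 0.0], method="Nelder-Mead")
print("surrogate optimum:", res.x, "true value there:", expensive_subsystem(res.x))
```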
NASA Astrophysics Data System (ADS)
Golinko, I. M.; Kovrigo, Yu. M.; Kubrak, A. I.
2014-03-01
An express method for optimally tuning analog PI and PID controllers is considered. An integral quality criterion that also minimizes the control output is proposed for optimizing control systems. The suggested criterion differs from existing ones in that the control output applied to the technological process is taken into account in a correct manner, which makes it possible to maximally reduce the expenditure of material and/or energy resources in controlling industrial equipment. With control organized in this manner, less wear and a longer service life of control devices are achieved. The unimodal nature of the proposed criterion for optimal controller tuning is numerically demonstrated using the methods of optimization theory. A functional interrelation between the optimal controller parameters and the dynamic properties of the controlled plant is numerically determined for a single-loop control system. The results obtained from simulation of transients in a control system carried out using the proposed and existing functional dependences are compared with each other. The proposed calculation formulas differ from existing ones by their simple structure and highly accurate search for the optimal controller tuning parameters. The obtained calculation formulas are recommended for use by automation specialists in the design and optimization of control systems.
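A minimal numerical illustration of tuning a PI controller against an integral criterion that penalizes both the error and the control output, on a first-order toy plant; the plant model, weighting and optimizer are assumptions, not the formulas derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def cost(params, k_plant=1.0, tau=5.0, rho=0.05, dt=0.05, t_end=60.0):
    """Integral of squared error plus weighted squared control for a PI loop."""
    kp, ki = params
    x, integ, j = 0.0, 0.0, 0.0
    for _ in np.arange(0.0, t_end, dt):
        e = 1.0 - x                             # unit setpoint step
        integ += e * dt
        u = kp * e + ki * integ                 # PI control law
        x += dt * (-x + k_plant * u) / tau      # first-order plant response
        j += (e * e + rho * u * u) * dt         # integral criterion with control penalty
    return j

res = minimize(cost, x0=[1.0, 0.1], method="Nelder-Mead")
print("optimal PI settings (Kp, Ki):", res.x)
```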
A Chemical Engineer's Perspective on Health and Disease
Androulakis, Ioannis P.
2014-01-01
Chemical process systems engineering considers complex supply chains, which are coupled networks of dynamically interacting systems. The quest to optimize the supply chain while meeting robustness and flexibility constraints in the face of ever-changing environments necessitated the development of theoretical and computational tools for the analysis, synthesis and design of such complex engineered architectures. However, it was realized early on that optimality is a complex characteristic required to achieve a proper balance between multiple, often competing, objectives. As we begin to unravel life's intricate complexities, we realize that living systems share similar structural and dynamic characteristics; hence much can be learned about biological complexity from engineered systems. In this article, we draw analogies between concepts in process systems engineering and conceptual models of health and disease; establish connections between these concepts and physiologic modeling; and describe how these mirror onto the physiological counterparts of engineered systems. PMID:25506103
Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A
2012-07-02
Systems biology allows the analysis of biological system behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models for this class of systems and the presence of constraints in the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant on one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
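A minimal control vector parameterization sketch on a scalar toy ODE: the input is represented by a few piecewise-constant levels, the state equation is integrated forward, and an outer optimizer adjusts the levels to track a target trajectory. The dynamics, target and weights are placeholders, far simpler than the PDE-constrained chemotaxis and FitzHugh-Nagumo problems solved in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T, n_seg = 10.0, 5                        # horizon and number of control segments
edges = np.linspace(0.0, T, n_seg + 1)

def simulate(levels):
    def rhs(t, x):
        seg = min(np.searchsorted(edges, t, side="right") - 1, n_seg - 1)
        return -0.5 * x[0] + levels[seg]  # toy scalar dynamics dx/dt = -0.5 x + u(t)
    sol = solve_ivp(rhs, (0.0, T), [0.0], t_eval=np.linspace(0, T, 200))
    return sol.t, sol.y[0]

def objective(levels):
    t, x = simulate(levels)
    target = 1.0 - np.exp(-t / 3.0)       # desired trajectory (placeholder)
    tracking = np.sum((x - target) ** 2) * (t[1] - t[0])
    effort = 1e-3 * np.sum(np.asarray(levels) ** 2)
    return tracking + effort

res = minimize(objective, x0=np.zeros(n_seg), method="Nelder-Mead")
print("optimal piecewise-constant controls:", res.x)
```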
Optimal interdependence enhances the dynamical robustness of complex systems.
Singh, Rishu Kumar; Sinha, Sitabhra
2017-08-01
Although interdependent systems have usually been associated with increased fragility, we show that strengthening the interdependence between dynamical processes on different networks can make them more likely to survive over long times. By coupling the dynamics of networks that in isolation exhibit catastrophic collapse with extinction of nodal activity, we demonstrate system-wide persistence of activity for an optimal range of interdependence between the networks. This is related to the appearance of attractors of the global dynamics comprising disjoint sets ("islands") of stable activity.
Shape Optimization for Navier-Stokes Equations with Algebraic Turbulence Model: Existence Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulicek, Miroslav; Haslinger, Jaroslav; Malek, Josef
2009-10-15
We study a shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to an optimal control problem in which the control variable is the shape of the domain representing the header, while the state problem is represented by a generalized stationary Navier-Stokes system with nontrivial mixed boundary conditions. In this paper we prove the existence of solutions both to the generalized Navier-Stokes system and to the shape optimization problem.
Optimal Solution for an Engineering Applications Using Modified Artificial Immune System
NASA Astrophysics Data System (ADS)
Padmanabhan, S.; Chandrasekaran, M.; Ganesan, S.; Patan, Mahamed Naveed Khan; Navakanth, Polina
2017-03-01
Engineering optimization plays an essential role in several engineering application areas such as process design, product design, re-engineering and new product development. In engineering, a very good solution is usually achieved by comparing several alternative solutions using prior problem knowledge. Optimization algorithms provide systematic and efficient ways of constructing and comparing new design solutions in order to arrive at an optimal design, maximizing solution efficiency and achieving the best possible design outcome. In this paper, a new evolutionary Modified Artificial Immune System (MAIS) algorithm is used to optimize an engineering application of gear drive design. The results are compared with the existing design.
Design Optimization of Gas Generator Hybrid Propulsion Boosters
NASA Technical Reports Server (NTRS)
Weldon, Vincent; Phillips, Dwight; Fink, Larry
1990-01-01
A methodology used in support of a study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design that meets various specific optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated along with preliminary cost and reliability estimates.
Gholikandi, Gagik Badalians; Kazemirad, Khashayar
2018-03-01
In this study, the performance of the electrochemical peroxidation (ECP) process for removing the volatile suspended solids (VSS) content of waste-activated sludge was evaluated. The Fe2+ ions required by the process were obtained directly from iron electrodes in the system. The performance of the ECP process was investigated in various operational conditions employing a laboratory-scale pilot setup and optimized by response surface methodology (RSM). According to the results, the ECP process showed its best performance when the pH value, current density, H2O2 concentration and the retention time were 3, 3.2 mA/cm2, 1,535 mg/L and 240 min, respectively. In these conditions, the introduced Fe2+ concentration was approximately 500 mg/L and the VSS removal efficiency about 74%. Moreover, the results of the microbial characteristics of the raw and the stabilized sludge demonstrated that the ECP process is able to remove close to 99.9% of the coliforms in the raw sludge during the stabilization process. The energy consumption evaluation showed that the required energy of the ECP reactor (about 1.8-2.5 kWh per kg VSS removed) is considerably lower than for aerobic digestion, the conventional waste-activated sludge stabilization method (about 2-3 kWh per kg VSS removed). The RSM optimization process showed that the best operational conditions of the ECP process comply with the experimental results, and the actual and the predicted results are in good conformity with each other. This feature makes it possible to predict the introduced Fe2+ concentrations into the system and the VSS removal efficiency of the process precisely.
A Holistic Approach to Networked Information Systems Design and Analysis
2016-04-15
attain quite substantial savings. 11. Optimal algorithms for energy harvesting in wireless networks. We use a Markov-decision-process (MDP) based ... approach to obtain optimal policies for transmissions. The key advantage of our approach is that it holistically considers information and energy in a ... Coding technique to minimize delays and the number of transmissions in wireless systems. As we approach an era of ubiquitous computing with information
Development of a solar-powered residential air conditioner
NASA Technical Reports Server (NTRS)
1975-01-01
The initial objective of the program was the optimization (in terms of cost and performance) of a Rankine cycle mechanical refrigeration system which utilizes thermal energy from a flat solar collector for air conditioning residential buildings. However, feasibility investigations of the adsorption process revealed that a desiccant-type air conditioner offers many significant advantages. As a result, limited efforts were expended toward the optimization of such a system.
Game Theory and Risk-Based Levee System Design
NASA Astrophysics Data System (ADS)
Hui, R.; Lund, J. R.; Madani, K.
2014-12-01
Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, land owners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the land owners on each river bank develop their design strategies using risk-based economic optimization. For each land owner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
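A minimal numerical sketch of the setup described above: each bank chooses a levee height to minimize its own annualized construction cost plus expected annual damage, where raising one levee shifts flood risk toward the opposite bank. The cost and risk-transfer functions below are illustrative assumptions, not the paper's calibrated model; the sketch contrasts the best-response (Nash) outcome with the social planner's joint optimum.

```python
# Two-bank levee game: Nash equilibrium via best-response iteration
# versus the social planner's joint optimum (illustrative cost model).
import numpy as np
from scipy.optimize import minimize_scalar, minimize

BASE_DAMAGE = 100.0   # expected annual damage with no levees (hypothetical units)
BUILD_COST = 2.0      # annualized construction cost per unit of levee height

def expected_damage(own_h, other_h):
    # Flood risk falls with own height but rises as the opposite levee
    # deflects more water across the river (risk transfer).
    return BASE_DAMAGE * np.exp(-0.5 * own_h) * (1.0 + 0.3 * other_h)

def total_cost(own_h, other_h):
    return BUILD_COST * own_h + expected_damage(own_h, other_h)

def best_response(other_h):
    res = minimize_scalar(lambda h: total_cost(h, other_h),
                          bounds=(0.0, 20.0), method="bounded")
    return res.x

# Best-response iteration converges to the non-cooperative Nash equilibrium.
h1 = h2 = 0.0
for _ in range(100):
    h1, h2 = best_response(h2), best_response(h1)
print("Nash heights:", round(h1, 2), round(h2, 2),
      "system cost:", round(total_cost(h1, h2) + total_cost(h2, h1), 2))

# Social planner: minimize the combined cost of both banks jointly.
res = minimize(lambda h: total_cost(h[0], h[1]) + total_cost(h[1], h[0]),
               x0=[1.0, 1.0], bounds=[(0.0, 20.0)] * 2)
print("planner heights:", np.round(res.x, 2), "system cost:", round(res.fun, 2))
```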
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA
2006-03-21
A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, and drawing from a set of pre-defined algorithms created using best engineering practices an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.
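A schematic sketch of the monitoring loop described in the claim: measured parameters are validated, the actual condition is compared against a stored optimal condition, and variances beyond tolerance are mapped to pre-defined explanations and remedial actions. The parameter names, tolerances, and rule table below are hypothetical placeholders, not the patented DSOM rule base.

```python
# Schematic condition-monitoring loop: compare measured boiler parameters
# against optimal set points and emit pre-defined explanations/remedies.
OPTIMAL = {"stack_temp_C": 180.0, "o2_pct": 3.0, "feedwater_temp_C": 105.0}
TOLERANCE = {"stack_temp_C": 10.0, "o2_pct": 0.5, "feedwater_temp_C": 5.0}

# Pre-defined diagnostic rules (best-engineering-practice style entries).
RULES = {
    ("stack_temp_C", "high"): ("Fouled heat-transfer surfaces likely.",
                               "Schedule tube cleaning / soot blowing."),
    ("o2_pct", "high"): ("Excess combustion air.",
                         "Retune air-fuel ratio controller."),
    ("feedwater_temp_C", "low"): ("Feedwater heater underperforming.",
                                  "Inspect heater and condensate return."),
}

def validate(measurement):
    """Reject physically implausible sensor values (simple range check)."""
    return {k: v for k, v in measurement.items() if -50.0 < v < 1000.0}

def assess(measurement):
    findings = []
    for name, value in validate(measurement).items():
        variance = value - OPTIMAL[name]
        if abs(variance) <= TOLERANCE[name]:
            continue
        direction = "high" if variance > 0 else "low"
        cause, action = RULES.get((name, direction),
                                  ("Unmodeled variance.", "Investigate manually."))
        findings.append((name, round(variance, 1), cause, action))
    return findings

for finding in assess({"stack_temp_C": 205.0, "o2_pct": 4.1, "feedwater_temp_C": 104.0}):
    print(finding)
```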
Numerical optimization of a multi-jet cooling system for the blown film extrusion
NASA Astrophysics Data System (ADS)
Janas, M.; Wortberg, J.
2015-05-01
The limiting factor for every extrusion process is the cooling. For the blown film process, this task is usually done by means of a single or dual lip air ring. Prior work has shown that two major effects are responsible for a poor heat transfer. The first one is the interaction between the jet and the ambient air. It reduces the velocity of the jet and enlarges the straight flow. The other one is the formation of a laminar boundary layer on the film surface due to the fast flowing cooling air. In this case, the boundary layer insulates the film and prevents an efficient heat transfer. To improve the heat exchange, a novel cooling approach is developed, called Multi-Jet. The new cooling system uses several slit nozzles over the whole tube formation zone for cooling the film. In contrast to a conventional system, the cooling air is guided vertically onto the film surface at different heights to penetrate the boundary sublayer. Simultaneously, the tube formation zone is effectively enclosed, which reduces the interaction with the ambient air. For the numerical optimization of the Multi-Jet system, a new procedure is developed. First, a prediction model identifies a promising cooling configuration. To do so, the prediction model computes a film curve using the formulation from Zatloukal-Vlcek and the energy balance for the film temperature. Thereafter, the optimized cooling geometry is investigated in detail using a process model for the blown film extrusion that is able to compute a realistic bubble behavior depending on the cooling situation. In this paper, the Multi-Jet cooling system is numerically optimized for several different process states, such as mass throughputs and blow-up ratios, using one slit nozzle setting. For each process condition, the best cooling result has to be achieved. Therefore, the height of every nozzle over the tube formation zone is adjustable. The other geometrical parameters of the cooling system, such as the nozzle diameter or the nozzle width, are fixed.
Application of a Parallelizable Perfusion Bioreactor for Physiologic 3D Cell Culture.
Egger, Dominik; Spitz, Sarah; Fischer, Monica; Handschuh, Stephan; Glösmann, Martin; Friemert, Benedikt; Egerbacher, Monika; Kasper, Cornelia
2017-01-01
It is crucial but challenging to maintain physiologic conditions during the cultivation of 3D cell-scaffold constructs for the optimization of 3D cell culture processes. Therefore, we demonstrate the benefits of a recently developed miniaturized perfusion bioreactor together with a specialized incubator system that allows for the cultivation of multiple samples while screening different conditions. To this end, a decellularized bone matrix was tested for its suitability for 3D osteogenic differentiation under flow perfusion conditions. Subsequently, physiologic shear stress and hydrostatic pressure (HP) conditions were optimized for osteogenic differentiation of human mesenchymal stem cells (MSCs). X-ray computed microtomography and scanning electron microscopy (SEM) revealed a closed cell layer covering the entire matrix. Osteogenic differentiation assessed by alkaline phosphatase activity and SEM was found to be increased in all dynamic conditions. Furthermore, screening of different fluid shear stress (FSS) conditions revealed 1.5 mL/min (equivalent to ∼10 mPa shear stress) to be optimal. However, no distinct effect of HP compared to flow perfusion without HP on osteogenic differentiation was observed. Notably, throughout all experiments, cells cultivated under FSS or HP conditions displayed increased osteogenic differentiation, which underlines the importance of physiologic conditions. In conclusion, the bioreactor system was used for biomaterial testing and to develop and optimize a 3D cell culture process for the osteogenic differentiation of MSCs. Due to its versatility and higher throughput efficiency, we hypothesize that this bioreactor/incubator system will advance the development and optimization of a variety of 3D cell culture processes. © 2017 S. Karger AG, Basel.
Adaptive self-organization of Bali's ancient rice terraces.
Lansing, J Stephen; Thurner, Stefan; Chung, Ning Ning; Coudurier-Curveur, Aurélie; Karakaş, Çağil; Fesenmyer, Kurt A; Chew, Lock Yue
2017-06-20
Spatial patterning often occurs in ecosystems as a result of a self-organizing process caused by feedback between organisms and the physical environment. Here, we show that the spatial patterns observable in centuries-old Balinese rice terraces are also created by feedback between farmers' decisions and the ecology of the paddies, which triggers a transition from local to global-scale control of water shortages and rice pests. We propose an evolutionary game, based on local farmers' decisions that predicts specific power laws in spatial patterning that are also seen in a multispectral image analysis of Balinese rice terraces. The model shows how feedbacks between human decisions and ecosystem processes can evolve toward an optimal state in which total harvests are maximized and the system approaches Pareto optimality. It helps explain how multiscale cooperation from the community to the watershed scale could persist for centuries, and why the disruption of this self-organizing system by the Green Revolution caused chaos in irrigation and devastating losses from pests. The model shows that adaptation in a coupled human-natural system can trigger self-organized criticality (SOC). In previous exogenously driven SOC models, adaptation plays no role, and no optimization occurs. In contrast, adaptive SOC is a self-organizing process where local adaptations drive the system toward local and global optima.
Development of a parameter optimization technique for the design of automatic control systems
NASA Technical Reports Server (NTRS)
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
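The workflow described above — pick a simple controller structure, define a model performance index that measures deviation from a desired reference response, and let the computer select the parameter values that minimize it — can be sketched as follows. The second-order plant, the reference model, and the integral-squared-error index are illustrative assumptions standing in for the report's specific dynamics.

```python
# Sketch of model-performance-index parameter optimization: tune a
# proportional gain so the closed-loop step response of a toy plant
# tracks a desired reference-model response (ISE criterion).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

WN_REF, ZETA_REF = 2.0, 0.7          # desired reference-model dynamics

def reference_response(t):
    # Unit-step response of the second-order reference model.
    wd = WN_REF * np.sqrt(1.0 - ZETA_REF**2)
    phi = np.arccos(ZETA_REF)
    return 1.0 - np.exp(-ZETA_REF * WN_REF * t) * np.sin(wd * t + phi) / np.sin(phi)

def closed_loop_step(kp, t_eval):
    # Plant 1/(s(s+1)) under proportional feedback: x'' + x' + kp*x = kp.
    def rhs(t, x):
        return [x[1], kp * (1.0 - x[0]) - x[1]]
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [0.0, 0.0], t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

def performance_index(kp):
    t = np.linspace(0.0, 10.0, 500)
    error = closed_loop_step(kp, t) - reference_response(t)
    return float(np.sum(error**2) * (t[1] - t[0]))   # integral-squared error

best = minimize_scalar(performance_index, bounds=(0.1, 50.0), method="bounded")
print(f"selected gain Kp = {best.x:.3f}, performance index = {best.fun:.5f}")
```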
Nie, Xianghui; Huang, Guo H; Li, Yongping
2009-11-01
This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.
Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.
2016-02-01
High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.
Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong
2017-01-01
Automated fiber placement (AFP) process includes a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method aiming at optimizing processing parameters in an AFP process, where multi-scale effect, energy consumption, energy utilization efficiency and mechanical properties of micro-system could be taken into account synthetically. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties of macro–meso–scale are obtained by Finite Element Method (FEM). A multi-scale energy transfer model is then established to input the macroscopic results into the microscopic system as its boundary condition, which can communicate with different scales. Furthermore, microscopic characteristics, mainly micro-scale adsorption energy, diffusion coefficient entropy–enthalpy values, are calculated under different processing parameters based on molecular dynamics method. Low-entropy region is then obtained in terms of the interrelation among entropy–enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity) and processing parameters to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively, and further improve the mechanical properties of laminates. PMID:28869520
NASA Technical Reports Server (NTRS)
Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.
2015-01-01
Friction-stir-welding (FSW) is a solid-state joining process where joint properties are dependent on welding process parameters. In the current study three critical process parameters including spindle speed, plunge force, and welding speed are considered key factors in the determination of ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have direct influence on UTS of the welded joints. Utilizing experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), type of MFs, and combination of the four input variables (spindle speed, plunge force, welding speed, and EFI) utilizing a MATLAB platform. Note EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from FSW process parameters. By comparing ANFIS and ANN predicted results, it was found that optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of UTS of FSW joints.
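The data-driven prediction step can be sketched with a small neural-network regressor mapping (spindle speed, plunge force, welding speed) to UTS. The training data below are synthetic placeholders (not the 73 weld schedules), and a plain MLP stands in for the paper's optimized ANFIS/ANN models; it only illustrates the fit-then-predict workflow.

```python
# Sketch of data-driven UTS prediction from FSW process parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 73
X = np.column_stack([
    rng.uniform(200, 400, n),     # spindle speed (rpm, assumed range)
    rng.uniform(15, 35, n),       # plunge force (kN, assumed range)
    rng.uniform(50, 150, n),      # welding speed (mm/min, assumed range)
])
# Hypothetical smooth response with an interior optimum plus noise.
uts = (250 - 0.001 * (X[:, 0] - 300)**2 - 0.05 * (X[:, 1] - 25)**2
       - 0.002 * (X[:, 2] - 100)**2 + rng.normal(0, 2, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0))
scores = cross_val_score(model, X, uts, cv=5, scoring="r2")
print("cross-validated R^2:", np.round(scores, 3))

model.fit(X, uts)
print("predicted UTS at (320 rpm, 25 kN, 90 mm/min):",
      float(model.predict([[320, 25, 90]])[0]))
```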
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on the initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require experience-based tuning. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has sufficient performance.
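The idea of obtaining the solution as a stochastic average can be sketched generically: candidate solutions are sampled, weighted by exp(-cost/temperature), and the weighted mean becomes the next search center. The block below is a path-integral-style update on a toy multimodal test function, not the authors' hang-glider formulation; the objective, sample size, and annealing schedule are assumptions.

```python
# Path-integral-style stochastic optimization sketch: the estimate is the
# expected value of sampled candidates weighted by exp(-cost / T).
import numpy as np

def cost(x):
    # Toy multimodal objective (Rastrigin-like) standing in for the
    # hang-glider design/trajectory objective.
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=-1)

rng = np.random.default_rng(42)
mean = rng.uniform(-4.0, 4.0, size=2)   # arbitrary starting point
sigma, temperature = 2.0, 1.0

for _ in range(60):
    samples = mean + sigma * rng.standard_normal((200, 2))
    costs = cost(samples)
    # Soft-min weighting: low-cost samples dominate the stochastic average.
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    mean = weights @ samples
    sigma *= 0.95                        # gradually focus the search
    temperature *= 0.95

print("estimated optimum:", np.round(mean, 3), "cost:", round(float(cost(mean)), 4))
```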
McKinnon, Adam D; Ozanne-Smith, Joan; Pope, Rodney
2009-05-01
Injury prevention guided by robust injury surveillance systems (ISS's) can effectively reduce military injury rates, but ISS's depend on human interaction. This study examined experiences and requirements of key users of the Australian Defence Force (ADF) ISS to determine whether the operation of the ISS was optimal, whether there were any shortcomings, and if so, how these shortcomings might be addressed. Semistructured interviews were conducted with 18 Australian Defence Department participants located throughout Australia. Grounded theory methods were used to analyze data by developing an understanding of processes and social phenomena related to injury surveillance systems within the military context. Interviews were recorded and professionally transcribed and information contained in the transcripts was analyzed using NVivo. Key themes relating to the components of an injury surveillance system were identified from the analysis. A range of processes and sociocultural factors influence the utility of military ISS's. These are discussed in detail and should be considered in the future design and operation of military ISS's to facilitate optimal outcomes for injury prevention.
NASA Astrophysics Data System (ADS)
Wei, J.; Wang, G.; Liu, R.
2008-12-01
The Tarim River Basin contains the longest inland river in China. Due to water scarcity, ecological fragility is becoming a significant constraint to sustainable development in this region. To effectively manage the limited water resources for ecological purposes and for conventional water utilization purposes, a real-time water resources allocation Decision Support System (DSS) has been developed. Based on workflows of the water resources regulations and comprehensive analysis of the efficiency and feasibility of water management strategies, the DSS includes information systems that perform data acquisition, management and visualization, and model systems that perform hydrological forecasting, water demand prediction, flow routing simulation and water resources optimization of the hydrological and water utilization process. An optimization and process control strategy is employed to dynamically allocate the water resources among the different stakeholders. Competing targets and constraints are taken into consideration through multi-objective optimization with different priorities. The DSS of the Tarim River Basin has been developed and successfully utilized to support the water resources management of the Tarim River Basin since 2005.
Design optimization of aircraft landing gear assembly under dynamic loading
NASA Astrophysics Data System (ADS)
Wong, Jonathan Y. B.
As development cycles and prototyping iterations begin to decrease in the aerospace industry, it is important to develop and improve practical methodologies to meet all design metrics. This research presents an efficient methodology that applies high-fidelity multi-disciplinary design optimization techniques to commercial landing gear assemblies for weight reduction, cost savings, and improved structural performance under dynamic loading. Specifically, a slave link subassembly was selected as the candidate to explore the feasibility of this methodology. The design optimization process utilized in this research was sectioned into three main stages: setup, optimization, and redesign. The first stage involved the creation and characterization of the models used throughout this research. The slave link assembly was modelled with a simplified landing gear test, replicating the behavior of the physical system. Through extensive review of the literature and collaboration with Safran Landing Systems, the dynamic and structural behavior of the system was characterized and defined mathematically. Once defined, the characterized behaviors for the slave link assembly were used to conduct a Multi-Body Dynamic (MBD) analysis to determine the dynamic and structural response of the system. These responses were then utilized in a topology optimization through the use of the Equivalent Static Load Method (ESLM). The results of the optimization were interpreted and later used to generate improved designs in terms of weight, cost, and structural performance under dynamic loading in stage three. The optimized designs were then validated using the model created for the MBD analysis of the baseline design. The design generation process employed two different approaches for post-processing the topology results. The first approach implemented a close replication of the topology results, resulting in a design with an overall peak stress increase of 74%, weight savings of 67%, and no apparent cost savings due to complex features present in the design. The second design approach focused on realizing complementary benefits in cost and weight savings. As a result, this design was able to achieve an overall peak stress increase of 6%, and weight and cost savings of 36% and 60%, respectively.
Switching and optimizing control for coal flotation process based on a hybrid model
Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang
2017-01-01
Flotation is an important part of coal preparation, and the flotation column is widely applied as efficient flotation equipment. This process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model is designed to optimize the reagent dosages based on expert system. Finally, the least squares-support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, Luis A.; Clark, Gemma G.; Lister, Tedd E.
The rapid growth of electronic waste can be viewed both as an environmental threat and as an attractive source of minerals that can reduce the mining of natural resources and stabilize the market of critical materials, such as rare earths. In this article, response surface methodology was used to optimize a previously developed electrochemical recovery process for base metals from electronic waste using a mild oxidant (Fe3+). Through this process an effective extraction of base metals can be achieved, enriching the concentration of precious metals and significantly reducing the environmental impacts and operational costs associated with waste generation and chemical consumption. The optimization was performed using a bench-scale system specifically designed for this process. Operational parameters such as flow rate, applied current density and iron concentration were optimized to reduce the specific energy consumption of the electrochemical recovery process to 1.94 kWh per kg of metal recovered at a processing rate of 3.3 g of electronic waste per hour.
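Response surface methodology as used above can be sketched as fitting a quadratic model of the specific energy consumption to the three operating factors and minimizing it within the experimental region. The factor ranges, design points, and response values below are synthetic placeholders, not the paper's bench-scale data.

```python
# Response-surface sketch: fit a quadratic model of specific energy
# consumption vs. (flow rate, current density, Fe3+ concentration) and
# minimize it over the assumed experimental region.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

rng = np.random.default_rng(3)
# Assumed factor ranges: flow rate [L/min], current density [mA/cm2], Fe3+ [g/L].
lo, hi = np.array([0.5, 5.0, 5.0]), np.array([3.0, 30.0, 40.0])
X = rng.uniform(lo, hi, size=(30, 3))

def true_energy(x):  # hypothetical response with an interior optimum
    return (2.5 + 0.8 * (x[..., 0] - 1.8)**2 + 0.004 * (x[..., 1] - 18)**2
            + 0.002 * (x[..., 2] - 22)**2)

y = true_energy(X) + rng.normal(0, 0.05, len(X))

surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(X, y)

res = minimize(lambda x: float(surface.predict(x.reshape(1, -1))[0]),
               x0=(lo + hi) / 2, bounds=list(zip(lo, hi)))
print("optimal factors (flow, current density, Fe3+):", np.round(res.x, 2))
print("predicted minimum specific energy [kWh/kg]:", round(res.fun, 3))
```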
Shape optimization of tibial prosthesis components
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Mraz, P. J.; Davy, D. T.
1993-01-01
NASA technology and optimal design methodologies originally developed for the optimization of composite structures (engine blades) are adapted and applied to the optimization of orthopaedic knee implants. A method is developed enabling the shape tailoring of the tibial components of a total knee replacement implant for optimal interaction within the environment of the tibia. The shape of the implant components are optimized such that the stresses in the bone are favorably controlled to minimize bone degradation, to improve the mechanical integrity of the implant/interface/bone system, and to prevent failures of the implant components. A pilot tailoring system is developed and the feasibility of the concept is demonstrated and evaluated. The methodology and evolution of the existing aerospace technology from which this pilot optimization code was developed is also presented and discussed. Both symmetric and unsymmetric in-plane loading conditions are investigated. The results of the optimization process indicate a trend toward wider and tapered posts as well as thicker backing trays. Unique component geometries were obtained for the different load cases.
NASA Technical Reports Server (NTRS)
Welstead, Jason
2014-01-01
This research focused on incorporating stability and control into a multidisciplinary design optimization on a Boeing 737-class advanced concept called the D8.2b. A new method of evaluating the aircraft handling performance using quantitative evaluation of the system response to disturbances, including perturbations, continuous turbulence, and discrete gusts, is presented. A multidisciplinary design optimization was performed using the D8.2b transport aircraft concept. The configuration was optimized for minimum fuel burn using a design range of 3,000 nautical miles. Optimization cases were run using fixed tail volume coefficients, static trim constraints, and static trim and dynamic response constraints. A Cessna 182T model was used to test the various dynamic analysis components, ensuring the analysis was behaving as expected. Results of the optimizations show that including stability and control in the design process drastically alters the optimal design, indicating that stability and control should be included in conceptual design to avoid system level penalties later in the design process.
Statistical process control using optimized neural networks: a case study.
Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid
2014-09-01
The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart demonstrates that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two aspects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape features and statistical features is proposed as the efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Second, a hybrid heuristic recognition system is introduced based on the cuckoo optimization algorithm (COA) to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
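A compact sketch of the two-module idea: simple statistical and shape features are extracted from fixed-length control-chart windows, and a small neural network classifies the pattern. The pattern generators, feature set, and plain MLP below are illustrative stand-ins; the paper's classifier is further tuned with the cuckoo optimization algorithm, which is not reproduced here.

```python
# Control-chart pattern recognition sketch: synthetic patterns ->
# statistical/shape features -> small neural-network classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
WINDOW = 32

def make_pattern(kind):
    t = np.arange(WINDOW)
    noise = rng.normal(0, 1, WINDOW)
    if kind == 0:   return noise                                  # normal
    if kind == 1:   return noise + 0.1 * t                        # upward trend
    if kind == 2:   return noise - 0.1 * t                        # downward trend
    if kind == 3:   return noise + 2.0 * (t >= WINDOW // 2)       # upward shift
    return noise + 1.5 * np.sin(2 * np.pi * t / 8)                # cyclic

def features(x):
    t = np.arange(len(x))
    slope = np.polyfit(t, x, 1)[0]                              # trend slope
    half_diff = x[len(x)//2:].mean() - x[:len(x)//2].mean()     # shift indicator
    return [x.mean(), x.std(), slope, half_diff, np.ptp(x)]

X, y = [], []
for kind in range(5):
    for _ in range(200):
        X.append(features(make_pattern(kind)))
        y.append(kind)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
clf.fit(X_tr, y_tr)
print("recognition accuracy:", round(clf.score(X_te, y_te), 3))
```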
Medverd, Jonathan R; Cross, Nathan M; Font, Frank; Casertano, Andrew
2013-08-01
Radiologists routinely make decisions with only limited information when assigning protocol instructions for the performance of advanced medical imaging examinations. Opportunity exists to simultaneously improve the safety, quality and efficiency of this workflow through the application of an electronic solution leveraging health system resources to provide concise, tailored information and decision support in real-time. Such a system has been developed using an open source, open standards design for use within the Veterans Health Administration. The Radiology Protocol Tool Recorder (RAPTOR) project identified key process attributes as well as inherent weaknesses of paper processes and electronic emulators of paper processes to guide the development of its optimized electronic solution. The design provides a kernel that can be expanded to create an integrated radiology environment. RAPTOR has implications relevant to the greater health care community, and serves as a case model for modernization of legacy government health information systems.
Using fuzzy rule-based knowledge model for optimum plating conditions search
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to plating process modeling aimed at reducing the non-uniformity of the thickness of plated coatings. However, these approaches do not take into account the experience, knowledge, and intuition of the decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal conditions for applying electroplated coatings is proposed; it uses a rule-based knowledge model and allows one to reduce the unevenness of the product thickness distribution. The block diagrams of a conventional control system of a galvanic process as well as the system based on the production model of knowledge are considered. It is shown that the fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.
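A minimal hand-rolled sketch of a fuzzy production-rule model of the kind described: triangular membership functions fuzzify a measured thickness non-uniformity, IF-THEN rules map it to a correction of the plating current density, and the output is defuzzified by a weighted average. The membership ranges and rules are hypothetical, not the paper's knowledge base.

```python
# Minimal fuzzy rule sketch: map measured coating thickness
# non-uniformity (%) to a correction of the plating current density (%).
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def unevenness_memberships(u):
    return {"low":    tri(u, -1.0, 0.0, 8.0),
            "medium": tri(u,  4.0, 10.0, 16.0),
            "high":   tri(u, 12.0, 20.0, 40.0)}

# Rule base (hypothetical expert knowledge): larger non-uniformity ->
# larger reduction of current density. Consequents are crisp singletons.
RULES = {"low": 0.0, "medium": -10.0, "high": -25.0}

def current_density_correction(unevenness_pct):
    mu = unevenness_memberships(unevenness_pct)
    num = sum(mu[label] * RULES[label] for label in RULES)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0   # weighted-average defuzzification

for u in (3.0, 11.0, 22.0):
    print(f"unevenness {u:4.1f}% -> current density correction "
          f"{current_density_correction(u):6.1f}%")
```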
Optimization of a point-focusing, distributed receiver solar thermal electric system
NASA Technical Reports Server (NTRS)
Pons, R. L.
1979-01-01
This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.
Nelson, Carl A; Miller, David J; Oleynikov, Dmitry
2008-01-01
As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
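The tool-queuing idea can be sketched directly: build a graph whose edge weights are observed tool-change frequencies, then find the tool ordering that places frequently consecutive tools next to each other — equivalently, a shortest Hamiltonian path where cost is the negative frequency. The tool names and frequency matrix below are hypothetical, and the brute-force search only suits the small tool sets typical of modular surgical systems.

```python
# Optimal tool queuing sketch: order tools so that frequently consecutive
# tools sit next to each other in the queue (Hamiltonian-path TSP variant).
from itertools import permutations

TOOLS = ["grasper", "scissors", "hook", "needle_driver", "suction"]
# Hypothetical symmetric tool-change frequency matrix (counts of observed
# A->B or B->A changes in recorded procedures).
FREQ = [
    [0, 12,  3,  7, 1],
    [12, 0,  9,  2, 1],
    [3,  9,  0,  4, 6],
    [7,  2,  4,  0, 5],
    [1,  1,  6,  5, 0],
]

def adjacency_score(order):
    """Total change frequency captured by adjacent queue positions."""
    return sum(FREQ[order[i]][order[i + 1]] for i in range(len(order) - 1))

# Brute force is fine for a handful of tools (n! orderings); larger tool
# sets would call for a proper TSP solver or heuristic.
best = max(permutations(range(len(TOOLS))), key=adjacency_score)
print("optimal queue:", " -> ".join(TOOLS[i] for i in best))
print("captured change frequency:", adjacency_score(best))
```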
USDA-ARS?s Scientific Manuscript database
Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...
NASA Astrophysics Data System (ADS)
Fox, Matthew D.
Advanced automotive technology assessment and powertrain design are increasingly performed through modeling, simulation, and optimization. But technology assessments usually target many competing criteria making any individual optimization challenging and arbitrary. Further, independent design simulations and optimizations take considerable time to execute, and design constraints and objectives change throughout the design process. Changes in design considerations usually require re-processing of simulations and more time. In this thesis, these challenges are confronted through CSU's participation in the EcoCAR2 hybrid vehicle design competition. The complexity of the competition's design objectives leveraged development of a decision support system tool to aid in multi-criteria decision making across technologies and to perform powertrain optimization. To make the decision support system interactive, and bypass the problem of long simulation times, a new approach was taken. The result of this research is CSU's architecture selection and component sizing, which optimizes a composite objective function representing the competition score. The selected architecture is an electric vehicle with an onboard range extending hydrogen fuel cell system. The vehicle has a 145kW traction motor, 18.9kWh of lithium ion battery, a 15kW fuel cell system, and 5kg of hydrogen storage capacity. Finally, a control strategy was developed that improves the vehicles performance throughout the driving range under variable driving conditions. In conclusion, the design process used in this research is reviewed and evaluated against other common design methodologies. I conclude, through the highlighted case studies, that the approach is more comprehensive than other popular design methodologies and is likely to lead to a higher quality product. The upfront modeling work and decision support system formulation will pay off in superior and timely knowledge transfer and more informed design decisions. The hypothesis is supported by the three case studies examined in this thesis.
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for the large-scale power system state estimation. The proposed method partitions the power systems into several local areas with non-overlapping states. Unlike the centralized approach where all measurements are sent to a processing center, the proposed method distributes the state estimation task to the local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process such that the influence of the outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
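A stripped-down sketch of the estimation step for a single local area: with the process-model error ignored (as in the mPMHE variant above), the states over the horizon are propagated exactly from the horizon-initial state, which is chosen to minimize the measurement residuals subject to bound constraints on the states. The toy system matrices and data below are assumptions, and the neighbor-coupling terms of the partitioned scheme are omitted for brevity.

```python
# Single-area moving-horizon estimation sketch with an exact process model
# and bound constraints on the states.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.95, 0.10], [0.0, 0.90]])     # local state-transition model
C = np.array([[1.0, 0.0]])                    # only the first state is measured
HORIZON = 10

rng = np.random.default_rng(5)
x_true = np.array([1.0, -0.5])
measurements = []
for _ in range(HORIZON):
    measurements.append(C @ x_true + rng.normal(0, 0.05, 1))
    x_true = A @ x_true
measurements = np.array(measurements)

def propagate(x0):
    xs, x = [], np.asarray(x0, dtype=float)
    for _ in range(HORIZON):
        xs.append(x)
        x = A @ x
    return np.array(xs)

def cost(x0):
    xs = propagate(x0)
    residuals = measurements[:, 0] - xs @ C[0]
    return float(residuals @ residuals)

# State constraints (e.g., known operating limits) handled as bounds.
res = minimize(cost, x0=[0.0, 0.0], bounds=[(-2.0, 2.0), (-2.0, 2.0)])
xs_hat = propagate(res.x)
print("estimated current state:", np.round(xs_hat[-1], 3))
```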
Optimization of a RF-generated CF4/O2 gas plasma sterilization process.
Lassen, Klaus S; Nordby, Bolette; Grün, Reinar
2003-05-15
A sterilization process using an RF-generated (13.56 MHz) CF4/O2 gas plasma was optimized with regard to power, flow rate, exposure time, and RF-system type. The dependency of the sporicidal effect on the spore inoculum positioning in the chamber of the RF systems was also investigated. Dried Bacillus stearothermophilus ATCC 7953 endospores were used as test organisms. The treatments were evaluated on the basis of survival curves and corresponding D values. The only parameter found to affect the sterilization process was the power of the RF system. Higher power resulted in higher kill. Finally, when the samples were placed more than 3-8 cm away from a centrally placed electrode in System 2, the sporicidal effect was reduced. The results are discussed and compared to results from the present literature. The RF excitation source is evaluated to be more appropriate for sterilization processes than the MW source. Copyright 2003 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater 65B: 239-244, 2003
Intelligent Control of Micro Grid: A Big Data-Based Control Center
NASA Astrophysics Data System (ADS)
Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng
2018-01-01
In this paper, a structure of a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage and load are analyzed through the control center, and from the results new trends will be predicted and applied as feedback to optimize the control. Therefore, each step in the micro grid can be adjusted and organized in a form of comprehensive management. A framework of real-time data collection, data processing and data analysis is proposed by employing big data technology. Consequently, integrated distributed generation and an optimized energy storage and transmission process can be implemented in the micro grid system.
Service Bundle Recommendation for Person-Centered Care Planning in Cities.
Kotoulas, Spyros; Daly, Elizabeth; Tommasi, Pierpaolo; Kishimoto, Akihiro; Lopez, Vanessa; Stephenson, Martin; Botea, Adi; Sbodio, Marco; Marinescu, Radu; Rooney, Ronan
2016-01-01
Providing appropriate support for the most vulnerable individuals carries enormous societal significance and economic burden. Yet, finding the right balance between costs, estimated effectiveness and the experience of the care recipient is a daunting task that requires considering vast amount of information. We present a system that helps care teams choose the optimal combination of providers for a set of services. We draw from techniques in Open Data processing, semantic processing, faceted exploration, visual analytics, transportation analytics and multi-objective optimization. We present an implementation of the system using data from New York City and illustrate the feasibility these technologies to guide care workers in care planning.
Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes
Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian
2016-01-01
Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machine processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes. PMID:26751451
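The calibration step mentioned above is typically carried out by estimating a 6x6 calibration matrix that maps the raw channel signals of the elastic element to the six force/moment components using least squares over known applied loads; off-diagonal terms then quantify coupling error. The signal model and data below are synthetic placeholders, not the paper's calibration data.

```python
# Calibration sketch for a six-component force/moment sensor: estimate the
# 6x6 calibration matrix from known applied loads by linear least squares.
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical true sensitivity matrix (signals = S @ loads), with small
# off-diagonal entries representing inter-channel coupling.
S_true = np.eye(6) + 0.02 * rng.standard_normal((6, 6))

loads = rng.uniform(-100, 100, size=(60, 6))                # applied [Fx Fy Fz Mx My Mz]
signals = loads @ S_true.T + rng.normal(0, 0.1, (60, 6))    # measured channels

# Least-squares estimate of C such that loads ~= signals @ C.T
C_cal, *_ = np.linalg.lstsq(signals, loads, rcond=None)
C_cal = C_cal.T

recovered = signals @ C_cal.T
rel_err = np.linalg.norm(recovered - loads) / np.linalg.norm(loads)
print("relative reconstruction error:", round(float(rel_err), 4))

# Coupling error: off-diagonal magnitude of C_cal @ S_true (ideally identity).
coupling = C_cal @ S_true
off_diag = coupling - np.diag(np.diag(coupling))
print("max off-diagonal coupling:", round(float(np.max(np.abs(off_diag))), 4))
```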
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fong, Erika J.; Huang, Chao; Hamilton, Julie
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
Integration of Product, Package, Process, and Environment: A Food System Optimization
NASA Technical Reports Server (NTRS)
Cooper, Maya R.; Douglas, Grace L.
2015-01-01
The food systems slated for future NASA missions must meet crew nutritional needs, be acceptable for consumption, and use resources efficiently. Although the current food system of prepackaged, moderately stabilized food items works well for International Space Station (ISS) missions, many of the current space menu items do not maintain acceptability and/or nutritive value beyond 2 years. Longer space missions require that the food system can sustain the crew for 3 to 5 years without replenishment. The task "Integration of Product, Package, Process, and Environment: A Food System Optimization" has the objective of optimizing food-product shelf life for the space-food system through product recipe adjustments, new packaging and processing technologies, and modified storage conditions. Two emergent food processing technologies were examined to identify a pathway to stable, wet-pack foods without the detrimental color and texture effects. Both microwave-assisted thermal sterilization (MATS) and pressure-assisted thermal stabilization (PATS) were evaluated against traditional retort processing to determine if lower heat inputs during processing would produce a product with higher micronutrient quality and longer shelf life. While MATS products did have brighter color and better texture initially, the advantages were not sustained. The non-metallized packaging film used in the process likely provided inadequate oxygen barrier. No difference in vitamin stability was evident between MATS and retort processed foods. Similarly, fruit products produced using PATS showed improved color and texture through 3 years of storage compared to retort fruit, but the vitamin stability was not improved. The final processing study involved freeze drying. Five processing factors were tested in factorial design to assess potential impact of each to the quality of freeze-dried food, including the integrity of the microstructure. The initial freezing rate and primary freeze drying temperature and pressure were linked to final product quality in freeze-dried corn, indicating processing modifications that could lead to improved product shelf life. Storage temperatures and packaging systems were also assessed for the impact to food quality. Reduced temperature storage had inconclusive impact to the progression of rancidity in butter cookies. Frozen storage was detrimental to fruit and vegetable textural attributes but refrigerated storage helped to sustain color and organoleptic ratings for plant-based foods. With regard to packaging systems, the metallized film overwrap significantly decreased the progression of the rancidity of butter cookies as compared to the highest barrier non-metallized film. The inclusion of oxygen scavengers resulted in noticeable moisture gains in butter cookies over time, independent of packaging film systems. Neither emergent processing technology nor the freeze dry optimization resulted in compelling quality differences from current space food provisions such that a five-year shelf life is likely with these processing changes alone. Using a combination of refrigeration and PATS processing is expected to result in organoleptically-acceptable fruit quality for most fruits through five years. The vitamin degradation will be aided somewhat by the cold temperatures but, given the labile nature of vitamin C, a more stable fortification method, such as encapsulation, should also be investigated to ensure vitamin delivery throughout the product life. 
Similarly, significant improvement to the packaging film used in MATS processing, optimization of the formulation for dielectric properties, vitamin fortification, and reduced temperature storage should be investigated as a hurdle approach to reach a five-year shelf life in wet-pack entrees and soups. Baked goods and other environmentally sensitive spaceflight foods will require an almost impenetrable barrier to protect the foods from oxygen and moisture ingress, but scavengers and reduced storage temperature did not improve baked-good shelf life and are not recommended at this time for these foods.
Contrast research of CDMA and GSM network optimization
NASA Astrophysics Data System (ADS)
Wu, Yanwen; Liu, Zehong; Zhou, Guangyue
2004-03-01
With the development of mobile telecommunication networks, CDMA users have raised their expectations of network service quality, while operators have shifted their network management objective from signal coverage to performance improvement. Consequently, rational layout and optimization of the mobile telecommunication network, rational configuration of network resources, improvement of service quality, and strengthening of the enterprise's core competitiveness have all become concerns of the operating companies. This paper first examines the workflow of CDMA network optimization. It then discusses some key issues in CDMA network optimization, such as PN code assignment and soft-handover calculation. Because GSM is a cellular mobile telecommunication system similar to CDMA, the paper also presents a detailed comparison of CDMA and GSM network optimization, covering both similarities and differences. In conclusion, network optimization is a long-term task that runs through the whole process of network construction. By adjusting network hardware (such as BTS equipment and RF systems) and network software (such as parameter, configuration, and capacity optimization), network optimization can improve the performance and service quality of the network.
Optimized piranha etching process for SU8-based MEMS and MOEMS construction
Holmes, Matthew; Keeley, Jared; Hurd, Katherine; Schmidt, Holger; Hawkins, Aaron
2011-01-01
We demonstrate the optimization of the concentration, temperature and cycling of a piranha (H2O2:H2SO4) mixture that produces high yields while quickly etching hollow structures made using a highly crosslinked SU8 polymer sacrificial core. The effects of the piranha mixture on the thickness, refractive index and roughness of common micro-electromechanical systems and micro-opto-electromechanical systems fabrication materials (SiN, SiO2 and Si) were determined. The effectiveness of the optimal piranha mixture was demonstrated in the construction of hollow anti-resonant reflecting optical waveguides. PMID:21423840
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batiy, V.G.; Stojanov, A.I.; Schmieman, E.
2007-07-01
A methodological approach to optimizing the schemes of solid radwaste management for the Object Shelter (Shelter) and the ChNPP industrial site during their transformation into an ecologically safe system was developed. On the basis of the modeling studies conducted, an ALARA analysis was carried out to choose the optimum variant of solid radwaste management schemes and technologies. For this ALARA analysis, criteria for choosing the optimum schemes were developed, aimed at optimizing doses and financial expenses and minimizing the amount of radwaste formed, among other factors. (authors)
Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert
2002-01-01
The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) methods intended for optimization of engineering systems conducted by distributed specialty groups working concurrently and using a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems, where the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous as to their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by response surfaces to be accessed by a system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.
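The bi-level pattern described above can be illustrated with a small sketch. This is not the BLISS algorithm itself, only the general flow of subsystems autonomously eliminating their local variables, response surfaces summarizing the results, and a system-level search over the few shared variables; the two toy subsystem models, bounds and sample counts are assumptions made for the example.

```python
# Minimal bi-level decomposition sketch (illustrative, not the BLISS method itself).
# Two "subsystems" each eliminate a local variable for sampled values of a shared
# system-level variable z, a quadratic response surface is fit to each result,
# and the system level then optimizes z against the surrogates only.
import numpy as np
from scipy.optimize import minimize_scalar, minimize

def subsystem_a(z):
    # local optimization over x_a for fixed z (toy model)
    res = minimize_scalar(lambda x: (x - z) ** 2 + 0.1 * x ** 2, bounds=(-5, 5), method="bounded")
    return res.fun          # best achievable subsystem-A objective for this z

def subsystem_b(z):
    res = minimize_scalar(lambda x: (x + 2 * z) ** 2 + 0.5 * (x - 1) ** 2, bounds=(-5, 5), method="bounded")
    return res.fun

# 1) sample the shared variable and run the autonomous subsystem optimizations
z_samples = np.linspace(-2.0, 2.0, 9)
fa = np.array([subsystem_a(z) for z in z_samples])
fb = np.array([subsystem_b(z) for z in z_samples])

# 2) fit quadratic response surfaces to the subsystem results
pa = np.polyfit(z_samples, fa, deg=2)
pb = np.polyfit(z_samples, fb, deg=2)

# 3) system-level optimization uses only the surrogates
system_obj = lambda z: np.polyval(pa, z) + np.polyval(pb, z)
z_opt = minimize(lambda v: system_obj(v[0]), x0=[0.0]).x[0]
print("system-level optimum of shared variable:", round(z_opt, 4))
```

In a real application the subsystem calls would be full discipline-level optimizations and the response surfaces would also carry constraint information back to the system level.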
Optimal control of complex atomic quantum systems
van Frank, S.; Bonneau, M.; Schmiedmayer, J.; Hild, S.; Gross, C.; Cheneau, M.; Bloch, I.; Pichler, T.; Negretti, A.; Calarco, T.; Montangero, S.
2016-01-01
Quantum technologies will ultimately require manipulating many-body quantum systems with high precision. Cold atom experiments represent a stepping stone in that direction: a high degree of control has been achieved on systems of increasing complexity. However, this control is still sub-optimal. In many scenarios, achieving a fast transformation is crucial to fight against decoherence and imperfection effects. Optimal control theory is believed to be the ideal candidate to bridge the gap between early stage proof-of-principle demonstrations and experimental protocols suitable for practical applications. Indeed, it can engineer protocols at the quantum speed limit – the fastest achievable timescale of the transformation. Here, we demonstrate such potential by computing theoretically and verifying experimentally the optimal transformations in two very different interacting systems: the coherent manipulation of motional states of an atomic Bose-Einstein condensate and the crossing of a quantum phase transition in small systems of cold atoms in optical lattices. We also show that such processes are robust with respect to perturbations, including temperature and atom number fluctuations. PMID:27725688
NASA Astrophysics Data System (ADS)
Reza, S. M. Mohsin
Design options have been evaluated for the Modular Helium Reactor (MHR) for higher temperature operation. An alternative configuration for the MHR coolant inlet flow path is developed to reduce the peak vessel temperature (PVT). The coolant inlet path is shifted from the annular path between the reactor core barrel and the vessel wall to a path through the permanent side reflector (PSR). The number and dimensions of coolant holes are varied to optimize the pressure drop, the inlet velocity, and the percentage of graphite removed from the PSR to create this inlet path. With the removal of ~10% of the graphite from the PSR, the PVT is reduced from 541°C to 421°C. A new design for the graphite block core has been evaluated and optimized to reduce the inlet coolant temperature with the aim of further reducing the PVT. The dimensions and number of fuel rods and coolant holes, and the triangular pitch, have been changed and optimized. Different packing fractions for the new core design have been used to conserve the number of fuel particles. Thermal properties for the fuel elements are calculated and incorporated into these analyses. The inlet temperature, mass flow and bypass flow are optimized to limit the peak fuel temperature (PFT) within an acceptable range. Using both of these modifications together, the PVT is reduced to ~350°C while keeping the outlet temperature at 950°C and maintaining the PFT within acceptable limits. The vessel and fuel temperatures during low pressure conduction cooldown and high pressure conduction cooldown transients are found to be well below the design limits. Reliability and availability studies for coupled nuclear hydrogen production processes based on the sulfur-iodine thermochemical process and the high temperature electrolysis process have been accomplished. Fault tree models for both of these processes are developed. Using information obtained on system configuration, component failure probability, component repair time and system operating modes and conditions, the system reliability and availability are assessed. Redundancies are added where required to improve system reliability and to optimize the plant design for economic performance. The failure rates and outage factors of both processes are found to be well below the maximum acceptable range.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named Swarm-based Mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
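MVMOS itself is not reproduced here; the sketch below solves the same ED formulation - quadratic unit costs, a power-balance equality handled by a penalty term, and generator limits - with a plain particle swarm. The three-unit cost coefficients, limits and demand are illustrative values, not data from the paper.

```python
import numpy as np

# Toy 3-unit economic dispatch: minimize sum(a*P^2 + b*P + c) s.t. sum(P) = demand,
# Pmin <= P <= Pmax. Coefficients and demand are illustrative, not from the paper.
a = np.array([0.008, 0.009, 0.007]); b = np.array([7.0, 6.3, 6.8]); c = np.array([200., 180., 140.])
pmin = np.array([100., 100., 50.]); pmax = np.array([500., 400., 300.])
demand = 850.0

def cost(P):
    fuel = np.sum(a * P**2 + b * P + c)
    balance_penalty = 1e4 * abs(np.sum(P) - demand)   # equality constraint via penalty
    return fuel + balance_penalty

rng = np.random.default_rng(0)
n_particles, n_iter = 40, 300
X = rng.uniform(pmin, pmax, size=(n_particles, 3))    # particle positions (dispatches)
V = np.zeros_like(X)                                  # particle velocities
pbest = X.copy(); pbest_val = np.array([cost(x) for x in X])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = np.clip(X + V, pmin, pmax)                    # enforce generator limits
    vals = np.array([cost(x) for x in X])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = X[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("dispatch [MW]:", np.round(gbest, 1), " total cost:", round(cost(gbest), 1))
```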
Structural Optimization of a Knuckle with Consideration of Stiffness and Durability Requirements
Kim, Geun-Yeon
2014-01-01
The automobile's knuckle is connected to parts of the steering system and the suspension system and is used for adjusting the direction of rotation through its attachment to the wheel. This study changes the existing material from GCD450 to Al6082M and recommends the lightweight design of the knuckle as the optimal design technique for small cars. Six shape design variables were selected for the optimization of the knuckle, and criteria relevant to stiffness and durability were considered as the design requirements during the optimization process. The metamodel-based optimization method using kriging interpolation was applied as the optimization technique. The result shows that all constraints for stiffness and durability are satisfied using Al6082M, while reducing the weight of the knuckle by 60% compared to that of the existing GCD450. PMID:24995359
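The kriging metamodel idea can be sketched in a few lines, assuming a single shape variable and a cheap stand-in function in place of the finite-element stiffness/durability analyses; the kernel length scale and sample plan are likewise assumptions, not the paper's settings.

```python
import numpy as np

# Minimal kriging-style surrogate: fit a Gaussian-process (RBF kernel) model to a
# few sampled designs of one shape variable, then minimize the surrogate on a grid.
# The "analysis" below is a stand-in for an expensive FE stiffness/durability run.
def expensive_analysis(x):          # placeholder for an FE run (illustrative only)
    return (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)

X = np.linspace(0.0, 1.0, 6)        # sampled design points
y = expensive_analysis(X)

def rbf(A, B, ell=0.2):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell ** 2)

K = rbf(X, X) + 1e-8 * np.eye(len(X))       # kernel matrix with a small nugget
alpha = np.linalg.solve(K, y)

grid = np.linspace(0.0, 1.0, 401)
pred = rbf(grid, X) @ alpha                 # kriging mean prediction on the grid
x_best = grid[np.argmin(pred)]
print("surrogate minimizer:", round(x_best, 3),
      "true value there:", round(expensive_analysis(x_best), 4))
```

In a multi-variable setting the same fit-predict-optimize loop is usually wrapped in an infill strategy that adds new FE samples near the surrogate optimum.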
Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.
Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric
2018-03-01
Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered as a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that with the proposed algorithm, a more suitable modeling framework as well as computational time savings and better optimization performance are obtained than those reported in the literature on this subject.
Storage Optimization of Educational System Data
ERIC Educational Resources Information Center
Boja, Catalin
2006-01-01
Methods used to minimize data file size are described, and indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective: maximization or minimization of the optimum criterion that is…
Sequential use of simulation and optimization in analysis and planning
Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones
2000-01-01
Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...
ERIC Educational Resources Information Center
Schure, Alexander
A computer-based system model for the monitoring and management of the instructional process was conceived, developed and refined through the techniques of systems analysis. This report describes the various aspects and components of this project in a series of independent and self-contained units. The first unit provides an overview of the entire…
NASA Technical Reports Server (NTRS)
Baron, S.; Levison, W. H.
1977-01-01
Application of the optimal control model of the human operator to problems in display analysis is discussed. Those aspects of the model pertaining to the operator-display interface and to operator information processing are reviewed and discussed. The techniques are then applied to the analysis of advanced display/control systems for a Terminal Configured Vehicle. Model results are compared with those obtained in a large, fixed-base simulation.
Zsigraiova, Zdena; Semiao, Viriato; Beijoco, Filipa
2013-04-01
This work proposes an innovative methodology for reducing the operation costs and pollutant emissions involved in waste collection and transportation. Its innovative feature lies in combining vehicle route optimization with waste collection scheduling. The latter uses historical data on the filling rate of each container individually to establish the daily circuits of collection points to be visited, which is more realistic than the usual assumption of a single average fill-up rate common to all the system containers. Moreover, this allows for ahead planning of the collection scheduling, which permits better system management. The optimization of the routes to be travelled makes recourse to Geographical Information Systems (GISs) and uses interchangeably two optimization criteria: total spent time and travelled distance. Furthermore, rather than using average values, the relevant parameters influencing fuel consumption and pollutant emissions, such as vehicle speed on different roads and loading weight, are taken into consideration. The established methodology is applied to the glass-waste collection and transportation system of Amarsul S.A., in Barreiro. Moreover, to isolate the influence of the dynamic load on fuel consumption and pollutant emissions, a sensitivity analysis of the vehicle loading process is performed. For that, two hypothetical scenarios are tested: one with the collected volume increasing exponentially along the collection path, the other assuming that the collected volume decreases exponentially along the same path. The results show unquestionable beneficial impacts of the optimization on both the operation costs (labor, vehicle maintenance and fuel consumption) and pollutant emissions, regardless of the optimization criterion used. Nonetheless, the impact is particularly relevant when optimizing for time, yielding substantial improvements to the existing system: potential reductions of 62% for the total spent time, 43% for the fuel consumption and 40% for the emitted pollutants. This results in total cost savings of 57%, labor being the greatest contributor, representing over €11,000 per year for the two vehicles collecting glass-waste. Moreover, it is shown herein that the dynamic loading process of the collection vehicle impacts both the fuel consumption and the pollutant emissions. Copyright © 2012 Elsevier Ltd. All rights reserved.
Real-time optimizations for integrated smart network camera
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois
2005-02-01
We present an integrated real-time smart network camera. The system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, raises alarms and computes statistics. The system also produces meta-data that can be shared with other cameras in a network. We describe the requirements of such a system and then show how its design is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking and event detection must be highly optimized and simplified to run on this hardware. To achieve a good match between hardware and software in this lightweight embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi alliance. Software and hardware can be easily integrated in complex environments thanks to the Java Real-Time specification for the virtual machine and several network- and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies for such a camera, such as counter-flow detection.
System for monitoring an industrial or biological process
Gross, Kenneth C.; Wegerich, Stephan W.; Vilim, Rick B.; White, Andrew M.
1998-01-01
A method and apparatus for monitoring and responding to conditions of an industrial process. Industrial process signals, such as repetitive manufacturing, testing and operational machine signals, are generated by a system. Sensor signals characteristic of the process are generated over a time length and compared to reference signals over the time length. The industrial signals are adjusted over the time length relative to the reference signals, the phase shift of the industrial signals is optimized to the reference signals and the resulting signals output for analysis by systems such as SPRT.
Depression screening optimization in an academic rural setting.
Aleem, Sohaib; Torrey, William C; Duncan, Mathew S; Hort, Shoshana J; Mecchella, John N
2015-01-01
Primary care plays a critical role in screening and management of depression. The purpose of this paper is to focus on leveraging the electronic health record (EHR) as well as work flow redesign to improve the efficiency and reliability of the process of depression screening in two adult primary care clinics of a rural academic institution in the USA. The authors utilized various process improvement tools from lean six sigma methodology including project charter, swim lane process maps, critical to quality tree, process control charts, fishbone diagrams, frequency impact matrix, mistake proofing and monitoring plan in Define-Measure-Analyze-Improve-Control format. Interventions included a change in the depression screening tool, optimization of data entry in the EHR, follow-up of positive screens, staff training, and EHR redesign. The depression screening rate for office-based primary care visits improved from 17.0 percent at baseline to 75.9 percent in the post-intervention control phase (p<0.001). Follow-up of positive depression screens with Patient Health Questionnaire-9 data collection remained above 90 percent. Duplication of depression screening increased from 0.6 percent initially to 11.7 percent and then decreased to 4.7 percent after optimization of data entry by patients and flow staff. The impact of the interventions on clinical outcomes could not be evaluated. Successful implementation, sustainability and revision of a process improvement initiative to facilitate screening, follow-up and management of depression in primary care requires accounting for the voice of the process (performance metrics), system limitations and the voice of the customer (staff and patients) to overcome various system, customer and human resource constraints.
Vicente, Tiago; Mota, José P B; Peixoto, Cristina; Alves, Paula M; Carrondo, Manuel J T
2011-01-01
The advent of advanced therapies in the pharmaceutical industry has moved the spotlight into virus-like particles and viral vectors produced in cell culture holding great promise in a myriad of clinical targets, including cancer prophylaxis and treatment. Even though a couple of cases have reached the clinic, these products have yet to overcome a number of biological and technological challenges before broad utilization. Concerning the manufacturing processes, there is significant research focusing on the optimization of current cell culture systems and, more recently, on developing scalable downstream processes to generate material for pre-clinical and clinical trials. We review the current options for downstream processing of these complex biopharmaceuticals and underline current advances on knowledge-based toolboxes proposed for rational optimization of their processing. Rational tools developed to increase the yet scarce knowledge on the purification processes of complex biologicals are discussed as alternative to empirical, "black-boxed" based strategies classically used for process development. Innovative methodologies based on surface plasmon resonance, dynamic light scattering, scale-down high-throughput screening and mathematical modeling for supporting ion-exchange chromatography show great potential for a more efficient and cost-effective process design, optimization and equipment prototyping. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Karstedt, Jörg; Ogrzewalla, Jürgen; Severin, Christopher; Pischinger, Stefan
In this work, the concept development, system layout, component simulation and the overall DOE system optimization of an HT-PEM fuel cell APU with a net electric power output of 4.5 kW and an onboard methane fuel processor are presented. A highly integrated system layout has been developed that enables fast startup within 7.5 min, a closed system water balance and high fuel processor efficiencies of up to 85% due to the recuperation of the anode offgas burner heat. The integration of the system battery into the load management enhances the transient electric performance and the maximum electric power output of the APU system. Simulation models of the carbon monoxide influence on HT-PEM cell voltage, the concentration and temperature profiles within the autothermal reformer (ATR) and the CO conversion rates within the water-gas shift stages (WGSs) have been developed. They enable the optimization of the CO concentration in the anode gas of the fuel cell in order to achieve maximum system efficiencies and an optimized dimensioning of the ATR and WGS reactors. Furthermore, a DOE optimization of the global system parameters cathode stoichiometry, anode stoichiometry, air/fuel ratio and steam/carbon ratio of the fuel processing system has been performed in order to achieve maximum system efficiencies for all system operating points under given boundary conditions.
A decision modeling for phasor measurement unit location selection in smart grid systems
NASA Astrophysics Data System (ADS)
Lee, Seung Yup
As a key technology for enhancing the smart grid, the Phasor Measurement Unit (PMU) provides synchronized phasor measurements of voltages and currents across a wide-area electric power grid. While its application offers various benefits, one of the critical issues in utilizing PMUs is the optimal selection of installation sites. The main aim of this research is to develop a decision support system that can be used in resource allocation tasks for smart grid system analysis. In an effort to suggest a robust decision model and standardize the decision modeling process, a harmonized modeling framework, which considers the operational circumstances of components, is proposed in connection with a deterministic approach utilizing integer programming. With the results obtained from the optimal PMU placement problem, the advantages and potential of the harmonized modeling process are assessed and discussed.
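A deterministic placement model of the kind mentioned above can be sketched with the common observability rule that a PMU at a bus observes that bus and its neighbors; the small test network and the brute-force search below are illustrative assumptions, not the paper's harmonized framework or its integer-programming formulation.

```python
from itertools import combinations

# Minimal deterministic PMU-placement sketch: find the smallest set of buses such
# that every bus is observed, using the usual rule that a PMU at bus i observes
# bus i and all of its neighbors. The 7-bus adjacency below is illustrative only.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6)]
n_bus = 7
neighbors = {i: {i} for i in range(n_bus)}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def observed(pmu_buses):
    seen = set()
    for b in pmu_buses:
        seen |= neighbors[b]
    return len(seen) == n_bus

best = None
for k in range(1, n_bus + 1):               # grow the PMU count until feasible
    for combo in combinations(range(n_bus), k):
        if observed(combo):
            best = combo
            break
    if best is not None:
        break
print("minimum PMU set (bus indices):", best)
```

For realistic grid sizes the same covering constraints would be handed to an integer-programming solver rather than enumerated.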
Huang, Mingzhi; Wan, Jinquan; Hu, Kang; Ma, Yongwen; Wang, Yan
2013-12-01
An on-line hybrid fuzzy-neural soft-sensing model-based control system was developed to optimize dissolved oxygen concentration in a bench-scale anaerobic/anoxic/oxic (A2/O) process. In order to improve the performance of the control system, a self-adapted fuzzy c-means clustering algorithm and adaptive network-based fuzzy inference system (ANFIS) models were employed. The proposed control system permits the on-line implementation of every operating strategy of the experimental system. A set of experiments involving variable hydraulic retention time (HRT), influent pH (pH), dissolved oxygen in the aerobic reactor (DO), and mixed-liquid return ratio (r) was carried out. Using the proposed system, the amount of COD in the effluent stabilized at the set-point and below. The improvement was achieved with optimum dissolved oxygen concentration because the performance of the treatment process was optimized using operating rules implemented in real time.
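Neither the self-adapted fuzzy c-means variant nor the ANFIS models are reproduced here; the following is a minimal standard fuzzy c-means sketch on synthetic two-dimensional process data, with the fuzzifier, cluster count and data entirely made up for illustration.

```python
import numpy as np

# Minimal standard fuzzy c-means sketch (the paper's self-adapted variant and the
# ANFIS models are not reproduced here). Data are synthetic 2-D "process" points.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([2.0, 5.0], 0.3, size=(50, 2)),
                  rng.normal([6.0, 1.0], 0.4, size=(50, 2))])

c, m, n_iter = 2, 2.0, 100                     # clusters, fuzzifier, iterations
U = rng.random((len(data), c))
U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1 per point

for _ in range(n_iter):
    Um = U ** m
    centers = (Um.T @ data) / Um.sum(axis=0)[:, None]          # weighted centers
    dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # standard membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
    U = 1.0 / (dist ** (2 / (m - 1)) * np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))

print("cluster centers:\n", np.round(centers, 2))
```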
A General Multidisciplinary Turbomachinery Design Optimization system Applied to a Transonic Fan
NASA Astrophysics Data System (ADS)
Nemnem, Ahmed Mohamed Farid
The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators or aeroengines. A new airfoil section design capability has been added to an open-source parametric 3D blade design tool. Curvature of the meanline is controlled using B-splines to create the airfoils. The curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles. A smooth thickness distribution is then added to the airfoil to guarantee a smooth shape while maintaining a prescribed thickness distribution. A leading edge B-spline definition has also been implemented to achieve customized airfoil leading edges, which guarantees smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as a test and reference case; this design is more general than a conventional design, giving access to the other design methodology. The whole mechanical and aerodynamic design loops are automated for the optimization process. The flow path and the geometrical properties of the rotor are initially created using the axi-symmetric design and analysis code (T-AXI). The main and splitter blades are parametrically designed with the created geometry builder (3DBGB) using the newly added features (curvature technique). The solid model creation of the rotor sector with periodic boundaries, combining the main blade and splitter, is done using MATLAB code directly connected to SolidWorks, including the hub, fillets and tip clearance. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping maximum stress as a constraint with a safety factor. A genetic algorithm followed by a numerical gradient optimization strategy is used in the mechanical optimization. The splittered transonic fan blades' mass is reduced by 2.6% while constraining the maximum stress below 50% of the material yield strength, using 2D section thickness and chord multipliers. Once the initial design was mechanically optimized, a CFD optimization was performed to maximize efficiency and/or stall margin. The CFD grid generator (AUTOGRID) reads the 3DBGB output and accounts for hub fillets and tip gaps. Single- and multi-objective genetic algorithm (SOGA, MOGA) optimizations have been used with the CFD analysis system. In the SOGA optimization, efficiency was increased by 3.525% from 78.364% to 81.889% while only changing 4 design parameters. In the MOGA optimization, with efficiency weighted higher than stall margin, the efficiency was increased by 2.651% from 78.364% to 81.015% while the static pressure recovery factor was increased from 0.37407 to 0.4812286, which consequently increases the stall margin. The design process starts with a hot shape design; a hot-to-cold transformation process, applied once the optimization process ends, is also explained, which smoothly subtracts the mechanical deflections from the hot shape. This transformation ensures an accurate tip clearance. The optimization modules can be customized by the user as one full optimization or multiple smaller ones. This keeps the designer in the design loop, which helps in making the right choice of parameters for the optimization and the final feasible design.
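The curvature-to-angle-to-meanline construction mentioned above can be sketched numerically (the design tool integrates the B-spline curvature analytically); the prescribed curvature distribution, inlet angle and non-dimensionalization below are assumptions for illustration only, not values from the fan case.

```python
import numpy as np

# Sketch of the curvature -> blade angle -> meanline construction described above,
# done numerically (the design tool integrates B-spline curvature analytically).
# The curvature distribution, inlet angle and normalization are illustrative values.
n = 201
u = np.linspace(0.0, 1.0, n)                  # non-dimensional chordwise coordinate
curvature = -2.0 * (1.0 - u)                  # assumed curvature distribution (1/chord)
beta_in = np.deg2rad(55.0)                    # assumed inlet metal angle

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral, starting at zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

beta = beta_in + cumtrapz0(curvature, u)      # integrate curvature to get angles
x = cumtrapz0(np.cos(beta), u)                # integrate angles to get the meanline
y = cumtrapz0(np.sin(beta), u)

print("exit metal angle [deg]:", round(np.degrees(beta[-1]), 2))
print("meanline end point (x, y):", round(x[-1], 3), round(y[-1], 3))
```

A thickness distribution added normal to this meanline would then complete the airfoil section.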
NASA Astrophysics Data System (ADS)
Maidana, Carlos Omar
As part of an accelerator-based Cargo Inspection System, studies were made to develop a Cabinet Safe System by optimization of the beam optics of microwave linear accelerators of the IAC-Varian series working in the S-band and standing-wave pi/2 mode. Measurements, modeling and simulations of the main subsystems were done, and a Multiple Solenoidal System was designed. This Cabinet Safe System based on a Multiple Solenoidal System minimizes the radiation field generated by the low efficiency of the microwave accelerators by optimizing the RF waveguide system and by trapping secondaries generated in the accelerator head. These secondaries are generated mainly due to instabilities in the exit window region and particles backscattered from the target. The electron gun was also studied, and software for its proper mechanical design and optimization was developed as well. Besides the standard design method, an optimization of the injection process is accomplished by slightly modifying the gun configuration and by placing a solenoid at the waist position while avoiding threading the cathode with the generated magnetic flux. The Multiple Solenoidal System and the electron gun optimization are the backbone of a Cabinet Safe System that could be applied not only to the 25 MeV IAC-Varian microwave accelerators but, by extension, to machines of different manufacturers as well. Thus, they constitute the main topic of this dissertation.
Extension of optical lithography by mask-litho integration with computational lithography
NASA Astrophysics Data System (ADS)
Takigawa, T.; Gronlund, K.; Wiley, J.
2010-05-01
Wafer lithography process windows can be enlarged by using source mask co-optimization (SMO). Recently, SMO including freeform wafer scanner illumination sources has been developed. Freeform sources are generated by a programmable illumination system using a micro-mirror array or by custom Diffractive Optical Elements (DOE). The combination of freeform sources and complex masks generated by SMO show increased wafer lithography process window and reduced MEEF. Full-chip mask optimization using source optimized by SMO can generate complex masks with small variable feature size sub-resolution assist features (SRAF). These complex masks create challenges for accurate mask pattern writing and low false-defect inspection. The accuracy of the small variable-sized mask SRAF patterns is degraded by short range mask process proximity effects. To address the accuracy needed for these complex masks, we developed a highly accurate mask process correction (MPC) capability. It is also difficult to achieve low false-defect inspections of complex masks with conventional mask defect inspection systems. A printability check system, Mask Lithography Manufacturability Check (M-LMC), is developed and integrated with 199-nm high NA inspection system, NPI. M-LMC successfully identifies printable defects from all of the masses of raw defect images collected during the inspection of a complex mask. Long range mask CD uniformity errors are compensated by scanner dose control. A mask CD uniformity error map obtained by mask metrology system is used as input data to the scanner. Using this method, wafer CD uniformity is improved. As reviewed above, mask-litho integration technology with computational lithography is becoming increasingly important.
Optimizing RF gun cavity geometry within an automated injector design system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofler, Alicia; Evtushenko, Pavel
2011-03-28
RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability because EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.
NASA Astrophysics Data System (ADS)
Zeng, Baoping; Liu, Jipeng; Zhang, Yu; Gong, Yajun; Hu, Sanbao
2017-12-01
Deepwater robots are important devices for human exploration of the sea and, along with the development of science and technology, are evolving towards greater intelligence, multitasking, long endurance and large operating depth. For a deepwater robot, the mechanical system is an important subsystem: excessive vibration and noise not only degrade instrument measuring precision and shorten the service life of cabin devices, but also adversely affect marine life within the operational area. Therefore, vibration characteristics are a key factor in deepwater robot system design. The sample collection and recycling system of a certain deepwater robot, a mechanism for opening the underwater cabin door for external operation and recovering test equipment, is the focus of this study. To improve the vibration characteristics at the cabin door locations during opening, a vibration model of the opening system was established, and structural optimization of its important structures was carried out using multi-objective shape and topology optimization methods based on analysis of the system vibration. Characteristics of the exciting forces causing vibration were analyzed first, including the dynamic loads within the hinge clearances and due to friction effects, and the fluid dynamic exciting forces during opening of the cabin door. Moreover, vibration acceleration responses at several important locations of the cabin cover opening device were derived using the modal synthesis method, showing that rigidity and modal frequency are primary factors influencing the system vibration performance based on analysis of the weighted acceleration responses. Thus, the cabin cover was optimized using the multi-objective topology optimization method to reduce the weighted accelerations at key structural locations.
Encoder-Decoder Optimization for Brain-Computer Interfaces
Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam
2015-01-01
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919
NASA Astrophysics Data System (ADS)
Bascetin, A.
2007-04-01
The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region in Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. Also, it is found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
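A minimal AHP sketch follows: a pairwise comparison matrix over three hypothetical reclamation criteria, priority weights from the principal eigenvector, and a consistency check. The judgment values are made up for illustration and do not come from the Seyitomer case study.

```python
import numpy as np

# Minimal AHP sketch: a pairwise comparison matrix over three hypothetical criteria
# (cost, land use, environmental impact), priority weights from the principal
# eigenvector, and a consistency-ratio check. The judgments below are made up.
A = np.array([[1.0,  3.0, 0.5],
              [1/3., 1.0, 0.25],
              [2.0,  4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority weights

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)                   # consistency index
ri = 0.58                                      # Saaty random index for n = 3
cr = ci / ri                                   # consistency ratio (< 0.10 acceptable)

print("criteria weights:", np.round(w, 3))
print("consistency ratio:", round(cr, 3))
```

A full application would repeat this for the alternative reclamation methods under each criterion and aggregate the local priorities into a global ranking.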
NASA Astrophysics Data System (ADS)
Wong, C.-W.; Yew, T.-K.; Chong, K.-K.; Tan, W.-C.; Tan, M.-H.; Lim, B.-H.
2017-11-01
This paper presents a systematic approach for optimizing the design of an ultra-high concentrator photovoltaic (UHCPV) system comprising a non-imaging dish concentrator (primary optical element) and a crossed compound parabolic concentrator (secondary optical element). The optimization process includes the design of the primary and secondary optics by considering the focal distance, spillage losses and rim angle of the dish concentrator. The imperfection factors, i.e. mirror reflectivity of 93%, lens optical efficiency of 85%, circumsolar ratio of 0.2 and mirror surface slope error of 2 mrad, were considered in the simulation to avoid overestimation of the output power. The proposed UHCPV system is capable of attaining an effective ultra-high solar concentration ratio of 1475 suns and a DC system efficiency of 31.8%.
Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding
NASA Astrophysics Data System (ADS)
Susemihl, Alex; Meir, Ron; Opper, Manfred
2013-03-01
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
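The exact filtering relations derived in the paper are not reproduced here; the sketch below only illustrates the setting with a discrete-time, grid-based Bayesian filter for a one-dimensional stimulus observed through Poisson spike counts. The AR(1) dynamics, Gaussian tuning curves and all numerical values are assumptions.

```python
import numpy as np

# Grid-based Bayesian filtering of a 1-D stimulus from Poisson spike counts.
# AR(1) dynamics, Gaussian tuning curves and all numbers below are illustrative.
rng = np.random.default_rng(2)
dt, T = 0.01, 500                       # time step [s] and number of steps
a, q = 0.99, 0.05                       # AR(1) coefficient and process-noise std
centers = np.linspace(-2.0, 2.0, 20)    # preferred stimuli of 20 neurons
r_max, width = 20.0, 0.5                # peak firing rate [Hz] and tuning width

grid = np.linspace(-3.0, 3.0, 301)
rate_grid = r_max * np.exp(-0.5 * ((grid[:, None] - centers[None, :]) / width) ** 2)
trans = np.exp(-0.5 * ((grid[:, None] - a * grid[None, :]) / q) ** 2)
trans /= trans.sum(axis=0, keepdims=True)           # p(x_t | x_{t-1}), column-stochastic

belief = np.ones_like(grid) / len(grid)              # filtering distribution on the grid
x, sq_err = 0.0, []
for _ in range(T):
    x = a * x + q * rng.standard_normal()                        # true latent stimulus
    rate_x = r_max * np.exp(-0.5 * ((x - centers) / width) ** 2)
    counts = rng.poisson(rate_x * dt)                            # observed spike counts
    belief = trans @ belief                                      # predict step
    loglik = counts @ np.log(rate_grid.T * dt + 1e-12) - rate_grid.sum(axis=1) * dt
    belief *= np.exp(loglik - loglik.max())                      # Poisson update step
    belief /= belief.sum()
    sq_err.append((belief @ grid - x) ** 2)                      # posterior-mean error

print("filter RMSE:", round(float(np.sqrt(np.mean(sq_err))), 3))
```

Optimizing the encoder in this picture would mean adjusting the tuning-curve parameters to minimize the resulting mean squared error.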
Gdowski, Andrew; Johnson, Kaitlyn; Shah, Sunil; Gryczynski, Ignacy; Vishwanatha, Jamboor; Ranjan, Amalendu
2018-02-12
The process of optimization and fabrication of nanoparticle synthesis for preclinical studies can be challenging and time consuming. Traditional small-scale laboratory synthesis techniques suffer from batch-to-batch variability. Additionally, the parameters used in the original formulation must be re-optimized due to differences in fabrication techniques for clinical production. Several low-flow microfluidic synthesis processes have been reported in recent years for developing nanoparticles that are a hybrid between polymeric nanoparticles and liposomes. However, the use of high-flow microfluidic synthetic techniques has not been described for this type of nanoparticle system, which we term nanolipomers. In this manuscript, we describe the successful optimization and functional assessment of nanolipomers fabricated using a microfluidic synthesis method under high-flow parameters. The optimal total flow rate for synthesis of these nanolipomers was found to be 12 ml/min with a flow rate ratio of 1:1 (organic phase:aqueous phase). A PLGA polymer concentration of 10 mg/ml and a DSPE-PEG lipid concentration of 10% w/v provided optimal size, PDI and stability. Drug loading and encapsulation of a representative hydrophobic small-molecule drug, curcumin, were optimized; a high encapsulation efficiency of 58.8% and a drug loading of 4.4% were achieved at a 7.5% w/w initial concentration of curcumin to PLGA polymer. The final size and polydispersity index of the optimized nanolipomer were 102.11 nm and 0.126, respectively. Functional assessment of uptake of the nanolipomers in C4-2B prostate cancer cells showed uptake at 1 h and increased uptake at 24 h. The nanolipomer was more effective in the cell viability assay compared to free drug. Finally, assessment of in vivo retention of these nanolipomers in mice revealed retention for up to 2 h, with complete clearance at 24 h. In this study, we have demonstrated that a nanolipomer formulation can be successfully synthesized and easily scaled up through a high-flow microfluidic system with optimal characteristics. The process of developing nanolipomers using this methodology is significant because the same optimized parameters used for small batches could be translated into manufacturing large-scale batches for clinical trials through parallel flow systems.
Wang, Jun-Sheng; Yang, Guang-Hong
2017-07-25
This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
Multivariable optimization of an auto-thermal ammonia synthesis reactor using genetic algorithm
NASA Astrophysics Data System (ADS)
Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Tien-Dung, Vu; Kim-Trung, Nguyen
2017-09-01
The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics and refrigerants. In the literature, many works approaching the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also considered in the optimization problem, such as the temperature at which the feed gas enters the catalyst zone. The optimization problem requires the maximization of a multivariable objective function subject to a number of equality constraints involving the solution of coupled differential equations, as well as inequality constraints. The solution of an optimization problem can be found through, among others, deterministic or stochastic approaches. Stochastic methods such as evolutionary algorithms (EA), which are based on natural phenomena, can overcome drawbacks such as the requirement for derivatives of the objective function and/or constraints, or inefficiency on non-differentiable or discontinuous problems. The genetic algorithm (GA), a class of EA, is exceptionally simple, robust at numerical optimization and more likely to find a true global optimum. In this study, the genetic algorithm is employed to find the optimum profit of the process. The inequality constraints were treated using the penalty method. The coupled differential equation system was solved using the 4th-order Runge-Kutta method. The results showed that the presented numerical method can be applied to model the ammonia synthesis reactor. The optimum economic profit obtained from this study is also compared to results from the literature, suggesting that the process should be operated with a higher feed-gas temperature in the catalyst zone and a slightly longer reactor.
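The reactor model itself is not reproduced; the sketch below shows only the generic solution pattern described - an objective evaluated by integrating an ODE with 4th-order Runge-Kutta, an inequality constraint handled by a penalty, and a simple genetic algorithm over two decision variables. The toy conversion model, bounds and GA settings are assumptions.

```python
import numpy as np

# Generic GA + RK4 + penalty sketch (not the ammonia reactor model of the paper):
# a toy plug-flow conversion ODE is integrated with 4th-order Runge-Kutta, the
# objective rewards conversion and charges for reactor length, an upper bound on
# length is handled with a penalty, and a simple GA searches over two decision
# variables: a rate/temperature factor k and the reactor length L.
def rk4(f, y0, x_end, n=200):
    h, y, x = x_end / n, y0, 0.0
    for _ in range(n):
        k1 = f(x, y); k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2); k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

def profit(ind):
    k, L = ind
    conv = rk4(lambda x, c: k * (1.0 - c), 0.0, L)       # conversion along the reactor
    penalty = 1e3 * max(0.0, L - 8.0)                    # inequality constraint L <= 8
    return 100.0 * conv - 5.0 * L - 2.0 * k - penalty    # toy economic objective

rng = np.random.default_rng(3)
lo, hi = np.array([0.1, 1.0]), np.array([3.0, 12.0])     # bounds on (k, L)
pop = rng.uniform(lo, hi, size=(30, 2))

for _ in range(80):                                      # simple GA loop
    fit = np.array([profit(p) for p in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]            # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        w = rng.random()
        child = w * a + (1 - w) * b                      # blend crossover
        child += rng.normal(0.0, 0.05, size=2) * (hi - lo)   # mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

best = pop[np.argmax([profit(p) for p in pop])]
print("best (k, L):", np.round(best, 3), " profit:", round(profit(best), 2))
```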
NASA Astrophysics Data System (ADS)
Kumar, Girish; Jain, Vipul; Gandhi, O. P.
2018-03-01
Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition-monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system. The CBM interval is then chosen to maximize system availability using a genetic algorithm approach. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
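The two-stage idea - solve the embedded Markov chain, then weight by mean sojourn times - can be shown in a few lines. The three-state model, transition probabilities and sojourn times below are illustrative, not the paper's pump model.

```python
import numpy as np

# Semi-Markov steady-state availability sketch for a three-state model:
# 0 = operating, 1 = condition-monitoring inspection, 2 = repair.
# Embedded-chain transition probabilities P and mean sojourn times tau [h]
# are illustrative numbers, not taken from the paper.
P = np.array([[0.0, 0.9, 0.1],     # from operating: inspection or failure/repair
              [0.8, 0.0, 0.2],     # after inspection: back to operation or repair
              [1.0, 0.0, 0.0]])    # after repair: back to operation
tau = np.array([500.0, 4.0, 24.0]) # mean time spent per visit to each state

# Stage 1: stationary distribution of the embedded chain, pi P = pi, sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Stage 2: time-weighted state probabilities; state 0 is the only up state
p_time = pi * tau / np.sum(pi * tau)
print("steady-state availability:", round(p_time[0], 4))
```

A CBM-interval optimization would wrap this calculation in a search loop in which the interval changes the transition probabilities and sojourn times.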
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
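A minimal version of the spreadsheet-style decision analysis mentioned above is a weighted-sum score over candidate crops; the crops, criteria, scores and weights below are made-up placeholders rather than CELSS data.

```python
# Minimal weighted-scoring sketch of the spreadsheet-style decision analysis.
# Crops, criteria scores (0-10) and weights are illustrative, not CELSS data.
criteria = ["O2 production", "water regeneration", "edible yield", "volume need", "power need"]
weights  = [0.25, 0.20, 0.30, 0.15, 0.10]        # must sum to 1
crops = {
    "wheat":   [8, 7, 7, 5, 4],
    "potato":  [7, 6, 8, 6, 5],
    "lettuce": [5, 8, 4, 8, 7],
    "soybean": [6, 6, 7, 5, 5],
}
# For "volume need" and "power need" a high score already means low demand,
# so a plain weighted sum can be used directly.
scores = {name: sum(w * s for w, s in zip(weights, vals)) for name, vals in crops.items()}
for name, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:8s} {s:.2f}")
```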
On the use of topology optimization for improving heat transfer in molding process
NASA Astrophysics Data System (ADS)
Agazzi, A.; LeGoff, R.; Truc-Vu, C.
2016-10-01
In the plastics industry, one of the key factors is the control of heat transfer. One way to achieve that goal is to design an effective cooling system. But in some areas of the mold, where it is not possible to design a cooling system, a highly conductive material, such as a copper pin, is often used. Most of the time, the location, size and quantity of the copper pins are chosen by empirical considerations, without using optimization procedures. In this article, it is proposed to use topology optimization in order to improve transient conductive heat transfer in an injection/blowing mold. Two methodologies are applied and compared. Finally, the optimal distribution of copper pins in the mold is given.
Design optimization of gas generator hybrid propulsion boosters
NASA Technical Reports Server (NTRS)
Weldon, Vincent; Phillips, Dwight U.; Fink, Lawrence E.
1990-01-01
A methodology used in support of a contract study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design that meets various specified optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated along with preliminary cost and reliability estimates.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
Manufacturing Bms/Iso System Review
NASA Technical Reports Server (NTRS)
Gomez, Yazmin
2004-01-01
The Quality Management System (QMS) is one that recognizes the need to continuously change and improve an organization's products and services as determined by system feedback and corresponding management decisions. The purpose of a Quality Management System is to minimize quality variability of an organization's products and services. The optimal Quality Management System balances the need for an organization to maintain flexibility in the products and services it provides with the need for providing the appropriate level of discipline and control over the processes used to provide them. The goal of a Quality Management System is to ensure the quality of the products and services while consistently (through minimizing quality variability) meeting or exceeding customer expectations. The GRC Business Management System (BMS) is the foundation of the Center's ISO 9001:2000 registered quality system. ISO 9001 is a quality system model developed by the International Organization for Standardization. The BMS supports and promotes the Glenn Research Center Quality Policy and aims to ensure customer satisfaction while also meeting quality standards. My assignment during this summer is to examine the manufacturing processes used to develop research hardware, which in most cases is one-of-a-kind hardware made with non-conventional equipment and materials. During this process of observation I will determine, based on my observations of the hardware development processes, the best way to meet customer requirements and at the same time achieve the GRC quality standards. The purpose of my task is to review the manufacturing processes, identifying opportunities to optimize the efficiency of the processes, and to establish a plan for implementation and continuous improvement.
Operative planning of functional sessions for multisatellite observation and communication systems
NASA Astrophysics Data System (ADS)
Darnopykh, Valeriy V.; Malyshev, Veniamin V.
2012-04-01
An important control aspect of modern satellite observation and communication systems is the control of the functional processes. Functional sessions proceed under conditions of restricted technical ability, large amounts of information to be processed by the on-board equipment, unequal practical value of the received information, intentions of system management and operators, interests of customers, and other factors. A large number of spacecraft (SC) in the orbital constellation is also one of the most important factors affecting the functional process. Besides that, some modern satellite system projects are multifunctional, that is, they mix observation and communication operations. Therefore the functioning of SC on-board equipment must be accurately coordinated. That is why the problem of operative planning of the functioning of these systems, which directly affects the efficiency of the system, is very complex and topical at present. A methodical approach and a software package for operative planning of functional processes for satellite observation and communication systems, including multifunctional projects, are considered in the paper. The base scheme of this approach consists of four main stages: stage 1: modeling of SC orbital kinematics and dynamics; stage 2: modeling of system functional processes with all kinds of restrictions and criterion function values; stage 3: solving the optimization tasks by applicable numerical algorithms and constructing the optimal (or accurate) plans; stage 4: repeated plan optimization (different variants) and analysis. Such a scheme is the result of the authors' practical research, carried out over the last 15 years, on operative planning both for single SC of various kinds and for satellite systems with different orbital constellation structures. The research helps to unify the procedure of operative planning, to formulate basic principles and approaches for its solution, and to develop a special software package. The main aspects of the proposed approach are illustrated in the paper. The results of calculations of applied planning problems are presented. The objects of research in these problems are: projects of CBERS observation systems (1-3 SC) and the Iridium (66 SC) global communication system project.
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to obtain a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of the constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
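The following sketch illustrates the general idea of Bayesian robust input design with particle swarm optimization on a deliberately simple stand-in problem: the AUV dynamics are replaced by a first-order discrete-time model, the prior is three hypothetical parameter samples, the design criterion is the average D-optimality of the Fisher information over those samples, and the only constraint is an input amplitude bound handled by clipping. None of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                          # input horizon length
PRIOR = [(0.8, 0.5), (0.9, 0.3), (0.7, 0.8)]    # hypothetical (a, b) parameter samples

def d_optimality(u, a, b):
    """log det of the Fisher information of (a, b) for y[k+1] = a*y[k] + b*u[k] + noise."""
    y = dya = dyb = 0.0
    F = np.zeros((2, 2))
    for uk in u:
        g = np.array([dya, dyb])
        F += np.outer(g, g)                     # accumulate information from output sensitivities
        dya = y + a * dya                       # sensitivity recursion dy/da
        dyb = uk + a * dyb                      # sensitivity recursion dy/db
        y = a * y + b * uk
    sign, logdet = np.linalg.slogdet(F + 1e-9 * np.eye(2))
    return logdet

def robust_fitness(u):
    """Bayesian robust criterion: average D-optimality over the parameter prior samples."""
    return np.mean([d_optimality(u, a, b) for a, b in PRIOR])

def pso(n_particles=25, iters=150, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-1, 1, (n_particles, N))    # particle positions = input sequences
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([robust_fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -1.0, 1.0)           # amplitude constraint |u| <= 1
        vals = np.array([robust_fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest

u_opt = pso()
print("robust D-optimality of designed input:", round(robust_fitness(u_opt), 3))
```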
Enhancing Nursing Staffing Forecasting With Safety Stock Over Lead Time Modeling.
McNair, Douglas S
2015-01-01
In balancing competing priorities, it is essential that nursing staffing provide enough nurses to safely and effectively care for the patients. Mathematical models to predict optimal "safety stocks" have been routine in supply chain management for many years but have up to now not been applied in nursing workforce management. There are various aspects that exhibit similarities between the 2 disciplines, such as an evolving demand forecast according to acuity and the fact that provisioning "stock" to meet demand in a future period has nonzero variable lead time. Under assumptions about the forecasts (eg, the demand process is well fit as an autoregressive process) and about the labor supply process (≥1 shifts' lead time), we show that safety stock over lead time for such systems is effectively equivalent to the corresponding well-studied problem for systems with stationary demand bounds and base stock policies. Hence, we can apply existing models from supply chain analytics to find the optimal safety levels of nurse staffing. We use a case study with real data to demonstrate that there are significant benefits from the inclusion of the forecast process when determining the optimal safety stocks.
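For orientation, a minimal sketch of the safety-stock-over-lead-time calculation follows, using the standard textbook formula for stochastic demand and stochastic lead time, SS = z * sqrt(mean_lead * sd_demand^2 + (mean_demand * sd_lead)^2). The staffing numbers are hypothetical, and the paper's autoregressive demand-forecast refinement is not reproduced.

```python
import math
from statistics import NormalDist

def safety_stock(service_level, mean_demand, sd_demand, mean_lead, sd_lead):
    """Classic safety-stock-over-lead-time formula for stochastic demand and lead time."""
    z = NormalDist().inv_cdf(service_level)      # service-level quantile
    sigma_over_lead = math.sqrt(mean_lead * sd_demand**2 + (mean_demand * sd_lead)**2)
    return z * sigma_over_lead

# Hypothetical staffing example: demand in nurse-shifts per shift, lead time in shifts.
extra_nurses = safety_stock(service_level=0.95, mean_demand=12, sd_demand=2.5,
                            mean_lead=1.0, sd_lead=0.3)
print(f"safety staffing buffer ~ {extra_nurses:.1f} nurse-shifts")
```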
A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)
2002-01-01
The report describes a new method for optimization of engineering systems such as aerospace vehicles whose design must harmonize a number of subsystems and various physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables while holding constant a set of the system-level design variables. The subtask results are stored in the form of response surfaces (RS) fitted in the space of the system-level variables, to be used as subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and to reconcile the system internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad work front in the organization of an engineering project involving a number of specialty groups that might be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of a supersonic business jet design.
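A stripped-down sketch of the response-surface idea follows: one hypothetical subtask is sampled over two system-level variables, a quadratic response surface is fitted by least squares, and the system-level optimization then runs against the cheap surrogate only. Real BLISS involves multiple coupled disciplines and coupling reconciliation, none of which is reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def quad_features(z):
    """Quadratic response-surface basis: 1, z_i, z_i*z_j."""
    z = np.atleast_2d(z)
    cols = [np.ones(len(z))] + [z[:, i] for i in range(z.shape[1])]
    cols += [z[:, i] * z[:, j] for i in range(z.shape[1]) for j in range(i, z.shape[1])]
    return np.column_stack(cols)

def subtask_analysis(z):
    """Stand-in for an expensive disciplinary subtask run at fixed system-level variables z."""
    x1, x2 = z
    return (x1 - 1.5)**2 + 0.5 * (x2 + 0.5)**2 + 0.2 * x1 * x2   # hypothetical objective

# 1) Sample the system-level design space and run the subtask at each point
#    (concurrently in practice; serially here).
Z = rng.uniform(-2, 2, size=(40, 2))
y = np.array([subtask_analysis(z) for z in Z])

# 2) Fit the response surface by least squares.
coef, *_ = np.linalg.lstsq(quad_features(Z), y, rcond=None)

# 3) System-level optimization uses only the cheap surrogate.
surrogate = lambda z: float(quad_features(z) @ coef)
res = minimize(surrogate, x0=np.zeros(2), bounds=[(-2, 2), (-2, 2)])
print("surrogate optimum:", np.round(res.x, 3),
      "true value there:", round(subtask_analysis(res.x), 4))
```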
NASA Astrophysics Data System (ADS)
Korotkova, T. I.; Popova, V. I.
2017-11-01
The generalized mathematical model of decision-making in the problem of planning and mode selection for providing the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. Evaluation of the effectiveness, reliability, and safety of such a complex system is carried out simultaneously according to several indicators, in particular pressure, flow, and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. A coordinated solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The choice of the optimal operating mode of the complex heat supply system is made on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments, if necessary, guaranteeing optimal safety, reliability, and efficiency of the system as a whole during operation. The degree of accuracy of the solution, for example, the degree of deviation of the internal air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and improves the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required values of heads at sources and pumping stations.
Aparicio, Juan Daniel; Raimondo, Enzo Emanuel; Gil, Raúl Andrés; Benimeli, Claudia Susana; Polti, Marta Alejandra
2018-01-15
The objective of the present work was to establish optimal biological and physicochemical parameters in order to simultaneously remove lindane and Cr(VI) at high and/or low pollutant concentrations from soil using an actinobacteria consortium formed by Streptomyces sp. M7, MC1, A5, and Amycolatopsis tucumanensis AB0. The final aim was to treat real soils contaminated with Cr(VI) and/or lindane from the Northwest of Argentina employing the optimal biological and physicochemical conditions. In this sense, after determining the optimal inoculum concentration (2 g kg-1), an experimental design model with four factors (temperature, moisture, initial concentration of Cr(VI), and lindane) was employed for predicting the system behavior during the bioremediation process. According to the response optimizer, the optimal moisture level was 30% for all bioremediation processes. However, the optimal temperature was different for each situation: for low initial concentrations of both pollutants, the optimal temperature was 25°C; for low initial concentrations of Cr(VI) and high initial concentrations of lindane, the optimal temperature was 30°C; and for high initial concentrations of Cr(VI), the optimal temperature was 35°C. In order to confirm the model adequacy and the validity of the optimization procedure, experiments were performed on six real contaminated soil samples. The defined actinobacteria consortium reduced the contaminant concentrations in five of the six samples when working at laboratory scale and employing the optimal conditions obtained through the factorial design. Copyright © 2017 Elsevier B.V. All rights reserved.
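To make the factorial-design-plus-response-optimizer workflow concrete, here is a toy two-factor version (temperature and moisture only, with invented removal efficiencies): a 2^2 design is fitted with a linear-plus-interaction model by least squares, and the fitted surface is scanned for its predicted optimum. The four-factor design and the actual experimental data of the study are not reproduced.

```python
import itertools
import numpy as np

# Hypothetical coded 2^2 factorial in temperature (A) and moisture (B) with measured
# removal efficiency (%) as the response; values are illustrative, not the paper's data.
levels = {"temperature": (25.0, 35.0), "moisture": (20.0, 30.0)}
runs = list(itertools.product((-1, 1), repeat=2))            # coded design matrix
removal = np.array([62.0, 71.0, 78.0, 83.0])                 # response for each run

# Fit y = b0 + b1*A + b2*B + b12*A*B by least squares.
X = np.array([[1, a, b, a * b] for a, b in runs], dtype=float)
beta, *_ = np.linalg.lstsq(X, removal, rcond=None)

def predict(a_coded, b_coded):
    return beta @ np.array([1.0, a_coded, b_coded, a_coded * b_coded])

# Response optimizer: evaluate the fitted surface on a fine coded grid.
grid = np.linspace(-1, 1, 21)
best = max(((a, b) for a in grid for b in grid), key=lambda ab: predict(*ab))
to_real = lambda coded, lo_hi: lo_hi[0] + (coded + 1) / 2 * (lo_hi[1] - lo_hi[0])
print(f"predicted optimum: T = {to_real(best[0], levels['temperature']):.1f} C, "
      f"moisture = {to_real(best[1], levels['moisture']):.0f} %, "
      f"removal ~ {predict(*best):.1f} %")
```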
Static and Dynamic Aeroelastic Tailoring With Variable Camber Control
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2016-01-01
This paper examines the use of a Variable Camber Continuous Trailing Edge Flap (VCCTEF) system for aeroservoelastic optimization of a transport wingbox. The quasisteady and unsteady motions of the flap system are utilized as design variables, along with patch-level structural variables, towards minimizing wingbox weight via maneuver load alleviation and active flutter suppression. The resulting system is, in general, very successful at removing structural weight in a feasible manner. Limitations to this success are imposed by including load cases where the VCCTEF system is not active (open-loop) in the optimization process, and also by including actuator operating cost constraints.
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1981-01-01
A cost effective process sequence and machinery for the production of flat plate photovoltaic modules are described. Cells were fabricated using the process sequence which was optimized, as was a lamination procedure. Insulator tapes and edge seal material were identified and tested. Encapsulation materials were evaluated.
General Results in Optimal Control of Discrete-Time Nonlinear Stochastic Systems
1988-01-01
Fragmentary references from the record: P. J. McLane, "Optimal Stochastic Control of Linear Systems with State- and Control-Dependent Disturbances," IEEE Trans. Auto. Contr., Vol. 16; R. R. Mohler and W. J. Kolodziej, "An Overview of Stochastic Bilinear Control Processes," IEEE Trans. Syst., Man, Cybern.; J. of Math. Anal. Appl., Vol. 47, pp. 156-161, 1974; E. Yaz, "A Control Scheme for a Class of Discrete Nonlinear Stochastic Systems," IEEE Trans.
Optimal nonlinear information processing capacity in delay-based reservoir computers
NASA Astrophysics Data System (ADS)
Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo
2015-09-01
Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of the associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.
Optimal PMU placement using topology transformation method in power systems.
Rahman, Nadia H A; Zobaa, Ahmed F
2016-09-01
Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system is completely observable. A power system is identified as observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by the choice of the bus selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created in order to determine the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as integer linear programming (ILP). The proposed method is simulated using MATLAB for different IEEE bus systems, and its operation is demonstrated on the IEEE 14-bus system. The results obtained in this paper prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with other available techniques.
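A bare-bones sketch of the underlying ILP (without the zero-injection merging rules or the power flow measurements that the paper adds) is shown below on an invented 7-bus network: each bus must be observed by a PMU placed at the bus itself or at a neighbor, and the number of PMUs is minimized with scipy.optimize.milp.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical 7-bus network (edge list), not a standard IEEE case.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (1, 5)]
n = 7
A = np.eye(n)                       # a PMU at bus i observes bus i itself ...
for i, j in edges:                  # ... and all of its neighbours
    A[i, j] = A[j, i] = 1

# ILP: minimize the number of PMUs s.t. every bus is observed by at least one PMU.
c = np.ones(n)
res = milp(c,
           constraints=LinearConstraint(A, lb=np.ones(n), ub=np.full(n, np.inf)),
           integrality=np.ones(n),
           bounds=Bounds(0, 1))
print("PMUs at buses:", np.flatnonzero(res.x > 0.5), "count:", int(round(res.fun)))
```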
Baresel, Christian; Dalgren, Lena; Almemark, Mats; Lazic, Aleksandra
2016-01-01
Wastewater reclamation will be a significant part of future water management, and the environmental assessment of various treatment systems for reusing wastewater has become an important research field. The secondary treatment process and on-site sludge handling are especially electricity-demanding processes, due to the aeration, pumping, mixing, dewatering, etc. used for operation, and are identified as the main contributors to many environmental impacts. This study discusses how the environmental performance of reuse treatment systems may be influenced by surrounding conditions. The article illustrates and discusses the importance of factors commonly treated as externalities and, as such, not included in optimization strategies of reuse systems, but which are necessary to environmentally assess wastewater reclamation systems. This is illustrated by two upstream and downstream processes: electricity supply and the use of sludge as fertilizer, commonly practiced in regions considered for wastewater reclamation. The study shows that external conditions can have a larger impact on the overall environmental performance of reuse treatment systems than internal optimizations could compensate for. These results imply that a more holistic environmental assessment of reuse schemes could yield lower environmental impacts, as externalities could be included in measures to reduce the overall impacts.
Marechal, Luc; Shaohui Foong; Zhenglong Sun; Wood, Kristin L
2015-08-01
Motivated by the need for a neuronavigation system to improve the efficacy of intracranial surgical procedures, a localization system using passive magnetic fields for real-time monitoring of the insertion of an external ventricular drain (EVD) catheter is conceived and developed. This system operates on the principle of measuring the static magnetic field of a magnetic marker using an array of magnetic sensors. An artificial neural network (ANN) is used directly to solve the inverse problem of magnetic dipole localization for improved efficiency and precision. As the accuracy of the localization system is highly dependent on the spatial locations of the sensors, an optimization framework for the design of such sensing assemblies, based on understanding and classification of experimental sensor characteristics as well as prior knowledge of the general trajectory of the localization pathway, is described and investigated in this paper. Both optimized and non-optimized sensor configurations were experimentally evaluated, and the results show superior performance from the optimized configuration. While the approach presented here uses ventriculostomy as an illustrative platform, it can be extended to other medical applications that require localization inside the body.
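The core ANN-based inversion can be sketched as follows: a point-dipole field model generates synthetic sensor readings for random marker positions, and a small multilayer perceptron is trained to map readings back to position. The four-sensor geometry, the fixed known dipole moment, and the network size are all assumptions for illustration; the paper's sensor-placement optimization is not shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
MU0_4PI = 1e-7
MOMENT = np.array([0.0, 0.0, 1.0])             # fixed, known dipole moment (assumption)
SENSORS = np.array([[x, y, 0.0]                # hypothetical 4-sensor planar array [m]
                    for x in (-0.05, 0.05) for y in (-0.05, 0.05)])

def dipole_field(pos):
    """Magnetic flux density of a point dipole at `pos`, measured at each sensor."""
    fields = []
    for s in SENSORS:
        r = s - pos
        d = np.linalg.norm(r)
        rhat = r / d
        fields.append(MU0_4PI * (3 * rhat * (MOMENT @ rhat) - MOMENT) / d**3)
    return np.concatenate(fields)

# Synthetic training set: marker positions in a 10 cm cube above the array.
positions = rng.uniform([-0.05, -0.05, 0.02], [0.05, 0.05, 0.12], size=(5000, 3))
readings = np.array([dipole_field(p) for p in positions])
scale = np.abs(readings).max()                 # normalize the tiny field values
readings = readings / scale

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(readings, positions)                   # learn the inverse map: fields -> position

test = np.array([0.01, -0.02, 0.06])
est = net.predict((dipole_field(test) / scale)[None, :])[0]
print("true:", test, "estimated:", np.round(est, 4))
```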
Strategic Information Systems Planning.
ERIC Educational Resources Information Center
Rowley, Jennifer
1995-01-01
Strategic Information Systems Planning (SISP) is the process of establishing a program for implementation and use of information systems in ways that will optimize effectiveness of information resources and use them to support the objectives of the organization. Basic steps in SISP methodology are outlined. (JKP)
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and propulsion / drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle where gross elements of the helicopter have been defined. These might include number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis. At the same time there is enough design freedom to influence a design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
Image gathering and processing - Information and fidelity
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.
1985-01-01
In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
2012-08-01
It suggests that a smart use of some a priori information about the operating environment, when processing the received signal and designing the ... a random variable with the same variance as the backscattered target amplitude α_T, where D(α_T, α_T^G) is the Kullback-Leibler divergence, see [65] ... MI. Proof: see Appendix 3.6.6. Thus, we can use the optimization procedure of Algorithm 4 to optimize the mutual information between the target ...
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Kolb, M. A.
1987-01-01
A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
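A small-scale analogue of the inverse design loop is sketched below: a one-dimensional conduction surrogate stands in for the finite element solidification model, and a quasi-Newton optimizer (BFGS, one of the unconstrained methods named above) adjusts hypothetical furnace zone temperatures so that the computed profile matches a prescribed target. The model, parameters, and boundary treatment are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# 1-D surrogate of the Bridgman problem (hypothetical, much simpler than the paper's FEM
# model): a rod exchanging heat with furnace wall zones; we seek zone temperatures that
# reproduce a desired axial temperature profile in the charge.
N_NODES, N_ZONES = 40, 5
H = 0.05                                    # side heat-exchange coefficient (arbitrary units)
x = np.linspace(0.0, 1.0, N_NODES)
target = 900.0 + 300.0 * x                  # desired linear profile with a fixed gradient

# Map each rod node to the furnace zone next to it.
zone_of_node = np.minimum((x * N_ZONES).astype(int), N_ZONES - 1)

def rod_temperature(zone_temps):
    """Solve the steady conduction balance  T'' + H*(T_wall - T) = 0  on the rod."""
    dx = x[1] - x[0]
    A = np.zeros((N_NODES, N_NODES))
    b = np.zeros(N_NODES)
    for i in range(N_NODES):
        if i in (0, N_NODES - 1):           # ends pinned to the adjacent wall zone
            A[i, i] = 1.0
            b[i] = zone_temps[zone_of_node[i]]
        else:
            A[i, i - 1] = A[i, i + 1] = 1.0 / dx**2
            A[i, i] = -2.0 / dx**2 - H
            b[i] = -H * zone_temps[zone_of_node[i]]
    return np.linalg.solve(A, b)

def objective(zone_temps):
    return float(np.sum((rod_temperature(zone_temps) - target) ** 2))

res = minimize(objective, x0=np.full(N_ZONES, 1000.0), method="BFGS")
print("optimal furnace zone temperatures:", np.round(res.x, 1))
print("profile mismatch (RMS):", round(np.sqrt(res.fun / N_NODES), 2))
```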
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function that is related to the desired dynamic characteristics. The generality of the method allows nonlinear aerodynamic effects and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models of F-5A and F-16 configurations are used to design dampers that satisfy flying-qualities specifications and control systems that prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
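A toy version of the simulate-then-optimize loop is sketched below: a hypothetical short-period pitch model is integrated in closed loop with a pitch-rate damper, and a scalar gain is tuned by a one-dimensional optimizer against a time-weighted cost built from the simulated response plus a control-effort penalty. The aerodynamic coefficients and the cost weighting are invented and do not correspond to the F-5A or F-16 models used in the report.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.optimize import minimize_scalar

# Hypothetical short-period approximation (states: angle of attack, pitch rate);
# coefficients are illustrative only.
Z_A, Z_Q = -0.6, 1.0
M_A, M_Q, M_DE = -2.0, -0.3, -4.0            # M_DE: elevator effectiveness

def closed_loop(t, x, k_q):
    alpha, q = x
    delta_e = k_q * q                        # pitch damper (sign chosen so M_DE*delta_e opposes q)
    return [Z_A * alpha + Z_Q * q,
            M_A * alpha + M_Q * q + M_DE * delta_e]

def cost(k_q, rho=0.05):
    """Time-weighted response cost plus a control-effort penalty, evaluated by simulation."""
    sol = solve_ivp(closed_loop, (0.0, 10.0), [0.2, 0.0], args=(k_q,), max_step=0.02)
    q = sol.y[1]
    delta_e = k_q * q
    return float(trapezoid(sol.t * (q**2 + rho * delta_e**2), sol.t))

res = minimize_scalar(cost, bounds=(0.0, 3.0), method="bounded")
print(f"damper gain k_q ~ {res.x:.3f}, cost ~ {res.fun:.5f}")
```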
NASA Astrophysics Data System (ADS)
Triplett, Michael D.; Rathman, James F.
2009-04-01
Using statistical experimental design methodologies, the solid lipid nanoparticle design space was found to be more robust than previously shown in literature. Formulation and high shear homogenization process effects on solid lipid nanoparticle size distribution, stability, drug loading, and drug release have been investigated. Experimentation indicated stearic acid as the optimal lipid, sodium taurocholate as the optimal cosurfactant, an optimum lecithin to sodium taurocholate ratio of 3:1, and an inverse relationship between mixing time and speed and nanoparticle size and polydispersity. Having defined the base solid lipid nanoparticle system, β-carotene was incorporated into stearic acid nanoparticles to investigate the effects of introducing a drug into the base solid lipid nanoparticle system. The presence of β-carotene produced a significant effect on the optimal formulation and process conditions, but the design space was found to be robust enough to accommodate the drug. β-Carotene entrapment efficiency averaged 40%. β-Carotene was retained in the nanoparticles for 1 month. As demonstrated herein, solid lipid nanoparticle technology can be sufficiently robust from a design standpoint to become commercially viable.
Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.
Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei
2015-06-25
Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation Systems (INS) based on the inertial reference frame are discussed in this paper. Both are based on gravity vector integration; therefore, the performance of these algorithms is determined by the integration time. In previous works, the integration time was selected by experience. In order to give a criterion for the selection process and make the selection of the integration time more accurate, optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on an analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value and indicate that, under different operational conditions, different coarse alignment algorithms should be adopted for FOG INS in order to achieve better performance. Lastly, the experimental results validate the effectiveness of the proposed algorithm.
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
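To show the flavor of such population-based parameter estimation, the sketch below runs a plain firefly algorithm (without the differential-evolution hybridization or the biological pathway models of the paper) to recover the two parameters of a logistic growth model from synthetic noisy data; every model and algorithm setting here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy estimation target: logistic growth y' = r*y*(1 - y/K); we fit (r, K) to noisy data.
T = np.linspace(0.0, 10.0, 25)
TRUE_R, TRUE_K, Y0 = 0.8, 10.0, 0.5

def simulate(r, k):
    y = np.empty_like(T)
    y[0], dt = Y0, T[1] - T[0]
    for i in range(1, len(T)):                       # explicit Euler integration
        y[i] = y[i - 1] + dt * r * y[i - 1] * (1.0 - y[i - 1] / k)
    return y

DATA = simulate(TRUE_R, TRUE_K) + rng.normal(0.0, 0.2, len(T))

def cost(params):
    r, k = params
    return np.sum((simulate(r, k) - DATA) ** 2)

def firefly(n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2,
            lo=(0.1, 1.0), hi=(2.0, 20.0)):
    lo, hi = np.array(lo), np.array(hi)
    x = rng.uniform(lo, hi, (n, 2))
    f = np.array([cost(p) for p in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # firefly i is attracted to brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += (beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                             + alpha * (rng.random(2) - 0.5))
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = cost(x[i])
        alpha *= 0.97                                # gradually reduce the random walk
    return x[np.argmin(f)]

r_hat, k_hat = firefly()
print(f"estimated r ~ {r_hat:.2f} (true {TRUE_R}), K ~ {k_hat:.1f} (true {TRUE_K})")
```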
Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives
Wang, Weiping; He, Lei
2014-01-01
Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although only to a limited extent. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then the corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328
NASA Astrophysics Data System (ADS)
Zheng, Qingyu; Zhang, Guoqiang; Che, Kai; Shao, Shikuan; Li, Yanfei
2017-08-01
Taking a 660 MW generator unit denitration system as the study object, an optimization and adjustment method is designed to control ammonia slip, i.e., the ammonia injection system is adjusted based on the NO concentration distribution at the inlet and outlet of the denitration system so that the injected ammonia is distributed evenly. The results show that this method can effectively improve the NO concentration distribution at the outlet of the denitration system and decrease the ammonia injection amount and ammonia slip concentration. It also reduces the adverse impact of the SCR denitration process on the air preheater, enabling safe production while guaranteeing that NO discharge meets the standard.
NASA Astrophysics Data System (ADS)
Spangemacher, Lars; Fröhlich, Siegmund; Buse, Hauke
2017-11-01
Water is an indispensable resource for many purposes, and good drinking water quality is essential for mankind. This article shows the need for mobile water treatment systems and gives an overview of different mobile drinking water systems and the technologies available for obtaining good water quality. The aim is to develop a simple-to-operate water treatment system with few processing stages, such as a multi-cyclone cartridge and reverse osmosis with energy recuperation, with the focus set on modeling and optimizing hydrocyclone systems as the first treatment stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Wang, Shaobu; Fan, Rui
This report summarizes the work performed under the LDRD project on a preliminary study of knowledge automation, with specific focus on investigating the impact of uncertainties in human decision making on the optimization of process operation. First, the statistics of signals from a brain-computer interface (BCI) are analyzed so as to characterize the uncertainties of human operators during the decision-making phase using electroencephalogram (EEG) signals. This is followed by a discussion of an architecture that reveals the equivalence between optimization and closed-loop feedback control design, where it has been shown that optimization problems can be transformed into control design problems for closed-loop systems. This has led to a "closed loop" framework in which the structure of the decision making is shown to be subject to both process disturbances and controller uncertainties. The latter can well represent the uncertainties or randomness occurring during the human decision-making phase. As a result, a stochastic optimization problem has been formulated and a novel solution has been proposed using probability density function (PDF) shaping for both the cost function and the constraints, based on the stochastic distribution control concept. A sufficient condition has been derived that guarantees the convergence of the optimal solution, and discussions have been made for both the total probabilistic solution and chance-constrained optimization, which have been well studied in the optimal power flow (OPF) area. A simple case study has been carried out for the economic dispatch of power for a grid system with distributed energy resources (DERs), and encouraging results have been obtained showing that significant savings in generation cost can be expected.
Signal processing and control challenges for smart vehicles
NASA Astrophysics Data System (ADS)
Zhang, Hui; Braun, Simon G.
2017-03-01
Smart phones have changed not only the mobile phone market but also our society during the past few years. Could the next intelligent device be the vehicle? Judging by the visibility, in all media, of the numerous attempts to develop autonomous vehicles, this is certainly one of the logical outcomes. Smart vehicles would be equipped with an advanced operating system such that the vehicles could communicate with others, optimize their operation to reduce fuel consumption and emissions, enhance safety, or even become self-driving. These combined new features require instrumentation and hardware developments, fast signal processing and fusion, decision making, and online optimization. Meanwhile, the inevitable increase in system complexity will certainly challenge control unit design.
Optimal reconstruction of the states in qutrit systems
NASA Astrophysics Data System (ADS)
Yan, Fei; Yang, Ming; Cao, Zhuo-Liang
2010-10-01
Based on mutually unbiased measurements, an optimal tomographic scheme for the multiqutrit states is presented explicitly. Because the reconstruction process of states based on mutually unbiased states is free of information waste, we refer to our scheme as the optimal scheme. By optimal we mean that the number of the required conditional operations reaches the minimum in this tomographic scheme for the states of qutrit systems. Special attention will be paid to how those different mutually unbiased measurements are realized; that is, how to decompose each transformation that connects each mutually unbiased basis with the standard computational basis. It is found that all those transformations can be decomposed into several basic implementable single- and two-qutrit unitary operations. For the three-qutrit system, there exist five different mutually unbiased-bases structures with different entanglement properties, so we introduce the concept of physical complexity to minimize the number of nonlocal operations needed over the five different structures. This scheme is helpful for experimental scientists to realize the most economical reconstruction of quantum states in qutrit systems.
A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints
NASA Astrophysics Data System (ADS)
Hazarika, Durlav; Das, Ranjay
2018-04-01
This paper presents a method for economic generation scheduling of a multi-zone power system having inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for the zones having surplus or deficient generation with proper spinning reserve, using the coordination equation. The power exchange required for the deficit zones and the zones having no generation is estimated based on the load demand and generation of each zone. Incremental transmission loss formulas are then derived for the transmission lines participating in the power transfer among the zones. Using these incremental transmission loss expressions in the coordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
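A minimal sketch of the first step, the coordination equation for a single zone, follows: with quadratic generator costs, the equal incremental cost condition b_i + 2*c_i*P_i = lambda is enforced by bisecting lambda until the dispatched power matches demand, with generator limits respected. The cost coefficients and demand are hypothetical, and the incremental transmission loss terms and inter-zonal exchange step of the paper are not included.

```python
# Hypothetical three-generator zone (cost coefficients and limits are illustrative):
# C_i(P) = a_i + b_i*P + c_i*P^2, so the coordination equation is  b_i + 2*c_i*P_i = lambda.
UNITS = [  # (b, c, Pmin, Pmax) in $/MWh, $/MW^2h, MW, MW
    (8.0, 0.010, 50.0, 300.0),
    (9.0, 0.008, 50.0, 400.0),
    (7.5, 0.015, 30.0, 250.0),
]

def dispatch_at(lmbda):
    """Output of each unit at a common incremental cost, respecting its limits."""
    return [min(max((lmbda - b) / (2.0 * c), pmin), pmax) for b, c, pmin, pmax in UNITS]

def economic_dispatch(demand_mw, tol=1e-6):
    """Lambda iteration (bisection) solving the coordination equation sum(P_i) = demand."""
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(dispatch_at(mid)) < demand_mw:
            lo = mid
        else:
            hi = mid
    lmbda = 0.5 * (lo + hi)
    return lmbda, dispatch_at(lmbda)

lmbda, P = economic_dispatch(600.0)
print(f"lambda ~ {lmbda:.3f} $/MWh, dispatch = {[round(p, 1) for p in P]} MW")
```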
2008-06-01
Transportation Systems: the Worldwide Air Transportation and Air Traffic Control System; the World Wide Web and the Underlying Internet; Automobile Production ... their use in automobiles as a way to reduce gasoline consumption, increase fuel mileage, and reduce harmful emissions. They represent a power source that ...
When "Less is More": The Optimal Design of Language Laboratory Hardware.
ERIC Educational Resources Information Center
Kershaw, Gary; Boyd, Gary
1980-01-01
The results of a process of designing, building, and "de-bugging" two replacement language laboratory hardware systems at Concordia University (Montreal) are described. Because commercially available systems did not meet specifications within budgetary constraints, the systems were built by the university technical department. The systems replaced…